Feed aggregator
Six from MIT awarded 2026 Paul and Daisy Soros Fellowships for New Americans
Six MIT affiliates — Denisse Córdova Carrizales SM ’26; Ria Das ’21, MNG ’22; Ronak Desai; Stacy Godfreey-Igwe ’22; Arya Rao; and Ananthan Sadagopan ’24 — have been named 2026 P.D. Soros Fellows. In addition, P.D. Soros Fellow Avinash Vadali will begin a PhD in condensed-matter physics at MIT this fall.
The fellowship provides immigrants and the children of immigrants up to $90,000 in tuition and stipend support for up to two years of graduate studies. Interested students should contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.
Denisse Córdova Carrizales
Córdova Carrizales SM ’26 is a PhD student in nuclear science and engineering in the lab of Professor Mingda Li, where she completed her master’s work earlier this year. She is working on synthesizing and characterizing quantum materials with the goal of bridging fundamental science and industry to make our technology more energy-efficient and sustainable.
Córdova Carrizales, who is of Mexican descent, grew up in Houston, Texas, before attending Harvard University, where she graduated in 2023 with a BA in physics. At Harvard, she dove into experimental condensed-matter research. She also conducted research with the Princeton Plasma Physics Laboratory, Commonwealth Fusion Systems, and VEIR, spanning computational plasma physics and high-temperature superconducting magnet and cable engineering.
Her work includes coauthored papers in Nature Physics, Nature Materials, and Advanced Materials, as well as lead-author publications in Nano Letters and Physical Review Materials. In 2023, she received the LeRoy Apker Award from the American Physical Society.
Beyond research, Córdova Carrizales has advocated in Congress for nuclear disarmament and risk reduction and has written a piece on the nuclear stockpile stewardship program. At Harvard, she founded an organization to support first-generation college students studying physics. In a completely different arena, she performed as the lead in an off-Broadway show in New York.
Ria Das
Das ’21, MNG ’22 is a PhD student in the MIT Department of Electrical Engineering and Computer Science. She graduated from MIT in 2021 with dual BS degrees in mathematics and in electrical engineering and computer science, and received her master of engineering degree in 2022.
The daughter of Indian immigrant parents, Das grew up in Nashua, New Hampshire, where she struggled with issues of belonging and identity. These questions came to the forefront during her PhD studies at Stanford University. Das decided to step off the academic treadmill by taking a leave from her PhD to think more deeply about these topics.
During her leave, she traveled around the country before moving to New York to work at Basis Research Institute, an AI research nonprofit. As a research associate, Das developed an urban data team that worked with federal and municipal government agencies on issues of economic and housing equity, blending her interests in science and social problems. She then returned to MIT to complete her doctoral studies.
Today, Das works with Professor Joshua Tenenbaum in the Department of Brain and Cognitive Sciences to study how people undergo conceptual change to build more robust, accessible systems for automated (social) science and improved educational design. Looking ahead, she hopes to become a professor, collaborating closely with policy practitioners.
Ronak Desai
Desai is currently a student in the Harvard/MIT MD-PhD program, where his PhD focuses on chemistry. The son of immigrants from Gujarat, India, Desai was born in Tyler, Texas, and grew up in nearby Lindale. He earned his undergraduate degree at the University of Texas at Austin.
Desai spent a semester interning at the U.S. House of Representatives as a Bill Archer Fellow. He also completed biomedical research focused on studying and engineering novel polyketide synthases, aspiring to produce next-generation antibiotics by harnessing such newly engineered synthases.
Desai graduated with degrees in chemistry and biochemistry as a first-generation college student, Health Science Scholar, and Dean’s Honored Graduate, receiving nine scholarships throughout college. His research has resulted in publications in journals such as Cell and Nature Communications.
Desai hopes to combine his passions for medicine, science, and public policy in his career to advance the treatment of infectious diseases. He is conducting his doctoral research under Professor James J. Collins in the MIT Department of Biological Engineering and the Harvard-MIT Program in Health Sciences and Technology. Desai’s research centers on using artificial intelligence to discover and design novel antibiotics, an opportunity to advance treatments for patients worldwide.
Stacy Godfreey-Igwe
Godfreey-Igwe ’22 attended MIT as a QuestBridge and Gates Scholar, graduating in 2022 with a BS in mechanical engineering and a concentration in sustainable design. A Burchard Scholar, she also became the first student at MIT to complete a major in African and African diaspora studies. After graduating, she pursued a science policy fellowship in Washington and interned at the U.S. Department of Energy’s Building Technologies Office, where she worked to broaden adoption of heat pump technologies across diverse stakeholders.
Growing up in Richardson, Texas, as the daughter of Nigerian immigrants, Godfreey-Igwe developed an early awareness of structural inequality, particularly in how families like hers managed the burden of the severe Texas heat and high electricity costs. These experiences formed the basis of her lifelong journey seeking to address systemic inequities embedded in everyday systems.
Godfreey-Igwe is currently a doctoral student in the joint engineering and public policy and civil and environmental engineering program at Carnegie Mellon University (CMU), where she was selected for the inaugural CMU Rales Fellowship cohort. At CMU, she studies the impact of extreme heat on household energy use, particularly in vulnerable communities.
Beyond her research, Godfreey-Igwe organizes outreach and programming for local underrepresented students in STEM and participates in institutional efforts to expand access and belonging among graduate students. She aims to be a scholar and advocate whose work, drawing on her personal experiences, informs equitable energy solutions in a warming world.
Arya Rao
Rao is a student in the Harvard/MIT MD-PhD program. She completed her undergraduate degrees in biochemistry and computer science at Columbia University. Working with professors Pardis Sabeti (Harvard University) and Sangeeta Bhatia (MIT), Rao uses evolution as a lens for therapeutic design, developing artificial intelligence methods that read the genetic record and guide new intervention strategies.
Leveraging her dual training in medicine and computer science, Rao also leads the MESH AI Research Group at Mass General Brigham, where she develops simulation-based tools that test clinical AI systems in realistic educational settings before they reach patients.
Rao has been recognized for her work with a Forbes 30 Under 30 honor, the Massachusetts Medical Society Information Technology Award, the Harvard Presidential Public Service Fellowship, a Harvard Medical School Dean’s Innovation Award, and a Ladders to Cures Accelerator Award. She has published more than 30 manuscripts in publications including JAMA, Nature, and NEJM AI.
Growing up in rural northern Michigan, Rao was inspired by her parents, Konkani immigrants from India, who served as two of the area’s only physicians. She has always imagined a career that could leverage scientific innovation to improve patient care, especially for communities without access like her own. Going forward, she envisions a career as a surgeon-scientist that keeps her close to patients while taking on leadership that shapes how new technologies are evaluated, implemented, and made usable in the places that need them most.
Ananthan Sadagopan
Sadagopan ’24 grew up in Westborough, Massachusetts, as the child of immigrants from Chennai, India. He participated in chemistry competitions, winning the You Be the Chemist Challenge in middle school and earning a gold medal at the International Chemistry Olympiad for the United States in high school. He attended MIT for college, graduating in three years in 2024 with a bachelor’s degree in chemistry and biology.
At MIT, Sadagopan worked with Srinivas Viswanathan on computational biology projects and with William Gibson, Matthew Meyerson, and Stuart Schreiber on chemical biology projects. He led projects characterizing somatic perturbations of X chromosome inactivation in cancer, developing a machine-learning tool for cancer dependency prediction, using small molecules to relocalize proteins in cells, and creating a generalizable strategy to drug the most mutated gene in cancer, TP53. Sadagopan’s work has been patented and published in journals such as Cell and Nature Chemical Biology.
Sadagopan was president of the chemistry undergraduate association and led the events committee for MIT Science Olympiad. He is currently pursuing a PhD in biological and biomedical science at Harvard University as a Hertz Fellow and Herchel Smith Fellow. He is interested in de-risking new therapeutic strategies and hopes that his work will inspire pharma companies to bring first-in-class therapies to patients.
The GUARD Act Isn’t Targeting Dangerous AI—It’s Blocking Everyday Internet Use
Lawmakers in Congress are moving quickly on the GUARD Act, an age-gating bill restricting minors’ access to a wide range of online tools, with a key vote expected this week. The proposal is framed as a response to alarming cases involving “AI companions” and vulnerable young users. But the text of the bill goes much further, and could require age gates even for search engines that use AI.
Tell Congress: oppose the GUARD Act
If enacted, the GUARD Act won’t just target a narrow category of risky chatbots. It would require companies to verify the age of every user — then block anyone under 18 from interacting with a huge range of online systems. It would block minors from everyday online tools, undermine parental guidance, and force adults to sacrifice their privacy. In the process, it would require services to implement speech-restricting and privacy-invasive age-verification systems for everyone—not just kids.
Under the GUARD Act’s broad definitions, a high school student could be barred from asking homework help tools questions about algebra problems. A teenager trying to return a product could be kicked out of a standard customer-service chat.
The concerns behind this bill are serious. There have been troubling reports of AI systems engaging in harmful interactions with young users, including cases involving self-harm. Those risks deserve attention. But they call for targeted solutions, like better safeguards and enforcement against bad actors, not sweeping restrictions. The bill’s sponsors say they’re targeting worst-case scenarios — but the bill regulates everyday use.
The GUARD Act’s Broad Definitions Reach Everyday Tools
The problem starts with how the bill defines an “AI chatbot.” It covers any system that generates responses that aren’t fully pre-written by the developer or operator. Such a broad definition sweeps in the basic functionality of all AI-powered tools.
Then there’s the definition of an “AI companion,” which minors are banned from using entirely. An AI companion is any chatbot that produces human-like responses and is designed to “encourage or facilitate” interpersonal or emotional interaction. That may sound aimed at simulated “friends” or therapy chatbots. But in practice, it’s much fuzzier.
Modern chatbots are designed to be conversational and helpful. A homework helper might say “good question” before walking a student through a problem. A customer service chatbot may respond empathetically to a complaint (“I’m sorry you’re having this problem.”) A general-purpose assistant might ask follow-up questions. All of these could be seen as facilitating “interpersonal” interaction — and triggering the GUARD Act.
Faced with steep penalties and unclear boundaries, companies are unlikely to take chances on letting young people use their online tools. They’ll block minors entirely or strip their tools down to something less useful for everyone. The result isn’t a narrow safeguard—it’s a broad restriction on everyday online interactions.
Homework Question? Show ID And Call Your Parents
Start with a student getting help with homework. Under the GUARD Act, the service must verify the user’s age using more than a simple checkbox—it must rely on a “reasonable age verification” measure, which could require a government ID or a third-party age-checking system. If the system decides a user is under 18, the company must decide if its tool qualifies as an “AI companion.” If there’s any risk it does, the safest move is to block access entirely.
The same logic applies to everyday customer service. A teenager trying to fix an order issue gets routed to a chatbot, and the company faces a choice: build a full age-verification system for a routine interaction, or restrict access to avoid liability. Many will choose the latter.
This isn’t a narrow restriction aimed at a few risky products. It’s a compliance regime that pushes companies to block or limit any product that generates text for minors, across the board.
ID Checks for Everyone
The GUARD Act doesn’t just affect minors. The bill takes a big step towards an internet that only works when users are willing to upload a valid ID or comply with other invasive age-verification schemes. Companies must verify the age of every user—not through a simple self-declaration, but through a “reasonable age verification” system tied to the individual.
In practice, that means collecting sensitive personal information: government IDs, financial data, or biometric identifiers. Companies can outsource verification, but they remain legally responsible. And the law requires ongoing verification, so this isn’t a one-time check. Worse, studies consistently show that millions of people have outdated information on their IDs, such as an old address, or have no government ID at all. If services require ID, many people without current ID, or without any ID, will be shut out.
And for those who do have compliant ID, turning over this information repeatedly creates obvious risks. Databases of sensitive identity information become targets for breaches. Anonymous or pseudonymous use of online tools becomes harder or impossible.
To keep minors away from certain chatbots, the GUARD Act would require everyone to prove who they are just to use basic online tools. That’s a steep tradeoff. And it doesn’t actually address the specific harms the bill is supposed to solve.
Vague Definitions, Huge Penalties
The GUARD Act’s broad scope is enforced with steep penalties. Companies can face fines of up to $100,000 per violation, enforced by federal and state officials. At the same time, key terms like “AI companion” rely on vague concepts such as “emotional interaction.” That combination will lead to overblocking. Faced with legal uncertainty and serious liability, companies won’t parse small distinctions. They’ll restrict access, limit features, or block minors entirely.
That is the unfortunate result of the GUARD Act. The concerns animating it are worth addressing, but the bill’s broad terms will apply far beyond those concerning scenarios.
In the end, that means a more restricted and more surveilled internet. Teenagers would lose access to tools they rely on for school and everyday tasks. Everyone else faces new barriers, including ID checks. Smaller developers, who aren’t able to absorb compliance costs and legal risk, would be pushed out, leaving the largest companies even more dominant.
Young people — and all people — deserve protection from genuinely harmful products. But this bill doesn’t do that. It trades away privacy, access, and useful technology in exchange for a blunt system that misses the mark.
Congress could act soon. Tell them to reject the GUARD Act.
Tell Congress: say no to mandatory online ID checks
Congress Must Reject New Insufficient 702 Reauthorization Bill
Speaker Johnson has introduced a new fig leaf for the American surveillance state: the Foreign Intelligence Accountability Act. Introduced with only days to go before Section 702 of the Foreign Intelligence Surveillance Act (FISA) expires and the U.S. government loses one of its most invasive surveillance programs, the bill does nothing to make any of the substantial changes privacy advocates have been asking for. Most notably, it fails to give us a real warrant requirement for the FBI to snoop through the private conversations of people on U.S. soil.
Section 702 needs to be reauthorized by Congress every few years. These reauthorizations give us a chance to tinker with the language of the law and introduce some much-needed reforms. This attempt at reauthorization has been particularly fraught, but there is still time for Congress to include real protection for Americans’ civil liberties and rights. We need to make sure that when an FBI agent wants to look through Americans’ conversations scooped up as part of a national security intelligence program, they need a warrant signed by a judge just as if they were trying to search your email account or your house.
This new bill mandates that a civil liberties protection officer in the Office of the Director of National Intelligence review all queries of U.S. persons made by the FBI under this program, to make sure no laws have been broken. It’s bad enough to let the intelligence community police itself; what’s more, the assessment of illegality would be made after a U.S. person has already been spied on. This is hardly the reform we need, and it will likely just lead to continued abuse with no real accountability or consequences.
The bill “prohibits targeting United States persons,” but so does current law. This “change” does absolutely nothing to address what’s really happening—which is that surveillance of people in the United States is usually justified as “incidental” because Americans aren’t the “target” of the surveillance. The bill does not create a warrant requirement, it does not create any new transparency requirements, and it does not protect Americans’ privacy.
We urge Congress, and we urge you to write to your Congresspeople, to tell them this: Reject the surveillance state’s latest smokescreen known as the Foreign Intelligence Accountability Act and keep pushing for real reforms.
The Internet Still Works: SmugMug Powers Online Photography
SmugMug is a family-owned photo hosting and e-commerce platform that helps professional photographers run their businesses online. Founded in 2002, the company provides tools for photographers to show their work, deliver client galleries, sell prints, and manage payments.
In 2018, SmugMug purchased Flickr, the long-running photo-sharing community, which added tens of millions of active hobbyist photographers to the company’s user base.
Ben MacAskill is President and COO of SmugMug’s parent company, Awesome, which he co-founded with his family. Awesome also includes the media network This Week in Photo and the nonprofit Flickr Foundation, which focuses on preserving publicly available photography. MacAskill has been an active voice in policy discussions around Section 230 and online platform regulation. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team.
Joe Mullin: How would you explain Section 230 to a SmugMug photographer who hasn't heard of it but relies on you to share their work and run their business?
Ben MacAskill: Section 230 allows us to run our business. We are a small, family-run business. We don’t have the resources to police every single upload, every single comment, or every single engagement that happens on the site.
That includes photographers who have comments on their sites. Anywhere there’s interaction online, Section 230 protects us.
It doesn't absolve us of liability. We can't run rampant and do anything we want. It just helps protect us and make it scalable so that we can run our business.
What would you have to change if Section 230 were eliminated or significantly narrowed?
Honestly, there's a high chance that it would bankrupt platforms like ours. They're not wildly profitable. If Section 230 is done away with, we have to [check] content that goes online to make sure we’re not liable. That means policing tens of millions of uploads per day.
That would kill the business of a lot of photographers. Can you imagine—you just got married, and you’re waiting for your wedding photos for a week or two because they’re in some moderation queue?
If we don’t have legal protections, and we get one nefarious customer—if something goes sideways—then I’m liable for that.
I don't, and can't possibly know, whether every single photo is appropriate or legal, as it's uploaded. We would literally have to moderate everything before it goes online. I don’t think any business can afford that, period. I guess you could have an offshore call-center type thing. Still, it would change the entire nature of the real-time internet. Imagine posting something to Instagram and having the platform say, “Cool, we’ll get back to you in 8 to 12 days.”
What kind of content moderation do you do on SmugMug?
If a user uploads something illegal, we will report them as soon as we find it. We're not protecting them. We don’t condone or allow illegal behavior. We work very closely with organizations, nonprofits, and governmental agencies to detect CSAM, child sexual abuse material, and we report it to the National Center for Missing and Exploited Children. We will report users and eliminate illegal content on our platforms—which is one reason we have such a low prevalence of that problem.
But that does take effort and time to find, and there is currently no perfect solution. The tech solutions that exist can’t detect it at 100% accuracy, or anywhere close. And with tens of millions of uploads a day, going through them one by one is impossible.
How do you think more generally about protecting user speech and creative expression?
On SmugMug, we’re really focusing on professionals running their business. So we don’t have to [weigh in] on content too much.
On Flickr, we are big proponents of expression and artistic creativity. Photographers have opinions! But we do draw the line at things like hate speech and harassment. We aggressively maintain a friendly platform. Our community guidelines are very specific, that you cannot harass other customers, you cannot upload stuff classified as hate speech, or threats, or anything along those lines.
Those rules are generally policed by the community. We do have some text analysis tools, but when community members feel harassed or threatened, reports will come in. We’ll address them on a one-by-one basis and remove harassing material from our platform.
Our ability to moderate is one of the things that makes Flickr what it is. If we lose the ability to enforce our own moderation rules—or have that legislated for us—then it changes the entire nature of the community. And not in a good way. Losing the ability to moderate would permanently and forever change what we've built.
What kind of complaints or takedown requests do you receive, and how do you handle them, both in the U.S. and abroad?
Flickr is often referred to as the friendliest community online. You know, we're not dealing with a lot of hate. We're not dealing with a lot of threats. Under other frameworks, like the DMCA, we do takedowns on copyrighted material.
We’re able to handle it with a fully internal team, and we have a great track record. But the user base and the content base is so large that, if we had to assume that those tens of millions of uploads a day are problematic, the burden would be extreme.
We have a robust Trust and Safety Team, and we operate in every non-embargoed country on Earth. So we are subject to a lot of different laws and regulations: “likeness” rules and privacy rules in certain countries that don't exist here in the United States. Even state to state, there’s some varying laws. It’s a complicated framework, but we pay attention to it.
Around the globe, we respond in much the same way that Section 230 works here. That is, we operate on reports and discovery, not on pre-screening everything.
What do you think that policy makers most often misunderstand about how platforms like yours operate?
One misconception is that we are not beholden to any laws. That Section 230 absolves us of any responsibility and any liability, and we can just do whatever we want. They talk about it as “reining in tech companies,” or “holding tech companies accountable.” But I am accountable for the content on my platform. We’re not given this “get out of jail free” card.
And I think they assume all platforms don’t really care about this, that anything that is done is done begrudgingly. But we’re very proactive about keeping a clean, polite, and friendly community. We are already very aggressively policing our platform.
And even legal content gets moderated, because it might just not be appropriate for a particular community.
We enforce our rules much the way other private, in-person businesses enforce theirs. If you start screaming hateful things at patrons in a coffee shop, they’re going to throw you out. They want a quiet, chill vibe where people can sip their lattes. We’re doing the same sort of things.
As an independent, family-owned company, you’re in an ecosystem dominated by much larger platforms. How are these issues different for you as a smaller service?
I think it's a much more existential threat for middle and small tech companies. It also shuts off the next generation of these platforms. The computer science student in a dorm room right now won't have the legal protections to launch, to even try to build something new. At least not here in the United States.
Medieval Encrypted Letter Decoded
Sent by a Spanish diplomat. Apparently people have been working on it since it was rediscovered in 1860.
Oil industry’s Supreme Court win spills into climate lawsuits
Power plant repeal: Coming soon, in two parts
Marine heat wave could fuel more extreme weather in the West
Pro-renewables Republicans file bill to revive credits
Georgia blaze shows climate change spurring more Eastern wildfires
Camp Mystic warned of safety plan problems as it seeks to reopen
Britain’s finance institution sets up $1.5B Asia energy strategy
Fail on climate action and miss out on promotion, China warns
Self-organizing “pencil beam” laser could help scientists design brain-targeted therapies
MIT researchers discovered a paradoxical phenomenon in optical physics that could enable a new bioimaging method that’s faster and higher-resolution than existing technology.
They discovered that, under the right conditions, a chaotic mess of laser light can spontaneously self-organize into a highly focused “pencil beam.”
Using this self-organized pencil beam, the researchers captured 3D images of the human blood-brain barrier 25 times faster than the gold-standard method, while maintaining comparable resolution.
By showing individual cells absorbing drugs in real time, this technology could help scientists test whether new drugs for neurodegenerative diseases like Alzheimer’s or ALS reach their targets in the brain, with greater speed and resolution.
“The common belief in the field is that if you crank up the power in this type of laser, the light will inevitably become chaotic. But we proved that this is not the case. We followed the evidence, embraced the uncertainty, and found a way to let the light organize itself into a novel solution for bioimaging,” says Sixian You, assistant professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the Research Laboratory for Electronics, and senior author of a paper on this imaging technique.
She is joined on the paper by lead author Honghao Cao, an EECS graduate student; EECS graduate students Li-Yu Yu and Kunzan Liu; postdocs Sarah Spitz, Francesca Michela Pramotton, and Federico Presutti; Zhengyu Zhang PhD ’24; Subhash Kulkarni, an assistant professor at Harvard University and the Beth Israel Deaconess Medical Center; and Roger Kamm, the Cecil and Ida Green Distinguished Professor of Biological and Mechanical Engineering at MIT. The paper appears today in Nature Methods.
A surprising finding
The discovery began with an observation that initially puzzled the researchers.
The team previously developed a precise fiber shaper, a device that enables them to carefully tune the laser light shining through a multimode optical fiber. This type of optical fiber can carry a significant amount of power.
Cao was pushing the multimode fiber toward its limit to see how much power it could take.
Typically, the more power one pumps into the laser, the more disordered and scattered the beam of light becomes due to imperfections in the fiber.
But Cao observed that, as he increased the power almost to the point where it would burn the fiber, the light did the opposite of what was expected: It collapsed into a single, needle-sharp beam.
“Disorder is intrinsic to these fibers. The light engineering you typically need to do to overcome that disorder, especially at high power, is a longstanding hassle. But with this self-organization, you can get a stable, ultrafast pencil beam without the need for custom beam-shaping components,” You says.
To replicate this phenomenon, the researchers found they had to satisfy two simple, but precise conditions.
First, the laser must enter the fiber at a perfect, zero-degree angle. This is a stricter alignment requirement than is typical for these types of fibers. Second, the power must be dialed up until the light begins to interact with the glass of the fiber itself.
“At this critical power, the nonlinearity can counter the intrinsic disorder, creating a balance that transforms the input beam into a self-organized pencil beam,” Cao explains.
Typically, researchers conduct these experiments at much lower power levels for fear of destroying the fiber, in which case they wouldn’t see this self-organization. In addition, such precise on-axis alignment isn’t typically necessary since a multimode fiber can carry so much power.
But taken together, these two techniques can generate a stable pencil-beam without any complicated light engineering methods.
“That is the charm of this method — you could do this with a normal, optical setup and without much domain expertise,” You says.
A better beam
When the researchers performed characterization experiments on this pencil beam, it proved more stable and higher-resolution than many similar beams. Other beams often suffer from “sidelobes” — blurry halos of light that can distort images.
Their beam was more pristine and tightly focused.
Building on those experiments, the researchers demonstrated the use of this pencil-beam in biomedical imaging of the human blood-brain barrier.
This barrier is a tightly packed layer of cells that protects the brain from toxins, but it also blocks many medicines. Scientists and clinicians often want to see how drugs flow inside the vasculature of the blood-brain barrier and whether they reach their targets within the brain.
But with standard optical settings, the best one can do is capture one 2D section of the vasculature at a time, and then repeat the process multiple times to generate a fuller image, You explains.
Using this new technique, the researchers created an ultrafast, high-precision pencil beam that enabled them to dynamically track how cells absorb proteins in real time.
“The pharmaceutical industry is especially interested in using human-based models to screen for drugs that effectively cross the barrier, as animal models often fail to predict what happens in humans. That this new method doesn’t require the cells to have a fluorescent tag is a game-changer. For the first time, we can now visualize the time-dependent entry of drugs into the brain and even identify the rate at which specific cell types internalize the drug,” says Kamm.
“Importantly, however, this approach is not limited to the blood-brain barrier but enables time-resolved tracking of diverse compounds and molecular targets across engineered tissue models, providing a powerful tool for biological engineering,” Spitz adds.
The team captured cellular-level 3D images that were higher quality than with other methods, and generated these images about 25 times faster.
“Usually, you have a tradeoff between image resolution and depth of focus — you can only probe so far at a time. But with our method, we can overcome this tradeoff by creating a pencil-beam with both high resolution and a large depth of focus,” You says.
In the future, the researchers want to better understand the fundamental physics of the pencil-beam and the mechanisms behind its self-organization. They also plan to apply the technique to other scenarios, such as imaging neurons in the brain, and work toward commercializing the technology.
“You’s group realized this beam that concentrates energy in time and space could be valuable for microscopy techniques that depend on the intensity of the light that illuminates the sample. They demonstrated just that and found advantages over ordinary laser beams for imaging. It will be scientifically interesting to fully understand the creation of the new pencil beams, which could find use in a variety of imaging applications,” says Frank Wise, the Samuel B. Eckert Professor of Engineering Emeritus at Cornell University, who was not involved with this work.
This work was funded, in part, by MIT startup funds, the National Science Foundation (NSF), the Silicon Valley Community Foundation, Diacomp Foundation, the Harvard Digestive Disease Core, a MathWorks Fellowship, and the Claude E. Shannon Award.
A faster way to estimate AI power consumption
Due to the explosive growth of artificial intelligence, data centers are projected to consume up to 12 percent of total U.S. electricity by 2028, according to the Lawrence Berkeley National Laboratory. Improving data center energy efficiency is one way scientists are striving to make AI more sustainable.
Toward that goal, researchers from MIT and the MIT-IBM Watson AI Lab developed a rapid prediction tool that tells data center operators how much power will be consumed by running a particular AI workload on a certain processor or AI accelerator chip.
Their method produces reliable power estimates in a few seconds, unlike traditional modeling techniques that can take hours or even days to yield results. Moreover, their prediction tool can be applied to a wide range of hardware configurations — even emerging designs that haven’t been deployed yet.
Data center operators could use these estimates to effectively allocate limited resources across multiple AI models and processors, improving energy efficiency. In addition, this tool could allow algorithm developers and model providers to assess potential energy consumption of a new model before they deploy it.
“The AI sustainability challenge is a pressing question we have to answer. Because our estimation method is fast, convenient, and provides direct feedback, we hope it makes algorithm developers and data center operators more likely to think about reducing energy consumption,” says Kyungmi Lee, an MIT postdoc and lead author of a paper on this technique.
She is joined on the paper by Zhiye Song, an electrical engineering and computer science (EECS) graduate student; Eun Kyung Lee and Xin Zhang, research managers at IBM Research and the MIT-IBM Watson AI Lab; Tamar Eilam, IBM Fellow, chief scientist of sustainable computing at IBM Research, and a member of the MIT-IBM Watson AI Lab; and senior author Anantha P. Chandrakasan, MIT provost, Vannevar Bush Professor of Electrical Engineering and Computer Science, and a member of the MIT-IBM Watson AI Lab. The research is being presented this week at the IEEE International Symposium on Performance Analysis of Systems and Software.
Expediting energy estimation
Inside a data center, thousands of powerful graphics processing units (GPUs) perform operations to train and deploy AI models. The power consumption of a particular GPU will vary based on its configuration and the workload it is handling.
Many traditional methods used to predict energy consumption involve breaking a workload into individual steps and emulating how each module inside the GPU is being utilized one step at a time. But AI workloads like model training and data preprocessing are extremely large and can take hours or even days to simulate in this manner.
“As an operator, if I want to compare different algorithms or configurations to find the most energy-efficient manner to proceed, if a single emulation is going to take days, that is going to become very impractical,” Lee says.
To speed up the prediction process, the MIT researchers sought to use less-detailed information that could be estimated faster. They found that AI workloads often have many repeatable patterns. They could use these patterns to generate the information needed for reliable but quick power estimation.
In many cases, algorithm developers write programs to run as efficiently as possible on a GPU. For instance, they use well-structured optimizations to distribute the work across parallel processing cores and move chunks of data around in the most efficient manner.
“These optimizations that software developers use create a regular structure, and that is what we are trying to leverage,” explains Lee.
The researchers developed a lightweight estimation model, called EnergAIzer, that captures the power usage pattern of a GPU from those optimizations.
An accurate assessment
But while their estimation was fast, the researchers found that it didn’t take all energy costs into account. For instance, every time a GPU runs a program, there is a fixed energy cost required for setting up and configuring that program. Then each time the GPU runs an operation on a chunk of data, an additional energy cost must be paid.
Due to fluctuations in the hardware or conflicts in accessing or moving data, a GPU might not be able to use all available bandwidth, slowing operations down and drawing more energy over time.
To include these additional costs and variances, the researchers gathered real measurements from GPUs to generate correction terms they applied to their estimation model.
“This way, we can get a fast estimation that is also very accurate,” she says.
In the end, a user can provide their workload information, like the AI model they want to run and the number and length of user inputs to process, and EnergAIzer will output an energy consumption estimation in a matter of seconds.
The user can also change the GPU configuration or adjust the operating speed to see how such design choices impact the overall power consumption.
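The additive structure described above — a fixed per-launch setup cost, a per-chunk operation cost, and a measurement-derived correction for bandwidth contention — can be illustrated with a toy sketch. The article does not publish EnergAIzer's actual model, so every function name, parameter, and number below is a hypothetical illustration, not the real implementation.

```python
# Toy sketch of an additive GPU energy model of the kind the article
# describes: fixed kernel-launch overhead plus per-chunk operation
# energy, scaled by a correction for underutilized bandwidth.
# All names and numbers are hypothetical, not EnergAIzer's actual model.

def estimate_energy_joules(
    n_launches: int,           # how many times the GPU program is launched
    n_chunks: int,             # data chunks processed per launch
    e_setup: float = 0.5,      # assumed fixed energy per launch (J)
    e_chunk: float = 0.02,     # assumed energy per chunk operation (J)
    utilization: float = 1.0,  # fraction of available bandwidth achieved
) -> float:
    # Lower utilization means operations take longer and draw more
    # energy over time; model that as a simple inverse correction.
    correction = 1.0 / max(utilization, 1e-6)
    per_launch = e_setup + n_chunks * e_chunk * correction
    return n_launches * per_launch

# Comparing two configurations takes seconds, not hours of emulation:
full_bw = estimate_energy_joules(n_launches=100, n_chunks=1000)  # → 2050.0
contended = estimate_energy_joules(
    n_launches=100, n_chunks=1000, utilization=0.8
)  # → 2550.0
```

The point of the sketch is the shape of the model, not the numbers: because it is a closed-form expression over workload statistics rather than a step-by-step emulation of the GPU, changing a configuration and re-estimating is essentially free.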
When the researchers tested EnergAIzer using real AI workload information from actual GPUs, it estimated power consumption with only about 8 percent error, accuracy comparable to that of traditional methods that can take hours to produce results.
Their method could also be used to predict the power consumption of future GPUs and emerging device configurations, as long as the hardware doesn’t change drastically in a short amount of time.
In the future, the researchers want to test EnergAIzer on the newest GPU configurations and scale the model up so it can be applied to many GPUs that are collaborating to run a workload.
“To really make an impact on sustainability, we need a tool that can provide a fast energy estimation solution across the stack, for hardware designers, data center operators, and algorithm developers, so they can all be more aware of power consumption. With this tool, we’ve taken one step toward that goal,” Lee says.
This research was funded, in part, by the MIT-IBM Watson AI Lab.
Act Now to Stop California’s Paternalistic and Privacy-Destroying Social Media Ban
California lawmakers are fast-tracking A.B. 1709—a sweeping bill that would ban anyone under 16 from using social media and force every user, regardless of age, to verify their identity before accessing social platforms.
That means that under this bill, all Californians would be required to submit highly sensitive government-issued ID or biometric information to private companies simply to participate in the modern public square. In the name of “safety,” this bill would destroy online anonymity, expose sensitive personal data to breach and abuse, and replace parental decision-making with state-mandated censorship.
A.B. 1709 has already passed out of the Assembly Privacy and Judiciary Committees with nearly unanimous support. Its next stop is the Assembly Appropriations Committee, followed by a floor vote—likely within the next week.
Tell Your Representative to OPPOSE A.B. 1709
California Is About to Set a Dangerous Precedent for Online Censorship
By banning access to social media platforms for young people under 16, California is emulating Australia, where early results show exactly what EFF and other critics predicted: overblocking by platforms, leaving youth without support and even adults barred from access; major spikes in VPN use and other workarounds ranging from clever to desperate; and smaller platforms shutting down rather than attempting costly compliance with these sweeping bills.
California should not be racing to replicate those failures. After all, when California leads—especially on tech—other states follow. There is no reason for California to lead the nation into an unconstitutional social media ban that destroys privacy and harms youth.
Tell Your Representative to OPPOSE A.B. 1709
What’s Wrong With A.B. 1709?
Just about everything.
A.B. 1709 weaponizes legitimate parental concerns by using them to hand over even more censorship and surveillance power to the government. Beneath its shiny “protect the children” rhetoric, this bill is misguided, unconstitutional, and deeply harmful to users of all ages.
A.B. 1709 Recklessly Violates Free Speech Rights
The First Amendment protects the right to speak and access information, regardless of age. But by imposing a blanket ban on social media access, A.B. 1709 would cut off lawful speech for millions of California teenagers, while also forcing all users (adults and kids alike) to verify their ages before speaking or accessing information on social media. This will immensely and unconstitutionally chill Californians’ exercise of their First Amendment rights.
These mandates ignore longstanding Supreme Court precedent protecting young people’s speech; courts have consistently found such bans unconstitutional. Banning young people entirely from social media is an extreme measure that doesn’t match the actual risks of online engagement. California simply does not have a valid interest in overriding parents’ and young people’s rights to decide for themselves how to use social media.
After all, age-verification technology is far from perfect. A.B. 1709’s reliance on imperfect age-verification technology will disproportionately silence marginalized communities—those whose IDs don’t match their presentation, those with disabilities, trans and gender non-conforming folks, and people of color—who are most likely to be wrongfully denied access by discriminatory systems.
Finally, many people will simply refuse to give up their anonymity in order to access social media. Our right to anonymity has been a cornerstone of free expression since the founding of this country, and a pillar of online safety since the dawn of the internet. This is for good reason: it allows creativity, innovation, and political thought to flourish, and is essential for those who risk retaliation for their speech or associations. A.B. 1709 threatens to destroy it.
A.B. 1709 Needlessly Jeopardizes Everyone’s Privacy
A.B. 1709’s age verification mandate also creates massive security risks by forcing users to hand over immutable biometric data and government IDs to third-party vendors. By creating centralized "honeypots" of sensitive information, the bill invites identity theft and permanent surveillance rather than actual safety. If we don’t trust tech companies with our private information now, we shouldn't pass a law that mandates we give them even more of it.
We’ve already seen repeated data breaches involving age- and identity-verification services. Yet A.B. 1709 would require millions more Californians—including the youth this bill claims to protect—to feed their most sensitive data into this growing surveillance ecosystem.
This is not the answer to online safety.
Tell Your Representative to OPPOSE A.B. 1709
A.B. 1709 Harms the Youth It Claims to Protect
While framed as a safety measure, this bill serves as a blunt instrument of censorship, severing vital lifelines for California’s young people. Besides being unconstitutional, banning young people from the internet is bad public policy. After all, social media sites are not just sources of entertainment; they provide crucial spaces for young people to explore their identities — whether by creating and sharing art, practicing religion, building community, or engaging in civic life.
Social science indicates that moderate internet use is a net positive for teens’ development, and negative outcomes are usually due to either lack of access or excessive use. Social media provides essential spaces for civic engagement, identity exploration, and community building—particularly for LGBTQ+ and marginalized youth who may lack support in their physical environments. By replacing access to political news and health resources with state-mandated isolation, A.B. 1709 ignores the calls of young people themselves who favor digital literacy and education over restrictive government control.
Young people have been loud and clear that what they want is access and education—not censorship and control. They even drafted their own digital literacy education bill, A.B. 2071, which is currently before the California legislature! Instead of cutting off vital lifelines, we should support education measures that would arm them (and the adults in their lives) with the knowledge they need to explore online spaces safely.
A.B. 1709 Is Misguided and Won’t Work
In case you needed more reasons to oppose this bill.
- A.B. 1709 Replaces Parenting With Government Control. Families know there is no one-size-fits-all solution to parenting. But A.B. 1709 imposes one anyway, overriding parental decision-making with a blanket, state-mandated ban. Parents who want to actively guide their children’s online experiences should be empowered, not relegated to the sidelines by a blunt state mandate.
- A.B. 1709 Strengthens Big Tech Instead of Challenging It. Supporters claim that this bill will rein in the major tech companies, but in fact, steep fines and costly compliance regimes disproportionately harm smaller platforms. Where large corporations can afford to absorb legal risk and shell out for expensive verification systems, smaller forums and emerging platforms cannot. We’ve already seen platforms shut down or geoblock entire states in response to age-gating laws. And when the small platforms shutter, where do all of those users—and their valuable data—go? Straight back to the biggest companies.
- A.B. 1709 Creates Expensive and Shady Bureaucracy During a Budget Crisis. California is facing a massive deficit, but A.B. 1709 would waste taxpayer dollars to fund a shadowy new "e-Safety Advisory Commission" to enforce this ban and dream up new ways to censor the internet. In addition, lawmakers in support of A.B. 1709 have already admitted that this bill is likely to follow the same path as other recent "child safety" laws that were struck down or blocked in court for First Amendment and privacy reasons. With A.B. 1709, taxpayers are being asked to hand over a blank check for millions in legal fees to defend a law that is unconstitutional on its face.
A.B. 1709 is not an inevitability, as some supporters want you to believe. But we need to act now to support our youth and their right to participate in online public life.
Your representatives could vote on A.B. 1709 as soon as next week. If you’re a Californian, email your legislators now and tell them to vote NO on A.B. 1709.
EFF Challenges Secrecy In Eastern District of Texas Patent Case
Clinic students Emily Ko and Zoe Lee at the Technology Law and Policy Clinic at the NYU School of Law were the principal authors of this post.
Courts are not private forums for business disputes. They are public institutions, and their records belong to the public. But too often, courts forget that and allow for massive over-sealing, especially in patent cases.
EFF recently discovered another case of this in the Eastern District of Texas, where key court filings about Wi-Fi technology used by billions of people every day were hidden entirely from public view. The public could not see the parties’ arguments about patent ownership, the plaintiff’s standing in court, or licensing obligations tied to standardized technologies.
EFF Seeks to Uncover Sealed Information in Wilus
The case, Wilus Institute of Standards and Technology Inc. v. HP Inc., highlights a recurring transparency problem in patent litigation.
Wilus claims to own standard essential patents (SEPs) related to Wi-Fi 6 — technology embedded in everyday devices. Wilus sued Samsung and HP for patent infringement. HP argued that Wilus failed to offer licenses on Fair, Reasonable, and Non-Discriminatory (FRAND) terms, which are required to prevent SEP holders from exploiting their position by blocking fair access to widely used technologies.
In reviewing the docket, EFF found that many filings were improperly sealed under a lenient protective order, without the specific justification required in a proper motion to seal. Because there is a presumption of public access to court filings, litigants must file a motion to seal and demonstrate compelling reasons for secrecy. This typically requires a document-by-document, line-by-line justification.
In the Eastern District of Texas, that standard is often not enforced. Instead, district judges allow litigants to hide information using boilerplate justification in a protective order without explaining why specific documents or specific parts in a document should be hidden.
In Wilus, two sets of documents stood out.
First, Samsung moved to dismiss the case, arguing Wilus may not have validly obtained the patents — raising doubts about whether it had standing to sue at all. Wilus’s opposition to that motion was filed completely under seal, with no redacted public version available. That briefing likely addresses the patent assignment agreements that underpin Wilus’s business model — information the public has an interest in, especially in cases involving non-practicing entities (NPEs) like Wilus.
Second, filings related to HP’s supplemental briefing on FRAND obligations were also sealed in full, with no redacted versions available to the public. Whether Wilus is bound by FRAND has implications far beyond this case. Companies subject to FRAND must adhere to reasonable licensing terms, while those that are not can charge significantly higher licensing fees.
In both instances, the public was shut out of arguments that bear directly on how essential technologies are licensed and controlled.
EFF Pushes For Public Access
EFF raised these concerns with Wilus’s counsel and pressed for public access to the sealed records. Wilus ultimately agreed to file redacted versions of several documents now available as Document Numbers 387, 388, and 389.
That result is progress, but it shouldn’t require outside intervention. Public versions of court filings should be the default, not something negotiated after outside pressure.
Even now, these newly filed redacted versions conceal significant portions of the parties’ arguments. The public still cannot fully see how this case about technologies that are used every day is being litigated.
Why Public Access Matters
Sealing court records is designed to be rare. To overcome the presumption of public access, litigants must show compelling reasons for secrecy. That’s because open courts are a distinguishing feature of American democracy. The public, journalists, and policymakers all have the right to observe proceedings and hold both government actors and private litigants accountable.
Some filings do contain trade secrets or commercially sensitive information. But that doesn’t mean litigants should be able to hide information without explanation. Yet the Eastern District of Texas allows litigants to bypass that requirement.
EFF confronted this very same issue in its attempt to intervene in another Eastern District of Texas case, Entropic v. Charter. The same pattern appeared again in Wilus: instead of narrowly tailored redactions supported by specific reasoning, filings were withheld wholesale.
Courts Must Enforce the Standard
Courts, not third parties, are responsible for protecting the public’s right of access.
That means enforcing the “compelling reasons” standard as a matter of course. Parties seeking to seal sensitive information should be required to justify each proposed redaction. The Eastern District of Texas’ current approach falls short. By allowing broad, unsupported sealing through expansive protective orders, it effectively treats judicial records as confidential by default.
Heavy caseloads don’t change the rule. Administrative burden cannot override constitutional and common law rights. Judicial records are presumptively public. Courts, including the Eastern District of Texas, should enforce that presumption.
Other Federal Courts Get It Right
The Eastern District of Texas is an outlier. In the Northern District of California, judges routinely reject overbroad sealing requests. As Judge Chhabria’s Civil Standing Order explains:
[M]otions to seal . . . are almost always without merit. . . . Federal courts are paid for by the public, and the public has the right to inspect court records, subject only to narrow exceptions.
The filing party must make a specific showing explaining why each document that it seeks to seal may justifiably be sealed . . . Generic and vague references to “competitive harm” are almost always insufficient justification for sealing.
This approach reflects the law: sealing must be narrowly tailored and specifically justified.
Court Transparency is Fundamental
At first glance, secrecy in patent litigation may not seem alarming. But it signals a broader erosion of transparency. The widespread use of expansive protective orders in the Eastern District of Texas is a practice that risks spreading if courts do not enforce the law.
These practices allow private parties to obscure information about disputes involving technologies that shape modern life. That undermines a core principle of a free society: transparency regarding the actions of powerful actors.
Courts are not private forums for business disputes. They are public institutions, and their records belong to the public.
So long as these practices continue, EFF will keep advocating for transparency and working to vindicate the public’s right to access court records.
Friday Squid Blogging: How Squid Survived Extinction Events
Science news:
Scientists have finally cracked a long-standing mystery about squid and cuttlefish evolution by analyzing newly sequenced genomes alongside global datasets. The research reveals that these bizarre, intelligent creatures likely originated deep in the ocean over 100 million years ago, surviving mass extinction events by retreating into oxygen-rich deep-sea refuges. For millions of years, their evolution barely changed—until a dramatic post-extinction boom sparked rapid diversification as they moved into new shallow-water habitats. ...
California Coastal Community Must Reject CBP's AI-Powered Surveillance Tower
Customs and Border Protection (CBP) is seeking permission from the California city of San Clemente to install an Anduril Industries surveillance tower on a cliff that would allow for constant monitoring of entire coastal neighborhoods.
The proposed tower is Anduril's Sentry, part of the Autonomous Surveillance Tower (AST) program. While CBP says it will primarily monitor the coastline for boats carrying migrants, it will actually be installed 1.5 miles inland, overlooking the bulk of the 62,000-resident city. By CBP's own public statement, the system–which combines video, radar, and computer vision–is "constantly scanning" for movement and identifying and tracking objects an AI algorithm decides are of interest. Depending on the model–the photos provided by CBP indicate it is a long range maritime model–the camera could see as far as nine miles, which would cover the entire city and potentially see as far as neighboring Dana Point.
"The AST utilize advanced computer vision algorithms to autonomously detect, identify, and track items of interest (IoI) as they transit through the towers field of view," CBP writes in a privacy threshold analysis. "The system can determine if an IoI is a human, animal, or vehicle without operator intervention. The system then generates and transmits an alert to operators with the location and images of the IoI for adjudication and response."
On April 28, local residents and Oakland Privacy, a privacy- and anti-surveillance-focused citizens’ coalition, are holding a town hall to inform the public about the dangers of this technology. We urge people to attend to better understand what's at stake.
"The planned deployment of an Anduril tower along a heavily used Orange County coastline 75 miles from the border demonstrates that the militarization of the border region is rapidly moving northwards and across the entire state," writes Oakland Privacy.
City officials raised concerns about resident privacy and proposed that a lease agreement include a prohibition on surveilling neighborhoods. CBP rejected that proposal, instead saying that they would configure the tower to "avoid" scanning residential neighborhoods, but the system would remain capable of tracking human beings in residential areas. According to the staff report:
In response to privacy concerns, CBP has stated the system would be configured to avoid scanning residential areas that fall into the scan viewshed, focusing the system on the marine environment. CBP has maintained the purpose of the system is specifically maritime surveillance, and the system would be singularly focused on offshore activities. However, there may be an instance in which there is an active smuggling event, detected by the system at sea, in which the subsequent smuggling event traverses through the residential neighborhoods. In such a case, the system may continue to track and monitor. To restrict this functionality would be contrary to the spirit and intent of the deployment. Therefore, they cannot make such a contractual obligation.
The Anduril towers retain a variety of data, including imagery.
The proposed Anduril surveillance tower. Source: City of San Clemente
“The AST capture and retain imagery which occurs in plan view of the tower sites and is stored as an individual event with a unique event identified allowing replay of the event for further investigation or dismissal based on activity occurring,” according to the privacy threshold analysis.
The document indicates a potential 30-day retention period for imagery, but then contradicts itself to say that data will be held indefinitely to train algorithms: "AST will also be maintaining learning training data, these records should not be deleted." This means that taxpayers would be paying for the privilege of having their data turned into fuel for Anduril's product.
In 2020, CBP said it would work with the National Archives and Records Administration (NARA) to develop a retention schedule for training data (i.e., a timeline for deletion). However, when EFF filed a Freedom of Information Act (FOIA) request with NARA, the agency said there were no records of these discussions. Likewise, CBP has not provided records in response to the FOIA request EFF filed with the agency seeking the same records.
Anduril Maritime Sentry in San Diego, where the border fence meets the ocean.
This would not be the first CBP tower placed along the coastline in California. EFF identified one in Del Mar, about 30 miles from the border, and another in San Diego County where the border fence meets the Pacific Ocean. CBP has also applied to place towers–although not necessarily the Anduril model–in or near several other coastal locations: Gaviota State Park, Refugio State Park, Vandenberg Air Force Base, Piedras Blancas and Point Vicente. The California coastline isn’t the only coastline dotted with surveillance towers. The Migrant Rights Network has also documented numerous Anduril towers along the southeast coast of England. Where the San Clemente tower would differ is that there is a substantial population between the tower and the beach, and because it's a 360-degree system, it can watch neighborhoods even further from the coast.
However, this won't be the first time an Anduril tower has been placed next to a community. EFF has documented numerous Anduril towers in public parks along the Rio Grande in Laredo and Roma, Texas. In Mission, Texas, an Anduril tower was placed outside an RV park: the tower could not even see the border without capturing data from the community. Because AI can swivel the cameras 360 degrees, two churches were within the "viewshed" of that tower.
Click here to view EFF's ongoing map of CBP surveillance towers.
Many border surveillance towers are placed on city or county property, requiring a lease to be approved by the local governing body–as is the case with San Clemente. In 2024, EFF and Imperial Valley Equity and Justice organized an effort to fight the renewal of a Border Patrol lease for a tower next to a public park. The coalition lost narrowly after a recall election ousted two officials who were critical of the lease.
CBP is rapidly increasing the number of towers at the border and beyond, recently announcing the potential to install 1,500 more towers in the next few years–more than tripling what we've documented so far–at a cost of more than $400 million to the public for maintenance alone. This is despite more than 20 years of government reports that have documented how tower-based systems are ineffective and wasteful.
It's time to fight back.
The power of “and” in energy and climate entrepreneurship
A supportive ecosystem is a cornerstone in entrepreneurship, according to Georgina Campbell Flatter, the CEO of Greentown Labs. “If we really want to be driving the most transformational technologies to scale at a speed in which we need them to happen for our planet, we need to be thinking about the ecosystem that we build around it.” During a seminar titled MITEI Presents: Advancing the Energy Transition, Campbell Flatter spoke of “the power of ‘and’” — the importance of multiple people, companies, and solutions collaborating to advance energy and climate solutions — and how that underpins Greentown Labs’ mission. “Innovation is a team sport. No one can go alone,” she said.
Creating these ecosystems is paramount at Greentown Labs, the world’s largest energy and climate incubator. “Through the lens of Greentown, we think about the power of ‘and’ through how we can work together better in the ecosystems where we have physical presence, but also how we can connect better across ecosystems,” said Campbell Flatter. The concept of "and" also exists in energy and climate, innovation and deployment, science and entrepreneurship, and competitiveness and collaboration, she said. Campbell Flatter feels this expansive lens is especially important in our increasingly polarized world.
At its core, Greentown Labs is a place to cluster innovators together. “We have to be very intentional about how we support and accelerate and help those entrepreneurs,” said Campbell Flatter. There is a science behind this “innovation infrastructure” that involves not only bringing creative minds together, but also removing friction so startups can move faster. Most of this friction exists in the gaps between innovation and deployment, often referred to as the “valleys of death.” The first valley of death lies between idea and prototype; the second lies between prototype and the first commercial pilot. Greentown often asks where its ecosystems can be most helpful, which has led it to focus on helping entrepreneurs bridge that second valley, according to Campbell Flatter.
“Entrepreneurs at the stage where they can’t quite afford space on their own, and maybe it takes six to 12 months to figure out the permitting anyway, come to Greentown,” said Campbell Flatter. “We’re actively thinking about the customers, the capital, the infrastructure needs that you have in order for you to move your way through this second valley.”
Part of Greentown’s decision to focus on the second valley came from MIT’s unique ability to bring innovators across the first valley of death — an ability that Campbell Flatter deemed “truly world class.” Referencing startups born from universities like MIT and Harvard, Campbell Flatter said, “They're far more likely to be successful and scale because of the ecosystem they’re surrounded in. You’re getting feedback constantly from your peers, you’re getting support and mentorship — that all matters for the ecosystem.”
MIT also helps build this ecosystem by attracting innovators to the area. “Thirty percent of our entrepreneurs at Greentown are coming from out of state and moving to Massachusetts,” she said. “One, because Greentown’s a great home for them, but two, because of MIT and the talent that they can source from the ecosystem, which they are well aware of, and the knowledge, IP [intellectual property], and credibility.”
Not only is the symbiotic relationship between MIT and Greentown a powerful entrepreneurial ecosystem, but MIT has also been instrumental in Campbell Flatter’s own journey toward her current body of work. After completing her master’s degree in materials science at Oxford University, she graduated from the MIT Technology and Policy Program. Campbell Flatter credited her time as a graduate student at MIT with giving her an appreciation for how hard it is to commercialize technology, an understanding of the importance of ecosystems, and an early sense of how energy and climate would define this century. “I think it is really important to recognize the intentionality behind MIT’s commitment to energy and climate,” said Campbell Flatter.
While at MIT, she ran the third iteration of the MIT Clean Energy Prize, advocating for the inclusion of a non-renewables chapter of the prize because she saw “how important it was to continue to decarbonize and bring efficiencies to the traditional energy sectors while we work on all these amazing new energy initiatives.” Greentown has put this into practice through its wide network of industry partners.
“I guess this early lesson I took from MIT was this idea that we must embrace the power of ‘and,’” said Campbell Flatter. “It slows innovation down when we don’t embrace and work together.”
This speaker series highlights energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. Visit the MIT Energy Initiative's events page for more information on this and additional events.
