Feed aggregator
Rep. Finke Was Right: Age-Gating Isn’t About Kids, It’s About Control
When Rep. Leigh Finke spoke last month before the Minnesota House Commerce Finance and Policy Committee to testify against HF1434, a sweeping proposal to age-gate the internet, she began with something disarming: agreement.
“I want to support the basic part of this,” she said, the shared goal of protecting young people online. Because that is not controversial: everyone wants kids to be safe. But HF1434, Minnesota’s proposed age-verification bill, simply won’t “protect children.” It mandates that websites hosting speech that is protected by the First Amendment for both adults and young people verify users’ identities, often through government IDs or biometric data. As we’ve discussed before, the bill’s definition of speech that lawmakers deem “harmful to minors” is notoriously broad—broad enough to sweep in lawful, non-pornographic speech about sexual orientation, sexual health, and gender identity.
Rep. Finke, an openly transgender lawmaker, next raised a point that her critics have since tried to distort: age-verification laws like the Minnesota bill are already being used to block young LGBTQ+ people from exercising their First Amendment rights to access information that may be educational, affirming, or life-saving. Referencing the Supreme Court case Free Speech Coalition v. Paxton, she noted that state attorneys general have been “almost jubilant” about the ability to use these laws to restrict queer youth from accessing content. “We know that ‘prurient interest’ could be for many people, the very existence of transgender kids,” she added, referring to the malleable legal standard that would govern what content must be age-gated under the law.
But despite years’ worth of evidence to back her up, Finke has faced a wave of attacks from countless media outlets and religious advocacy groups for her statements. Rep. Finke’s testimony was repeatedly mischaracterized as not having young people’s best interests in mind, when really she was accurately describing the lived reality of LGBTQ+ youth and advocating in support of their access to vital resources and community.
In fact, this backlash proves her point. Beyond attempting to silence queer voices and to scare other legislators from speaking up against these laws, it reveals how age-verification mandates are part of a larger effort to give the government much greater control of what young people are allowed to say, read, or see online.
Rep. Finke was also right that these proposals are bad policy—they prevent all young people from finding community online—and that they violate young people’s First Amendment rights.
Why FSC v. Paxton Matters
Rep. Finke was similarly right to bring up the Paxton case, because beyond the troubling Supreme Court precedent it produced, Texas’s age-verification law also drew eager support from an extraordinary number of amicus briefs from anti-LGBTQ organizations (some even designated hate groups by the Southern Poverty Law Center).
In FSC v. Paxton, the Supreme Court gave Texas the green light to require age verification for sites where at least one-third of the content is sexual material deemed “harmful to minors,” which generally means explicit sexual content. This ruling, premised on the view that young people do not have a First Amendment right to access explicit sexual content, allows states to enact onerous age-verification rules that will block adults from accessing lawful speech, curtail their ability to be anonymous, and jeopardize their data security and privacy. These are real and immense burdens on adults, and the Court was wrong to ignore them in upholding Texas’ law.
But laws enacted by other states and Minnesota HF 1434 go further than the Texas statute. Rather than restricting minors from accessing sexual content, these proposals expand what the state deems “harmful to minors” to include any speech that may reference sex, sexuality, gender, or reproductive health. But young people have a First Amendment right both to speak on those topics and to access information online about them.
We will continue to fight against all online age restrictions, but bills like Minnesota’s HF 1434, which seek to restrict minors from accessing speech about their bodies, sexuality, and other truthful information, are especially pernicious.
EFF and Rep. Finke are on the same page here: age verification mandates create immense harm to our First Amendment rights, our right to privacy, as well as our online safety and security. These proposals also fully ignore the reality that LGBTQ young people often rely on the internet for information they cannot get elsewhere.
But the Paxton case, and the coalition behind it, illustrates exactly how these laws can be weaponized. These groups weren’t there just to stand up for young people’s safety online—they were there to argue that the state has a compelling interest in shielding minors from material that, in practice, often includes LGBTQ content. Ultimately, these groups would like to age-gate not just porn sites, but also any content that might discuss sex, sexuality, gender, reproductive health, abortion, and more.
Using Children as Props to Enact Censorship
The coalition of organizations that filed amicus briefs in support of Texas’s age verification law tells us everything we need to know about the true intentions behind legislating access to information online: censorship, surveillance, and control. After all, if the race to age-gate the internet was purely about child safety, we would expect its strongest supporters to be child-development experts or privacy advocates. Instead, the loudest advocates are organizations dedicated to policing sexuality, attacking LGBTQ+ folks and reproductive rights, and censoring anything that doesn’t fit within their worldview.
Below are some of the harmful agendas that the organizations supporting the age-gating movement are advancing, and how their arguments echo in the attacks on Rep. Finke today:
Policing sexuality, bodily autonomy, and reproductive rights
Many of the organizations backing age-verification laws have spent decades trying to restrict access to accurate sexual health information and reproductive care.
Groups like Exodus Cry, for example, which filed a brief in support of the Texas AG in the SCOTUS case, frame pornography as part of a broader moral crisis. Founded by a “Christian dominionist” activist, Exodus Cry advocates for the criminalization of porn and sex work, and promotes a worldview that defines “sexual immorality” as any sexual activity outside marriage between one man and one woman. Its leadership describes the internet as a battleground in a “pornified world” that has to be reclaimed.
Another brief in support of the age-verification law was filed by a group of organizations including the Public Advocate of the United States (an SPLC-designated hate group) and America’s Future. America’s Future is an organization that was formed to “revitalize the role of faith in our society” and fiercely advocates in favor of trans sports bans.
These groups see age-verification laws as attractive solutions because they create a legal mechanism to wall off large swaths of content that merely mentions sex from not only young people but millions of adults, too.
Attacking LGBTQ+ Rights
Several of the most prominent legal advocates behind age-verification laws have also led the crusade against LGBTQ+ equality. The internet that these groups envision is one that heavily censors critical and even life-saving LGBTQ+ resources, community, and information.
The Alliance Defending Freedom (ADF), for instance (which is another SPLC-designated hate group), built its reputation on litigation aimed at rolling back LGBTQ+ protections—including allowing businesses to refuse service to same-sex couples, criminalizing same-sex relationships abroad, and restricting transgender rights.
Then there are other groups, like Them Before Us and the Women’s Liberation Front, both of which submitted amicus briefs in support of the Texas Attorney General and are devoted to upending LGBTQ+ rights in the United States. Them Before Us says it’s “committed to putting the rights and well-being of children ahead of the desires and agendas of adults.” But it’s also running a campaign to “End Obergefell,” the 2015 Supreme Court case that recognized the right to same-sex marriage, and has been on the cutting edge of transphobic campaigning and pseudoscientific fearmongering about IVF and surrogacy. The Women’s Liberation Front, for its part, has a long track record of supporting transphobic policies such as bathroom bills, bans on gender-affirming healthcare, and efforts to define “sex” strictly as the biological sex assigned at birth.
Through cases like FSC v. Paxton, groups like these three continue to advance a vision of society that creates government mandates to enforce their worldviews over personal freedom, while hiding behind a shroud of concern for children’s safety. But when they also describe LGBTQ+ people as “evil” threats to children and run countless campaigns against their human rights, they are being clear about their intentions. This is why we continue to say: the impact of age verification measures goes beyond porn sites.
Expanding censorship beyond the internet into real-life public spaces
As we’ve said for years now, the push to age-gate the internet is part of a broader campaign to control what information people can access in public life both on- and offline. Many of the same organizations advancing these proposals claim to be acting on behalf of young people, but their arguments consistently use children as props to justify giving the government more control over speech and information.
Many of the organizations advocating for online age verification have also supported book bans, attacks on DEI policies and education, and efforts to remove LGBTQ+ materials from schools and libraries. Two of the organizations that supported the Texas Attorney General, Citizens Defending Freedom and the Manhattan Institute, have led campaigns around the country to “abolish DEI” and ban classic books like “The Bluest Eye” by Toni Morrison from school libraries. These efforts are no different from the efforts to restrict access to the internet—they reflect a broader strategy to restrict access to ideas or information that these groups find objectionable. And they discourage free thought, inquiry, and the ability for people to decide how to live their lives.
These campaigns rely on the same core argument: that certain ideas are inherently dangerous to young people and must therefore be restricted. But that framing misrepresents an important reality: if lawmakers genuinely want to address harms that young people experience online, they should start by listening to young people themselves. When EFF spoke directly with young people about their online experiences, they overwhelmingly rejected restrictions on their access to the internet and came back with nuanced and diverse perspectives. Once that principle—that certain ideas are inherently dangerous—is accepted, the internet, once a symbol of free expression, connection, creativity, and innovation, becomes the next logical target.
This also wouldn’t be the first time a vulnerable group is used as a prop to advance internet censorship laws. We saw this playbook during the debate over FOSTA/SESTA, where many of the same advocates claimed to speak for trafficking victims/survivors and sex workers, while pushing legislation that ultimately censored online speech and harmed the very communities it invoked. It’s a familiar pattern: invoke a vulnerable group, frame certain speech as a threat, and use that as a way to expand government control over the flow of information. And as we said in the fight against FOSTA: if lawmakers are serious about addressing harms to particular communities, they should start by talking to those communities. That means lawmakers seeking to address online harms to young people should be talking to young people, not to groups who merely claim to represent their interests.
Rep. Finke Was Not Radical. She Was Right.
The Paxton case, and the coalition backing age-verification laws in the U.S., shows us exactly why the messaging around these laws draws superficial support from parents and lawmakers. But we’ve heard the quiet part said out loud before. Marsha Blackburn, a sponsor of the federal Kids Online Safety Act, has said that her goal with the legislation was to address what she called “the transgender” in society. When lawmakers and advocacy groups frame queer existence itself as a threat to young people, age-verification laws become ideological enforcement instead of regulatory policy.
In defending free speech, privacy, and the right of young people to access truthful information about themselves, Rep. Leigh Finke was not radical—she was right. She was warning that broad, ideologically driven laws will be used to erase, silence, and isolate young people under the banner of child protection.
What’s at stake in the fight against age verification is not just a single bill in a single state, or even multiple states, for that matter. It’s about whether “protecting children” becomes a legal pretext for embedding government control over the internet to enforce specific moral and religious judgments—judgments that deny marginalized people access to speech, community, history, and truth—into law.
And more people in public office need the courage of Rep. Finke to call this out.
Why high oil prices may outlast Trump’s Iran war
Colorado utility warns it may postpone coal plant retirements
Q&A: The shipping official at the center of Trump’s assault on a carbon tax
PacifiCorp facing ‘junk’ credit rating after large jury awards
Lawmakers spar over FEMA funding as shutdown drags on
Florida bill banning net-zero policies, limits on greenhouse gases headed for DeSantis
California’s cap-and-trade proposal gets blowback from in-state Democrats
Some State Farm customers could see refunds; Calif. homeowner rate hikes stay put
EU climate advisers say eat less meat and tax farm emissions
King penguins see some global warming benefits. But that could change.
3 Questions: Fortifying our planetary defenses
When people think of asteroids, they tend to picture rare, civilization-ending impacts like those depicted in movies such as “Armageddon.” In reality, the asteroids most likely to affect modern society are much smaller. While kilometer-scale impacts occur only every tens of millions of years, decameter-scale (building-sized) objects strike Earth far more frequently: roughly every couple of decades. As astronomers develop new ways to detect and track these smaller asteroids, planetary defense becomes increasingly relevant for protecting the space-based infrastructure that underpins modern life, from GPS navigation to global communications.
The good news for us earthlings is that a team of MIT researchers is on this space-case. Associate Professor Julien de Wit, Research Scientist Artem Burdanov, and their colleagues recently developed a new asteroid-detection method that could be used to track potential asteroid impactors and help protect our planet. They have now applied this new technique to the James Webb Space Telescope (JWST), demonstrating that JWST can be used to detect and characterize decameter-scale asteroids all the way out to the main belt, a crucial step in fortifying our planetary safety and security. De Wit and his colleagues, together with Andrew Rivkin PhD ’91, recently co-led new observations of an asteroid called 2024 YR4, which made headlines last year when it was first discovered. They were able to determine that the asteroid will not collide with the Moon, a collision that could have affected Earth’s critical satellite systems.
De Wit, Burdanov, Assistant Professor Richard Teague, and Research Scientist Saverio Cambioni spoke to MIT News about the importance of planetary defense and how MIT astronomers are helping to lead the charge to ensure our planet’s safety.
Q: What is planetary defense and how is the field changing?
Burdanov: Planetary defense is a field of science and engineering that’s focused on preventing asteroids and comets from hitting the Earth. While traditionally the field has been focused on much larger asteroids, thanks to new observational capabilities the field is growing to include monitoring much smaller asteroids that could also have an impact.
De Wit: When people think about asteroids they tend to think of impacts along the lines of these rare, civilization-ending “dinosaur killer” asteroids — objects that are scientifically fascinating but, happily, statistically unlikely on human timescales. But as soon as you move to smaller asteroids, there are so many of them that you’re looking at impacts happening every few decades or less. That becomes much more relevant on human timescales.
Now that our society has become increasingly reliant on space-based infrastructure for communication, navigation technologies like GPS and satellite-based security systems, we can be affected by different populations of smaller asteroids. These smaller asteroids will probably lead to zero direct human casualties but would have very different consequences on our space infrastructure. At the same time, because they are smaller, they require different technologies to monitor and understand them, both for the detection and for the characterization. At MIT, we are working to redefine planetary defense in a way that is far more pertinent, personable, and practical — focusing on these much smaller asteroids that could have real consequences. In other words, planetary defense is no longer just about avoiding extinction-level events. It is about protecting the systems we depend on in the near term.
Q: Why are observations with telescopes like the James Webb Space Telescope (JWST) so important to keeping our planet safe?
Teague: We’re entering a time now where we have these large-scale sky surveys that are going to be producing an incredible amount of data. We’re trying to develop the framework here at MIT where we can sift through that data as quickly and efficiently as possible, and then use the resources that we have available, such as the optical and radio observatories that we run like the MIT Haystack and Wallace Observatories, to follow up on those potential threats as quickly as possible and determine whether they could be problematic.
We’ve been doing trial observations to try and piece together how fast we can do this. The challenging thing is that the smaller objects that we’ve been talking about, the decameter ones, they’re really hard to detect from the ground. They’re just so small, and so that’s why we really need to use space-based facilities like JWST to help keep our planet safe. JWST is just incomparable, really, for detecting these very small, faint objects. A lot of our work at the moment at MIT is trying to understand how we build that entire pipeline — from detection to risk assessment to mitigation — under one roof to make it as efficient as possible. And I think this is a really MIT-type of problem to solve. There’s not many places that have the same range of experts in astronomy and engineering and technology to really tackle this properly. It’s really exciting that MIT hosts all these sorts of experts that we’re bringing together to solve this problem and keep our planet safer.
Cambioni: There is going to be what I like to call an asteroid revolution coming up because in addition to JWST’s observational capabilities, there is a new observatory in Chile called the Vera Rubin Observatory that could increase the number of known small objects in space by a factor of 10. The most important thing to keep in mind, though, is that this observatory will detect the objects but may lose a lot of them. This is where a part of our work comes in: to follow that object and map it as soon as possible. Additionally, Vera Rubin only looks at the reflected light, and it doesn’t get a precise estimate of an asteroid’s size. This gap between detection and characterization is a fundamental problem of asteroid science, between how many objects we discover and how fast we can characterize them. At MIT, we are using our in-house capabilities to help characterize these objects. That includes the MIT Wallace Observatory and the MIT Haystack Observatory.
Q: What role can MIT play in this new era of planetary defense?
De Wit: The reality is that, given the occurrence rate of these smaller asteroids and the new observational capabilities now coming online — from the Rubin Observatory to space-based facilities like JWST — we expect that within the next decade we will identify a handful of decameter-scale objects whose trajectories place them on course to impact the Earth-Moon system within this century. At that point, society will face a very practical question: whether, and how, to respond. Because these are much smaller objects than the dinosaur-killing asteroids, the types of mitigation strategies that we may envision are different. This is also where I think MIT might have an important role to play in the development, design, and potentially even construction of cost-effective, rapid-response asteroid-mitigation strategies. To help organize that effort, we have begun bringing together researchers across the Institute through the Planetary Defense at MIT project, working closely with colleagues on the engineering side.
Teague: What I’m particularly excited about is the way we’ve managed to engage students at MIT in this research as well. We’ve really focused on the impactful research and the way we’re bridging departments and labs within MIT, and this has been a fantastic way to engage students with practical astronomy and research. Saverio has run an IAP [Independent Activities Period] course, and we’re also running a student observing lab with the Wallace Observatory, where we hire a cohort of students every semester, and they’re taught how to use these observatories remotely. They take the data, do the analysis, and this semester, we've got on the order of 10 undergraduate students that are going to be working throughout the semester to take these observations and help us build this observation pipeline.
It's great that here at MIT we’re not only pushing the forefront of the research, but we’re also training the next generation of astronomers that is going to come in and carry this project through and into the future.
2026 MacVicar Faculty Fellows named
Two outstanding MIT educators have been named MacVicar Faculty Fellows: professor of mechanical engineering Amos Winter and professor of electrical engineering and computer science Nickolai Zeldovich.
For more than 30 years, the MacVicar Faculty Fellows Program has recognized exemplary and sustained contributions to undergraduate education at MIT. The program is named in honor of Margaret MacVicar, MIT’s first dean for undergraduate education and founder of the Undergraduate Research Opportunities Program (UROP). Fellows are chosen through an annual and highly competitive nomination process. The Registrar’s Office coordinates and administers the award on behalf of the Division of Graduate and Undergraduate Education. Nominations are reviewed by an advisory committee, and the provost selects the fellows.
Amos Winter: Bringing excitement to the classroom
Amos Winter is the Germeshausen Professor in the Department of Mechanical Engineering (MechE). He joined the faculty in 2012 and is best known for teaching class 2.007 (Design and Manufacturing I).
A hallmark of Winter’s pedagogy is the way he connects technical learning and core engineering science with real-world impacts. His approach keeps students actively engaged and encourages critical thinking while developing their competence and confidence as design engineers. Current graduate student Ariel Mobius ’24 writes, “Professor Winter is a transformative educator. He successfully blends rigorous technical instruction with lessons on problem scoping and hands-on learning and backs it all up with personalized mentorship. He is a committed advocate for his students and has fundamentally shaped my path as a mechanical engineer.”
Especially notable is Winter’s energetic style and use of interactive materials and demonstrations to make fundamental topics tangible. “He wheels in a large steamer trunk filled with demos he has built or collected to illustrate the day’s topic,” writes Class of 1948 Career Development Professor and assistant professor of mechanical engineering Kaitlyn Becker. “Some demos are enduring classics and others newly designed each year.” Through his “Gearhead Moment of Zen,” Winter shares an astonishing car stunt and explains its mechanics using course material. “The theatrics stay in students’ minds,” says Becker, highlighting how Winter’s dramatic examples reinforce learning.
These techniques, combined with a supportive culture, allowed Winter to transform 2.007 from a core class and first subject in engineering design into a celebration of student effort and learning. Throughout the term, students learn how to design and build objects, culminating in a robot competition in which their creations tackle themed challenges on a life-size game board. In the past, fewer than half the students were able to compete; today, boosted by Winter’s mentorship and enthusiasm, nearly 97 percent finish a competition-ready robot.
Ralph E. and Eloise F. Cross Professor of Mechanical Engineering David Hardt writes, “Thanks to Amos, this subject has become transformative for many MechE undergraduates.” Becker concurs: “He is the heart and captain of the 2.007 ‘cheer squad,’ cultivating a caring and motivated teaching team.”
Current graduate student Aidan Salazar ’25 notes, “His teaching philosophy is grounded in empowerment: he encourages students to take risks when designing while giving them the confidence and support needed to do so with thoughtful engineering analysis.”
Winter is also deeply invested in students’ growth outside the classroom. He serves as faculty supervisor for MIT’s Formula SAE (Society of Automotive Engineers) and Solar Car teams and guides related UROP projects. In fall 2025 alone, he advised nearly 50 UROP students from the teams, demonstrating his commitment to experiential learning and ability to mentor students at scale.
Salazar continues: “He has offered extraordinary contributions in helping MIT undergraduates embody the Institute’s ‘mens-et-manus’ [‘mind-and-hand’] motto, and I am grateful to be one of the individuals shaped by his teaching.”
“I have always looked up to my colleagues who are MacVicar Fellows as the best educators at the Institute,” writes Winter. “What makes this acknowledgement even more special to me is by earning it from teaching 2.007, which I often cite as one of the best parts of my job. The class is where most mechanical engineering undergraduates gain their first real engineering experience by physically realizing a machine of their own conception. It has been extremely gratifying to watch a generation of students translate their knowledge of engineering and design from the class into their careers … I am honored to have played a role in their intellectual growth and done so meaningfully enough to be recognized as a MacVicar Fellow.”
Nickolai Zeldovich: Inspiring independent thinkers and future teachers
Nickolai Zeldovich is the Joan and Irwin M. (1957) Jacobs Professor of Electrical Engineering and Computer Science (EECS). Student testimonials highlight his unique ability to activate their problem-solving skills, cultivate their intellectual curiosity, and infuse learning with joy.
Katarina Cheng ’25 writes, “From my first day of lecture in the course, I was immediately drawn in by Professor Zeldovich’s joy and enthusiasm for every facet of security and its power,” and Rotem Hemo ’17, ’18 says that Zeldovich “empowers students to find solutions themselves.”
Yael Tauman Kalai, the Ellen Swallow Richards (1873) Professor and professor of EECS, concurs. She notes that his lectures — with back-and-forth discussion and probing questions — encourage independent thinking and ensure that “everyone feels a little smarter at the end. It is not surprising that students love him.”
Zeldovich’s affinity for problem-solving translates to his curricular work as well. When he arrived at MIT in 2008, Course 6 offered classes in theoretical and applied cryptography, but lacked a dedicated systems security subject. Recognizing this as a significant gap, Zeldovich took it upon himself to create class 6.566/6.858 (Computer Systems Security) in 2009. Since then, the subject has become a central part of the curriculum, but sustained interest from undergraduates revealed another need, and in 2021 he partnered with colleagues to create a dedicated introductory course: 6.1600 (Foundations of Computer Security).
Edwin Sibley Webster Professor of EECS Srini Devadas writes: “What our curriculum was sorely in need of was a systems security class, and Nickolai immediately and single-handedly created [it],” and has “taught this class to rave reviews ever since.”
The impact of Zeldovich’s thoughtful, inquiry-driven approach to pedagogy extends beyond the walls of his classroom, inspiring future educators, teaching assistants (TAs), and even his faculty colleagues at MIT.
Henry Corrigan-Gibbs, the Douglas Ross (1954) Career Development Professor of Software Technology and associate professor of computer science, writes that Zeldovich has “proven himself to be a dedicated teacher of teachers … One of the things that makes teaching with Nickolai so much fun is that he shares his passion with the undergraduates and MEng students who join the course staff as TAs.”
“[He] encourages the TAs to contribute their own creative ideas to the course,” continues Corrigan-Gibbs. “It should not be a surprise then that 100% of the TAs that we have had in our class have signed up to teach with Nickolai again.”
“Due, in no small part, to how I saw Nickolai lead his classroom, I was inspired to become an educator myself,” writes MIT alumna Anna Arpaci-Dusseau ’23, SM ’24. “I saw that the role of an instructor is not only to teach, but to innovate by thinking of creative projects, and to connect by listening to students’ concerns. As I go forward in my career, I am grateful to have such a wonderful example of an educator to look up to.”
Kalai adds, “I have learned a great deal from the two times that I have ‘taken’ (part of) the class from Nickolai. His extensive knowledge and experience are evident in every lecture. There is so much variety to Nickolai’s teaching.”
Nickolai Zeldovich is the recipient of numerous awards including the EECS Spira Teaching Award (2013), the Edgerton Faculty Achievement Award (2014), the EECS Faculty Research Innovation Fellowship (2018), and the EECS Jamieson Award for Excellence in Teaching (2024).
On receiving this award, Zeldovich says, “MIT has a culture of strong undergraduate education, so being selected as a MacVicar Fellow was truly an honor. It’s a joy to teach smart students about computer systems, and the tradition of co-teaching classes in the EECS department helped me improve as a teacher. Most of all, I look forward to continuing to teach MIT’s students!”
Learn more about the MacVicar Faculty Fellows Program on the Registrar’s Office website.
Certbot and Let's Encrypt Now Support IP Address Certificates
(Note: This post is also cross-posted on the Let's Encrypt blog)
As announced earlier this year, Let's Encrypt now issues IP address and six-day certificates to the general public. The Certbot team here at the Electronic Frontier Foundation has been working on two improvements to support these features: the --preferred-profile flag released last year in Certbot 4.0, and the --ip-address flag, new in Certbot 5.3. Together, these improvements let you use Certbot to get those IP address certificates!
If you want to try getting an IP address certificate using Certbot, install version 5.4 or higher (for webroot support with IP addresses), and run this command:
sudo certbot certonly --staging \
--preferred-profile shortlived \
--webroot \
--webroot-path <filesystem path to webserver root> \
--ip-address <your ip address>
Two things of note:
- This will request a non-trusted certificate from the Let's Encrypt staging server. Once you've got things working the way you want, run without the --staging flag to get a publicly trusted certificate.
- This requests a certificate with Let's Encrypt's "shortlived" profile, which will be good for 6 days. This is a Let's Encrypt requirement for IP address certificates.
As of right now, Certbot only supports getting IP address certificates, not yet installing them in your web server. There's work to come on that front. In the meantime, edit your webserver configuration to load the newly issued certificate from /etc/letsencrypt/live/<ip address>/fullchain.pem and /etc/letsencrypt/live/<ip address>/privkey.pem.
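For example, if your webserver is nginx, the relevant directives might look like the sketch below. This is illustrative only: the listen and server_name values are assumptions you should adapt to your own setup, and you should substitute your actual IP address into the certificate paths.

```nginx
server {
    # Clients will connect to an IP address certificate by IP, not hostname
    listen 443 ssl;

    # Point nginx at the certificate files Certbot wrote to disk
    ssl_certificate     /etc/letsencrypt/live/<ip address>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<ip address>/privkey.pem;
}
```

After editing the configuration, reload your webserver so it picks up the new certificate.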
The command line above uses Certbot's "webroot" mode, which places a challenge response file in a location where your already-running webserver can serve it. This is nice since you don't have to temporarily take down your server.
There are two other plugins that support IP address certificates today: --manual and --standalone. The manual plugin is like webroot, except Certbot pauses while you place the challenge response file manually (or runs a user-provided hook to place the file). The standalone plugin runs a simple web server that serves a challenge response. It has the advantage of being very easy to configure, but has the disadvantage that any running webserver on port 80 has to be temporarily taken down so Certbot can listen on that port. The nginx and apache plugins don't yet support IP addresses.
You should also be sure that Certbot is set up for automatic renewal. Most installation methods for Certbot set up automatic renewal for you. However, since the webserver-specific installers don't yet support IP address certificates, you'll have to set a --deploy-hook that tells your webserver to load the most up-to-date certificates from disk. You can provide this --deploy-hook through the certbot reconfigure command using the rest of the flags above.
We hope you enjoy using IP address certificates with Let's Encrypt and Certbot, and as always if you get stuck you can ask for help in the Let's Encrypt Community Forum.
3 Questions: On the future of AI and the mathematical and physical sciences
Curiosity-driven research has long sparked technological transformations. A century ago, curiosity about atoms led to quantum mechanics, and eventually the transistor at the heart of modern computing. Conversely, the steam engine was a practical breakthrough, but it took fundamental research in thermodynamics to fully harness its power.
Today, artificial intelligence and science find themselves at a similar inflection point. The current AI revolution has been fueled by decades of research in the mathematical and physical sciences (MPS), which provided the challenging problems, datasets, and insights that made modern AI possible. The 2024 Nobel Prizes in physics and chemistry, recognizing foundational AI methods rooted in physics and AI applications for protein design, made this connection impossible to miss.
In 2025, MIT hosted a Workshop on the Future of AI+MPS, funded by the National Science Foundation with support from the MIT School of Science and the MIT departments of Physics, Chemistry, and Mathematics. The workshop brought together leading AI and science researchers to chart how the MPS domains can best capitalize on — and contribute to — the future of AI. Now a white paper, with recommendations for funding agencies, institutions, and researchers, has been published in Machine Learning: Science and Technology. In this interview, Jesse Thaler, MIT professor of physics and chair of the workshop, describes key themes and how MIT is positioning itself to lead in AI and science.
Q: What are the report’s key themes regarding last year’s gathering of leaders across the mathematical and physical sciences?
A: Gathering so many researchers at the forefront of AI and science in one room was illuminating. Though the workshop participants came from five distinct scientific communities — astronomy, chemistry, materials science, mathematics, and physics — we found many similarities in how we are each engaging with AI. A real consensus emerged from our animated discussions: Coordinated investment in computing and data infrastructures, cross-disciplinary research techniques, and rigorous training can meaningfully advance both AI and science.
One of the central insights was that this has to be a two-way street. It’s not just about using AI to do better science; science can also make AI better. Scientists excel at distilling insights from complex systems, including neural networks, by uncovering underlying principles and emergent behaviors. We call this the “science of AI,” and it comes in three flavors: science driving AI, where scientific reasoning informs foundational AI approaches; science inspiring AI, where scientific challenges push the development of new algorithms; and science explaining AI, where scientific tools help illuminate how machine intelligence actually works.
In my own field of particle physics, for instance, researchers are developing real-time AI algorithms to handle the data deluge from collider experiments. This work has direct implications for discovering new physics, but the algorithms themselves turn out to be valuable well beyond our field. The workshop made clear that the science of AI should be a community priority — it has the potential to transform how we understand, develop, and control AI systems.
Of course, bridging science and AI requires people who can work across both worlds. Attendees consistently emphasized the need for “centaur scientists” — researchers with genuine interdisciplinary expertise. Supporting these polymaths at every career stage, from integrated undergraduate courses to interdisciplinary PhD programs to joint faculty hires, emerged as essential.
Q: How do MIT’s AI and science efforts align with the workshop recommendations?
A: The workshop framed its recommendations around three pillars: research, talent, and community. As director of the NSF Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) — a collaborative AI and physics effort among MIT and Harvard, Northeastern, and Tufts universities — I’ve seen firsthand how effective this framework can be. Scaling this up to MIT, we can see where progress is being made and where opportunities lie.
On the research front, MIT is already enabling AI-and-science work in both directions. Even a quick scroll through MIT News shows how individual researchers across the School of Science are pursuing AI-driven projects, building a pipeline of knowledge and surfacing new opportunities. At the same time, collaborative efforts like IAIFI and the Accelerated AI Algorithms for Data-Driven Discovery (A3D3) Institute concentrate interdisciplinary energy for greater impact. The MIT Generative AI Impact Consortium is also supporting application-driven AI work at the university scale.
To foster early-career AI-and-science talent, several initiatives are training the next generation of centaur scientists. The MIT Schwarzman College of Computing's Common Ground for Computing Education program helps students become “bilingual” in computing and their home discipline. Interdisciplinary PhD pathways are also gaining traction; IAIFI worked with the MIT Institute for Data, Systems, and Society to create one in physics, statistics, and data science, and about 10 percent of physics PhD students now opt for it — a number that's likely to grow. Dedicated postdoctoral roles like the IAIFI Fellowship and Tayebati Fellowship give early-career researchers the freedom to pursue interdisciplinary work. Funding centaur scientists and giving them space to build connections across domains, universities, and career stages has been transformative.
Finally, community-building ties it all together. From focused workshops to large symposia, organizing interdisciplinary events signals that AI and science isn’t siloed work — it’s an emerging field. MIT has the talent and resources to make a significant impact, and hosting these gatherings at multiple scales helps establish that leadership.
Q: What lessons can MIT draw about further advancing its AI-and-science efforts?
A: The workshop crystallized something important: The institutions that lead in AI and science will be the ones that think systematically, not piecemeal. Resources are finite, so priorities matter. Workshop attendees were clear about what becomes possible when an institution coordinates hires, research, and training around a cohesive strategy.
MIT is well positioned to build on what’s already underway with more structural initiatives — joint faculty lines across computing and scientific domains, expanded interdisciplinary degree pathways, and deliberate “science of AI” funding. We’re already seeing moves in this direction; this year, the MIT Schwarzman College of Computing and the Department of Physics are conducting their first-ever joint faculty search, which is exciting to see.
The virtuous cycle of AI-and-science has the potential to be truly transformative — offering deeper insight into AI, accelerating scientific discovery, and producing robust tools for both. By developing an intentional strategy, MIT will be well positioned to lead in, and benefit from, the coming waves of AI.
New MIT class uses anthropology to improve chatbots
Young adults growing up in the attention economy — preparing for adult life, with social media and chatbots competing for their attention — can easily fall into unhealthy relationships with digital platforms. But what if chatbots weren’t mere distractions from real life? Could they be designed humanely, as moral partners whose digital goal is to be a social guide rather than an addictive escape?
At MIT, a friendship between two professors — one an anthropologist, the other a computer scientist — led to the creation of an undergraduate class that set out to find the answer to those questions. Combining the two seemingly disparate disciplines, the class encourages students to design artificial intelligence chatbots in humane ways that help users improve themselves.
The class, 6.S061/21A.S02 (Humane User Experience Design, a.k.a. Humane UXD), is an upper-level computer science class cross-listed with anthropology. This unique cross-listing allows computer science majors to fulfill a humanities requirement while also pursuing their career objectives. The two professors use methods from linguistic anthropology to teach students how to integrate the interactional and interpersonal needs of humans into programming.
Professor Arvind Satyanarayan, a computer scientist whose research develops tools for interactive data visualization and user interfaces, and Professor Graham Jones, an anthropologist whose research focuses on communication, created Humane UXD last summer with a grant from the MIT Morningside Academy for Design (MAD). The MIT MAD Design Curriculum Program provides funding for faculty to develop new classes or enhance existing classes using innovative pedagogical approaches that transcend departmental boundaries.
The Design Curriculum Program is currently accepting applications for the 2026-27 academic year; the deadline is Friday, March 20.
Jones and Satyanarayan met several years ago when they co-advised a doctoral student’s research on data visualization for visually impaired people. They’ve since become close friends who can pretty much finish one another’s sentences.
“There’s a way in which you don’t really fully externalize what you know or how you think until you’re teaching,” Jones says. “So, it’s been really fun for me to see Arvind unfurl his expertise as a teacher in a way that lets me see how the pieces fit together — and discover underlying commonalities between our disciplines and our ways of thinking.”
Satyanarayan continues that thought: “One of the things I really enjoyed is the reciprocal version of what Graham said, which is that my field — human-computer interaction — inherited a lot of methods from anthropology, such as interviews and user studies and observation studies. And over the decades, those methods have gotten more and more watered down. As a result, a lot of things have been lost.
“For instance, it was very exciting for me to see how an anthropologist teaches students to interview people. It’s completely different than how I would do it. With my way, we lose the rapport and connection you need to build with your interview participant. Instead, we just extract data from them.”
For Jones’ part, teaching with a computer scientist holds another kind of allure: design. He says that human speech and interaction are organized into underlying genres with stable sets of rules that differentiate an interview at a cocktail party from a conversation at a funeral.
“ChatGPT and other large language models are trained on naturally occurring human communication, so they have all those genres inside them in a latent state, waiting to be activated,” he says.
“As a social scientist, I teach methods for analyzing human conversation, and give students very powerful tools to do that. But it ends up usually being an exercise in pure research, whereas this is a design class, where students are building real-world systems.”
The curriculum appears to be on target for preparing students for jobs after graduation. One student sought permission to miss class for a week because he had a trial internship at a chatbot startup; when he returned, he said his work at the startup was just like what he was learning in class. He got the job.
The sampling of group projects below, built with Google’s Gemini, demonstrates some of what’s possible when, as Jones says, “there’s a really deep intertwining of the technology piece with the humanities piece.” The students’ design work shows that entirely new ways of programming can be conceptualized when the humane is made a priority.
The bots demonstrate clearly that an interdisciplinary class can be designed in such a way that everyone benefits: Students learn more and differently; they can fulfill a non-major course requirement by taking a class that is directly beneficial to their careers; and long-term faculty partnerships can be forged or strengthened.
Team Pond
One project promises to be particularly useful for graduating seniors. Pond is designed to help young college graduates adapt to the challenges of independent adult life. Team Pond configured the chatbot not to simply parrot the user, or to sycophantically praise wrong answers. Instead, Pond provides advice to help with “adulting” (behaving as a responsible adult).
“Pond is built to be your companion from college life into post-college life, to help you in your transition from being a small fish in a small pond to being a small fish in a very big pond,” says sophomore Mary Feliz.
“College is very much a high-proximity and high-context environment, in the sense that everybody around you is going through the same thing, and it’s easy to build relationships or find opportunities, because there are structured pathways that you have access to,” explains graduate student Emaan Khan. “Post-grad life is low-context. You’re not always surrounded by your peers or your professors. It’s no-proximity also, in the sense that you don’t have opportunities at your doorstep. Pond is a tool to help empower you to access certain opportunities, or learn how to navigate.”
Pond’s developers are graduate student Claire Camacho, Harvard University graduate student Charlotte Cheah, Feliz, and Khan. They trained Pond to offer expertise in three areas: social life, professional life, and adult skills.
The students demonstrated Pond’s social communication by asking how to start conversations in a hobby group. Next, they exhibited Pond’s adulting communication by typing in a question.
“My landlord has asked me personal questions that make me uncomfortable. What should I do?”
Appearing in Pond’s textbox was this response: “Politely set boundaries. You are not obligated to answer personal questions. You can gently redirect the conversation back to property-related topics. Know your rights: familiarize yourself with tenant rights in your city or state regarding privacy and landlord conduct.”
When asked how to do that, Pond offered specific instructions: “You can try saying phrases like: ‘I prefer to keep our conversations focused on the property,’ and ‘let’s stick to topics related to the apartment, please.’ Always redirect the conversation back to the apartment or a legitimate maintenance issue. Keep your tone polite but firm. Document any conversations if needed.”
Pond also offered a role-playing scenario to help the user learn what polite-but-firm language might be in that situation.
“The ethos of the practice mode is that you are actively building a skill, so that after using Pond for some time, you feel confident that you can swim on your own,” Khan says. The chatbot uses a point system that allows users to graduate from a topic, and a treasure chest to store prizes, elements added to boost the bot’s appeal.
Team News Nest
Another of the projects, News Nest, provides a sophisticated means of helping young people engage with credible news sources in a way that makes it fun. The name is derived from the program’s 10 appealing and colorful birds, each of which focuses on a particular area of news. If you want the headlines, you ask Polly the Parrot, the main news carrier; if you’re interested in science, Gaia the Goose guides you. The flock also includes Flynn the Falcon, sports reporter; Credo the Crow, for crime and legal news; Edwin the Eagle, a business and economics news guide; Pizzazz the Peacock for pop and entertainment stories; and Pixel the Pigeon, a technology news specialist.
News Nest’s development team is made up of MIT seniors Tiana Jiang and Krystal Montgomery, and junior Natalie Tan. They intentionally built News Nest to prevent “doomscrolling” and to provide media transparency (sources and political leanings are always shown), and they created a clever, healthy buffer from emotional manipulation and engagement traps by employing birds rather than human characters.
Team M^3 (Multi-Agent Murder Mystery)
A third team, M^3, decided to experiment with making AI humane by keeping it fun. MIT senior Rodis Aguilar, junior David De La Torre, and second-year Deeraj Pothapragada developed M^3, a social deduction multi-agent murder mystery that incorporates four chatbots as different personalities: Gemini, OpenAI’s ChatGPT, xAI’s Grok, and Anthropic’s Claude. The user is the fifth player.
Like a regular murder mystery, there are locations, weapons, and lies. The user has to guess who committed the murder. It’s very similar to a board or online game played with real players, only these are enhanced AI opponents you can’t see, who may or may not tell the truth in response to questions. Users can’t get too involved with any one chatbot, because they’re playing against all four. Also, as in a real-life murder mystery game, the user is sometimes guilty.
New photonic device efficiently beams light into free space
Photonic chips use light to process data instead of electricity, enabling faster communication speeds and greater bandwidth. Most of that light typically stays on the chip, trapped in optical wires, and is difficult to transmit to the outside world in an efficient manner.
If a lot of light could be rapidly and precisely beamed off the chip, free from the confines of the wiring, it could open the door to higher-resolution displays, smaller Lidar systems, more precise 3D printers, or larger-scale quantum computers.
Now, researchers from MIT and elsewhere have developed a new class of photonic devices that enable the precise broadcasting of light from the chip into free space in a scalable way.
Their chip uses an array of microscopic structures that curl upward, resembling tiny, glowing ski jumps. The researchers can carefully control how light is emitted from thousands of these tiny structures at once.
They used this new platform to project detailed, full-color images that are roughly half the size of a grain of table salt. Used in this way, the technology could aid in the development of lightweight augmented reality glasses or compact displays.
They also demonstrated how photonic “ski jumps” could be used to precisely control quantum bits, or qubits, in a quantum computing system.
“On a chip, light travels in wires, but in our normal, free-space world, light travels wherever it wants. Interfacing between these two worlds has long been a challenge. But now, with this new platform, we can create thousands of individually controllable laser beams that can interact with the world outside the chip in a single shot,” says Henry Wen, a visiting research scientist in the Research Laboratory of Electronics (RLE) at MIT, research scientist at MITRE, and co-lead author of a paper on the new platform.
He is joined on the paper by co-lead authors Matt Saha, of MITRE; Andrew S. Greenspon, a visiting scientist in RLE and MITRE; Matthew Zimmermann, of MITRE; Matt Eichenfeld, a professor at the University of Arizona; senior author Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science and principal investigator in the Quantum Photonics and Artificial Intelligence Group and the RLE; as well as others at MIT, MITRE, Sandia National Laboratories, and the University of Arizona. The research appears today in Nature.
A scalable platform
This work grew out of the Quantum Moonshot Program, a collaboration between MIT, the University of Colorado at Boulder, the MITRE Corporation, and Sandia National Laboratories to develop a novel quantum computing platform using the diamond-based qubits being developed in the Englund lab.
These diamond-based qubits are controlled using laser beams, and the researchers needed a way to interact with millions of qubits at once.
“We can’t control a million laser beams, but we may need to control a million qubits. So, we needed something that can shoot laser beams into free space and scan them over a large area, kind of like firing a T-shirt gun into the crowd at a sports stadium,” Wen says.
Existing methods used to broadcast and steer light off a photonic chip typically work with only a few beams at once and can’t scale up enough to interact with millions of qubits.
To create a scalable platform, the researchers developed a new fabrication technique. Their method produces photonic chips with tiny structures that curve upward off the chip’s surface to shine laser beams into free space.
They built these tiny “ski jumps” for light by creating two-layer structures from two different materials. Each material expands differently when it cools down from the high fabrication temperatures.
The researchers designed the structures with special patterns in each layer so that, when the temperature changes, the difference in strain between the materials causes the entire structure to curve upward as it cools.
This is the same effect as in an old-fashioned thermostat, which utilizes a coil of two metallic materials that curl and uncurl based on the temperature in the room, triggering the HVAC system. “Both of these materials, silicon nitride and aluminum nitride, were separate technologies. Finding a way to put them together was really the fabrication innovation that enables the ski jumps. This wouldn’t have been possible without the pioneering contributions of Matt Eichenfield and Andrew Leenheer at Sandia National Labs,” Wen says.
On the chip, connected waveguides funnel light to the ski jump structures. The researchers use a series of modulators to rapidly and precisely control how that light is turned on and off, enabling them to project light off the chip and move it around in free space.
Painting with light
They can broadcast light in different colors and, by tweaking the frequencies of light, adjust the density of the pattern that is emitted. In this way, they can essentially paint pictures in free space using light.
“This system is so stable we don’t even need to correct for errors. The pattern stays perfectly still on its own. We just calculate what color lasers need to be on at a given time and then turn it on,” he says.
Because the individual points of light, or pixels, are so tiny, the researchers can use this platform to generate extremely high-resolution displays. For instance, with their technique, 30,000 pixels can fit into the same area that holds only two pixels in a smartphone display, Wen says.
“Our platform is the ideal optical engine because our pixels are at the physical limit of how small a pixel can be,” he adds.
Beyond high-resolution displays and larger quantum computers with diamond-based qubits, the method could be used to produce Lidars that are small enough to fit on tiny robots.
It could also be utilized in 3D printing processes that fabricate objects using lasers to cure layers of resin. Because their chip generates controllable beams of light so rapidly, it could greatly increase the speed of these printing processes, allowing users to create more complex objects.
In the future, the researchers want to scale their system up and conduct additional experiments on the yield and uniformity of the light, design a larger system to capture light from an array of photonic chips with “ski jumps,” and conduct robustness tests to see how long the devices last.
“We envision this opening the door to a new class of lab-on-chip capabilities and lithographically defined micro-opto-robotic agents,” Wen says.
This research was funded, in part, by the MITRE Quantum Moonshot Program, the U.S. Department of Energy, and the Center for Integrated Nanotechnologies.
Government Spying 🤝 Targeted Advertising | EFFector 38.5
Have you ever seen a really creepy targeted ad online? One that revealed just how much these companies know about your life? It's unsettling enough to see how much companies know about you—but now we have confirmation that the government is also tapping the advertising surveillance machine to get your data. We're explaining the dangers of targeted advertising and location tracking, and the latest in the fight for privacy and free speech online, with our EFFector newsletter.
For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This issue covers a victory for protesters seeking to hold police accountable, a troubling conflict over the Department of Defense's use of AI, and how advertising surveillance enables government surveillance.
Prefer to listen in? Big news: EFFector is now available on all major podcast platforms! In this episode we chat with EFF Staff Attorney Lena Cohen about how targeted advertising can reveal your location to federal law enforcement. You can find the episode and subscribe in your podcast player of choice:
Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against online surveillance when you support EFF today!
Canada Needs Nationalized, Public AI
Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by “sovereign AI” be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?
Forcing the question is OpenAI, the company behind ChatGPT, which has been pushing an “OpenAI for Countries” initiative. It is not the only one eyeing its share of the $2-billion, but it appears to be the most aggressive. OpenAI’s top lobbyist in the region has met with Ottawa officials, including Artificial Intelligence Minister Evan Solomon...
