Feed aggregator
New Jersey eyes fees on oil and gas facilities to fight climate change
Oregon lawmakers to hold special session on emergency wildfire funding
Tax carbon cautiously for sub-Saharan Africa
Nature Climate Change, Published online: 13 December 2024; doi:10.1038/s41558-024-02213-w
A carbon tax will not curb current emissions in sub-Saharan Africa and is unlikely to prevent future carbon lock-in effects. Meanwhile, a carbon tax could hit the poor in this region, so the international community should be careful in pushing sub-Saharan Africa towards carbon taxation.
EFF Speaks Out in Court for Citizen Journalists
No one gets to abuse copyright to shut down debate. Because of that, we at EFF represent Channel 781, a group of citizen journalists whose YouTube channel was temporarily shut down following copyright infringement claims made by Waltham Community Access Corporation (WCAC). As part of that case, the federal court in Massachusetts heard oral arguments in Channel 781 News v. Waltham Community Access Corporation, a pivotal case for copyright law and digital journalism.
WCAC, Waltham’s public access channel, records city council meetings on video. Channel 781, a group of independent journalists, curates clips of those meetings for its YouTube channel, along with original programming, to spark debate on issues like housing policy and real estate development. WCAC sent a series of DMCA takedown notices that accused Channel 781 of copyright infringement, resulting in YouTube deactivating Channel 781’s channel just days before a critical municipal election.
Represented by EFF and the law firm Brown Rudnick LLP, Channel 781 sued WCAC for misrepresentations in its DMCA takedown notices. We argued that using clips of government meetings from the government access station to engage in public debate is an obvious fair use under copyright. Also, by excerpting factual recordings and using captions to improve accessibility, the group aims to educate the public, a purpose distinct from WCAC’s unannotated broadcasts of hours-long meetings. The lawsuit alleges that WCAC’s takedown requests knowingly misrepresented the legality of Channel 781's use, violating Section 512(f) of the DMCA.
Fighting a Motion to Dismiss
In court today, EFF pushed back against WCAC’s motion to dismiss the case. We argued to District Judge Patti Saris that Channel 781’s use of video clips of city government meetings was an obvious fair use, and that by failing to consider fair use before sending takedown notices to YouTube, WCAC violated the law and should be liable for damages.
If Judge Saris denies WCAC’s motion, we will move on to proving our case. We’re confident that the outcome will promote accountability for copyright holders who misuse the powerful notice-and-takedown mechanism that the DMCA provides, and also protect citizen journalists in their use of digital tools.
EFF will continue to provide updates as the case develops. Stay tuned for the latest news on this critical fight for free expression and the protection of digital rights.
Teaching a robot its limits, to complete open-ended tasks safely
If someone advises you to “know your limits,” they’re likely suggesting you do things like exercise in moderation. To a robot, though, the motto represents learning constraints, or limitations of a specific task within the machine’s environment, to do chores safely and correctly.
For instance, imagine asking a robot to clean your kitchen when it doesn’t understand the physics of its surroundings. How can the machine generate a practical multistep plan to ensure the room is spotless? Large language models (LLMs) can get a robot close, but if the model is only trained on text, it’s likely to miss out on key specifics about the robot’s physical constraints, like how far it can reach or whether there are nearby obstacles to avoid. Stick to LLMs alone, and you’re likely to end up cleaning pasta stains out of your floorboards.
To guide robots in executing these open-ended tasks, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) used vision models to see what’s near the machine and model its constraints. The team’s strategy involves an LLM sketching up a plan that’s checked in a simulator to ensure it’s safe and realistic. If that sequence of actions is infeasible, the language model will generate a new plan, until it arrives at one that the robot can execute.
This trial-and-error method, which the researchers call “Planning for Robots via Code for Continuous Constraint Satisfaction” (PRoC3S), tests long-horizon plans to ensure they satisfy all constraints, and enables a robot to perform such diverse tasks as writing individual letters, drawing a star, and sorting and placing blocks in different positions. In the future, PRoC3S could help robots complete more intricate chores in dynamic environments like houses, where they may be prompted to do a general chore composed of many steps (like “make me breakfast”).
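The generate-and-check loop at the heart of this approach can be sketched in a few lines. This is a hypothetical illustration rather than the authors' actual code: the `llm` and `simulator` objects, their method names, and the feedback format are all assumed stand-ins.

```python
# Hedged sketch of a PRoC3S-style generate-and-check planning loop.
# All names here (generate_plan, check, etc.) are illustrative
# placeholders, not the paper's real API.

def plan_with_feedback(llm, simulator, task, max_attempts=10):
    """Ask the LLM for a plan, test it in simulation, and retry with
    the failure as feedback until a feasible plan is found."""
    feedback = ""
    for _ in range(max_attempts):
        plan = llm.generate_plan(task, feedback)   # proposed action sequence
        ok, violation = simulator.check(plan)      # e.g. reach limits, collisions
        if ok:
            return plan                            # safe to execute on the robot
        feedback = f"Previous plan failed: {violation}"
    return None  # no feasible plan found within the attempt budget
```

The key design point is that the simulator, not the LLM, is the arbiter of feasibility: the language model only ever proposes, and every proposal is verified against the modeled constraints before it reaches the hardware.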
“LLMs and classical robotics systems like task and motion planners can’t execute these kinds of tasks on their own, but together, their synergy makes open-ended problem-solving possible,” says PhD student Nishanth Kumar SM ’24, co-lead author of a new paper about PRoC3S. “We’re creating a simulation on-the-fly of what’s around the robot and trying out many possible action plans. Vision models help us create a very realistic digital world that enables the robot to reason about feasible actions for each step of a long-horizon plan.”
The team’s work was presented this past month in a paper shown at the Conference on Robot Learning (CoRL) in Munich, Germany.
The researchers’ method uses an LLM pre-trained on text from across the internet. Before asking PRoC3S to do a task, the team provided their language model with a sample task (like drawing a square) that’s related to the target one (drawing a star). The sample task includes a description of the activity, a long-horizon plan, and relevant details about the robot’s environment.
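A minimal sketch of how such a one-shot prompt might be assembled is below. The example task, the action names, and the plain-text plan format are all illustrative assumptions, not the paper's actual prompt.

```python
# Hypothetical one-shot prompt construction: pair a worked example
# (task + environment + plan) with the new target task, as the
# article describes. The format shown is an assumption.

EXAMPLE = """Task: draw a square
Environment: tabletop, pen attached to gripper, reachable area 0.5 m x 0.5 m
Plan:
  1. move_to(0.1, 0.1)
  2. pen_down()
  3. move_to(0.4, 0.1)
  4. move_to(0.4, 0.4)
  5. move_to(0.1, 0.4)
  6. move_to(0.1, 0.1)
  7. pen_up()
"""

def build_prompt(target_task: str, environment: str) -> str:
    """Combine the worked example with the new task so the LLM can
    imitate the plan format for the related target task."""
    return (
        "You are a robot task planner.\n\n"
        f"Example:\n{EXAMPLE}\n"
        f"Task: {target_task}\n"
        f"Environment: {environment}\n"
        "Plan:"
    )
```

Picking a sample task structurally close to the target (a square for a star) gives the model both the output format and a template for the kinds of primitive actions the robot supports.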
But how did these plans fare in practice? In simulations, PRoC3S successfully drew stars and letters eight out of 10 times each. It also could stack digital blocks in pyramids and lines, and place items with accuracy, like fruits on a plate. Across each of these digital demos, the CSAIL method completed the requested task more consistently than comparable approaches like “LLM3” and “Code as Policies”.
The CSAIL engineers next brought their approach to the real world. Their method developed and executed plans on a robotic arm, teaching it to put blocks in straight lines. PRoC3S also enabled the machine to place blue and red blocks into matching bowls and move all objects near the center of a table.
Kumar and co-lead author Aidan Curtis SM ’23, who’s also a PhD student working in CSAIL, say these findings indicate how an LLM can develop safer plans that humans can trust to work in practice. The researchers envision a home robot that can be given a more general request (like “bring me some chips”) and reliably figure out the specific steps needed to execute it. PRoC3S could help a robot test out plans in an identical digital environment to find a working course of action — and more importantly, bring you a tasty snack.
For future work, the researchers aim to improve results using a more advanced physics simulator and to expand to more elaborate longer-horizon tasks via more scalable data-search techniques. Moreover, they plan to apply PRoC3S to mobile robots such as a quadruped for tasks that include walking and scanning surroundings.
“Using foundation models like ChatGPT to control robot actions can lead to unsafe or incorrect behaviors due to hallucinations,” says The AI Institute researcher Eric Rosen, who isn’t involved in the research. “PRoC3S tackles this issue by leveraging foundation models for high-level task guidance, while employing AI techniques that explicitly reason about the world to ensure verifiably safe and correct actions. This combination of planning-based and data-driven approaches may be key to developing robots capable of understanding and reliably performing a broader range of tasks than currently possible.”
Kumar and Curtis’ co-authors are also CSAIL affiliates: MIT undergraduate researcher Jing Cao and MIT Department of Electrical Engineering and Computer Science professors Leslie Pack Kaelbling and Tomás Lozano-Pérez. Their work was supported, in part, by the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research, the Army Research Office, MIT Quest for Intelligence, and The AI Institute.
X's Last-Minute Update to the Kids Online Safety Act Still Fails to Protect Kids—or Adults—Online
Late last week, the Senate released yet another version of the Kids Online Safety Act, written, reportedly, with the assistance of X CEO Linda Yaccarino in a flawed attempt to address the critical free speech issues inherent in the bill. This last-minute draft remains, at its core, an unconstitutional censorship bill that threatens the online speech and privacy rights of all internet users.
TELL CONGRESS: VOTE NO ON KOSA
Update Fails to Protect Users from Censorship or Platforms from Liability
The most important update, according to its authors, supposedly minimizes the impact of the bill on free speech. As we’ve said before, KOSA’s “duty of care” section is its biggest problem, as it would force a broad swath of online services to make policy changes based on the content of online speech. Though the bill’s authors inaccurately claim KOSA only regulates designs of platforms, not speech, the harms it enumerates—eating disorders, substance use disorders, and suicidal behaviors, for example—are not caused by the design of a platform.
KOSA is likely to actually increase the risks to children, because it will prevent them from accessing online resources about topics like addiction, eating disorders, and bullying. It will result in services imposing age verification requirements and content restrictions, and it will stifle minors from finding or accessing their own supportive communities online. For these reasons, we’ve been critical of KOSA since it was introduced in 2022.
This updated bill adds just one sentence to the “duty of care” requirement: “Nothing in this section shall be construed to allow a government entity to enforce subsection a [the duty of care] based upon the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment to the Constitution of the United States.” But the viewpoint of users was never impacted by KOSA’s duty of care in the first place. The duty of care is a duty imposed on platforms, not users. Platforms must mitigate the harms listed in the bill, not users, and the platform’s ability to share users’ views is what’s at risk—not the ability of users to express those views. Adding that the bill doesn’t impose liability based on user expression doesn’t change how the bill would be interpreted or enforced. The FTC could still hold a platform liable for the speech it contains.
Let’s say, for example, that a covered platform like reddit hosts a forum created and maintained by users for discussion of overcoming eating disorders. Even though the speech contained in that forum is entirely legal, often helpful, and possibly even life-saving, the FTC could still hold reddit liable for violating the duty of care by allowing young people to view it. The same could be true of a Facebook group about LGBTQ issues, or for a post about drug use that X showed a user through its algorithm. If a platform’s defense were that this information is protected expression, the FTC could simply say that they aren’t enforcing it based on the expression of any individual viewpoint, but based on the fact that the platform allowed a design feature—a subreddit, Facebook group, or algorithm—to distribute that expression to minors. It’s a superfluous carveout for user speech and expression that KOSA never penalized in the first place, but which the platform would still be penalized for distributing.
It’s particularly disappointing that those in charge of X—likely a covered platform under the law—had any role in writing this language, as the authors have failed to grasp the world of difference between immunizing individual expression, and protecting their own platform from the liability that KOSA would place on it.
Compulsive Usage Doesn’t Narrow KOSA’s Scope
Another of KOSA’s issues has been its vague list of harms, which has remained broad enough that platforms have no clear guidance on what is likely to cross the line. This update requires that the harms of “depressive disorders and anxiety disorders” have “objectively verifiable and clinically diagnosable symptoms that are related to compulsive usage.” The latest text’s definition of compulsive usage, however, is equally vague: “a persistent and repetitive use of a covered platform that significantly impacts one or more major life activities, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” This doesn’t narrow the scope of the bill.
It should be noted that there is no clinical definition of “compulsive usage” of online services. As in past versions of KOSA, this update cobbles together a definition that sounds just medical, or just legal, enough to appear legitimate—when in fact the definition is devoid of specific legal meaning, and dangerously vague to boot.
How could the persistent use of social media not significantly impact the way someone socializes or communicates? The bill doesn’t even require that the impact be a negative one. Comments on an Instagram photo from a potential partner may make it hard to sleep for several nights in a row; a lengthy new YouTube video may impact someone’s workday. Opening a Snapchat account might significantly impact how a teenager keeps in touch with her friends, but that doesn’t mean her preference for that over text messages is “compulsive” and therefore necessarily harmful.
Nonetheless, an FTC weaponizing KOSA could still hold platforms liable for showing content to minors that they believe results in depression or anxiety, so long as they can claim the anxiety or depression disrupted someone’s sleep, or even just changed how someone socializes or communicates. These so-called “harms” could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football.
Dangerous Censorship Bills Do Not Belong in Must-Pass Legislation
The latest KOSA draft comes as incoming nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has reportedly vowed to protect free speech by “fighting back against the trans agenda,” among other things. As we’ve said for years (and about every version of the bill), KOSA would give the FTC under this or any future administration wide latitude to decide what sort of content platforms must prevent young people from seeing. Just passing KOSA would likely result in platforms taking down protected speech and implementing age verification requirements, even if it's never enforced; the FTC could simply express the types of content they believe harms children, and use the mere threat of enforcement to force platforms to comply.
No representative should consider shoehorning this controversial and unconstitutional bill into a continuing resolution. A law that forces platforms to censor truthful online content should not be in a last-minute funding bill.
Students strive for “Balance!” in a lively product showcase
On an otherwise dark and rainy Monday night, attendees packed Kresge Auditorium for a lively and colorful celebration of student product designs, as part of the final presentations for MIT’s popular class 2.009 (Product Engineering Processes).
With “Balance!” as its theme, the vibrant show attracted hundreds of attendees along with thousands more who tuned in online to see students pitch their products.
The presentations were the culmination of a semester’s worth of work in which six student teams were challenged to design, build, and draft a business plan for a product, in a process meant to emulate what engineers experience as part of a design team at a product development firm.
“This semester, we pushed the six teams to step outside of their comfort zones and find equilibrium between creativity and technical rigor, all as they embarked on a product engineering process journey,” said 2.009 lecturer Josh Wiesman.
Trying to find a balance
The course, known on campus as “two-double-oh-nine,” marks a colorful end to the fall semester on campus. Each team, named after a different color, was given mentors, access to makerspaces, and a budget of $7,500 to turn their ideas into working products. In the process, they learned about creativity, product design, and teamwork.
Various on-stage demonstrations and videos alluded to this year’s theme, from balance beam walks to scooter and skateboard rides.
“Balance is a word that can be used to describe stability, steadiness, symmetry, even fairness or impartiality,” said Professor Peko Hosoi, who co-instructed the class with Wiesman this semester. “Balance is something we all strive for, but we rarely stop to reflect on. Tonight, we invite you to reflect on balance and to celebrate the energy and creativity of each student and team.”
Safety first
The student products spanned industries and sectors. The Red Team developed a respirator for wildland firefighters, who work to prevent and control forest fires by building “fire lines.” Over the course of long days in challenging terrain, these firefighters use hand tools and chainsaws to create fire barriers by digging trenches, clearing vegetation, and other work based on soil and weather conditions. The team’s respirator is designed to comfortably rest on a user’s face and includes a battery-powered air filter the size of a large water bottle that can fit inside a backpack.
The mask includes a filter and a valve for exhalations, with a hose that connects to the blower unit. Team members said their system provides effective respiratory protection against airborne particles and organic vapors as users work. Each unit costs $40 to make, and the team plans to license the product to manufacturers, who can sell directly to fire departments and governments.
The Purple Team presented Contact, a crash-detection system designed to enhance safety for young bicycle riders. The device combines hardware and smart algorithms to detect accidents and alert parents or guardians. The system includes features like a head-sensing algorithm to minimize false alerts, plus a crash-detection algorithm that uses acceleration data to calculate injury severity. The compact device is splashproof and dustproof, includes Wi-Fi/LTE connectivity, and can run for a week on a single charge. With a retail price of $75 based on initial production of 5,000 units, the team plans to market the product to schools and outdoor youth groups, aiming to give young riders more independence while keeping them safe.
On ergonomics and rehabilitation
The Yellow Team presented an innovative device for knee rehabilitation. Their prototype is an adjustable, wearable device that monitors patients' seated exercises in real time. The data is processed by a mobile app and shared with the patient’s physical therapist, enabling tailored feedback and adjustments. The app also encourages patients to exercise each day, tracks range of motion, and gives therapists a quick overview of each patient's progress. The product aims to improve recovery outcomes for postsurgery patients or those undergoing rehabilitation for knee-related injuries.
The Blue Team, meanwhile, presented Band-It, an ergonomic tool designed to address the issue of wrist pain among lobstermen. With their research showing that among the 20,000 lobstermen in North America, 1 in 3 suffer from wrist pain, the team developed a durable and simple-to-use banding tool. The product would retail for $50, with a manufacturing cost of $10.50, and includes a licensing model with 10 percent royalties plus a $5,000 base licensing fee. The team emphasized three key features: ergonomic design, simplicity, and durability.
Underwater solutions
Some products were designed for the sea. The Pink Team presented MARLIN (Marine Augmented Reality Lens Imaging Network), a system designed to help divers see more clearly underwater. The device integrates into diving masks and features a video projection system that improves visibility in murky or cloudy water conditions. The system creates a 3D-like view that helps divers better judge distances and depth, while also processing and improving the video feed in real time to make it easier to see in poor conditions. The team included a hinged design that allows the system to be easily removed from the mask when needed.
The Green Team presented Neptune, an underwater communication device designed for beginner scuba divers. The system features six preprogrammed messages, including essential diving communications like “Ascend,” “Marine Life,” “Look at Me,” “Something’s Off,” “Air,” and “SOS.” The compact device has a range of 20 meters underwater, can operate at depths of up to 50 meters, and runs for six hours on a battery charge. Built with custom electronics to ensure clear and reliable communications underwater, Neptune is housed in a waterproof enclosure with an intuitive button interface. The communications systems will be sold to dive shops in packs of two for $800. The team plans to have dive shops rent the devices for $15 a dive.
“Product engineers of the future”
Throughout the night, spectators in Kresge cheered and waved colorful pompoms as teams demonstrated their prototypes and shared business plans. Teams pitched their products with videos, stories, and elaborate props.
In closing, Wiesman and Hosoi thanked the many people behind the scenes, from lab instructors and teaching assistants to those working to produce the night’s show. They also commended the students for embracing the rigorous and often chaotic coursework, all while striving for balance.
“This all started a mere 13 weeks ago with ideation, talking to people from all walks of life to understand their challenges and uncover problems and opportunities,” Hosoi said. “The class’s six phases of product design ultimately turned our students into product engineers of the future.”
Hank Green to deliver MIT’s 2025 Commencement address
Hank Green, a prolific digital content creator and entrepreneur with the ethos “make things, learn stuff,” will deliver the address at the OneMIT Commencement Ceremony on Thursday, May 29.
Since the 1990s, Green has launched, built, and sustained a wide-ranging variety of projects, from videos to podcasts to novels, many featuring STEM-related topics and a signature enthusiasm for the natural world and the human experience. He often collaborates with his brother, author John Green.
The Greens’ educational media company, Complexly, produces content that is used in high schools across the U.S. and has been viewed more than 2 billion times. The company continues to grow its large number of YouTube channels, including SciShow, which investigates everything from the deepest hole on Earth to the weirdest kinds of lightning. Videos on other channels, such as CrashCourse, ask questions like “Where did democracy come from?” and “Why do we study art?” On his own platforms, Green takes on virtually any topic under the sun, including the weird science of tattoos and how ferrofluid speakers work.
Green has also launched platforms to help support other content creators, including VidCon, the world’s largest gathering that celebrates the community, craft, and industry of online video, which was acquired by Viacom in 2018. He also launched the crowdfunding platform Subbable, which was later acquired by Patreon. His latest book is the New York Times best-selling “A Beautifully Foolish Endeavor,” the sequel in a pair of novels that grapple with the implications of overnight fame, internet culture, and reality-shifting discoveries.
“Many of our students grew up captivated by the way Hank Green makes learning about complex science subjects accessible and fun — whether he’s describing climate change, electromagnetism, or the anatomy of a pelican,” says MIT President Sally Kornbluth. “Our students told us they wanted a Commencement speaker whose knowledge and insight are complemented by creativity, humor, and a sense of hope for the future. Hank and his endless curiosity more than fit the bill, and we’re thrilled to welcome him to join us in celebrating the Class of 2025.”
“I was just so honored to be invited,” Green says. “MIT has always represented the best of what happens when creativity meets rigorous inquiry, and I can’t wait to be part of this moment.”
Green has been a YouTube celebrity since starting a vlog with his brother in 2007, which led to the growth of a huge fanbase known as the NerdFighters and the Greens’ signature phrase “Don’t forget to be awesome.” Hank Green also writes songs and performs standup. Last summer he released a comedy special about his recent diagnosis and successful treatment of Hodgkin lymphoma.
“Hank Green shares our students’ boundless curiosity about how things work, and we’re excited to welcome such an enthusiastic educator to MIT. CrashCourse’s lucid, engaging videos have bolstered the efforts of millions of high-school students to master AP physical and social science curricula and have invited learners of all ages to better understand our universe, our planet and humanity,” says Les Norford, professor of architecture and chair of the Commencement Committee.
“Hank Green is an inspiration for those of us who want to make science and education accessible, and I’m eager to hear what words of wisdom he has for the graduating class. He embodies a pure and hopeful form of curiosity just like what I’ve observed across the MIT community,” says senior class president Megha Vemuri.
“As someone who has worked tirelessly to make science accessible to the public, Hank Green is an excellent choice for commencement speaker. He has commendably used his many skills to help improve the world,” says Teddy Warner, president of the Graduate Student Council.
Green joins notable recent MIT Commencement speakers including inventor and entrepreneur Noubar Afeyan (2024); YouTuber and inventor Mark Rober (2023); Director-General of the World Trade Organization Ngozi Okonjo-Iweala (2022); lawyer and social justice activist Bryan Stevenson (2021); retired U.S. Navy four-star admiral William McRaven (2020); and three-term New York City mayor and philanthropist Michael Bloomberg (2019).