Feed aggregator
Trump vs. California Round 1: The future of electric cars
Congress wants disaster aid released by Jan. 15. It’s not about Trump.
Canada’s first federal youth climate case gets trial date
Bill Gates-backed fund bets $40M on Deep Sky carbon removal
Calif. insurance commissioner’s attempts to rein in agency draw criticism
Amazon invests another $10B in Ohio-based data centers
Barcelona subway recycles energy from braking to charge EVs
Biden strengthens US climate goal before Trump takes office
Rethinking microbial carbon use efficiency in soil models
Nature Climate Change, Published online: 19 December 2024; doi:10.1038/s41558-024-02217-6
Soil models include a key parameter known as carbon use efficiency, which impacts estimates of global carbon storage by determining the flow of carbon into soil pools versus the atmosphere. Microbial-explicit versions of these models are due for an update that recasts carbon use efficiency as an output variable emerging from microbial metabolism.

What You Should Know When Joining Bluesky
Bluesky promises to rethink social media by focusing on openness and user control. But what does this actually mean for the millions of people joining the site?
November was a good month for alternatives to X. Many users hit their balking point after two years of controversial changes turned Twitter into X, a restrictive hub filled with misinformation and hate speech. Musk’s involvement in the U.S. presidential election was the last straw for many who are now looking for greener pastures.
Threads, the largest alternative, grew about 15% with 35 million new users. However, the most explosive growth came from Bluesky, which grew over 500% to a total user base of more than 25 million at the time of writing.
We’ve dug into the nerdy details of how Mastodon, Threads, and Bluesky compare, but given this recent momentum it’s important to answer some questions for new Bluesky users and explain what this new approach to the social web really means for how you connect with people online.
Note that Bluesky is still in an early stage, and many big changes are anticipated from the project. Answers here are accurate as of the time of writing, and will indicate the company’s future plans where possible.
Is Bluesky just another Twitter?

At face value the Bluesky app has a lot of similarities to Twitter before it became X. That’s by design: the Bluesky team has prioritized making a drop-in replacement for 2022 Twitter, so everything from the layout to the posting options and even the color scheme will feel familiar to users of that site.
While discussed in the context of decentralization, this experience is still very centralized like traditional social media, with a single platform controlled by one company, Bluesky PBLLC. However, a few aspirations from this company make it stand out:
- Prioritizing interoperability and community development: Other platforms frequently get this wrong, so this dedication to user empowerment and open source tooling is commendable.
- “Credible Exit” Decentralization: Bluesky the company wants Bluesky, the network, to be able to function even if the company is eliminated or ‘enshittified.’
The first difference is already evident in the wide variety of tools and apps on the network. From blocking certain content to highlighting communities you’re a part of, there are a lot of settings to make your feed yours, some of which we walked through here. You can also abandon Bluesky’s Twitter-style interface for an app like Firesky, which presents a stream of all Bluesky content. Other apps on the network can even be geared towards sharing audio or events, or towards working as a web forum, all using the same underlying AT Protocol. This interoperable and experimental ecosystem parallels another based on the ActivityPub protocol, called “the fediverse,” which connects Threads to Mastodon as well as many other decentralized apps that experiment with the functions of traditional social media sites.
That “credible exit” priority is less immediately visible, but explains some of the ways Bluesky looks different. The most visible difference is that usernames are domain names, with the default for new users being a subdomain of bsky.social. EFF set it up so that our account name is our website, @eff.org, which will be the case across the Bluesky network, even if viewed with different apps. Comparable to how Mastodon handles verification, no central authority or government documents are needed for verification, just proof of control over a site or record.
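To make that verification mechanism concrete, here is a minimal sketch of the two places an AT Protocol client can look to resolve a handle like eff.org to its underlying identifier (a DID), per the protocol's documented handle-resolution scheme: a DNS TXT record or a well-known HTTPS endpoint. The helper function name is ours, and no network request is made here.

```python
def handle_verification_sources(handle: str) -> dict:
    """Where an AT Protocol client can look to resolve a handle to a DID.

    A TXT record at _atproto.<handle> (containing "did=did:plc:...") or a
    plain-text DID served at https://<handle>/.well-known/atproto-did both
    demonstrate control of the domain; no central authority is involved.
    """
    return {
        "dns_txt": f"_atproto.{handle}",
        "well_known": f"https://{handle}/.well-known/atproto-did",
    }

sources = handle_verification_sources("eff.org")
print(sources["dns_txt"])     # _atproto.eff.org
print(sources["well_known"])  # https://eff.org/.well-known/atproto-did
```

Because either source proves control over the domain, any app on the network can perform the same check and display the same handle.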
As Bluesky decentralizes, it is likely to diverge more from the Twitter experience as the tricky problems of decentralization creep in.
How is Bluesky for privacy?

While Bluesky is not engaged in surveillance-based advertising like many incumbent social media platforms, users should be aware that shared information is more public and accessible than they might expect.
Bluesky, the app, offers some sensible data-minimizing defaults like requiring user consent for third-party embedded media, which can include tracking. The real assurance to users, however, is that even if the flagship apps were to become less privacy protective, the open tools let others make full-featured alternative apps on the same network.
However, by design, Bluesky content is fully public on the network. Users can change privacy settings to ask apps on the network to require login before showing their account, but honoring that setting is optional. Every post, every like, and every share is visible to the world. Even blocking data is plainly visible. By design, all of this information is also accessible in one place, as Bluesky aims to be the megaphone for a global audience that Twitter once was.
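As an illustration of just how public this data is, the sketch below builds the URL an unauthenticated request could use against Bluesky's public AppView (public.api.bsky.app, an endpoint that exists as of this writing; treat its continued availability as an assumption) to read any account's posts. No request is actually sent here.

```python
from urllib.parse import urlencode

PUBLIC_APPVIEW = "https://public.api.bsky.app/xrpc"

def author_feed_url(handle: str, limit: int = 30) -> str:
    """URL for reading an account's public posts with no login or API key."""
    query = urlencode({"actor": handle, "limit": limit})
    return f"{PUBLIC_APPVIEW}/app.bsky.feed.getAuthorFeed?{query}"

print(author_feed_url("eff.org"))
# https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed?actor=eff.org&limit=30
```

Anyone, logged in or not, can fetch that URL; the same openness that enables third-party apps also enables bulk scraping.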
This transparency extends to how Bluesky handles moderation, where users and content are labeled by a combination of Bluesky moderators, community moderators, and automated labeling. The result is that information about you will, over time, be held by these moderators to either promote or hide your content.
Users leaving X out of frustration with the platform using public content for AI training may also find that this approach of funneling all content into one stream is very friendly to scraping for AI training by third parties. Bluesky’s CEO has been clear that the company will not engage in AI licensing deals, but this exposure is inherent to any network that prioritizes openness. The freedom to use public data for creative expression, innovation, and research extends to those who use it to train AI.
Users you have blocked may also be able to use this public stream to view your posts without interacting with you. If your threat model includes trolls and other bad actors who might reshare your posts in other contexts, this is important to consider.
Direct messages are not included in this heap of public information. However, they are not end-to-end encrypted, and they are hosted only on Bluesky’s servers. As was the case on X, that means any DM is visible to Bluesky PBLLC. DMs may be accessed for moderation or under valid police warrants, and could even one day be exposed in a data breach. Encrypted DMs are planned, but we advise moving sensitive conversations to dedicated, fully encrypted messaging apps.
How do I find people to follow?

Tools like Skybridge are being built to make it easier for people to import their Twitter contacts into Bluesky. Similar to advice we gave for joining Mastodon, keep in mind these tools may need extensive account access, and may need to be re-run as more people switch networks.
Bluesky has also implemented “starter packs,” which are curated lists of users anyone can create and share to new users. EFF recently put together a few for you to check out:
- Electronic Frontier Foundation Staff
- Electronic Frontier Alliance members
- Digital Rights, News & Advocacy
Is Bluesky part of the fediverse?

“Fediverse” refers to a wide variety of sites and services that generally communicate with each other over the ActivityPub protocol, including Threads, Mastodon, and a number of other projects. Bluesky uses the AT Protocol, which is not currently compatible with ActivityPub, so it is not part of “the fediverse.”
However, Bluesky is already being integrated into the vision of an interoperable and decentralized social web. You can follow Bluesky accounts from the fediverse over RSS. A number of mobile apps will also seamlessly merge Bluesky and fediverse feeds and let you post to both accounts. Even with just one Bluesky or fediverse account, users can also share posts and DMs to both networks using a project called Bridgy Fed.
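For example, following a Bluesky account over RSS requires nothing more than the profile's feed URL, which bsky.app serves for every public account as of this writing (treat the URL pattern as an assumption); the helper below simply constructs it for pasting into any feed reader.

```python
def bluesky_rss_url(handle: str) -> str:
    """RSS feed URL for a public Bluesky profile, usable from any feed reader."""
    return f"https://bsky.app/profile/{handle}/rss"

print(bluesky_rss_url("eff.org"))  # https://bsky.app/profile/eff.org/rss
```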
In recent weeks this bridging also opened up to the hundreds of millions of Threads users. It just requires an additional step of enabling fediverse sharing, before connecting to the fediverse Bridgy Fed account. We’re optimistic that all of these projects will continue to improve integrations even more in the future.
Is the Bluesky network decentralized?

The current Bluesky network is not decentralized.
It is nearly all made and hosted by one company, Bluesky PBLLC, which is working on creating the “credible exit” from their control as a platform host. If Bluesky the company and the infrastructure it operates disappeared tonight, however, the entire Bluesky network would effectively vanish along with it.
Of the 25 million users, only 10,000 are hosted by non-Bluesky services, most of them through fediverse connections. Changing to another host is also currently a one-way exit. All DMs rely on Bluesky-owned servers, as does the current system for managing user identities, as well as the resource-intensive “Relay” server aggregating content from across the network. The same company also handles the bulk of moderation and develops the main apps used by most users. Compared to networks like the fediverse or even email, hosting your own Bluesky node currently requires a considerable investment.
Once this is no longer the case, a “credible exit” is also not quite the same as “decentralized.” An escape hatch for particularly dire circumstances is good, but it falls short of the distributed power and decision making of decentralized networks. This distinction will become more pressing as the reliance on Bluesky PBLLC is tested, and the company opens up to more third parties for each component of the network.
How does Bluesky make money?

The past few decades have shown the same ‘enshittification’ cycle too many times. A new startup promises something exciting, users join, and then the platform turns on users to maximize profits—often through surveillance and restricting user autonomy.
Will Bluesky be any different? From the team’s outlined plan we can glean that Bluesky promises not to use surveillance-based advertising, nor lock-in users. Bluesky CEO Jay Graber also promised to not sell user content to AI training licenses and intends to always keep the service free to join. Paid services like custom domain hosting or paid subscriptions seem likely.
So far, though, the company relies on investment funding. It was initially incubated by Twitter co-founder Jack Dorsey—who has since distanced himself from the project—and more recently received funding rounds of $8 million and $15 million.
That latter investment round raised concerns among the existing userbase that Bluesky would pivot to some form of cryptocurrency service, as it was led by Blockchain Capital, a cryptocurrency-focused venture capital firm that also had a partner join the Bluesky board. Jay Graber committed to “not hyperfinancialize the social experience” with blockchain projects, and emphasized that Bluesky does not use blockchain.
As noted above, Bluesky has prioritized maintaining a “credible exit” for users, a commitment to interoperability that should keep the company accountable to the community and hopefully prevent the kind of “enshittification” that drove people away from X. Holding the company to all of these promises will be key to seeing the Bluesky network and the AT protocol reach that point of maturity.
How does moderation work?

Our comparison of Mastodon, Threads, and Bluesky gets into more detail, but as it stands Bluesky’s moderation is similar to Twitter’s before Musk. The Bluesky company uses the open moderation tools to label posts and users, and will remove users from its hosted services for breaking its terms of service. This tooling keeps the company’s moderation tied to its “credible exit” goals, giving it the same leverage any future operator might have. It also means Bluesky’s centralized moderation of today can’t scale, and even with a good-faith effort it will run into issues.
Bluesky accounts for this by opening its moderation tools to the community. Advanced options are available under settings in the web app, and anyone can label content and users on the site. These labels let users filter, prioritize, or block content. However, only Bluesky has the power to “deplatform” poorly behaved users by removing them, either by no longer hosting their account, no longer relaying their content to other users, or both.
Bluesky aspires to censorship resistance, and part of creating a “credible exit” means reducing the company’s ability to remove users entirely. In a future with a variety of hosts and relays on the Bluesky network, removing a user looks more like removing a website from the internet—not impossible, but very difficult. Instead, users will need to settle for filtering out or blocking speech they object to, and take some comfort that voices they align with will not be removed from the network.
The permeability of Bluesky also means community tooling will need to address network abuses, like last May when a pro-Trump botnet on Nostr bridged to Bluesky via Mastodon to flood timelines. It’s possible that like in the Fediverse, Bluesky may eventually form a network of trusted account hosts and relays to mitigate these concerns.
Bluesky is still a work in progress, but its focus on decentralization, user control, and interoperability makes it an exciting space to watch. Whether you’re testing the waters or planning a full migration, these insights should help you navigate the platform.
Australia Banning Kids from Social Media Does More Harm Than Good
Age verification systems are surveillance systems that threaten everyone’s privacy and anonymity. But Australia’s government recently decided to ignore these dangers, passing a vague, sweeping piece of age verification legislation after giving only a day for comments. The Online Safety Amendment (Social Media Minimum Age) Act 2024, which bans children under the age of 16 from using social media, will force platforms to take undefined “reasonable steps” to verify users’ ages and prevent young people from using them, or face over $30 million in fines.
The country’s Prime Minister, Anthony Albanese, claims that the legislation is needed to protect young people in the country from the supposed harmful effects of social media, despite no study showing such an impact. This legislation will be a net loss for both young people and adults who rely on the internet to find community and themselves.
The law does not specify which social media platforms will be banned. Instead, this decision is left to Australia’s communications minister who will work alongside the country’s internet regulator, the eSafety Commissioner, to enforce the rules. This gives government officials dangerous power to target services they do not like, all at a cost to both minor and adult internet users.
The legislation also does not specify what type of age verification technology will be necessary to implement the restrictions but prohibits using only government IDs for this purpose. This is a flawed attempt to protect privacy.
Since platforms will have to provide ways other than government ID to verify their users' ages, they will likely rely on unreliable tools like biometric scanners. The Australian government awarded the contract for testing age verification technology to a UK-based company, Age Check Certification Scheme (ACCS), which, according to the company website, “can test all kinds of age verification systems,” including “biometrics, database lookups, and artificial intelligence-based solutions.”
The ban will not take effect for at least another 12 months while these points are decided upon, but we are already concerned that the systems required to comply with this law will burden all Australians’ privacy, anonymity, and data security.
Banning social media and introducing mandatory age verification checks is the wrong approach to protecting young people online, and this bill was hastily pushed through the Parliament of Australia with little oversight or scrutiny. We urge politicians in other countries—like the U.S. and France—to explore less invasive approaches to protecting all people from online harms and focus on comprehensive privacy protections, rather than mandatory age verification.
EFF Statement on U.S. Supreme Court's Decision to Consider TikTok Ban
The TikTok ban itself and the DC Circuit's approval of it should be of great concern even to those who find TikTok undesirable or scary. Shutting down communications platforms or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the U.S. has previously condemned globally.
The U.S. government should not be able to restrict speech—in this case by cutting off a tool used by 170 million Americans to receive information and communicate with the world—without proving with evidence that the tools are presently seriously harmful. But in this case, Congress has required and the DC Circuit approved TikTok’s forced divestiture based only upon fears of future potential harm. This greatly lowers well-established standards for restricting freedom of speech in the U.S.
So we are pleased that the Supreme Court will take the case and will urge the justices to apply the appropriately demanding First Amendment scrutiny.
Speaking Freely: Winnie Kabintie
Winnie Kabintie is a journalist and Communications Specialist based in Nairobi, Kenya. As an award-winning youth media advocate, she is passionate about empowering young people with Media and Information Literacy skills, enabling them to critically engage with and shape the evolving digital media landscape in meaningful ways.
Greene: To get us started, can you tell us what the term free expression means to you?
I think it's the opportunity to speak in a language that you understand and speak about subjects of concern to you and to anybody who is affected or influenced by the subject of conversation. To me, it is the ability to communicate openly and share ideas or information without interference, control, or restrictions.
As a journalist, it means having the freedom to report on matters affecting society and my work without censorship or limitations on where that information can be shared. Beyond individual expression, it is also about empowering communities to voice their concerns and highlight issues that impact their lives. Additionally, access to information is a vital component of freedom of expression, as it ensures people can make informed decisions and engage meaningfully in societal discourse because knowledge is power.
Greene: You mention the freedom to speak and to receive information in your language. How do you see that currently? Are language differences a big obstacle that you see currently?
If I just look at my society—I like to contextualize things—we have Swahili, which is a national language, and we have English as the secondary official language. But when it comes to policies, when it comes to public engagement, we only see this happening in documents that are only written in English. This means when it comes to the public barazas (community gatherings) interpretation is led by a few individuals, which creates room for disinformation and misinformation. I believe the language barrier is an obstacle to freedom of speech. We've also seen it from the civil society dynamics, where you're going to engage the community but you don't speak the same language as them, then it becomes very difficult for you to engage them on the subject at hand. And if you have to use a translator, sometimes what happens is you're probably using a translator for whom their only advantage, or rather the only advantage they bring to the table, is the fact that they understand different languages. But they're not experts in the topic that you're discussing.
Greene: Why do you think the government only produces materials in English? Do you think part of that is because they want to limit who is able to understand them? Or is it just, are they lazy or they just disregard the other languages?
In all fairness, I think it comes from the systematic approach to how things run. This has been the way of doing things, and it's easier to do it because translating some words from, for example, English to Swahili is very hard. And you see, as much as we speak Swahili in Kenya—and it's our national language—the kind of Swahili we speak is also very diluted or corrupted with English and Sheng, which I like to call “ki-shenglish.” I know there were attempts to translate the new Kenyan Constitution, and they did translate some bits of the summarized copy, but even then it wasn’t the full Constitution. We don't even know how to say certain words in Swahili from English, which makes it difficult to translate many things. So I think it's just an innocent omission.
Greene: What makes you passionate about freedom of expression?
As a journalist and youth media advocate, my passion for freedom of expression stems from its fundamental role in empowering individuals and communities to share their stories, voice their concerns, and drive meaningful change. Freedom of expression is not just about the right to speak—it’s about the ability to question, to challenge injustices, and to contribute to shaping a better society.
For me, freedom of expression is deeply personal as I like to question, interrogate and I am not just content with the status quo. As a journalist, I rely on this freedom to shed light on critical issues affecting society, to amplify marginalized voices, and to hold power to account. As a youth advocate, I’ve witnessed how freedom of expression enables young people to challenge stereotypes, demand accountability, and actively participate in shaping their future. We saw this during the recent Gen Z revolution in Kenya when youth took to the streets to reject the proposed Finance Bill.
Freedom of speech is also about access. It matters to me that people not only have the ability to speak freely, but also have the platforms to articulate their issues. You can have all the voice you need, but if you do not have the platforms, then it becomes nothing. So it's also recognizing that we need to create the right platforms to advance freedom of speech. These, in our case, include platforms like radio and social media platforms.
So we need to ensure that we have connectivity to these platforms. For example, in the rural areas of our countries, there are some areas that are not even connected to the internet. They don't have the infrastructure including electricity. It then becomes difficult for those people to engage in digital media platforms where everybody is now engaging. I remember recently during the Reject Finance Bill process in Kenya, the political elite realized that they could leverage social media and meet with and engage the youth. I remember the President was summoned to an X-space and he showed up and there was dialogue with hundreds of young people. But what this meant was that the youth in rural Kenya who didn’t have access to the internet or X were left out of that national, historic conversation. That's why I say it's not just as simple as saying you are guaranteed freedom of expression by the Constitution. It's also how governments are ensuring that we have the channels to advance this right.
Greene: Have you had a personal experience or any personal experiences that shaped how you feel about freedom of expression? Maybe a situation where you felt like it was being denied to you or someone close to you was in that situation?
At a personal level I believe that I am a product of speaking out, and I try to use my voice to make an impact! There is also one particular incident that stands out from my early career as a journalist. In 2014 I amplified a story from a video shared on Facebook by writing a news article that was published on The Kenya Forum, which at the time was one of only two fully digital publications in the country covering news and feature articles.
The story, a case of gender-based assault, gained traction, drawing attention to the unfortunate incident in which a woman was stripped naked allegedly for being “dressed indecently.” The public uproar sparked the famous #MyDressMyChoice protest in Kenya, where women took to the streets countrywide to protest against sexual violence.
Greene: Wow. Do you have any other specific stories that you can tell about the time when you spoke up and you felt that it made a difference? Or maybe you spoke up, and there was some resistance to you speaking up?
I've had many moments where I've spoken up and it's made a difference including the incident I shared in the previous question. But, on the other hand, I also had a moment where I did not speak out years ago, when a classmate in primary school was accused of theft.
There was this girl once in class, she was caught with books that didn't belong to her and she was accused of stealing them. One of the books she had was my deskmate’s and I was there when she had borrowed it. So she was defending herself and told the teacher, “Winnie was there when I borrowed the book.” When the teacher asked me if this was true I just said, “I don't know.” That feedback was her last line of defense and the girl got expelled from school. So I’ve always wondered, if I'd said yes, would the teacher have been more lenient and realized that she had probably just borrowed the rest of the books as well? I was only eight years old at the time, but because of that, and how bad the outcome made me feel, I vowed to myself to always stand for the truth even when it’s unpopular with everyone else in the room. I would never look the other way in the face of an injustice or in the face of an issue that I can help resolve. I will never walk away in silence.
Greene: Have you kept to that since then?
Absolutely.
Greene: Okay, I want to switch tracks a little bit. Do you feel there are situations where it's appropriate for government to limit someone's speech?
Yes, absolutely. In today’s era of disinformation and hate speech, it’s crucial to have legal frameworks that safeguard society. We live in a society where people, especially politicians, often make inflammatory statements to gain political mileage, and such remarks can lead to serious consequences, including civil unrest.
Kenya’s experience during the 2007-2008 elections is a powerful reminder of how harmful speech can escalate tensions and pit communities against each other. That period taught us the importance of being mindful of what leaders say, as their words have the power to unite or divide.
I firmly believe that governments must strike a balance between protecting freedom of speech and preventing harm. While everyone has the right to express themselves, that right ends where it begins to infringe on the rights and safety of others. It’s about ensuring that freedom of speech is exercised responsibly to maintain peace and harmony in society.
Greene: So what do we have to be careful about with giving the government the power to regulate speech? You mentioned hate speech can be hard to define. What's the risk of letting the government define that?
The risk is that the government may overstep its boundaries, as often happens. Another concern is the lack of consistent and standardized enforcement. For instance, someone with influence or connections within the government might escape accountability for their actions, while an activist doing the same thing could face arrest. This disparity in treatment highlights the risks of uneven application of the law and potential misuse of power.
Greene: Earlier you mentioned special concern for access to information. You mentioned children and you mentioned women. Both of those are groups of people where, at least in some places, someone else—not the government, but some other person—might control their access, right? I wonder if you could talk a little bit more about why it's so important to ensure access to information for those particular groups.
I believe home is the foundational space where access to information and freedom of expression are nurtured. Families play a crucial role in cultivating these values, and it’s important for parents to be intentional about fostering an environment where open communication and access to information are encouraged. Parents have a responsibility to create opportunities for discussion within their households and beyond.
Outside the family, communities provide broader platforms for engagement. In Kenya, for example, public forums known as barazas serve as spaces where community members gather to discuss pressing issues, such as insecurity and public utilities, and to make decisions that impact the neighborhood. Ensuring that your household is represented in these forums is essential to staying informed and being part of decisions that directly affect you.
It’s equally important to help people understand the power of self-expression and active participation in decision-making spaces. By showing up and speaking out, individuals can contribute to meaningful change. Additionally, exposure to information and critical discussions is vital in today’s world, where misinformation and disinformation are prevalent. Families can address these challenges by having conversations at the dinner table, asking questions like, “Have you heard about this? What’s your understanding of misinformation? How can you avoid being misled online?”
By encouraging open dialogue and critical thinking in everyday interactions, we empower one another to navigate information responsibly and contribute to a more informed and engaged society.
Greene: Now, a question we ask everyone, who is your free speech hero?
I have two. One is a Human Rights lawyer and a former member of Parliament Gitobu Imanyara. He is one of the few people in Kenya who fought by blood and sweat, literally, for the freedom of speech and that of the press in Kenya. He will always be my hero when we talk about press freedom. We are one of the few countries in Africa that enjoys extreme freedoms around speech and press freedom and it’s thanks to people like him.
The other is an activist named Boniface Mwangi. He’s a person who never shies away from speaking up. It doesn’t matter who you are or how dangerous it gets, Boni, as he is popularly known, will always be that person who calls out the government when things are going wrong. If you’re driving on the wrong side of the road just because you’re a powerful person in government, he’ll be the person who will not move his car, and he’ll tell you to get back in your lane. I like that. I believe when we speak up we make things happen.
Greene: Anything else you want to add?
I believe it’s time we truly recognize and understand the importance of freedom of expression and speech. Too often, these rights are mentioned casually or taken at face value, without deeper reflection. We need to start interrogating what free speech really means, the tools that enable it, and the ways in which this right can be infringed upon.
As someone passionate about community empowerment, I believe the key lies in educating people about these rights—what it looks like when they are fully exercised and what it means when they are violated and especially in today’s digital age. Only by raising awareness can we empower individuals to embrace these freedoms and advocate for better policies that protect and regulate them effectively. This understanding is essential for fostering informed, engaged communities that can demand accountability and meaningful change.
Surface-based sonar system could rapidly map the ocean floor at high resolution
On June 18, 2023, the Titan submersible was about an hour-and-a-half into its two-hour descent to the Titanic wreckage at the bottom of the Atlantic Ocean when it lost contact with its support ship. This loss of communication set off a frantic search for the tourist submersible and five passengers onboard, located about two miles below the ocean's surface.
Deep-ocean search and recovery is one of the many missions of military services like the U.S. Coast Guard Office of Search and Rescue and the U.S. Navy Supervisor of Salvage and Diving. For this mission, the longest delays come from transporting search-and-rescue equipment via ship to the area of interest and comprehensively surveying that area. A search operation on the scale of that for Titan — which was conducted 420 nautical miles from the nearest port and covered 13,000 square kilometers, an area roughly twice the size of Connecticut — could take weeks to complete. The search area for Titan is considered relatively small, focused on the immediate vicinity of the Titanic. When the area is less known, operations could take months. (A remotely operated underwater vehicle deployed by a Canadian vessel ended up finding the debris field of Titan on the seafloor, four days after the submersible had gone missing.)
A research team from MIT Lincoln Laboratory and the MIT Department of Mechanical Engineering's Ocean Science and Engineering lab is developing a surface-based sonar system that could accelerate the timeline for small- and large-scale search operations to days. Called the Autonomous Sparse-Aperture Multibeam Echo Sounder, the system scans at surface-ship rates while providing sufficient resolution to find objects and features in the deep ocean, without the time and expense of deploying underwater vehicles. The echo sounder — which features a large sonar array using a small set of autonomous surface vehicles (ASVs) that can be deployed via aircraft into the ocean — holds the potential to map the seabed at 50 times the coverage rate of an underwater vehicle and 100 times the resolution of a surface vessel.
"Our array provides the best of both worlds: the high resolution of underwater vehicles and the high coverage rate of surface ships," says co–principal investigator Andrew March, assistant leader of the laboratory's Advanced Undersea Systems and Technology Group. "Though large surface-based sonar systems at low frequency have the potential to determine the materials and profiles of the seabed, they typically do so at the expense of resolution, particularly with increasing ocean depth. Our array can likely determine this information, too, but at significantly enhanced resolution in the deep ocean."
Underwater unknown
Oceans cover 71 percent of Earth's surface, yet more than 80 percent of this underwater realm remains undiscovered and unexplored. Humans know more about the surface of other planets and the moon than the bottom of our oceans. High-resolution seabed maps would not only be useful to find missing objects like ships or aircraft, but also to support a host of other scientific applications: understanding Earth's geology, improving forecasting of ocean currents and corresponding weather and climate impacts, uncovering archaeological sites, monitoring marine ecosystems and habitats, and identifying locations containing natural resources such as mineral and oil deposits.
Scientists and governments worldwide recognize the importance of creating a high-resolution global map of the seafloor; the problem is that no existing technology can achieve meter-scale resolution from the ocean surface. The average depth of our oceans is approximately 3,700 meters. However, today's technologies capable of finding human-made objects on the seabed or identifying person-sized natural features — these technologies include sonar, lidar, cameras, and gravitational field mapping — have a maximum range of less than 1,000 meters through water.
Ships with large sonar arrays mounted on their hull map the deep ocean by emitting low-frequency sound waves that bounce off the seafloor and return as echoes to the surface. Operation at low frequencies is necessary because water readily absorbs high-frequency sound waves, especially with increasing depth; however, such operation yields low-resolution images, with each image pixel representing a football field in size. Resolution is also restricted because sonar arrays installed on large mapping ships are already using all of the available hull space, thereby capping the sonar beam's aperture size. By contrast, sonars on autonomous underwater vehicles (AUVs) that operate at higher frequencies within a few hundred meters of the seafloor generate maps with each pixel representing one square meter or less, resulting in 10,000 times more pixels in that same football field–sized area. However, this higher resolution comes with trade-offs: AUVs are time-consuming and expensive to deploy in the deep ocean, limiting the amount of seafloor that can be mapped; they have a maximum range of about 1,000 meters before their high-frequency sound gets absorbed; and they move at slow speeds to conserve power. The area-coverage rate of AUVs performing high-resolution mapping is about 8 square kilometers per hour; surface vessels map the deep ocean at more than 50 times that rate.
A solution surfaces
The Autonomous Sparse-Aperture Multibeam Echo Sounder could offer a cost-effective approach to high-resolution, rapid mapping of the deep seafloor from the ocean's surface. A collaborative fleet of about 20 ASVs, each hosting a small sonar array, effectively forms a single sonar array 100 times the size of a large sonar array installed on a ship. The large aperture achieved by the array (hundreds of meters) produces a narrow beam, which enables sound to be precisely steered to generate high-resolution maps at low frequency. Because very few sonars are installed relative to the array's overall size (i.e., a sparse aperture), the cost is tractable.
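The link between aperture size and resolution follows from diffraction: a sonar's angular resolution scales roughly as wavelength divided by aperture, and the seafloor footprint of one beam is that angle times the depth. A minimal sketch, with assumed frequencies and aperture sizes that are illustrative rather than quoted from the project:

```python
# Diffraction-limited footprint of a sonar beam on the seafloor.
# All numeric values below are assumptions for illustration only.

SOUND_SPEED = 1500.0  # m/s, nominal speed of sound in seawater

def seafloor_pixel(freq_hz, aperture_m, depth_m):
    """Approximate beam footprint (m) on the seafloor: depth * (wavelength / aperture)."""
    wavelength = SOUND_SPEED / freq_hz
    return depth_m * (wavelength / aperture_m)

DEPTH = 3_700  # m, roughly the average ocean depth

# A ~10 m hull-mounted array at a low frequency of ~12 kHz (assumed values)
print(f"hull array:   {seafloor_pixel(12_000, 10, DEPTH):.0f} m pixels")

# The same low frequency spread across a ~500 m sparse aperture of ASVs
print(f"sparse array: {seafloor_pixel(12_000, 500, DEPTH):.1f} m pixels")
```

The same low frequency that penetrates to full ocean depth yields tens-of-meters pixels from a hull-sized array, but meter-scale pixels once the aperture grows to hundreds of meters, which is the core idea behind the sparse ASV fleet.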
However, this collaborative and sparse setup introduces some operational challenges. First, for coherent 3D imaging, the relative position of each ASV's sonar subarray must be accurately tracked through dynamic ocean-induced motions. Second, because sonar elements are not placed directly next to each other without any gaps, the array suffers from a lower signal-to-noise ratio and is less able to reject noise coming from unintended or undesired directions. To mitigate these challenges, the team has been developing a low-cost precision-relative navigation system and leveraging acoustic signal processing tools and new ocean-field estimation algorithms. The MIT campus collaborators are developing algorithms for data processing and image formation, especially to estimate depth-integrated water-column parameters. These enabling technologies will help account for complex ocean physics, spanning physical properties like temperature, dynamic processes like currents and waves, and acoustic propagation factors like sound speed.
Processing for all required control and calculations could be completed either remotely or onboard the ASVs. For example, ASVs deployed from a ship or flying boat could be controlled and guided remotely from land via a satellite link or from a nearby support ship (with direct communications or a satellite link), and left to map the seabed for weeks or months at a time until maintenance is needed. Sonar-return health checks and coarse seabed mapping would be conducted on board, while full, high-resolution reconstruction of the seabed would require a supercomputing infrastructure on land or on a support ship.
"Deploying vehicles in an area and letting them map for extended periods of time without the need for a ship to return home to replenish supplies and rotate crews would significantly simplify logistics and operating costs," says co–principal investigator Paul Ryu, a researcher in the Advanced Undersea Systems and Technology Group.
Since beginning their research in 2018, the team has turned their concept into a prototype. Initially, the scientists built a scale model of a sparse-aperture sonar array and tested it in a water tank at the laboratory's Autonomous Systems Development Facility. Then, they prototyped an ASV-sized sonar subarray and demonstrated its functionality in Gloucester, Massachusetts. In follow-on sea tests in Boston Harbor, they deployed an 8-meter array containing multiple subarrays equivalent to 25 ASVs locked together; with this array, they generated 3D reconstructions of the seafloor and a shipwreck. Most recently, the team fabricated, in collaboration with Woods Hole Oceanographic Institution, a first-generation, 12-foot-long, all-electric ASV prototype carrying a sonar array underneath. With this prototype, they conducted preliminary relative navigation testing in Woods Hole, Massachusetts and Newport, Rhode Island. Their full deep-ocean concept calls for approximately 20 such ASVs of a similar size, likely powered by wave or solar energy.
This work was funded through Lincoln Laboratory's internally administered R&D portfolio on autonomous systems. The team is now seeking external sponsorship to continue development of their ocean floor–mapping technology, which was recognized with a 2024 R&D 100 Award.
“Can the Government Read My Text Messages?”
You should be able to message your family and friends without fear that law enforcement is reading everything you send. Privacy is a human right, and that’s why we break down the ways you can protect your ability to have a private conversation.
Learn how governments are able to read certain text messages, and how to ensure your messages are end-to-end encrypted on Digital Rights Bytes, our new site dedicated to helping break down tech issues into byte-sized pieces.
Whether you’re just starting to think about your privacy online, or you’re already a regular user of encrypted messaging apps, Digital Rights Bytes is here to help answer some of the common questions that may be bothering you about the devices you use. Watch the short video that explains how to keep your communications private online, and share it with family and friends who may have asked similar questions!
Have you also wondered why it is so expensive to fix your phone, or if you really own the digital media you paid for? We’ve got answers to those and other questions as well! And, if you’ve got additional questions you’d like us to answer in the future, let us know on your social platform of choice using the hashtag #DigitalRightsBytes.
New Advances in the Understanding of Prime Numbers
Really interesting research into the structure of prime numbers. Not immediately related to the cryptanalysis of prime-number-based public-key algorithms, but every little bit matters.
New autism research projects represent a broad range of approaches to achieving a shared goal
From studies of the connections between neurons to interactions between the nervous and immune systems to the complex ways in which people understand not just language, but also the unspoken nuances of conversation, new research projects at MIT supported by the Simons Center for the Social Brain are bringing a rich diversity of perspectives to advancing the field’s understanding of autism.
As six speakers lined up to describe their projects at a Simons Center symposium Nov. 15, MIT School of Science dean Nergis Mavalvala articulated what they were all striving for: “Ultimately, we want to seek understanding — not just the type that tells us how physiological differences in the inner workings of the brain produce differences in behavior and cognition, but also the kind of understanding that improves inclusion and quality of life for people living with autism spectrum disorders.”
Simons Center director Mriganka Sur, Newton Professor of Neuroscience in The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences (BCS), said that even though the field still lacks mechanism-based treatments or reliable biomarkers for autism spectrum disorders, he is optimistic about the discoveries and new research MIT has been able to contribute. MIT research has led to five clinical trials so far, and he praised the potential for future discovery, for instance in the projects showcased at the symposium.
“We are, I believe, at a frontier — at a moment where a lot of basic science is coming together with the vision that we could use that science for the betterment of people,” Sur said.
The Simons Center funds that basic science research in two main ways that each encourage collaboration, Sur said: large-scale projects led by faculty members across several labs, and fellowships for postdocs who are mentored by two faculty members, thereby bringing together two labs. The symposium featured talks and panel discussions by faculty and fellows leading new research.
In her remarks, Associate Professor Gloria Choi of The Picower Institute and BCS department described her collaboration’s efforts to explore the possibility of developing an autism therapy using the immune system. Previous research in mice by Choi and collaborator Jun Huh of Harvard Medical School has shown that injection of the immune system signaling molecule IL-17a into a particular region of the brain’s cortex can reduce neural hyperactivity and resulting differences in social and repetitive behaviors seen in autism model mice compared to non-autism models. Now Choi’s team is working on various ways to induce the immune system to target the cytokine to the brain by less invasive means than direct injection. One way under investigation, for example, is increasing the population of immune cells that produce IL-17a in the meningeal membranes that surround the brain.

In a different vein, Associate Professor Ev Fedorenko of The McGovern Institute for Brain Research and BCS is leading a seven-lab collaboration aimed at understanding the cognitive and neural infrastructure that enables people to engage in conversation, which involves not only the language spoken but also facial expressions, tone of voice, and social context. Critical to this effort, she said, is going beyond previous work that studied each related brain area in isolation to understand the capability as a unified whole. A key insight, she said, is that these areas are all located near each other in the lateral temporal cortex.
“Going beyond these individual components we can start asking big questions like, what are the broad organizing principles of this part of the brain?,” Fedorenko said. “Why does it have this particular arrangement of areas, and how do these work together to exchange information to create the unified percept of another individual we’re interacting with?”
While Choi and Fedorenko are looking at factors that account for differences in social behavior in autism, Picower Professor Earl K. Miller of The Picower Institute and BCS is leading a project that focuses on another phenomenon: the feeling of sensory overload that many autistic people experience. Research in Miller’s lab has shown that the brain’s ability to make predictions about sensory stimuli, which is critical to filtering out mundane signals so attention can be focused on new ones, depends on a cortex-wide coordination of the activity of millions of neurons implemented by high frequency “gamma” brain waves and lower-frequency “beta” waves. Working with animal models and human volunteers at Boston Children’s Hospital (BCH), Miller said his team is testing the idea that there may be a key difference in these brain wave dynamics in the autistic brain that could be addressed with closed-loop brain wave stimulation technology.
Simons postdoc Lukas Vogelsang, who is based in BCS Professor Pawan Sinha’s lab, is looking at potential differences in prediction between autistic and non-autistic individuals in a different way: through experiments with volunteers that aim to tease out how these differences are manifest in behavior. For instance, he’s finding that in at least one prediction task that requires participants to discern the probability of an event from provided cues, autistic people exhibit lower performance levels and undervalue the predictive significance of the cues, while non-autistic people slightly overvalue it. Vogelsang is co-advised by BCH researcher and Harvard Medical School Professor Charles Nelson.
Fundamentally, the broad-scale behaviors that emerge from coordinated brain-wide neural activity begin with the molecular details of how neurons connect with each other at circuit junctions called synapses. In her research based in The Picower Institute lab of Menicon Professor Troy Littleton, Simons postdoc Chhavi Sood is using the genetically manipulable model of the fruit fly to investigate how mutations in the autism-associated protein FMRP may alter the expression of molecular gates regulating ion exchange at the synapse, which would in turn affect how frequently and strongly a pre-synaptic neuron excites a post-synaptic one. The differences she is investigating may be a molecular mechanism underlying neural hyperexcitability in fragile X syndrome, a profound autism spectrum disorder.
In her talk, Simons postdoc Lace Riggs, based in The McGovern Institute lab of Poitras Professor of Neuroscience Guoping Feng, emphasized how many autism-associated mutations in synaptic proteins promote pathological anxiety. She described her research that is aimed at discerning where in the brain’s neural circuitry that vulnerability might lie. In her ongoing work, Riggs is zeroing in on a novel thalamocortical circuit between the anteromedial nucleus of the thalamus and the cingulate cortex, which she found drives anxiogenic states. Riggs is co-supervised by Professor Fan Wang.
After the wide-ranging talks, supplemented by further discussion at the panels, the last word came via video conference from Kelsey Martin, executive vice president of the Simons Foundation Autism Research Initiative. Martin emphasized that fundamental research, like that done at the Simons Center, is the key to developing future therapies and other means of supporting members of the autism community.
“We believe so strongly that understanding the basic mechanisms of autism is critical to being able to develop translational and clinical approaches that are going to impact the lives of autistic individuals and their families,” she said.
From studies of synapses to circuits to behavior, MIT researchers and their collaborators are striving for exactly that impact.
MIT engineers grow “high-rise” 3D chips
The electronics industry is approaching a limit to the number of transistors that can be packed onto the surface of a computer chip. So, chip manufacturers are looking to build up rather than out.
Instead of squeezing ever-smaller transistors onto a single surface, the industry is aiming to stack multiple surfaces of transistors and semiconducting elements — akin to turning a ranch house into a high-rise. Such multilayered chips could handle exponentially more data and carry out many more complex functions than today’s electronics.
A significant hurdle, however, is the platform on which chips are built. Today, bulky silicon wafers serve as the main scaffold on which high-quality, single-crystalline semiconducting elements are grown. Any stackable chip would have to include thick silicon “flooring” as part of each layer, slowing down any communication between functional semiconducting layers.
Now, MIT engineers have found a way around this hurdle, with a multilayered chip design that doesn’t require any silicon wafer substrates and works at temperatures low enough to preserve the underlying layer’s circuitry.
In a study appearing today in the journal Nature, the team reports using the new method to fabricate a multilayered chip with alternating layers of high-quality semiconducting material grown directly on top of each other.
The method enables engineers to build high-performance transistors and memory and logic elements on any random crystalline surface — not just on the bulky crystal scaffold of silicon wafers. Without these thick silicon substrates, multiple semiconducting layers can be in more direct contact, leading to better and faster communication and computation between layers, the researchers say.
The researchers envision that the method could be used to build AI hardware, in the form of stacked chips for laptops or wearable devices, that would be as fast and powerful as today’s supercomputers and could store huge amounts of data on par with physical data centers.
“This breakthrough opens up enormous potential for the semiconductor industry, allowing chips to be stacked without traditional limitations,” says study author Jeehwan Kim, associate professor of mechanical engineering at MIT. “This could lead to orders-of-magnitude improvements in computing power for applications in AI, logic, and memory.”
The study’s MIT co-authors include first author Ki Seok Kim, Seunghwan Seo, Doyoon Lee, Jung-El Ryu, Jekyung Kim, Jun Min Suh, June-chul Shin, Min-Kyu Song, Jin Feng, and Sangho Lee, along with collaborators from Samsung Advanced Institute of Technology, Sungkyunkwan University in South Korea, and the University of Texas at Dallas.
Seed pockets
In 2023, Kim’s group reported that they developed a method to grow high-quality semiconducting materials on amorphous surfaces, similar to the diverse topography of semiconducting circuitry on finished chips. The material that they grew was a type of 2D material known as transition-metal dichalcogenides, or TMDs, considered a promising successor to silicon for fabricating smaller, high-performance transistors. Such 2D materials can maintain their semiconducting properties even at scales as small as a single atom, whereas silicon’s performance sharply degrades.
In their previous work, the team grew TMDs on silicon wafers with amorphous coatings, as well as over existing TMDs. To encourage atoms to arrange themselves into high-quality single-crystalline form, rather than in random, polycrystalline disorder, Kim and his colleagues first covered a silicon wafer in a very thin film, or “mask” of silicon dioxide, which they patterned with tiny openings, or pockets. They then flowed a gas of atoms over the mask and found that atoms settled into the pockets as “seeds.” The pockets confined the seeds to grow in regular, single-crystalline patterns.
But at the time, the method only worked at around 900 degrees Celsius.
“You have to grow this single-crystalline material below 400 Celsius, otherwise the underlying circuitry is completely cooked and ruined,” Kim says. “So, our homework was, we had to do a similar technique at temperatures lower than 400 Celsius. If we could do that, the impact would be substantial.”
Building up
In their new work, Kim and his colleagues looked to fine-tune their method in order to grow single-crystalline 2D materials at temperatures low enough to preserve any underlying circuitry. They found a surprisingly simple solution in metallurgy — the science and craft of metal production. When metallurgists pour molten metal into a mold, the liquid slowly “nucleates,” or forms grains that grow and merge into a regularly patterned crystal that hardens into solid form. Metallurgists have found that this nucleation occurs most readily at the edges of a mold into which liquid metal is poured.
“It’s known that nucleating at the edges requires less energy — and heat,” Kim says. “So we borrowed this concept from metallurgy to utilize for future AI hardware.”
The team looked to grow single-crystalline TMDs on a silicon wafer that already has been fabricated with transistor circuitry. They first covered the circuitry with a mask of silicon dioxide, just as in their previous work. They then deposited “seeds” of TMD at the edges of each of the mask’s pockets and found that these edge seeds grew into single-crystalline material at temperatures as low as 380 degrees Celsius, compared to seeds that started growing in the center, away from the edges of each pocket, which required higher temperatures to form single-crystalline material.
Going a step further, the researchers used the new method to fabricate a multilayered chip with alternating layers of two different TMDs — molybdenum disulfide, a promising material candidate for fabricating n-type transistors; and tungsten diselenide, a material that has potential for being made into p-type transistors. Both p- and n-type transistors are the electronic building blocks for carrying out any logic operation. The team was able to grow both materials in single-crystalline form, directly on top of each other, without requiring any intermediate silicon wafers. Kim says the method will effectively double the density of a chip’s semiconducting elements, and particularly of complementary metal-oxide-semiconductor (CMOS) elements, a basic building block of modern logic circuitry.
“A product realized by our technique is not only a 3D logic chip but also 3D memory and their combinations,” Kim says. “With our growth-based monolithic 3D method, you could grow tens to hundreds of logic and memory layers, right on top of each other, and they would be able to communicate very well.”
“Conventional 3D chips have been fabricated with silicon wafers in-between, by drilling holes through the wafer — a process which limits the number of stacked layers, vertical alignment resolution, and yields,” first author Ki Seok Kim adds. “Our growth-based method addresses all of those issues at once.”
To commercialize their stackable chip design further, Kim has recently spun off a company, FS2 (Future Semiconductor 2D materials).
“We have so far shown the concept with small-scale device arrays,” he says. “The next step is scaling up to show professional AI chip operation.”
This research is supported, in part, by Samsung Advanced Institute of Technology and the U.S. Air Force Office of Scientific Research.
Physicists magnetize a material with light
MIT physicists have created a new and long-lasting magnetic state in a material, using only light.
In a study appearing today in Nature, the researchers report using a terahertz laser — a light source that oscillates more than a trillion times per second — to directly stimulate atoms in an antiferromagnetic material. The laser’s oscillations are tuned to the natural vibrations among the material’s atoms, in a way that shifts the balance of atomic spins toward a new magnetic state.
The results provide a new way to control and switch antiferromagnetic materials, which are of interest for their potential to advance information processing and memory chip technology.
In common magnets, known as ferromagnets, the spins of atoms point in the same direction, in a way that the whole can be easily influenced and pulled in the direction of any external magnetic field. In contrast, antiferromagnets are composed of atoms with alternating spins, each pointing in the opposite direction from its neighbor. This up, down, up, down order essentially cancels the spins out, giving antiferromagnets a net zero magnetization that is impervious to any magnetic pull.
If a memory chip could be made from antiferromagnetic material, data could be “written” into microscopic regions of the material, called domains. A certain configuration of spin orientations (for example, up-down) in a given domain would represent the classical bit “0,” and a different configuration (down-up) would mean “1.” Data written on such a chip would be robust against outside magnetic influence.
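As a toy illustration of that encoding, each domain's spin ordering can be read out as a classical bit; the representation below is invented purely for illustration and is not how a real device would be read:

```python
# Schematic bit encoding described above: a domain ordered "up-down"
# reads as 0, and "down-up" reads as 1. Names and data structures here
# are invented for illustration.

def read_domain(spins):
    """Map a domain's alternating spin pattern to a classical bit."""
    if spins[0] == "up":
        return 0   # up-down ordering -> "0"
    return 1       # down-up ordering -> "1"

domains = [("up", "down"), ("down", "up"), ("up", "down")]
bits = [read_domain(d) for d in domains]
print(bits)  # [0, 1, 0]
```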
For this and other reasons, scientists believe antiferromagnetic materials could be a more robust alternative to existing magnetic-based storage technologies. A major hurdle, however, has been in how to control antiferromagnets in a way that reliably switches the material from one magnetic state to another.
“Antiferromagnetic materials are robust and not influenced by unwanted stray magnetic fields,” says Nuh Gedik, the Donner Professor of Physics at MIT. “However, this robustness is a double-edged sword; their insensitivity to weak magnetic fields makes these materials difficult to control.”
Using carefully tuned terahertz light, the MIT team was able to controllably switch an antiferromagnet to a new magnetic state. Antiferromagnets could be incorporated into future memory chips that store and process more data while using less energy and taking up a fraction of the space of existing devices, owing to the stability of magnetic domains.
“Generally, such antiferromagnetic materials are not easy to control,” Gedik says. “Now we have some knobs to be able to tune and tweak them.”
Gedik is the senior author of the new study, which also includes MIT co-authors Batyr Ilyas, Tianchuang Luo, Alexander von Hoegen, Zhuquan Zhang, and Keith Nelson, along with collaborators at the Max Planck Institute for the Structure and Dynamics of Matter in Germany, University of the Basque Country in Spain, Seoul National University, and the Flatiron Institute in New York.
Off balance
Gedik’s group at MIT develops techniques to manipulate quantum materials in which interactions among atoms can give rise to exotic phenomena.
“In general, we excite materials with light to learn more about what holds them together fundamentally,” Gedik says. “For instance, why is this material an antiferromagnet, and is there a way to perturb microscopic interactions such that it turns into a ferromagnet?”
In their new study, the team worked with FePS3 — a material that transitions to an antiferromagnetic phase at a critical temperature of around 118 kelvins (-247 degrees Fahrenheit).
The team suspected they might control the material’s transition by tuning into its atomic vibrations.
“In any solid, you can picture it as different atoms that are periodically arranged, and between atoms are tiny springs,” von Hoegen explains. “If you were to pull one atom, it would vibrate at a characteristic frequency which typically occurs in the terahertz range.”
The way in which atoms vibrate also relates to how their spins interact with each other. The team reasoned that if they could stimulate the atoms with a terahertz source that oscillates at the same frequency as the atoms’ collective vibrations, called phonons, the effect could also nudge the atoms’ spins out of their perfectly balanced, magnetically alternating alignment. Once knocked out of balance, atoms should have larger spins in one direction than the other, creating a preferred orientation that would shift the inherently nonmagnetized material into a new magnetic state with finite magnetization.
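The resonance idea can be illustrated with the textbook driven, damped harmonic oscillator, whose steady-state response peaks when the drive frequency matches the natural frequency; the numbers below are arbitrary illustration, not FePS3 parameters:

```python
import math

# Why tune the terahertz pulse to the phonon frequency? A driven, damped
# oscillator responds most strongly on resonance. Parameters are arbitrary.

def steady_state_amplitude(drive_w, natural_w=1.0, damping=0.05, force=1.0):
    """Steady-state amplitude of a sinusoidally driven damped oscillator."""
    return force / math.sqrt((natural_w**2 - drive_w**2)**2
                             + (2 * damping * drive_w)**2)

off_resonance = steady_state_amplitude(0.5)  # drive at half the natural frequency
on_resonance = steady_state_amplitude(1.0)   # drive exactly on resonance
print(f"on-resonance response is about {on_resonance / off_resonance:.1f}x larger")
```

The same logic motivates the experiment: only a drive matched to the phonon frequency pumps enough energy into the lattice vibrations to perturb the coupled spins.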
“The idea is that you can kill two birds with one stone: You excite the atoms’ terahertz vibrations, which also couples to the spins,” Gedik says.
Shake and write
To test this idea, the team worked with a sample of FePS3 that was synthesized by colleagues at Seoul National University. They placed the sample in a vacuum chamber and cooled it down to temperatures at and below 118 K. They then generated a terahertz pulse by aiming a beam of near-infrared light through an organic crystal, which transformed the light into the terahertz frequencies. They then directed this terahertz light toward the sample.
“This terahertz pulse is what we use to create a change in the sample,” Luo says. “It’s like ‘writing’ a new state into the sample.”
To confirm that the pulse triggered a change in the material’s magnetism, the team also aimed two near-infrared lasers at the sample, each with an opposite circular polarization. If the terahertz pulse had no effect, the researchers should see no difference in the intensity of the transmitted infrared lasers.
“Just seeing a difference tells us the material is no longer the original antiferromagnet, and that we are inducing a new magnetic state, by essentially using terahertz light to shake the atoms,” Ilyas says.
Over repeated experiments, the team observed that a terahertz pulse successfully switched the previously antiferromagnetic material to a new magnetic state — a transition that persisted for a surprisingly long time, over several milliseconds, even after the laser was turned off.
“People have seen these light-induced phase transitions before in other systems, but typically they live for very short times on the order of a picosecond, which is a trillionth of a second,” Gedik says.
A few milliseconds gives scientists a decent window of time in which to probe the properties of the temporary new state before it settles back into its inherent antiferromagnetism. They might then be able to identify new knobs for tweaking antiferromagnets and optimizing their use in next-generation memory storage technologies.
This research was supported, in part, by the U.S. Department of Energy, Materials Science and Engineering Division, Office of Basic Energy Sciences, and the Gordon and Betty Moore Foundation.
How humans continuously adapt while walking stably
Researchers have developed a model that explains how humans adapt continuously during complex tasks, like walking, while remaining stable.
The findings were detailed in a recent paper published in the journal Nature Communications authored by Nidhi Seethapathi, an assistant professor in MIT’s Department of Brain and Cognitive Sciences; Barrett C. Clark, a robotics software engineer at Bright Minds Inc.; and Manoj Srinivasan, an associate professor in the Department of Mechanical and Aerospace Engineering at Ohio State University.
In episodic tasks, like reaching for an object, errors during one episode do not affect the next episode. In tasks like locomotion, errors can cascade into short-term and long-term consequences for stability unless they are controlled, which makes adapting locomotion to a new environment a more complex challenge.
"Much of our prior theoretical understanding of adaptation has been limited to episodic tasks, such as reaching for an object in a novel environment," Seethapathi says. "This new theoretical model captures adaptation phenomena in continuous long-horizon tasks in multiple locomotor settings."
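The episodic-versus-continuous distinction can be illustrated with a toy error model (this is not the paper's model; the carryover parameter and noise level are invented for illustration). In the episodic case each trial starts fresh, while in the continuous case a fraction of each step's error carries into the next step, so disturbances compound unless they are controlled:

```python
import random
import statistics

random.seed(0)

def episodic_errors(n_trials, noise=0.1):
    # Episodic task (e.g., reaching): each trial resets, so one trial's
    # error does not influence the next.
    return [random.gauss(0, noise) for _ in range(n_trials)]

def continuous_errors(n_steps, carryover=0.9, noise=0.1):
    # Continuous task (e.g., walking): a fraction of each step's error
    # persists into the next step, so errors cascade over time.
    errors, e = [], 0.0
    for _ in range(n_steps):
        e = carryover * e + random.gauss(0, noise)
        errors.append(e)
    return errors

# The same per-step noise produces much larger fluctuations when errors carry over.
print(statistics.pstdev(episodic_errors(1000)))
print(statistics.pstdev(continuous_errors(1000)))
```

With carryover near 1, the continuous error variance grows well beyond the per-step noise, which is the sense in which locomotor errors must be actively controlled rather than simply forgotten between episodes.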
To build the model, the researchers identified general principles of locomotor adaptation across a variety of task settings and developed a unified, modular, and hierarchical model of locomotor adaptation, with each component having its own mathematical structure.
The resulting model successfully encapsulates how humans adapt their walking in novel settings such as on a split-belt treadmill with each foot at a different speed, wearing asymmetric leg weights, and wearing an exoskeleton. The authors report that the model successfully reproduced human locomotor adaptation phenomena across novel settings in 10 prior studies and correctly predicted the adaptation behavior observed in two new experiments conducted as part of the study.
The model has potential applications in sensorimotor learning, rehabilitation, and wearable robotics.
"Having a model that can predict how a person will adapt to a new environment has immense utility for engineering better rehabilitation paradigms and wearable robot control," Seethapathi says. "You can think of a wearable robot itself as a new environment for the person to move in, and our model can be used to predict how a person will adapt for different robot settings. Understanding such human-robot adaptation is currently an experimentally intensive process, and our model could help speed up the process by narrowing the search space."