Feed aggregator
Musk distorts migrant program to attack FEMA
Startup takes its sweat sensors global in an era of extreme heat
Chile issues evacuation alerts as wildfires rage in the country’s south
Swiss voters reject emission curbs over economic concerns
India doesn’t plan to boost climate goals after COP29 finance outcome
Creating smart buildings with privacy-first sensors
Gaining a better understanding of how people move through the spaces where they live and work could make those spaces safer and more sustainable. But no one wants cameras watching them 24/7.
Two former Media Lab researchers think they have a solution. Their company, Butlr, offers places like skilled nursing facilities, offices, and senior living communities a way to understand how people are using buildings without compromising privacy. Butlr uses low-resolution thermal sensors and an analytics platform to help detect falls in elderly populations, save energy, and optimize spaces for work.
“We have this vision of using the right technology to understand people’s movements and behaviors in space,” says Jiani Zeng SM ’20, who co-founded Butlr with former Media Lab research affiliate Honghao Deng. “So many resources today go toward cameras and AI that take away people’s privacy. We believe we can make our environments safer, healthier, and more sustainable without violating privacy.”
To date, the company has sold more than 20,000 of its privacy-preserving sensors to senior living and skilled nursing facilities as well as businesses with large building footprints, including Verizon, Netflix, and Microsoft. In the future, Butlr hopes to enable more dynamic spaces that can understand and respond to the ways people use them.
“Space should be like a digital user interface: It should be multi-use and responsive to your needs,” Deng says. “If the office has a big room with people working individually, it should automatically separate into smaller rooms, or lights and temperature should be adjusted to save energy.”
Building intelligence, with privacy
As an undergraduate at Tianjin University in China, Deng joined the Media Lab’s City Science Group as a visiting student in 2016. He went on to complete his master’s at Harvard University, but he returned to the Media Lab as a research affiliate and led projects around what he calls responsive architecture: spaces that can understand their users’ needs through non-camera sensors.
“My vision of the future of building environments emerged from the Media Lab,” Deng says. “The real world is the largest user interface around us — it’s not the screens. We all live in a three-dimensional world and yet, unlike the digital world, this user interface doesn’t yet understand our needs, let alone the critical situations when someone falls in a room. That could be life-saving.”
Zeng came to MIT as a master’s student in the Integrated Design and Management program, which was run jointly out of the MIT Sloan School of Management and the School of Engineering. She also worked as a research assistant at the Media Lab and the Computer Science and Artificial Intelligence Laboratory (CSAIL).
The pair met during a hackathon at the Media Lab and continued collaborating on various projects. During that time, they worked with MIT’s Venture Mentoring Service (VMS) and the MIT I-Corps Program. When they graduated in 2019, they decided to start a company based on the idea of creating smart buildings with privacy-preserving sensors. Crucial early funding came from the Media Lab-affiliated E14 Fund.
“I tell every single MIT founder they should have the E14 Fund in their cap table,” Deng says. “They understand what it takes to go from an MIT student to a founder, and to transition from the ‘scientist brain’ to the ‘inventor brain.’ We wouldn’t be where we are today without MIT.”
Ray Stata ’57, SM ’58, the founder of Analog Devices, is also an investor in Butlr and serves as the company’s board director.
“We would love to give back to the MIT community once we become successful entrepreneurs like Ray, whose advice and mentoring has been invaluable,” Deng says.
After launching, the founders had to find the right early customers for their real-time sensors, which can discern rough body shapes but no personally identifiable information. They interviewed hundreds of people before starting with owners of office spaces.
“People have zero baseline data on what’s happening in their workplace,” Deng says. “That’s especially true since the Covid-19 pandemic made people hybrid, which has opened huge opportunities to cut the energy use of large office spaces. Sometimes, the only people in these buildings are the receptionist and the cleaner.”
Butlr’s multiyear, battery-powered sensors can track daily occupancy in each room and give other insights into space utilization that can be used to reduce energy use. For companies with a lot of office space, the opportunities are immense. One Butlr customer has 40 building leases. Deng says optimizing the HVAC controls based on usage could amount to millions of dollars saved.
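For illustration only, here is a minimal sketch of how occupancy counts from a low-resolution thermal frame could drive a simple HVAC setback rule. The 8x8 frame size, temperature threshold, and setpoints are assumptions made for the example, not details of Butlr’s actual sensors or analytics.

    import numpy as np
    from scipy import ndimage

    def count_occupants(frame, warm_threshold_c=27.0):
        """Count connected warm regions (rough body shapes) in a low-res thermal frame."""
        warm = frame > warm_threshold_c          # boolean mask of warm pixels
        _, num_blobs = ndimage.label(warm)       # connected warm regions ~ people
        return num_blobs

    def hvac_setpoint_c(occupants, occupied_c=21.0, setback_c=17.0):
        """Relax the heating setpoint when the room is empty to save energy."""
        return occupied_c if occupants > 0 else setback_c

    # Example: an 8x8 thermal frame (degrees Celsius) with two warm regions.
    frame = np.full((8, 8), 19.0)
    frame[1:3, 1:3] = 30.0   # warm region 1
    frame[5:7, 4:6] = 29.0   # warm region 2
    people = count_occupants(frame)
    print(people, hvac_setpoint_c(people))   # -> 2 21.0

In this sketch only coarse occupancy counts leave the sensor, which is the kind of non-identifying signal the article describes.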
“We can be like the Google Analytics for these spaces without any concerns in terms of privacy,” Deng says.
The founders also knew the problem went well beyond office spaces.
“In skilled nursing facilities, instead of office spaces it’s individual rooms, all with people who may need the nurse’s help,” Deng says. “But the nurses have no visibility into what’s happening unless they physically enter the room.”
Acute care environments and senior living facilities are other key markets for Butlr. The company’s platform can detect falls and instances when someone isn’t getting out of bed, and it integrates with nurse call systems to alert staff when something is wrong.
The “nerve cells” of the building
Butlr is continuing to develop analytics that give important insights into spaces. For instance, today the platform can use information around movement in elderly populations to help detect problems like urinary tract infections. Butlr also recently started a collaboration with Harvard Medical School’s Beth Israel Deaconess Medical Center and the University of Massachusetts at Amherst’s Artificial Intelligence and Technology Center for Connected Care in Aging and Alzheimer’s Disease. Through the project, Butlr will try to detect changes in movement that could indicate declining cognitive or physical abilities. Those insights could be used to provide aging patients with more supervision.
“In the near term we are preventing falls, but the vision is when you look up in any buildings or homes, you’ll see Butlr,” Deng says. “This could allow older adults to age in place with dignity and privacy.”
More broadly, Butlr’s founders see their work as an important way to shape the future of AI technology, which is expected to be a growing part of everyone’s lives.
“We’re the nerve cells in the building, not the eyes,” Deng says. “That’s the future of AI we believe in: AI that can transform regular rooms into spaces that understand people and can use that understanding to do everything from making efficiency improvements to saving lives in senior care communities. That’s the right way to use this powerful technology.”
Mapping mRNA through its life cycle within a cell
When Xiao Wang applied to faculty jobs, many of the institutions where she interviewed thought her research proposal — to study the life cycle of RNA in cells and how it influences normal development and disease — was too broad.
However, that was not the case when she interviewed at MIT, where her future colleagues embraced her ideas and encouraged her to be even more bold.
“What I’m doing now is even broader, even bolder than what I initially proposed,” says Wang, who holds joint appointments in the Department of Chemistry and the Broad Institute of MIT and Harvard. “I got great support from all my colleagues in my department and at Broad so that I could get the resources to conduct what I wanted to do. It’s also a demonstration of how brave the students are. There is a really innovative culture and environment here, so the students are not scared by taking on something that might sound weird or unrealistic.”
Wang’s work on RNA brings together students from chemistry, biology, computer science, neuroscience, and other fields. In her lab, research is focused on developing tools that pinpoint where in a given cell different types of messenger RNA are translated into proteins — information that can offer insight into how cells control their fate and what goes wrong in disease, especially in the brain.
“The joint position between MIT Chemistry and the Broad Institute was very attractive to me because I was trained as a chemist, and I would like to teach and recruit students from chemistry. But meanwhile, I also wanted to get exposure to biomedical topics and have collaborators outside chemistry. I can collaborate with biologists, doctors, as well as computational scientists who analyze all these daunting data,” she says.
Imaging RNA
Wang began her career at MIT in 2019, just before the Covid-19 pandemic began. Until that point, she hardly knew anyone in the Boston area, but she found a warm welcome.
“I wasn’t trained at MIT, and I had never lived in Boston before. At first, I had very small social circles, just with my colleagues and my students, but amazingly, even during the pandemic, I never felt socially isolated. I just felt so plugged in already even though it’s a very close, small circle,” she says.
Growing up in China, Wang became interested in science in middle school, when she was chosen to participate in China’s National Olympiad in math and chemistry. That gave her the chance to learn college-level course material, and she ended up winning a gold medal in the nationwide chemistry competition.
“That exposure was enough to draw me into initially mathematics, but later on more into chemistry. That’s how I got interested in a more science-oriented major and then career path,” Wang says.
At Peking University, she majored in chemistry and molecular engineering. There, she worked with Professor Jian Pei, who gave her the opportunity to work independently on her own research project.
“I really like to do research because every day you have a hypothesis, you have a design, and you make it happen. It’s like playing a video game: You have this roughly daily feedback loop. Sometimes it’s a reward, sometimes it’s not. I feel it’s more interesting than taking a class, so I think that made me decide I should apply for graduate school,” she says.
As a graduate student at the University of Chicago, she became interested in RNA while doing a rotation in the lab of Chuan He, a professor of chemistry who was studying chemical modifications that affect the function of messenger RNA — the molecules that carry protein-building instructions from DNA to ribosomes, where proteins are assembled.
Wang ended up joining He’s lab, where she studied a common mRNA modification known as m6A, which influences how efficiently mRNA is translated into protein and how fast it gets degraded in the cell. She also began to explore how mRNA modifications affect embryonic development. As a model for these studies, she was using zebrafish, which have transparent embryos that develop from fertilized eggs into free-swimming larvae within two days. That got her interested in developing methods that could reveal where different types of RNA were being expressed, by imaging the entire organism.
Such an approach, she soon realized, could also be useful for studying the brain. As a postdoc at Stanford University, she started to develop RNA imaging methods, working with Professor Karl Deisseroth. There are existing techniques for identifying mRNA molecules that are expressed in individual cells, but those don’t offer information about exactly where in the cells different types of mRNA are located. She began developing a technique called STARmap that could accomplish this type of “spatial transcriptomics.”
Using this technique, researchers first use formaldehyde to crosslink all of the mRNA molecules in place. Then, the tissue is washed with fluorescent DNA probes that are complementary to the target mRNA sequences. These probes can then be imaged and sequenced, revealing the locations of each mRNA sequence within a cell. This allows for the visualization of mRNA molecules that encode thousands of different genes within single cells.
“I was leveraging my background in the chemistry of RNA to develop this RNA-centered brain mapping technology, which allows you to use RNA expression profiles to define brain cell types and also visualize their spatial architecture,” Wang says.
Tracking the RNA life cycle
Members of Wang’s lab are now working on expanding the capability of the STARmap technique so that it can be used to analyze brain function and brain wiring. They are also developing tools that will allow them to map the entire life cycle of mRNA molecules, from synthesis to translation to degradation, and track how these molecules are transported within a cell during their lifetime.
One of these tools, known as RIBOmap, pinpoints the locations of mRNA molecules as they are being translated at ribosomes. Another tool allows the researchers to measure how quickly mRNA is degraded after being transcribed.
“We are trying to develop a toolkit that will let us visualize every step of the RNA life cycle inside cells and tissues,” Wang says. “These are newer generations of tool development centered around these RNA biological questions.”
One of these central questions is how different cell types control their RNA life cycles differently, and how that affects their differentiation. Differences in RNA control may also be a factor in diseases such as Alzheimer’s. In a 2023 study, Wang and MIT Professor Morgan Sheng used a version of STARmap to discover how cells called microglia become more inflammatory as amyloid-beta plaques form in the brain. Wang’s lab is also pursuing studies of how differences in mRNA translation might affect schizophrenia and other neurological disorders.
“The reason we think there will be a lot of interesting biology to discover is because the formation of neural circuits is through synapses, and synapse formation and learning and memory are strongly associated with localized RNA translation, which involves multiple steps including RNA transport and recycling,” she says.
In addition to investigating those biological questions, Wang is also working on ways to boost the efficiency of mRNA therapeutics and vaccines by changing their chemical modifications or their topological structure.
“Our goal is to create a toolbox and RNA synthesis strategy where we can precisely tune the chemical modification on every particle of RNA,” Wang says. “We want to establish how those modifications will influence how fast mRNA can produce protein, and in which cell types they could be used to more efficiently produce protein.”
Why the so-called AI Action Summit falls short
Ever since ChatGPT’s debut, artificial intelligence (AI) has been at the center of worldwide discussions on the promises and perils of new technologies. This has spawned a flurry of debates on the governance and regulation of large language models and “generative” AI, which have, among other things, resulted in the Biden administration’s executive order on AI and international guiding principles for the development of generative AI, and influenced Europe’s AI Act. As part of that global policy discussion, the UK government hosted the AI Safety Summit in 2023, which was followed in 2024 by the AI Seoul Summit, leading up to this year’s AI Action Summit hosted by France.
As heads of state and CEOs head to Paris for the AI Action Summit, the summit’s shortcomings are becoming glaringly obvious. The summit, which is hosted by the French government, has been described as a “pivotal moment in shaping the future of artificial intelligence governance.” However, a closer look at its agenda and the voices it will amplify tells a different story.
Focusing on AI’s potential economic contributions, and not differentiating between, for example, large language models and automated decision-making, the summit fails to take into account the many ways in which AI systems can be abused to undermine fundamental rights and push the planet’s already stretched ecological limits over the edge. Instead of centering nuanced perspectives on the capabilities of different AI systems and the associated risks, the summit’s agenda paints a one-sided and simplistic picture, one that is not reflective of the global discussion on AI governance. For example, the summit’s main program does not include a single panel addressing issues related to discrimination or sustainability.
A summit captured by industry interests cannot claim to be a transformative venue
This imbalance is also mirrored in the summit’s speakers, among which industry representatives notably outnumber civil society leaders. While many civil society organizations are putting on side events to counterbalance the summit’s misdirected priorities, an exclusive summit captured by industry interests cannot claim to be a transformative venue for global policy discussions.
The summit’s significant shortcomings are especially problematic in light of the leadership role European countries are claiming when it comes to the governance of AI. The European Union’s AI Act, which recently entered into force, has been celebrated as the world’s first legal framework addressing the risks of AI. However, whether the AI Act will actually “promote the uptake of human centric and trustworthy artificial intelligence” remains to be seen.
It’s unclear whether the AI Act will provide a framework that incentivizes the rollout of user-centric AI tools or whether it will lock in specific technologies at the expense of users. While the new rules contain a lot of promising language on fundamental rights protection, exceptions for law enforcement and national security render some of the safeguards fragile. This is especially true when it comes to the use of AI systems in high-risk contexts such as migration, asylum, border controls, and public safety, where the AI Act does little to protect against mass surveillance, profiling, and predictive technologies. We are also concerned by the possibility that other governments will copy-paste the AI Act’s broad exceptions without having the strong constitutional and human rights protections that exist within the EU legal system. We will therefore keep a close eye on how the AI Act is enforced in practice.
The summit also lags in addressing the essential role human rights should play in providing a common baseline for AI deployment, especially in high-impact uses. Although human-rights-related concerns appear in a few sessions, the summit, purportedly a global forum aimed at unleashing the potential of AI for the public good and in the public interest, misses the opportunity to clearly articulate how such a goal connects with fulfilling international human rights guarantees and what steps this would entail.
Countries must address the AI divide without replicating AI harms.
Ramping up government use of AI systems is generally a key piece in national strategies for AI development worldwide. While countries must address the AI divide, doing so must not mean replicating AI harms. For example, we’ve elaborated on leveraging Inter-American human rights standards to tackle challenges and violations that emerge from public institutions’ use of algorithmic systems for rights-affecting determinations in Latin America.
Amid a global AI arms race, we do not need more AI hype. Rather, there is a crucial need for evidence-based policy debates that address the centralization of AI power and consider the real-world harms associated with AI systems, while enabling diverse stakeholders to engage on an equal footing. The AI Action Summit will not be the place to have this conversation.
Puzzling out climate change
Shreyaa Raghavan’s journey into solving some of the world’s toughest challenges started with a simple love for puzzles. By high school, her knack for problem-solving naturally drew her to computer science. Through her participation in an entrepreneurship and leadership program, she built apps and twice made it to the semifinals of the program’s global competition.
Her early successes made a computer science career seem like an obvious choice, but Raghavan says a significant competing interest left her torn.
“Computer science sparks that puzzle-, problem-solving part of my brain,” says Raghavan ’24, an Accenture Fellow and a PhD candidate in MIT’s Institute for Data, Systems, and Society. “But while I always felt like building mobile apps was a fun little hobby, it didn’t feel like I was directly solving societal challenges.”
Her perspective shifted when, as an MIT undergraduate, Raghavan participated in an Undergraduate Research Opportunity in the Photovoltaic Research Laboratory, now known as the Accelerated Materials Laboratory for Sustainability. There, she discovered how computational techniques like machine learning could optimize materials for solar panels — a direct application of her skills toward mitigating climate change.
“This lab had a very diverse group of people, some from a computer science background, some from a chemistry background, some who were hardcore engineers. All of them were communicating effectively and working toward one unified goal — building better renewable energy systems,” Raghavan says. “It opened my eyes to the fact that I could use very technical tools that I enjoy building and find fulfillment in that by helping solve major climate challenges.”
With her sights set on applying machine learning and optimization to energy and climate, Raghavan joined Cathy Wu’s lab when she started her PhD in 2023. The lab focuses on building more sustainable transportation systems, a field that resonated with Raghavan due to its universal impact and its outsized role in climate change — transportation accounts for roughly 30 percent of greenhouse gas emissions.
“If we were to throw all of the intelligent systems we are exploring into the transportation networks, by how much could we reduce emissions?” she asks, summarizing a core question of her research.
Wu, an associate professor in the Department of Civil and Environmental Engineering, stresses the value of Raghavan's work.
“Transportation is a critical element of both the economy and climate change, so potential changes to transportation must be carefully studied,” Wu says. “Shreyaa’s research into smart congestion management is important because it takes a data-driven approach to add rigor to the broader research supporting sustainability.”
Raghavan’s contributions have been recognized with the Accenture Fellowship, a cornerstone of the MIT-Accenture Convergence Initiative for Industry and Technology.
As an Accenture Fellow, she is exploring the potential impact of technologies for avoiding stop-and-go traffic and its emissions, using systems such as networked autonomous vehicles and digital speed limits that vary according to traffic conditions — solutions that could advance decarbonization in the transportation sector at relatively low cost and in the near term.
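As a rough, hypothetical illustration of one such intervention, the sketch below computes an advisory speed limit that falls smoothly as downstream traffic density rises, so that vehicles approach a bottleneck more slowly instead of braking into stop-and-go waves. The density thresholds and the linear interpolation rule are assumptions made for the example, not the controllers studied in Wu’s lab.

    def advisory_speed_kph(downstream_density_veh_per_km,
                           free_flow_kph=100.0, min_kph=50.0,
                           low_density=20.0, high_density=60.0):
        """Lower the posted speed smoothly as downstream density rises."""
        d = downstream_density_veh_per_km
        if d <= low_density:
            return free_flow_kph
        if d >= high_density:
            return min_kph
        # Linear interpolation between free-flow and minimum speed.
        frac = (d - low_density) / (high_density - low_density)
        return free_flow_kph - frac * (free_flow_kph - min_kph)

    for density in (10, 30, 50, 70):
        print(density, round(advisory_speed_kph(density), 1))
    # -> 10 100.0 / 30 87.5 / 50 62.5 / 70 50.0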
Raghavan says she appreciates the Accenture Fellowship not only for the support it provides, but also because it demonstrates industry involvement in sustainable transportation solutions.
“It’s important for the field of transportation, and also energy and climate as a whole, to synergize with all of the different stakeholders,” she says. “I think it’s important for industry to be involved in this issue of incorporating smarter transportation systems to decarbonize transportation.”
Raghavan has also received a fellowship supporting her research from the U.S. Department of Transportation.
“I think it’s really exciting that there’s interest from the policy side with the Department of Transportation and from the industry side with Accenture,” she says.
Raghavan believes that addressing climate change requires collaboration across disciplines. “I think with climate change, no one industry or field is going to solve it on its own. It’s really got to be each field stepping up and trying to make a difference,” she says. “I don’t think there’s any silver-bullet solution to this problem. It’s going to take many different solutions from different people, different angles, different disciplines.”
With that in mind, Raghavan has been very active in the MIT Energy and Climate Club since joining about three years ago. The club, she says, “was a really cool way to meet lots of people who were working toward the same goal, the same climate goals, the same passions, but from completely different angles.”
This year, Raghavan is on the community and education team, which works to build the community at MIT that is working on climate and energy issues. As part of that work, Raghavan is launching a mentorship program for undergraduates, pairing them with graduate students who help the undergrads develop ideas about how they can work on climate using their unique expertise.
“I didn’t foresee myself using my computer science skills in energy and climate,” Raghavan says, “so I really want to give other students a clear pathway, or a clear sense of how they can get involved.”
Raghavan has embraced her area of study even in terms of where she likes to think.
“I love working on trains, on buses, on airplanes,” she says. “It’s really fun to be in transit and working on transportation problems.”
Anticipating a trip to New York to visit a cousin, she holds no dread for the long train trip.
“I know I’m going to do some of my best work during those hours,” she says. “Four hours there. Four hours back.”
Can deep learning transform heart failure prevention?
The ancient Greek philosopher and polymath Aristotle once concluded that the human heart was tri-chambered and that it was the single most important organ in the entire body, governing motion, sensation, and thought.
Today, we know that the human heart actually has four chambers and that the brain largely controls motion, sensation, and thought. But Aristotle was correct in observing that the heart is a vital organ, pumping blood to the rest of the body to reach other vital organs. When a life-threatening condition like heart failure strikes, the heart gradually loses the ability to supply other organs with enough blood and nutrients to keep them functioning.
Researchers from MIT and Harvard Medical School recently published an open-access paper in Nature Communications Medicine, introducing a noninvasive deep learning approach that analyzes electrocardiogram (ECG) signals to accurately predict a patient’s risk of developing heart failure. In a clinical trial, the model showed results with accuracy comparable to gold-standard but more-invasive procedures, giving hope to those at risk of heart failure. The condition has recently seen a sharp increase in mortality, particularly among young adults, likely due to the growing prevalence of obesity and diabetes.
“This paper is a culmination of things I’ve talked about in other venues for several years,” says the paper’s senior author Collin Stultz, director of the Harvard-MIT Program in Health Sciences and Technology and an affiliate of the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic). “The goal of this work is to identify those who are starting to get sick even before they have symptoms so that you can intervene early enough to prevent hospitalization.”
Of the heart’s four chambers, two are atria and two are ventricles — the right side of the heart has one atrium and one ventricle, as does the left. In a healthy human heart, these chambers operate in rhythmic synchrony: oxygen-poor blood flows into the heart via the right atrium. The right atrium contracts, and the pressure generated pushes the blood into the right ventricle, where it is then pumped into the lungs to be oxygenated. The oxygen-rich blood from the lungs then drains into the left atrium, which contracts, pumping the blood into the left ventricle. Another contraction follows, and the blood is ejected from the left ventricle via the aorta, flowing into arteries branching out to the rest of the body.
“When the left atrial pressures become elevated, the blood draining from the lungs into the left atrium is impeded because it’s a higher-pressure system,” Stultz explains. In addition to being a professor of electrical engineering and computer science, Stultz is also a practicing cardiologist at Mass General Hospital (MGH). “The higher the pressure in the left atrium, the more pulmonary symptoms you develop — shortness of breath and so forth. Because the right side of the heart pumps blood through the pulmonary vasculature to the lungs, the elevated pressures in the left atrium translate to elevated pressures in the pulmonary vasculature.”
The current gold standard for measuring left atrial pressure is right heart catheterization (RHC), an invasive procedure that requires a thin tube (the catheter) attached to a pressure transmitter to be inserted into the right heart and pulmonary arteries. Physicians often prefer to assess risk noninvasively before resorting to RHC, by examining the patient’s weight, blood pressure, and heart rate.
But in Stultz’s view, these measures are coarse, as evidenced by the fact that one in four heart failure patients is readmitted to the hospital within 30 days. “What we are seeking is something that gives you information like that of an invasive device, other than a simple weight scale,” Stultz says.
In order to gather more comprehensive information on a patient’s heart condition, physicians typically use a 12-lead ECG, in which 10 adhesive patches are stuck onto the patient and linked with a machine that produces information from 12 different angles of the heart. However, 12-lead ECG machines are only accessible in clinical settings and they are also not typically used to assess heart failure risk.
Instead, what Stultz and other researchers propose is a Cardiac Hemodynamic AI monitoring System (CHAIS), a deep neural network capable of analyzing ECG data from a single lead — in other words, the patient only needs to have a single adhesive, commercially available patch on their chest that they can wear outside of the hospital, untethered to a machine.
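For readers curious what a deep neural network operating on a single ECG lead can look like, here is a minimal sketch of a one-dimensional convolutional classifier that maps a single-lead segment to a risk probability. The layer sizes, input length, and sigmoid output are assumptions made for illustration; they do not describe the actual CHAIS architecture.

    import torch
    import torch.nn as nn

    class SingleLeadECGNet(nn.Module):
        """Toy 1D CNN mapping a single-lead ECG segment to a risk probability."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),         # collapse the time dimension
            )
            self.head = nn.Linear(32, 1)

        def forward(self, x):
            # x: (batch, 1, n_samples) single-lead ECG voltages
            z = self.features(x).squeeze(-1)     # (batch, 32)
            return torch.sigmoid(self.head(z))   # (batch, 1) risk in [0, 1]

    model = SingleLeadECGNet()
    fake_ecg = torch.randn(4, 1, 2500)           # four synthetic segments, 10 s at an assumed 250 Hz
    print(model(fake_ecg).shape)                 # torch.Size([4, 1])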
To compare CHAIS with the current gold standard, RHC, the researchers selected patients who were already scheduled for a catheterization and asked them to wear the patch 24 to 48 hours before the procedure, although patients were asked to remove the patch before catheterization took place. “When you get to within an hour-and-a-half [before the procedure], it’s 0.875, so it’s very, very good,” Stultz explains. In other words, a measurement from the device gives the same information as if the patient were catheterized within the next hour and a half.
“Every cardiologist understands the value of left atrial pressure measurements in characterizing cardiac function and optimizing treatment strategies for patients with heart failure,” says Aaron Aguirre SM '03, PhD '08, a cardiologist and critical care physician at MGH. “This work is important because it offers a noninvasive approach to estimating this essential clinical parameter using a widely available cardiac monitor.”
Aguirre, who completed a PhD in medical engineering and medical physics at MIT, expects that with further clinical validation, CHAIS will be useful in two key areas: first, it will aid in selecting patients who will most benefit from more invasive cardiac testing via RHC; and second, the technology could enable serial monitoring and tracking of left atrial pressure in patients with heart disease. “A noninvasive and quantitative method can help in optimizing treatment strategies in patients at home or in hospital,” Aguirre says. “I am excited to see where the MIT team takes this next.”
The benefits aren’t limited to patients, either. For clinicians, keeping patients with hard-to-manage heart failure from being readmitted to the hospital without resorting to a permanent implant is a persistent challenge, one that consumes the space and time of an already beleaguered and understaffed medical workforce.
The researchers have another clinical trial of CHAIS underway with MGH and Boston Medical Center, which they hope to conclude soon so that data analysis can begin.
“In my view, the real promise of AI in health care is to provide equitable, state-of-the-art care to everyone, regardless of their socioeconomic status, background, and where they live,” Stultz says. “This work is one step towards realizing this goal.”
Pairwise Authentication of Humans
Here’s an easy system for two humans to remotely authenticate to each other, so they can be sure that neither is a digital impersonation.
To mitigate that risk, I have developed this simple solution where you can set up a unique time-based one-time passcode (TOTP) between any pair of people.
This is how it works:
- Two people, Person A and Person B, sit in front of the same computer and open this page;
- They input their respective names (e.g. Alice and Bob) onto the same page, and click “Generate”;
- The page will generate two TOTP QR codes, one for Alice and one for Bob; ...
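As a rough sketch of what the generation step could involve, here is a minimal pairwise TOTP setup using the standard RFC 6238 algorithm via the pyotp library. The use of pyotp, the variable names, and the choice of one secret per direction are assumptions made for illustration; they are not the page’s actual implementation.

    import pyotp

    def pairwise_totp_setup(name_a, name_b):
        """Create one TOTP secret per direction so each person can prove themselves to the other."""
        secret_a = pyotp.random_base32()   # codes that name_a will present to name_b
        secret_b = pyotp.random_base32()   # codes that name_b will present to name_a
        uri_a = pyotp.TOTP(secret_a).provisioning_uri(name=name_a, issuer_name="pair-" + name_b)
        uri_b = pyotp.TOTP(secret_b).provisioning_uri(name=name_b, issuer_name="pair-" + name_a)
        return (secret_a, uri_a), (secret_b, uri_b)

    # While sitting at the same computer, each person scans the otpauth:// URI
    # (rendered as a QR code) into an authenticator app; the verifying party
    # keeps a copy of the other person's secret so codes can be checked later.
    (alice_secret, alice_uri), (bob_secret, bob_uri) = pairwise_totp_setup("Alice", "Bob")

    # Later, over a remote channel, Alice reads out her current six-digit code
    # and Bob verifies it against her secret (and vice versa).
    print(pyotp.TOTP(alice_secret).verify(pyotp.TOTP(alice_secret).now()))  # True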