
What to expect at Google I/O

14 May 2024 at 06:42

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In the world of AI, a lot can happen in a year. Last year, at the beginning of Big Tech’s AI wars, Google announced during its annual I/O conference that it was throwing generative AI at everything, integrating it into its suite of products from Docs to email to e-commerce listings and its chatbot Bard. It was an effort to catch up with competitors like Microsoft and OpenAI, which had unveiled snazzy products like coding assistants and ChatGPT, the product that has done more than any other to ignite the current excitement about AI.

Since then, its ChatGPT competitor chatbot Bard (which, you may recall, temporarily wiped $100 billion off Google’s share price when it made a factual error during the demo) has been replaced by the more advanced Gemini. But, for me, the AI revolution hasn’t felt like one. Instead, it’s been a slow slide toward marginal efficiency gains. I see more autocomplete functions in my email and word processing applications, and Google Docs now offers more ready-made templates. They are not groundbreaking features, but they are also reassuringly inoffensive. 

Google is holding its I/O conference tomorrow, May 14, and we expect it to announce a whole new slew of AI features, further embedding AI into everything it does. The company is tight-lipped about its announcements, but we can make educated guesses. There has been a lot of speculation that it will upgrade its crown jewel, Search, with generative AI features that could, for example, go behind a paywall. Perhaps we will see Google’s version of AI agents, a buzzy term that basically means more capable and useful smart assistants able to do more complex tasks, such as booking flights and hotels much as a travel agent would.

Google, despite having 90% of the online search market, is in a defensive position this year. Upstarts such as Perplexity AI have launched their own versions of AI-powered search to rave reviews, Microsoft’s AI-powered Bing has managed to increase its market share slightly, and OpenAI is working on its own AI-powered online search function and is also reportedly in conversation with Apple to integrate ChatGPT into smartphones.

There are some hints about what any new AI-powered search features might look like. Felix Simon, a research fellow at the Reuters Institute for the Study of Journalism, has been part of the Google Search Generative Experience trial, which is the company’s way of testing new products on a small selection of real users.

Last month, Simon noticed that his Google searches with links and short snippets from online sources had been replaced by more detailed, neatly packaged AI-generated summaries. He was able to get these results from queries related to nature and health, such as “Do snakes have ears?” Most of the information offered to him was correct, which was a surprise: AI language models have a tendency to “hallucinate” (that is, make things up) and have been criticized for being an unreliable source of information.

To Simon’s surprise, he enjoyed the new feature. “It’s convenient to ask [the AI] to get something presented just for you,” he says. 

Simon then started using the new AI-powered Google function to search for news items rather than scientific information.

For most of these queries, such as what happened in the UK or Ukraine yesterday, he was simply offered links to news sources such as the BBC and Al Jazeera. But he did manage to get the search engine to generate an overview of recent news items from Germany, in the form of a bullet-pointed list of news headlines from the day before. The first entry was about an attack on Franziska Giffey, a Berlin politician who was assaulted in a library. The AI summary had the date of the attack wrong. But it was so close to the truth that Simon didn’t think twice about its accuracy. 

A quick online search during our call revealed that the rest of the AI-generated news summaries were also littered with inaccuracies. Details were wrong, or the events referred to happened years ago. All the stories were also about terrorism, hate crimes, or violence, with one soccer result thrown in. Omitting headlines on politics, culture, and the economy seems like a weird choice.  

People have a tendency to believe computers to be correct even when they are not, and Simon’s experience is an example of the kinds of problems that might arise when AI models hallucinate. The ease of getting results means that people might unknowingly ingest fake news or wrong information. It’s very problematic if even people like Simon, who are trained to fact-check things and know how AI models work, don’t do their due diligence and assume information is correct. 

Whatever Google announces at I/O tomorrow, there is immense pressure for it to be something that would justify its massive investment into AI. And after a year of experimenting, there also need to be serious improvements in making its generative AI tools more accurate and reliable. 

There are some people in the computer science community who say that hallucinations are an intrinsic part of generative AI that can’t ever be fixed, and that we can never fully trust these systems. But hallucinations will make AI-powered products less appealing to users. And it’s highly unlikely that Google will announce it has fixed this problem at I/O tomorrow. 

If you want to learn more about how Google plans to develop and deploy AI, come and hear from its vice president of AI, Jay Yagnik, at our flagship AI conference, EmTech Digital. It’ll be held at the MIT campus and streamed live online next week on May 22-23.  I’ll be there, along with AI leaders from companies like OpenAI, AWS, and Nvidia, talking about where AI is going next. Nick Clegg, Meta’s president of global affairs, will also join MIT Technology Review’s executive editor Amy Nordrum for an exclusive interview on stage. See you there! 

Readers of The Algorithm get 30% off tickets with the code ALGORITHMD24.


Now read the rest of The Algorithm

Deeper Learning

Deepfakes of your dead loved ones are a booming Chinese business

Once a week, Sun Kai has a video call with his mother. He opens up about work, the pressures he faces as a middle-aged man, and thoughts that he doesn’t even discuss with his wife. His mother will occasionally make a comment, but mostly, she just listens. That’s because Sun’s mother died five years ago. And the person he’s talking to isn’t actually a person, but a digital replica he made of her—a moving image that can conduct basic conversations. 

AI resurrection: There are plenty of people like Sun who want to use AI to interact with lost loved ones. The market is particularly strong in China, where at least half a dozen companies are now offering such technologies. In some ways, the avatars are the latest manifestation of a cultural tradition: Chinese people have always taken solace from confiding in the dead. Read more from Zeyi Yang

Bits and Bytes

Google DeepMind’s new AlphaFold can model a much larger slice of biological life
Google DeepMind has released an improved version of its biology prediction tool, AlphaFold, that can predict the structures not only of proteins but of nearly all the elements of biological life. It’s an exciting development that could help accelerate drug discovery and other scientific research. (MIT Technology Review)

The way whales communicate is closer to human language than we realized
Researchers used statistical models to analyze whale “codas” and managed to identify a structure to their language that’s similar to features of the complex vocalizations humans use. It’s a small step forward, but it could help unlock a greater understanding of how whales communicate. (MIT Technology Review)

Tech workers should shine a light on the industry’s secretive work with the military
Despite what happens in Google’s executive suites, workers themselves can force change. William Fitzgerald, who leaked information about Google’s controversial Project Maven, has shared how he thinks they can do this. (MIT Technology Review)

AI systems are getting better at tricking us
A wave of AI systems have “deceived” humans in ways they haven’t been explicitly trained to do, by offering up false explanations for their behavior or concealing the truth from human users and misleading them to achieve a strategic end. This issue highlights how difficult artificial intelligence is to control and the unpredictable ways in which these systems work. (MIT Technology Review)

Why America needs an Apollo program for the age of AI
AI is crucial to the future security and prosperity of the US. We need to lay the groundwork now by investing in computational power, argues Eric Schmidt. (MIT Technology Review)

Fooled by AI? These firms sell deepfake detection that’s “REAL 100%”
The AI detection business is booming. There is one catch, however. Detecting AI-generated content is notoriously unreliable, and the tech is still in its infancy. That hasn’t stopped some startup founders (many of whom have no experience or background in AI) from trying to sell services they claim can do so. (The Washington Post)

The tech-bro turf war over AI’s most hardcore hacker house
A hilarious piece taking an anthropological look at the power struggle between two competing hacker houses in Silicon Valley. The fight is over which house can call itself “AGI House.” (Forbes)

My deepfake shows how valuable our data is in the age of AI

30 April 2024 at 05:23

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Deepfakes are getting good. Like, really good. Earlier this month I went to a studio in East London to get myself digitally cloned by the AI video startup Synthesia. They made a hyperrealistic deepfake that looked and sounded just like me, with realistic intonation. It is a long way away from the glitchiness of earlier generations of AI avatars. The end result was mind-blowing. It could easily fool someone who doesn’t know me well.

Synthesia has managed to create AI avatars that are remarkably humanlike after only one year of tinkering with the latest generation of generative AI. It’s equally exciting and daunting thinking about where this technology is going. It will soon be very difficult to differentiate between what is real and what is not, and this is a particularly acute threat given the record number of elections happening around the world this year. 

We are not ready for what is coming. If people become too skeptical about the content they see, they might stop believing in anything at all, which could enable bad actors to take advantage of this trust vacuum and lie about the authenticity of real content. Researchers have called this the “liar’s dividend.” They warn that politicians, for example, could claim that genuinely incriminating information was fake or created using AI. 

I just published a story on my deepfake creation experience, and on the big questions about a world where we increasingly can’t tell what’s real. Read it here

But there is another big question: What happens to our data once we submit it to AI companies? Synthesia says it does not sell the data it collects from actors and customers, although it does release some of it for academic research purposes. The company uses avatars for three years, at which point actors are asked if they want to renew their contracts. If so, they come into the studio to make a new avatar. If not, the company deletes their data.

But other companies are not that transparent about their intentions. As my colleague Eileen Guo reported last year, companies such as Meta license actors’ data—including their faces and expressions—in a way that allows the companies to do whatever they want with it. Actors are paid a small up-front fee, but their likeness can then be used to train AI models in perpetuity without their knowledge.

Even if contracts for data are transparent, they don’t apply if you die, says Carl Öhman, an assistant professor at Uppsala University who has studied the online data left by deceased people and is the author of a new book, The Afterlife of Data. The data we input into social media platforms or AI models might end up benefiting companies and living on long after we’re gone. 

“Facebook is projected to host, within the next couple of decades, a couple of billion dead profiles,” Öhman says. “They’re not really commercially viable. Dead people don’t click on any ads, but they take up server space nevertheless,” he adds. This data could be used to train new AI models, or to make inferences about the descendants of those deceased users. The whole model of data and consent with AI presumes that both the data subject and the company will live on forever, Öhman says.

Our data is a hot commodity. AI language models are trained by indiscriminately scraping the web, and that also includes our personal data. A couple of years ago I tested to see if GPT-3, the predecessor of the language model powering ChatGPT, had anything on me. It struggled, but I found that I was able to retrieve personal information about MIT Technology Review’s editor in chief, Mat Honan.

High-quality, human-written data is crucial to training the next generation of powerful AI models, and we are on the verge of running out of free online training data. That’s why AI companies are racing to strike deals with news organizations and publishers to access their data treasure chests. 

Old social media sites are also a potential gold mine: when companies go out of business or platforms stop being popular, their assets, including users’ data, get sold to the highest bidder, says Öhman. 

“MySpace data has been bought and sold multiple times since MySpace crashed. And something similar may well happen to Synthesia, or X, or TikTok,” he says. 

Some people may not care much about what happens to their data, says Öhman. But securing exclusive access to high-quality data helps cement the monopoly position of large corporations, and that harms us all. This is something we need to grapple with as a society, he adds. 

Synthesia said it will delete my avatar after my experiment, but the whole experience did make me think of all the cringeworthy photos and posts that haunt me on Facebook and other social media platforms. I think it’s time for a purge.


Now read the rest of The Algorithm

Deeper Learning

Chatbot answers are all made up. This new tool helps you figure out which ones to trust.

Large language models are famous for their ability to make things up—in fact, it’s what they’re best at. But their inability to tell fact from fiction has left many businesses wondering if using them is worth the risk. A new tool created by Cleanlab, an AI startup spun out of MIT, is designed to provide a clearer sense of how trustworthy these models really are. 

A BS-o-meter for chatbots: Called the Trustworthy Language Model, it gives any output generated by a large language model a score between 0 and 1, according to its reliability. This lets people choose which responses to trust and which to throw out. Cleanlab hopes that its tool will make large language models more attractive to businesses worried about how much stuff they invent. Read more from Will Douglas Heaven.
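
To make the idea concrete, here is a minimal sketch of the general “score, then filter” pattern such a tool enables. It is not Cleanlab’s actual API: the scorer below estimates reliability by sampling a model several times and checking whether the answers agree (a common self-consistency heuristic), and the 0.8 threshold is arbitrary.

```python
# Hypothetical "score, then filter" sketch: estimate how trustworthy an
# answer is by checking agreement across repeated samples, then only
# surface answers whose score clears a threshold.
from collections import Counter
from typing import Callable

def consistency_score(ask_model: Callable[[str], str], prompt: str, n: int = 5) -> float:
    """Fraction of sampled answers agreeing with the most common answer (0 to 1)."""
    answers = [ask_model(prompt) for _ in range(n)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / n

def answer_or_abstain(ask_model: Callable[[str], str], prompt: str, threshold: float = 0.8) -> str:
    """Return the model's answer only when its reliability score is high enough."""
    answer = ask_model(prompt)
    if consistency_score(ask_model, prompt) >= threshold:
        return answer
    return "Low confidence: route this question to a human reviewer."
```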

Bits and Bytes

Here’s the defense tech at the center of US aid to Israel, Ukraine, and Taiwan
President Joe Biden signed a $95 billion aid package into law last week. The bill will send a significant quantity of supplies to Ukraine and Israel, while also supporting Taiwan with submarine technology to aid its defenses against China. (MIT Technology Review)

Rishi Sunak promised to make AI safe. Big Tech’s not playing ball.
The UK’s prime minister thought he secured a political win when he got AI power players to agree to voluntary safety testing with the UK’s new AI Safety Institute. Six months on, it turns out pinkie promises don’t go very far. OpenAI and Meta have not granted access to the AI Safety Institute to do prerelease safety testing on their models. (Politico)

Inside the race to find AI’s killer app
The AI hype bubble is starting to deflate as companies try to find a way to make profits out of the eye-wateringly expensive process of developing and running this technology. Tech companies haven’t solved some of the fundamental problems slowing its wider adoption, such as the fact that generative models constantly make things up. (The Washington Post)  

Why the AI industry’s thirst for new data centers can’t be satisfied
The current boom in data-hungry AI means there is now a shortage of parts, property, and power to build data centers. (The Wall Street Journal)

The friends who became rivals in Big Tech’s AI race
This story is a fascinating look into one of the most famous and fractious relationships in AI. Demis Hassabis and Mustafa Suleyman are old friends who grew up in London and went on to cofound AI lab DeepMind. Suleyman was ousted following a bullying scandal, went on to start his own short-lived startup, and now heads rival Microsoft’s AI efforts, while Hassabis still runs DeepMind, which is now Google’s central AI research lab. (The New York Times)

This creamy vegan cheese was made with AI
Startups are using artificial intelligence to design plant-based foods. The companies train algorithms on data sets of ingredients with desirable traits like flavor, scent, or stretchability. Then they use AI to comb troves of data to develop new combinations of those ingredients that perform similarly. (MIT Technology Review)

Three things we learned about AI from EmTech Digital London

23 April 2024 at 05:55

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Last week, MIT Technology Review held its inaugural EmTech Digital conference in London. It was a great success! I loved seeing so many of you there asking excellent questions, and it was a couple of days full of brain-tickling insights about where AI is going next. 

Here are the three main things I took away from the conference.

1. AI avatars are getting really, really good

UK-based AI unicorn Synthesia teased its next generation of AI avatars, which are far more emotive and realistic than any I have ever seen before. The company is pitching these avatars as a new, more engaging way to communicate. Instead of skimming through pages and pages of onboarding material, for example, new employees could watch a video where a hyperrealistic AI avatar explains what they need to know about their job. This has the potential to change the way we communicate, allowing content creators to outsource their work to custom avatars and making it easier for organizations to share information with their staff. 

2. AI agents are coming 

Thanks to the ChatGPT boom, many of us have interacted with an AI assistant that can retrieve information. But the next generation of these tools, called AI agents, can do much more than that. They are AI models and algorithms that can autonomously make decisions in a dynamic world. Imagine an AI travel agent that can not only retrieve information and suggest things to do, but also take action to book things for you, from flights to tours and accommodations. Every AI lab worth its salt, from OpenAI to Meta to startups, is racing to build agents that can reason better, memorize more steps, and interact with other apps and websites.
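
To make “agent” less of a buzzword, here is a minimal, hypothetical sketch of the loop these systems run: decide on an action, execute it with a tool, feed the observation back, and repeat until the goal is met. The tool names and the decide() stub are invented for illustration; in a real agent, a language model would choose the next action from the history.

```python
# Toy agent loop: observe -> decide -> act, with a trivial hand-written
# policy standing in for the language model.

def decide(history: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM policy: returns (action, argument)."""
    if not any(line.startswith("observation: flights") for line in history):
        return ("search_flights", "Lisbon")
    return ("book_flight", "FL123")

def execute(action: str, arg: str) -> str:
    """Invented tools; a real agent would call actual travel APIs here."""
    tools = {
        "search_flights": lambda city: f"flights to {city}: FL123 ($420), FL456 ($510)",
        "book_flight": lambda fid: f"booked {fid}",
    }
    return tools[action](arg)

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        action, arg = decide(history)
        observation = execute(action, arg)
        history.append(f"observation: {observation}")
        if action == "book_flight":
            return observation
    return "stopped without finishing"

print(run_agent("book a cheap flight to Lisbon"))
```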

3. Humans are not perfect either 

One of the best ways we have of ensuring that AI systems don’t go awry is getting humans to audit and evaluate them. But humans are complicated and biased, and we don’t always get things right. In order to build machines that meet our expectations and complement our limitations, we should account for human error from the get-go. In a fascinating presentation, Katie Collins, an AI researcher at the University of Cambridge, explained how she found that allowing people to express how certain or uncertain they are—for example, by using a percentage to indicate how confident they are in labeling data—leads to better accuracy for AI models overall. The only downside with this approach is that it costs more and takes more time.
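
One simple way to operationalize that finding is to treat the annotator’s stated confidence as a soft label and train against it instead of a hard 0/1 target. The snippet below is a minimal sketch of that general idea for a binary classifier, not Collins’ actual method; the numbers are made up for illustration.

```python
# Confidence-weighted cross-entropy: the annotator's stated confidence
# becomes the target probability, so uncertain labels pull less hard on
# the model than confident ones.
import math

def soft_label_loss(predicted_prob: float, label: int, annotator_confidence: float) -> float:
    """Cross-entropy against a soft target derived from annotator confidence.

    label: the annotator's chosen class (0 or 1)
    annotator_confidence: how sure they are, in [0.5, 1.0]
    """
    target = annotator_confidence if label == 1 else 1.0 - annotator_confidence
    eps = 1e-9
    return -(target * math.log(predicted_prob + eps)
             + (1.0 - target) * math.log(1.0 - predicted_prob + eps))

# A confident annotator (95%) penalizes a wrong prediction more than a hesitant one (60%).
print(soft_label_loss(0.2, label=1, annotator_confidence=0.95))  # larger loss
print(soft_label_loss(0.2, label=1, annotator_confidence=0.60))  # smaller loss
```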

And we’re doing it all again next month, this time at the mothership. 

Join us for EmTech Digital at the MIT campus in Cambridge, Massachusetts, on May 22-23, 2024. I’ll be there—join me! 

Our fantastic speakers include Nick Clegg, president of global affairs at Meta, who will talk about elections and AI-generated misinformation. We also have the OpenAI researchers who built the video-generation AI Sora, sharing their vision on how generative AI will change Hollywood. Then Max Tegmark, the MIT professor who wrote an open letter last year calling for a pause on AI development, will take stock of what has happened and discuss how to make powerful systems more safe. We also have a bunch of top scientists from the labs at Google, OpenAI, AWS, MIT, Nvidia and more. 

Readers of The Algorithm get 30% off with the discount code ALGORITHMD24.

I hope to see you there!


Now read the rest of The Algorithm

Deeper Learning

Researchers taught robots to run. Now they’re teaching them to walk.

Researchers at Oregon State University have successfully trained a humanoid robot called Digit V3 to stand, walk, pick up a box, and move it from one location to another. Meanwhile, a separate group of researchers from the University of California, Berkeley, have focused on teaching Digit to walk in unfamiliar environments while carrying different loads, without toppling over. 

What’s the big deal: Both groups are using an AI technique called sim-to-real reinforcement learning, a burgeoning method of training two-legged robots like Digit. Researchers believe it will lead to more robust, reliable two-legged machines capable of interacting with their surroundings more safely—as well as learning much more quickly. Read more from Rhiannon Williams

Bits and Bytes

It’s time to retire the term “user”
The proliferation of AI means we need a new word. Tools we once called AI bots have been assigned lofty titles like “copilot,” “assistant,” and “collaborator” to convey a sense of partnership instead of a sense of automation. But if AI is now a partner, then what are we? (MIT Technology Review)

Three ways the US could help universities compete with tech companies on AI innovation
Empowering universities to remain at the forefront of AI research will be key to realizing the field’s long-term potential, argue Ylli Bajraktari, Tom Mitchell, and Daniela Rus. (MIT Technology Review)

AI was supposed to make police body cams better. What happened?
New AI programs that analyze bodycam recordings promise more transparency but are doing little to change culture. This story serves as a useful reminder that technology is never a panacea for these sorts of deep-rooted issues. (MIT Technology Review)

The World Health Organization’s AI chatbot makes stuff up
The World Health Organization launched a “virtual health worker” to help people with questions about things like mental health, tobacco use, and healthy eating. But the chatbot frequently offers outdated information or simply makes things up, a common issue with AI models. This is a great cautionary tale of why it’s not always a good idea to use AI chatbots. Hallucinating chatbots can lead to serious consequences when they are applied to important tasks such as giving health advice. (Bloomberg)

Meta is adding AI assistants everywhere in its biggest AI push
The tech giant is rolling out its latest AI model, Llama 3, in most of its apps including Instagram, Facebook, and WhatsApp. People will also be able to ask its AI assistants for advice, or use them to search for information on the internet. (New York Times)

Stability AI is in trouble
One of the first new generative AI unicorns, the company behind the open-source image-generating AI model Stable Diffusion, is laying off 10% of its workforce. Just a couple of weeks ago its CEO, Emad Mostaque, announced that he was leaving the company. Stability has also lost several high-profile researchers and struggled to monetize its product, and it is facing a slew of lawsuits over copyright. (The Verge)

Three reasons robots are about to become way more useful 

16 April 2024 at 05:40

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The holy grail of robotics since the field’s beginning has been to build a robot that can do our housework. But for a long time, that has just been a dream. While roboticists have been able to get robots to do impressive things in the lab, such as parkour, these feats usually require meticulous planning in a tightly controlled setting. That makes it hard for robots to work reliably in homes, which are full of children and pets, have wildly varying floor plans, and contain all sorts of mess.

There’s a well-known observation among roboticists called Moravec’s paradox: what is hard for humans is easy for machines, and what is easy for humans is hard for machines. Thanks to AI, this is now changing. Robots are starting to become capable of tasks such as folding laundry, cooking, and unloading shopping baskets, which not too long ago were seen as almost impossible.

In our most recent cover story for the MIT Technology Review print magazine, I looked at how robotics as a field is at an inflection point. You can read more here. A really exciting mix of things is converging in robotics research, which could usher in robots that might—just might—make it out of the lab and into our homes.

Here are three reasons why robotics is on the brink of having its own “ChatGPT moment.”

1. Cheap hardware makes research more accessible
Robots are expensive. Highly sophisticated robots can easily cost hundreds of thousands of dollars, which makes them inaccessible to most researchers. For example, the PR2, one of the earliest iterations of home robots, weighed 450 pounds (200 kilograms) and cost $400,000.

But new, cheaper robots are allowing more researchers to do cool stuff. A new robot called Stretch, developed by the startup Hello Robot, launched during the pandemic with a much more reasonable price tag of around $18,000 and a weight of 50 pounds. It has a small mobile base, a stick with a camera dangling off it, and an adjustable arm featuring a gripper with suction cups at the ends, and it can be controlled with a console controller.

Meanwhile, a team at Stanford has built a system called Mobile ALOHA (a loose acronym for “a low-cost open-source hardware teleoperation system”), which learned to cook shrimp with the help of just 20 human demonstrations and data from other tasks. The researchers used off-the-shelf components to cobble together robots with more reasonable price tags in the tens, not hundreds, of thousands of dollars.
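
For a sense of how little machinery the core learning recipe needs, here is a minimal behavior-cloning sketch in PyTorch: a small policy network is fit to the (observation, action) pairs recorded during human teleoperation demonstrations. It illustrates the general imitation-learning idea rather than the Mobile ALOHA codebase; the dimensions, data, and hyperparameters are invented stand-ins.

```python
# Behavior cloning: supervised regression from robot observations to the
# actions a human demonstrator took in the same situation.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 14        # made-up sizes: state features in, joint targets out
policy = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(), nn.Linear(256, ACT_DIM))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for ~20 demonstrations, each a sequence of (observation, action) pairs.
demos = [(torch.randn(200, OBS_DIM), torch.randn(200, ACT_DIM)) for _ in range(20)]

for epoch in range(100):
    for obs, actions in demos:
        pred = policy(obs)
        loss = nn.functional.mse_loss(pred, actions)   # imitate the demonstrator
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# At run time, the trained policy maps the robot's current observation to an action.
```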

2. AI is helping us build “robotic brains”
What separates this new crop of robots is their software. Thanks to the AI boom, the focus is now shifting from feats of physical dexterity achieved by expensive robots to building “general-purpose robot brains” in the form of neural networks. Instead of relying on traditional painstaking planning and training, roboticists have started using deep learning and neural networks to create systems that learn from their environment on the go and adjust their behavior accordingly.

Last summer, Google launched a vision-language-action model called RT-2. This model gets its general understanding of the world from the online text and images it has been trained on, as well as its own interactions. It translates that data into robotic actions.

And researchers at the Toyota Research Institute, Columbia University and MIT have been able to quickly teach robots to do many new tasks with the help of an AI learning technique called imitation learning, plus generative AI. They believe they have found a way to extend the technology propelling generative AI from the realm of text, images, and videos into the domain of robot movements. 

Many others have taken advantage of generative AI as well. Covariant, a robotics startup that spun off from OpenAI’s now-shuttered robotics research unit, has built a multimodal model called RFM-1. It can accept prompts in the form of text, image, video, robot instructions, or measurements. Generative AI allows the robot to both understand instructions and generate images or videos relating to those tasks. 

3. More data allows robots to learn more skills
The power of large AI models such as GPT-4 lies in the reams and reams of data hoovered up from the internet. But that doesn’t really work for robots, which need data that has been collected specifically for them. They need physical demonstrations of how washing machines and fridges are opened, dishes picked up, or laundry folded. Right now that data is very scarce, and it takes a long time for humans to collect.

A new initiative kick-started by Google DeepMind, called the Open X-Embodiment Collaboration, aims to change that. Last year, the company partnered with 34 research labs and about 150 researchers to collect data from 22 different robots, including Hello Robot’s Stretch. The resulting data set, which was published in October 2023, consists of robots demonstrating 527 skills, such as picking, pushing, and moving.  

Early signs show that more data is leading to smarter robots. The researchers built two versions of a model for robots, called RT-X, that could either be run locally on individual labs’ computers or accessed via the web. The larger, web-accessible model was pretrained with internet data to develop a “visual common sense,” or a baseline understanding of the world, from large language and image models. When the researchers ran the RT-X model on many different robots, they discovered that the robots were able to learn skills 50% more successfully than with the systems each individual lab was developing.

Read more in my story here


Now read the rest of The Algorithm

Deeper Learning

Generative AI can turn your most precious memories into photos that never existed

Maria grew up in Barcelona, Spain, in the 1940s. Her first memories of her father are vivid. As a six-year-old, Maria would visit a neighbor’s apartment in her building when she wanted to see him. From there, she could peer through the railings of a balcony into the prison below and try to catch a glimpse of him through the small window of his cell, where he was locked up for opposing the dictatorship of Francisco Franco. There is no photo of Maria on that balcony. But she can now hold something like it: a fake photo—or memory-based reconstruction.

Remember this: Dozens of people have now had their memories turned into images in this way via Synthetic Memories, a project run by Barcelona-based design studio Domestic Data Streamers. Read this story by my colleague Will Douglas Heaven to find out more

Bits and Bytes

Why the Chinese government is sparing AI from harsh regulations—for now
The way China regulates its tech industry can seem highly unpredictable. The government can celebrate the achievements of Chinese tech companies one day and then turn against them the next. But there are patterns in China’s approach, and they indicate how it’ll regulate AI. (MIT Technology Review)

AI could make better beer. Here’s how.
New AI models can accurately identify not only how tasty consumers will deem beers, but also what kinds of compounds brewers should be adding to make them taste better, according to research. (MIT Technology Review)

OpenAI’s legal troubles are mounting
OpenAI is lawyering up as it faces a deluge of lawsuits both at home and abroad. The company has hired about two dozen in-house lawyers since last spring to work on copyright claims, and is also hiring an antitrust lawyer. The company’s new strategy is to try to position itself as America’s bulwark against China. (The Washington Post)

Did Google’s AI actually discover millions of new materials?
Late last year, Google DeepMind claimed it had discovered millions of new materials using deep learning. But researchers who analyzed a subset of DeepMind’s work found that the company’s claims may have been overhyped, and that the company hadn’t found materials that were useful or credible. (404 Media)

OpenAI and Meta are building new AI models capable of “reasoning”
The next generation of powerful AI models from OpenAI and Meta will be able to do more complex tasks, such as reasoning, planning, and retaining more information. This, tech companies believe, will allow them to be more reliable and avoid the kind of silly mistakes that this generation of language models is so prone to. (The Financial Times)

A conversation with Dragoș Tudorache, the politician behind the AI Act

8 April 2024 at 05:43

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Dragoș Tudorache is feeling pretty damn good. We’re sitting in a conference room in a chateau overlooking a lake outside Brussels, sipping glasses of cava. The Romanian liberal member of the European Parliament has spent the day hosting a conference on AI, defense, and geopolitics attended by nearly 400 VIP guests. The day is almost over, and Tudorache has promised to squeeze in an interview with me during cocktail hour.

A former interior minister, Tudorache is one of the most important players in European AI policy. He is one of the two lead negotiators of the AI Act in the European Parliament. The bill, the first sweeping AI law of its kind in the world, will enter into force this year. We first met two years ago, when Tudorache was appointed to his position as negotiator. 

But Tudorache’s interest in AI started much earlier, in 2015. He says reading Nick Bostrom’s book Superintelligence, which explores how an AI superintelligence could be created and what the implications could be, made him realize the potential and dangers of AI and the need for regulating it. (Bostrom has recently been embroiled in a scandal for expressing racist views in emails unearthed from the ’90s. Tudorache says he is not aware of Bostrom’s career after the publication of the book, and he did not comment on the controversy.)

When he was elected to the European Parliament in 2019, he says, he arrived determined to work on AI regulation if the opportunity presented itself. 

“When I heard [Ursula] von der Leyen [the European Commission president] say in her first speech in front of Parliament that there will be AI regulation, I said ‘Whoo-ha, this is my moment,’” he recalls. 

Since then, Tudorache has chaired a special committee on AI, and shepherded the AI Act through the European Parliament and into its final form following negotiations with other EU institutions. 

It’s been a wild ride, with intense negotiations, the rise of ChatGPT, lobbying from tech companies, and flip-flopping by some of Europe’s largest economies. But now, as the AI Act has passed into law, Tudorache’s job on it is done and dusted, and he says he has no regrets. Although the act has been criticized—both by civil society for not protecting human rights enough and by industry for being too restrictive—Tudorache says its final form was the sort of compromise he expected. Politics is the art of compromise, after all. 

“There’s going to be a lot of building the plane while flying, and there’s going to be a lot of learning while doing,” he says. “But if the true spirit of what we meant with the legislation is well understood by all concerned, I do think that the outcome can be a positive one.”  

It is still early days—the law comes fully into force two years from now. But Tudorache believes it will change the tech industry for the better and start a process where companies will start to take responsible AI seriously thanks to the legally binding obligations for AI companies to be more transparent about how their models are built. (I wrote about the five things you need to know about the AI Act a couple of months ago here.)

“The fact that we now have a blueprint for how you put the right boundaries, while also leaving room for innovation, is something that will serve society,” says Tudorache. It will also serve businesses, he says, because it offers a predictable path forward on what you can and cannot do with AI. 

But the AI Act is just the beginning, and there is still plenty keeping Tudorache up at night. AI is ushering in big changes across every industry and society. It will change everything from health care to education, labor, defense, and even human creativity. Most countries have not grasped what AI will mean for them, he says, and the responsibility now lies with governments to ensure that citizens and society more broadly are ready for the AI age. 

“The crunch time … starts now,” he says. 

Join Dragoș Tudorache and me at EmTech Digital London on April 16-17! Tudorache will walk you through what companies need to take into account with the AI Act right now. See you next week!


Now read the rest of The Algorithm

Deeper Learning

A conversation with OpenAI’s first artist in residence

Alex Reben’s work is often absurd, sometimes surreal: a mash-up of giant ears imagined by DALL-E and sculpted by hand out of marble; critical burns generated by ChatGPT that thumb the nose at AI art. But its message is relevant to everyone. Reben is interested in the roles humans play in a world filled with machines, and how those roles are changing. He is also OpenAI’s first artist in residence.

Meet the artist: Officially, the appointment started in January and lasts three months. But he’s been working with OpenAI for years already. Our senior editor for AI, Will Douglas Heaven, sat down with Reben to talk about the role AI can play in art, and the backlash against it from artists. Read more here.

Bits and Bytes

It’s easy to tamper with watermarks from AI-generated text

Watermarks for AI-generated text are easy to remove and can be stolen and copied, rendering them useless, researchers have found. They say these kinds of attacks discredit watermarks and can fool people into trusting text they shouldn’t. It’s an especially significant finding because many regulations around the world, including the AI Act, are betting heavily on the development of watermarks to trace AI-generated content. (MIT Technology Review)

How three filmmakers created Sora’s latest stunning videos

In the last month, a handful of filmmakers have taken OpenAI’s new generative AI model Sora for a test drive. The results are amazing. The short films are a big jump up even from the cherry-picked demo videos that OpenAI used to tease Sora just six weeks ago. Here’s how three of the filmmakers did it. (MIT Technology Review)

What’s next for generative video

Generative video will probably upend a wide range of businesses and change the roles of many professionals, from animators to advertisers. Fears of misuse are also growing. The widespread ability to generate fake video will make it easier than ever to flood the internet with propaganda and nonconsensual porn. We can see it coming. The problem is, nobody has a good fix. (MIT Technology Review)

Google is considering charging for AI-powered search

In a major potential shake-up to Google’s business model, the tech giant is considering putting AI-powered search features behind a paywall. But considering how untrustworthy AI search results are, it’s unclear if people will want to pay for them. (Financial Times) 

The fight for AI talent heats up 

As layoffs sweep through the tech sector, AI jobs are still super hot. Tech giants are fighting each other for top talent, even offering seven-figure salaries, and poaching entire engineering teams with experience in generative AI. (Wall Street Journal)

Inside Big Tech’s underground race to buy AI training data

AI models need to be trained on massive data sets, and big tech companies are quietly paying for data, chat logs, and personal photos hidden behind paywalls and login screens. (Reuters)

How tech giants cut corners to harvest data for AI

AI companies are running out of quality training data for their huge AI models. In order to harvest more data, tech companies such as OpenAI, Google, and Meta have cut corners, ignored corporate policies, and debated bending the law, the New York Times found. (New York Times)

Meet the MIT Technology Review AI team in London

26 March 2024 at 07:06

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The UK is home to AI powerhouse Google DeepMind, a slew of exciting AI startups, and some of the world’s best universities. It’s also where I live, along with quite a few of my MIT Technology Review colleagues, including our senior AI editor Will Douglas Heaven. 

That’s why I’m super stoked to tell you that we’re gathering some of the brightest minds in AI in Europe for our flagship AI conference, EmTech Digital, in London on April 16 and 17. 

Our speakers include top figures like Zoubin Ghahramani, vice president of research at Google DeepMind; Maja Pantic, AI scientific research lead at Meta; Dragoș Tudorache, a member of the European Parliament and one of the key politicians behind the newly passed EU AI Act; and Victor Riparbelli, CEO of AI avatar company Synthesia. 

We’ll also hear from executives at NVIDIA, Roblox, Faculty, and ElevenLabs, and researchers from the UK’s top universities and AI research institutes. 

They will share their wisdom on how to harness AI and what businesses need to know right now about this transformative technology. 

Here are some sessions I am particularly excited about.

Generating AI’s Path Forward
Where is AI going next? Zoubin Ghahramani, vice president of research at Google DeepMind, will map out realistic timelines for new innovation, and he will discuss the need for an overall strategy for a safe and productive AI future for Europe and beyond.

Digital Assistants for AI Automation
You’ve perhaps heard of AI assistants. But in this session, David Barber, director of the Centre for Artificial Intelligence at University College London, will argue that a major transformation will come with the rise of AI agents, which can complete complex sets of actions such as booking travel, answering messages, and performing data entry. 

AI’s Impact on Democracy
A senior official from the UK’s National Cyber Security Centre will walk us through some of the threats posed by AI that keep him up at night. Based on our speaker prep call, I can tell you that real life really is stranger than fiction. 

The AI Act’s Impacts on Policy and Regulations
The AI Act is here, and companies in the US and the UK will have to comply with it if they want to do business in the EU. I will be sitting down with Dragoș Tudorache, one of the key politicians behind the law, to walk you through what companies need to take into account right now. 

Venturing into AI Opportunity
The European startup scene has long played second fiddle to the US. But with the rise of open-source AI unicorn Mistral and others, hopes are rising that European startups could become more competitive in the global AI marketplace. Paul Murphy, a partner at venture capital firm Lightspeed, one of the first funds to invest in Mistral, will tell us all about his predictions. 

The Business of Solving Big Challenges with AI
Colin Murdoch, Google DeepMind’s chief business officer, will show us why AI is so much more than generative AI and how it can help solve society’s greatest challenges, from gene editing to sustainable energy and computing. 

And the best bit of all: the post-conference drinks! A conference in London would not be nearly as fun without some good old-fashioned networking in a pub afterward. So join us April 16–17 in London, and get the inside scoop on how AI is transforming the world. Get your tickets here

Before you go… We have a freebie to give you a taster of the event. Join me and MIT Technology Review’s editors Niall Firth and David Rotman for a free half-hour LinkedIn Live session today, March 26. We’ll discuss how AI is changing the way we work. Bring your questions and tune in here at 4pm GMT/12pm EDT/9am PDT.


Now read the rest of The Algorithm

Deeper Learning

The tech industry can’t agree on what open-source AI means. That’s a problem.

Suddenly, “open source” is the latest buzzword in AI circles. Meta has pledged to create open-source artificial general intelligence. And Elon Musk is suing OpenAI over its lack of open-source AI models. Meanwhile, a growing number of tech leaders and companies are setting themselves up as open-source champions. But there’s a fundamental problem—no one can agree on what “open-source AI” means. 

Definitions wanted: Open-source AI promises a future where anyone can take part in the technology’s development. That could accelerate innovation, boost transparency, and give users greater control over systems that could soon reshape many aspects of our lives. But what even is it? What makes an AI model open source, and what disqualifies it? The answers could have significant ramifications for the future of the technology. Read more from Edd Gent.

Bits and Bytes

Apple researchers are exploring dropping “Hey Siri” and listening with AI instead
So maybe our phones will be listening to us all the time after all? New research aims to see if AI models can determine when you’re speaking to your phone without needing a trigger phrase. It also shows how Apple, considered a laggard in AI, is determined to catch up. (MIT Technology Review)

An AI-driven “factory of drugs” claims to have hit a big milestone
Insilico is part of a wave of companies betting on AI as the “next amazing revolution” in biology. The company claims to have created the first “true AI drug” that’s advanced to a test of whether it can cure a fatal lung condition in humans. (MIT Technology Review)

Chinese platforms are cracking down on influencers selling AI lessons
Over the last year, a few Chinese influencers have made millions of dollars peddling short video lessons on AI, profiting off people’s fears about the as-yet-unclear impact of the new technology on their livelihoods. Now the platforms they thrived on have started to turn against them. (MIT Technology Review)

Google DeepMind’s new AI assistant helps elite soccer coaches get even better
The system, called TacticAI, can predict the outcome of corner kicks and provide realistic and accurate tactical suggestions in matches. It works by analyzing a dataset of 7,176 corner kicks taken by players for Liverpool FC, one of the world’s biggest soccer clubs. (MIT Technology Review)

How AI taught Cassie the two-legged robot to run and jump
Researchers used an AI technique called reinforcement learning to help a two-legged robot nicknamed Cassie run 400 meters, over varying terrains, and execute standing long jumps and high jumps, without being trained explicitly on each movement. (MIT Technology Review)

France fined Google €250 million over copyright infringements 
The country’s competition watchdog says the tech company failed to broker fair agreements with media outlets for publishing links to their content and plundered press articles to train its AI technology without informing the publishers. This sets an interesting precedent for AI and copyright in Europe, and potentially beyond. (Bloomberg)

China is educating the next generation of top AI talent
New research suggests that China has eclipsed the United States as the biggest producer of AI talent. (New York Times)

DeepMind’s cofounder has ditched his startup to lead Microsoft’s AI initiative
Mustafa Suleyman has now left his conversational AI startup Inflection to lead Microsoft AI, a new organization focused on advancing Microsoft’s Copilot and other consumer AI products. (Microsoft)

The AI Act is done. Here’s what will (and won’t) change

19 March 2024 at 07:17

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s official. After three years, the AI Act, the EU’s new sweeping AI law, jumped through its final bureaucratic hoop last week when the European Parliament voted to approve it. (You can catch up on the five main things you need to know about the AI Act with this story I wrote last year.) 

This also feels like the end of an era for me personally: I was the first reporter to get the scoop on an early draft of the AI Act in 2021, and have followed the ensuing lobbying circus closely ever since. 

But the reality is that the hard work starts now. The law will enter into force in May, and people living in the EU will start seeing changes by the end of the year. Regulators will need to get set up in order to enforce the law properly, and companies will have up to three years to comply with the law.

Here’s what will (and won’t) change:

1. Some AI uses will get banned later this year

The Act places restrictions on AI use cases that pose a high risk to people’s fundamental rights, such as in healthcare, education, and policing. These will be outlawed by the end of the year. 

It also bans some uses that are deemed to pose an “unacceptable risk.” They include some pretty out-there and ambiguous use cases, such as AI systems that deploy “subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making,” or exploit vulnerable people. The AI Act also bans systems that infer sensitive characteristics such as someone’s political opinions or sexual orientation, and the use of real-time facial recognition software in public places. The creation of facial recognition databases by scraping the internet à la Clearview AI will also be outlawed. 

There are some pretty huge caveats, however. Law enforcement agencies are still allowed to use sensitive biometric data, as well as facial recognition software in public places to fight serious crime, such as terrorism or kidnappings. Some civil rights organizations, such as digital rights organization Access Now, have called the AI Act a “failure for human rights” because it did not ban controversial AI use cases such as facial recognition outright. And while companies and schools are not allowed to use software that claims to recognize people’s emotions, they can if it’s for medical or safety reasons.

2. It will be more obvious when you’re interacting with an AI system

Tech companies will be required to label deepfakes and AI-generated content and notify people when they are interacting with a chatbot or other AI system. The AI Act will also require companies to develop AI-generated media in a way that makes it possible to detect. This is promising news in the fight against misinformation, and will give research around watermarking and content provenance a big boost. 

However, this is all easier said than done, and research lags far behind what the regulation requires. Watermarks are still an experimental technology and easy to tamper with. It is still difficult to reliably detect AI-generated content. Some efforts show promise, such as the C2PA, an open-source internet protocol, but far more work is needed to make provenance techniques reliable, and to build an industry-wide standard. 
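
For a flavor of why watermarking is both promising and fragile, here is a toy sketch of one published approach for text (“green list” biasing): at each step a hash of the previous token pseudorandomly splits the vocabulary, generation nudges probability toward the “green” half, and a detector that knows the scheme counts how many tokens fall on green lists. Paraphrasing reshuffles those counts, which is one reason such marks are easy to weaken. The vocabulary below is a made-up stand-in, not any production system.

```python
# Toy "green list" text watermark: split the vocabulary per step using a
# hash of the previous token, then detect by counting green-list hits.
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Pseudorandom half of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def detect(tokens: list[str]) -> float:
    """Fraction of tokens drawn from the green list of their predecessor."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Watermarked text should score well above 0.5; ordinary text lands around 0.5.
```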

3. Citizens can complain if they have been harmed by an AI

The AI Act will set up a new European AI Office to coordinate compliance, implementation, and enforcement (and they are hiring). Thanks to the AI Act, citizens in the EU can submit complaints about AI systems when they suspect they have been harmed by one, and can receive explanations on why the AI systems made the decisions they did. It’s an important first step toward giving people more agency in an increasingly automated world. However, this will require citizens to have a decent level of AI literacy, and to be aware of how algorithmic harms happen. For most people, these are still very foreign and abstract concepts.

4. AI companies will need to be more transparent

Most AI uses will not require compliance with the AI Act. It’s only AI companies developing technologies in “high risk” sectors, such as critical infrastructure or healthcare, that will have new obligations when the Act fully comes into force in three years. These include better data governance, ensuring human oversight and assessing how these systems will affect people’s rights.

AI companies that are developing “general purpose AI models,” such as language models, will also need to create and keep technical documentation showing how they built the model and how they respect copyright law, and to publish a publicly available summary of the training data that went into the model.

This is a big change from the current status quo, where tech companies are secretive about the data that went into their models, and will require an overhaul of the AI sector’s messy data management practices.

The companies with the most powerful AI models, such as GPT-4 and Gemini, will face more onerous requirements, such as having to perform model evaluations and risk-assessments and mitigations, ensure cybersecurity protection, and report any incidents where the AI system failed. Companies that fail to comply will face huge fines or their products could be banned from the EU. 

It’s also worth noting that free open-source AI models that share every detail of how the model was built, including the model’s architecture, parameters, and weights, are exempt from many of the obligations of the AI Act.


Now read the rest of The Algorithm

Deeper Learning

Africa’s push to regulate AI starts now

The projected benefit of AI adoption on Africa’s economy is tantalizing. Estimates suggest that Nigeria, Ghana, Kenya, and South Africa alone could rake in up to $136 billion worth of economic benefits by 2030 if businesses there begin using more AI tools. Now the African Union—made up of 55 member nations—is trying to work out how to develop and regulate this emerging technology. 

It’s not going to be easy: If African countries don’t develop their own regulatory frameworks to protect citizens from the technology’s misuse, some experts worry that Africans will be hurt in the process. But if these countries don’t also find a way to harness AI’s benefits, others fear their economies could be left behind. (Read more from Abdullahi Tsanni.) 

Bits and Bytes

An AI that can play Goat Simulator is a step toward more useful machines
A new AI agent from Google DeepMind can play different games, including ones it has never seen before such as Goat Simulator 3, a fun action game with exaggerated physics. It’s a step toward more generalized AI that can transfer skills across multiple environments. (MIT Technology Review)

This self-driving startup is using generative AI to predict traffic
Waabi says its new model can anticipate how pedestrians, trucks, and bicyclists move using lidar data. If you prompt the model with a situation, like a driver recklessly merging onto a highway at high speed, it predicts how the surrounding vehicles will move, then generates a lidar representation of 5 to 10 seconds into the future. (MIT Technology Review)

LLMs become more covertly racist with human intervention
It’s long been clear that large language models like ChatGPT absorb racist views from the millions of pages of the internet they are trained on. Developers have responded by trying to make them less toxic. But new research suggests that those efforts, especially as models get larger, are only curbing racist views that are overt, while letting more covert stereotypes grow stronger and better hidden. (MIT Technology Review)

Let’s not make the same mistakes with AI that we made with social media
Social media’s unregulated evolution over the past decade holds a lot of lessons that apply directly to AI companies and technologies, argue Nathan E. Sanders and Bruce Schneier. (MIT Technology Review)

OpenAI’s CTO Mira Murati fumbled when asked about training data for Sora
In this interview with the Wall Street Journal, the journalist asks Murati whether OpenAI’s new video-generation AI system, Sora, was trained on videos from YouTube. Murati says she is not sure, which is an embarrassing answer from someone who should really know. OpenAI has been hit with copyright lawsuits about the data used to train its other AI models, and I would not be surprised if video was its next legal headache. (Wall Street Journal)

Among the AI doomsayers
I really enjoyed this piece. Writer Andrew Marantz spent time with people who fear that AI poses an existential risk to humanity, and tried to get under their skin. The details in this story are both hilarious and juicy—and raise questions about who we should be listening to when it comes to AI’s harms. (The New Yorker)

Why we need better defenses against VR cyberattacks

12 March 2024 at 06:14

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I remember the first time I tried on a VR headset. It was the first Oculus Rift, and I nearly fainted after experiencing an intense but visually clumsy VR roller-coaster. But that was a decade ago, and the experience has gotten a lot smoother and more realistic since. That impressive level of immersiveness could be a problem, though: it makes us particularly vulnerable to cyberattacks in VR. 

I just published a story about a new kind of security vulnerability discovered by researchers at the University of Chicago. Inspired by the Christopher Nolan movie Inception, the attack allows hackers to create an app that injects malicious code into the Meta Quest VR system. Then it launches a clone of the home screen and apps that looks identical to the user’s original screen. Once inside, attackers are able to see, record, and modify everything the person does with the VR headset, tracking voice, motion, gestures, keystrokes, browsing activity, and even interactions with other people in real time. New fear = unlocked. 

The findings are pretty mind-bending, in part because the researchers’ unsuspecting test subjects had absolutely no idea they were under attack. You can read more about it in my story here.

It’s shocking to see how fragile and insecure these VR systems are, especially considering that Meta’s Quest headset is the most popular such product on the market, used by millions of people. 

But perhaps more unsettling is how attacks like this can happen without our noticing, and can warp our sense of reality. Past studies have shown how quickly people start treating things in AR or VR as real, says Franzi Roesner, an associate professor of computer science at the University of Washington, who studies security and privacy but was not part of the study. Even in very basic virtual environments, people start stepping around objects as if they were really there. 

VR has the potential to put misinformation, deception, and other problematic content on steroids because it exploits people’s brains, deceiving them physiologically and subconsciously, says Roesner: “The immersion is really powerful.”  

And because VR technology is relatively new, people aren’t vigilantly looking out for security flaws or traps while using it. To test how stealthy the inception attack was, the University of Chicago researchers recruited 27 volunteer VR experts to experience it. One of the participants was Jasmine Lu, a computer science PhD researcher at the University of Chicago. She says she has been using, studying, and working with VR systems regularly since 2017. Despite that, the attack took her and almost all the other participants by surprise. 

“As far as I could tell, there was not any difference except a bit of a slower loading time—things that I think most people would just translate as small glitches in the system,” says Lu.  

One of the fundamental issues people may have to deal with in using VR is whether they can trust what they’re seeing, says Roesner. 

Lu agrees. She says that with online browsers, we have been trained to recognize what looks legitimate and what doesn’t, but with VR, we simply haven’t. People do not know what an attack looks like. 

This is related to a growing problem we’re seeing with the rise of generative AI: even with text, audio, and video, it is notoriously difficult to distinguish real content from AI-generated content. The inception attack shows that we need to think of VR as another dimension in a world where it’s getting increasingly difficult to know what’s real and what’s not. 

As more people use these systems, and more products enter the market, the onus is on the tech sector to develop ways to make them more secure and trustworthy. 

The good news? While VR technologies are commercially available, they’re not all that widely used, says Roesner. So there’s time to start beefing up defenses now. 


Now read the rest of The Algorithm

Deeper Learning

An OpenAI spinoff has built an AI model that helps robots learn tasks like humans

In the summer of 2021, OpenAI quietly shuttered its robotics team, announcing that progress was being stifled by a lack of data necessary to train robots in how to move and reason using artificial intelligence. Now three of OpenAI’s early research scientists say the startup they spun off in 2017, called Covariant, has solved that problem and unveiled a system that combines the reasoning skills of large language models with the physical dexterity of an advanced robot.

Multimodal prompting: The new model, called RFM-1, was trained on years of data collected from Covariant’s small fleet of item-picking robots that customers like Crate & Barrel and Bonprix use in warehouses around the world, as well as words and videos from the internet. Users can prompt the model using five different types of input: text, images, video, robot instructions, and measurements. The company hopes the system will become more capable and efficient as it’s deployed in the real world. Read more from James O’Donnell here. 

Bits and Bytes

You can now use generative AI to turn your stories into comics
By pulling together several different generative models into an easy-to-use package controlled with the push of a button, Lore Machine heralds the arrival of one-click AI. (MIT Technology Review)

A former Google engineer has been charged with stealing AI trade secrets for Chinese companies
The race to develop ever more powerful AI systems is getting dirty. The engineer allegedly downloaded confidential files about Google’s supercomputing data centers to his personal Google Cloud account while working for Chinese companies. (US Department of Justice)  

There’s been even more drama in the OpenAI saga
This story truly is the gift that keeps on giving. OpenAI has clapped back at Elon Musk and his lawsuit, which claims the company has betrayed its original mission of doing good for the world, by publishing emails showing that Musk was keen to commercialize OpenAI too. Meanwhile, Sam Altman is back on the OpenAI board after his temporary ouster, and it turns out that chief technology officer Mira Murati played a bigger role in the coup against Altman than initially reported. 

A Microsoft whistleblower has warned that the company’s AI tool creates violent and sexual images, and ignores copyright
Shane Jones, an engineer who works at Microsoft, says his tests with the company’s Copilot Designer gave him concerning and disturbing results. He says the company acknowledged his concerns, but it did not take the product off the market. Jones then sent a letter explaining these concerns to the Federal Trade Commission, and Microsoft has since started blocking some terms that generated toxic content. (CNBC)

Silicon Valley is pricing academics out of AI research
AI research is eye-wateringly expensive, and Big Tech, with its huge salaries and computing resources, is draining academia of top talent. This has serious implications for the technology, skewing it toward commercial uses over science. (The Washington Post)
