Former Google CEO Eric Schmidt on the challenges of regulating AI

Artificial intelligence was a thing, but not the thing, when Eric Schmidt became Google’s CEO in 2001. Sixteen years later, when he stepped down as executive chairman of Google’s parent company, Alphabet, the world had changed. Speaking at Princeton University that year, Schmidt declared that we are in the “century of AI.”

Schmidt, who recently chaired the National Security Commission on Artificial Intelligence, and MIT computer science professor Aleksander Madry discussed how this transition should be managed, and its broader implications, at the MIT AI Policy Forum Summit in 2022.

Their conversation came at a time when AI is on the rise in both the public and private imagination.

Headlines are touting its accomplishments both small, such as winning an art contest at the Colorado State Fair, and big, such as predicting the shape of nearly every protein known to science. And the White House just released a plan for an AI Bill of Rights to “protect the American public in the age of artificial intelligence.”

Companies are investing billions of dollars in the technology and the talent needed to develop it (this includes Schmidt, who drew attention last month for not publicly disclosing his investments in several AI startups while chairing the commission).

Schmidt spoke about the fundamental challenge of determining what our society wants to gain from AI and called for a balance between AI regulation and investment in innovation.

A pragmatic approach to development

Schmidt said a naïve utopianism often accompanies technological innovation. “It…goes back to how technology works: a group of people come from similar backgrounds, build tools that make sense to them without realizing that those tools will be used by other people in other ways,” he said.

We must learn from these mistakes, Schmidt said. The incredible potential of AI should not blind developers and regulators to the ways in which it can be abused. He mentioned the potential challenges of information manipulation, bioterrorism and cyber threats, among many others. As much as possible, guardrails should be put in place from the start to prevent criminal or destructive applications, he said.

Schmidt also criticized the extent to which people working in AI have focused on the problem of bias. “We’re all obsessed with bias,” he said. It’s a significant challenge rooted in the data used to train AI systems, he acknowledged, but he said he was confident it would be addressed through smaller data sets and zero-shot learning. “We will find a way to address bias,” he said. “Academics wrote all sorts of things about bias because that’s what they could write about. But that is not the real issue. The real issue is that when you start manipulating the information space, you manipulate human behavior.”
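
To make the zero-shot idea concrete, here is a minimal sketch of how a pretrained model can classify text against labels it was never explicitly trained on, sidestepping the need for a large task-specific labeled dataset of the kind Schmidt identifies as a source of bias. The model choice and labels below are illustrative assumptions, not anything from Schmidt’s remarks.

```python
# Hypothetical sketch of zero-shot classification (not Schmidt's proposal):
# a pretrained natural-language-inference model scores arbitrary candidate
# labels, so no task-specific training data (and none of its biases) is needed.
from transformers import pipeline  # Hugging Face Transformers

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "The city council approved the new transit budget on Tuesday.",
    candidate_labels=["politics", "sports", "technology"],
)
# Labels come back sorted by score; print the top one.
print(result["labels"][0], round(result["scores"][0], 3))
```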

Starting a productive discussion on regulation

One of the main challenges right now, according to Schmidt, is that we don’t have a clear definition of what we, as a society, want from AI. What role should it fill? Which apps are suitable? “If you can’t define what you want, it’s very hard to say how you’re going to fix it,” he said.

To begin this process, one of Schmidt’s suggestions was a relatively small task force of 10 to 20 people that would build a list of proposed regulations. These might include making certain content, such as hate speech, illegal; establishing rules to distinguish humans from bots; and requiring that all algorithms be openly published.

This list is, of course, just a starting point. “Let’s assume we got such a list – which we don’t have now… How do you make CEOs of companies who, despite what they say, are driven by revenue … agree on anything?” Schmidt asked.

Government should do more than regulate

The government’s role is not simply to regulate AI, Schmidt said. It should simultaneously promote technology. In addition to a regulatory plan, Schmidt suggested that each country should have a “how-to-win-AI” plan.

Looking at Europe, he described its admirable model of deep, long-term investment in the grand challenges of physics; the CERN particle accelerator is one example among many. But Schmidt doesn’t see commensurate levels of investment in AI. “This is a big mistake and it will hurt them,” he said.

It’s hard to invest productively in new technologies while crafting regulations for those new technologies, Schmidt acknowledged, but he believes the tendency is to over-regulate and under-promote. As an example, he pointed to the European Union’s strict online data privacy goals embodied in the General Data Protection Regulation. While these efforts appear to do a good job of protecting consumer data, high compliance costs have the unintended consequence of stifling innovation, Schmidt asserted.

“You have to be innovative and regulatory at the same time,” he said. “If you don’t have both, you won’t lead.”

The special case of social media

Social media presents specific challenges, Schmidt said. He noted problems with today’s platforms, which often started as basic information sources and evolved into recommendation engines. And the rules by which these engines operate may not be the rules we care about as citizens.

“I have been CEO for more than 20 years. CEOs care a lot about revenue,” Schmidt said. “And revenue comes from engagement. Engagement comes from anger.”

To curb this problem, Schmidt offered a suggestion rooted in his preferences for free speech: people should be allowed to say what they want, but algorithms should be more specific in what they encourage. “Everybody gets their say, but not everybody gets a megaphone,” he said. The goal should be to encourage speech, not shut it down—and poorly designed algorithms shut it down.

From this angle, companies actually benefit from better internal policing. Schmidt noted that TikTok found itself facing a problem of toxic content polluting its video streams and reducing the entertainment value of the platform. In response, the company developed an AI algorithm that finds and silences toxic content. He suggested that every social media company will have to do this going forward.
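
As a rough illustration of how such a system might combine Schmidt’s two points, detecting toxicity and then withholding the megaphone rather than deleting the speech, the sketch below downranks posts that a moderation classifier has flagged. The scores, threshold, and penalty are invented for illustration; this is not TikTok’s actual algorithm.

```python
# Hypothetical feed-ranking sketch: toxic posts stay visible (everyone gets
# their say) but lose algorithmic amplification (not everyone gets a megaphone).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float  # baseline ranking signal, e.g. predicted watch time
    toxicity: float    # score in [0, 1] from some upstream moderation model

def rank_feed(posts: list[Post], toxicity_threshold: float = 0.8) -> list[Post]:
    """Order posts by engagement, damping the score of toxic ones."""
    def score(post: Post) -> float:
        # Downrank instead of delete: a heavy penalty strips amplification
        # without removing the post from the platform.
        penalty = 0.05 if post.toxicity >= toxicity_threshold else 1.0
        return post.engagement * penalty
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("Helpful tutorial", engagement=0.7, toxicity=0.02),
    Post("Rage-bait rant", engagement=0.9, toxicity=0.95),
])
print([post.text for post in feed])  # the tutorial outranks the rant
```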

“And then once these things are established, they have to become either an industry standard or a regulated standard,” Schmidt said. The stakes extend beyond entertainment and income. “If we don’t solve this problem, we will lose our democracies,” he said.
