
Do we need AI regulation?

I’m with Elon Musk on this one: We should be wary of the power of artificial intelligence (AI).

Speaking at the US National Governors Association meeting earlier this week, Musk called for regulation of companies that develop AI.

“AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that,” he said. “I have exposure to the very most cutting-edge AI, and I think people should be really concerned about it. I keep sounding an alarm bell, but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.”

Musk contends that AI is a “rare case” where regulation must be more proactive than reactive. “I think by the time we are reactive in AI regulation, it’s too late,” he said.

You can watch the full interview with Musk below. (Note that the interview begins at about 26:45, and the portion discussing AI specifically happens between 48:05 and 53:52.)

Some of Musk’s critics contend that his urgent warnings about AI are more sales strategy than real concern. As Maureen Dowd wrote in a Vanity Fair article in April, “Some in Silicon Valley argue that Musk is interested less in saving the world than in buffing his brand, and that he is exploiting a deeply rooted conflict: the one between man and machine, and our fear that the creation will turn against us. They gripe that his epic good-versus-evil story line is about luring talent at discount rates and incubating his own A.I. software for cars and rockets.”

Whether that’s true or not, the points Musk raises about AI are valid, and we should consider them sooner rather than later. While his example of robots killing people is certainly extreme, it highlights dilemmas we will face as AI improves and we remove humans from the equation completely.

What if?

Consider a healthcare app that is created with expert knowledge from physicians but eventually is allowed to learn on its own and make recommendations without supervision. What if that app misdiagnoses someone, resulting in death? Or what if an autonomous car decides to take an action that will kill a single passenger but save dozens of other lives? Who is responsible for the end results?

These are the kinds of questions tech executives explored at the World Economic Forum annual meeting in Davos, Switzerland, earlier this year.

“It’s one of the harder challenges: How do you take accountability for the decisions algorithms are making in a world where the algorithms are not being written by you but are being learned?” asked Microsoft CEO Satya Nadella.

In today’s world, AI is supervised by humans: we apply ethics and have laws governing data labeling. (Labeling is how AI understands patterns; when you like a Facebook post or a song on Spotify, that ‘like’ is a label that helps the application find other posts or music you may enjoy.) “You can easily say, let’s make sure there is no bias in label data – that is human inspection,” Nadella explained.
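As a loose illustration of what those labels do, here is a minimal supervised-learning sketch in Python. Everything in it – the feature names, the data and the ‘like’ labels – is invented for illustration; real recommenders are far more sophisticated.

```python
# Minimal sketch of learning from human labels (illustrative only).
# Each row describes a song by two invented features: tempo and energy.
# The label (1 = liked, 0 = skipped) comes from a human action such as a "like".
from sklearn.linear_model import LogisticRegression

features = [
    [120, 0.8],  # upbeat track the user liked
    [128, 0.9],  # another liked track
    [70, 0.2],   # slow track the user skipped
    [65, 0.3],   # another skipped track
]
labels = [1, 1, 0, 0]  # human-provided labels

model = LogisticRegression()
model.fit(features, labels)  # the model learns the pattern behind the labels

# Score a new, unlabeled song against the learned pattern.
new_song = [[125, 0.85]]
print(model.predict(new_song))  # -> [1], i.e. "likely to be liked"
```

Because a human produced every label, a human can also inspect those labels for bias – which is Nadella’s point.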

But we will reach a point when machines are no longer supervised by humans.

“Already the state of the art of deep learning and reinforcement learning is these adversarial networks where we are generating label data, not through humans but through networks,” Nadella added. “That’s when it becomes even more complicated. Whose black box do you trust? What’s the framework of law and ethics… Who’s in control of that?”
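Nadella is alluding to techniques such as generative adversarial networks; a simpler relative of the same idea is pseudo-labeling, in which a trained model, not a person, generates the labels used to train another model. The sketch below (synthetic data, scikit-learn assumed) shows how quickly humans drop out of the loop:

```python
# Minimal sketch of machine-generated labels (pseudo-labeling), illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A small human-labeled set: two synthetic clusters in 2-D.
X_human = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y_human = np.array([0] * 20 + [1] * 20)

# A much larger unlabeled set drawn from the same distribution.
X_unlabeled = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(4, 1, (500, 2))])

teacher = LogisticRegression().fit(X_human, y_human)
pseudo_labels = teacher.predict(X_unlabeled)  # labels produced by a model, not a person

# A second model is trained entirely on machine-generated labels.
student = LogisticRegression().fit(X_unlabeled, pseudo_labels)
print(student.predict([[4.2, 3.8]]))  # whose "black box" do you trust?
```

No human ever inspects the 1,000 machine-made labels – exactly the accountability gap Nadella describes.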

IBM CEO Ginni Rometty, speaking on the same panel, laid out several guiding principles for the ‘cognitive era’, including:

  • understanding the purpose of the AI you’re developing;
  • transparency in when and how it’s used, what kind of training is required and who owns the insights derived; and
  • an obligation to develop the skills of the world around it.

“It’s our responsibility as leaders putting these technologies out to guide them into the world in a safe way,” she said.

One way is for all companies to follow similar guiding principles; another is through cross-industry, public-private partnerships that address the difficult cases. “There will be regulation and rules,” Rometty said. “And there is a point with some of these decisions [made by AI], people should be involved in them. Even though a machine could do them, it maybe shouldn’t. We are still at the very beginning of that robust dialog.”

You can watch the full panel discussion here:

Musk also highlights job disruption as a critical issue we must address “because robots will be able to do everything better than us – all of us.”

We are experiencing this in our industry right now. Network operators are embracing AI in the form of machine learning because, as humans, we simply can’t keep up with the volume and velocity of changes necessary in software-defined networks made up of billions of nodes running millions of applications. This requires closed-loop automation that combines policy, analytics and machine learning to autonomically provision, configure and assure networks and the services operators deliver to customers.
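To make ‘closed-loop automation’ concrete, here is a heavily simplified Python sketch. The metric, the policy thresholds and the scale-out action are all invented for illustration; a real implementation would sit on live telemetry and an orchestrator.

```python
# Heavily simplified closed-loop automation sketch (illustrative only):
# monitor -> analyze against policy -> act -> repeat, with no human in the loop.
import random
import time

POLICY = {"max_latency_ms": 50, "scale_step": 1}  # invented policy thresholds

def measure_latency_ms() -> float:
    """Stand-in for collecting real telemetry from the network."""
    return random.uniform(20, 80)

def scale_out(instances: int, step: int) -> int:
    """Stand-in for provisioning capacity, e.g. spinning up another VNF."""
    return instances + step

instances = 2
for _ in range(5):  # a production loop would run continuously
    latency = measure_latency_ms()
    if latency > POLICY["max_latency_ms"]:
        instances = scale_out(instances, POLICY["scale_step"])
        print(f"latency {latency:.0f} ms breached policy; scaled to {instances} instances")
    else:
        print(f"latency {latency:.0f} ms within policy")
    time.sleep(0.1)
```

In a real network the analytics and the remediation policy would themselves be learned, which is precisely why humans disappear from the loop.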

Soon there will be virtually no role in networking for humans. This, of course, means huge operational savings for network operators and better experiences for customers, but what does it mean for the thousands upon thousands of employees who will be replaced by AI technology?

PwC puts a positive spin on AI’s effect on jobs in its recent report Sizing the prize: What’s the real value of AI for your business and how can you capitalise?

“The adoption of ‘no-human-in-the-loop’ technologies will mean that some posts will inevitably become redundant, but others will be created by the shifts in productivity and consumer demand emanating from AI, and through the value chain of AI itself,” it states. “In addition to new types of workers who will focus on thinking creatively about how AI can be developed and applied, a new set of personnel will be required to build, maintain, operate, and regulate these emerging technologies. For example, we will need the equivalent of air traffic controllers to control the autonomous vehicles on the road. Same day delivery and robotic packaging and warehousing are also resulting in more jobs for robots and for humans. All of this will facilitate the creation of new jobs that would not have existed in a world without AI.”

Sharing the wealth

But eventually those jobs also will become obsolete. What will we do then? And what happens when lower-level workers who may not have access to education or retraining lose their jobs to automation? Oh wait, we already know what happens then: People elect radical populist leaders who promise to stop job loss and globalization.

This was another topic addressed at the WEF gathering. During a panel hosted by McKinsey, Nadella said these surging populist movements were the “biggest lesson of last year.”

“Is the surplus that is going to be created because of breakthroughs in AI…only going to the few, or is it going to be more inclusive growth? That is a very pressing challenge,” he said. “Clearly the thing that is top of mind for all of us given the political cycle, is if surplus is going to get created [by AI], I think we’ve got to talk about how the surplus is distributed.”

Indeed, we must talk about what to do. Just as we need 21st-century regulation that ensures net neutrality while leveling the playing field for all digital service providers, we need governments worldwide to be involved in setting the course for the development and deployment of AI technology, and in dealing with its repercussions.

It was Joichi Ito, Director of the MIT Media Lab, who shared the ethical dilemma of the autonomous car during the panel at Davos: Should an autonomous car sacrifice one passenger to save the lives of many others?

“The majority of people said, ‘Yes, the car should sacrifice the passenger – but I would not buy that car’,” Ito said. “So it shows clearly that the market is not the way to make certain decisions.”

He added: “It’s important that lawmakers understand deeply what choice they have and what’s going on. You can regulate the research of [AI] and you can regulate the deployment, and they’re two very different things. What you want is thoughtfulness on both. You can’t just leave it to the market.”




About The Author

Managing Editor

Dawn Bushaus began her career in technology journalism in 1989 at Telephony magazine, which means she’s been writing about networking for a quarter century. (She wishes she didn’t have to admit that because it probably gives you a good idea of how old she really is.) In 1996, Dawn joined a team of journalists to start a McGraw-Hill publication called tele.com, and in 2000, she helped a team at Ziff-Davis launch The Net Economy, where she held senior writing and editing positions. Prior to joining TM Forum, she worked as a freelance analyst for Heavy Reading.
