Personal data has become the new currency of the digital economy. But recent data hacks and security breaches have made consumers increasingly concerned about the protection of their data. In a fireside chat session at TM Forum Digital Transformation World in Nice, we will discuss how telcos can bridge the digital trust gap and get the balance right.

It’s hard to pinpoint the exact moment that apathy turned into outrage. It might have been when it was reported that the British consulting firm Cambridge Analytica had obtained and analyzed the personal data of 87 million Facebook users, and then sold the insights gained to the Trump campaign. Or in October, when Google revealed that the personal information of 52 million Google+ users had been exposed. Or on the day in late November when hotel giant Marriott announced that hackers had accessed the records, including passport information, of some 500 million guests globally.
But there is no question that, as 2019 unfolds, data privacy and security have become central themes for all companies – including telcos – and for watchdogs from the media to the government.
The truth is that privacy has been under assault in the digital age for some time. Until recently, though, it just didn’t seem to bother a lot of people. One big reason: free stuff. For the price of a smartphone, consumers could access a previously unimaginable array of new apps, content and digital services - all without paying. The social networking revolution epitomized by Facebook and Twitter has been predicated on a free experience. Google’s search, maps, email—all of it has been accessible to everyone, free of charge.
And yet there is a wrinkle. The past year has seen a rising wave of discomfort and controversy, as consumers, technology industry observers, investors and governments have begun to question the tradeoff at the heart of these free services - that in exchange for using them, you may not always be turning over money, but you are turning over data. Your own personal data. What you search for. What you buy. Even, oftentimes, your physical location at every moment. And that information is being analyzed, packaged, and sold.
No wonder, then, that research has shown that 84% of consumers globally say they have become moderately to very concerned about how companies are collecting and using their data. And 89% stated that companies need to be more transparent about how their personal data is stored and used.
The new level of skepticism about what technology has brought has taken hold just as society’s appetite for privacy risk is being tested in more extreme ways. Tens of millions of consumers have installed voice-enabled digital assistants in their homes, welcoming in listening devices connected to the cloud. Facial recognition technology is growing more sophisticated by the day, and its implementation is spreading almost as fast. The rapid development of self-driving cars and digital medical implants is creating new types of systems that could potentially be hacked. And artificial intelligence is being developed with data-hungry algorithms.
At the same time, a wave of regulatory resistance has sprung up around the globe. In a number of countries, regulators are drafting personal data protection laws or have already brought them into effect. For instance, the EU’s General Data Protection Regulation (GDPR) - a law designed to give individuals control over their personal data - took effect on 25 May last year.
What this fast-shifting landscape means is a new level of uncertainty for every type of organization, telcos included. The rules and standards for responsible behavior are changing rapidly - especially as emerging technologies such as AI arrive.
Which brings us to the question: can AI be ethical? Data is crucial in the development of AI. Gobs of it. Creating sophisticated, effective artificial intelligence requires training the underlying algorithms on massive amounts of information—authentic, real-world and personal data. And that puts privacy on yet another collision course with policy.
AI development will almost certainly speed ahead. The pressure to find competitive advantage will push telcos forward in building the technology, with regulation playing catch-up. That raises the stakes for them to get the right privacy and security policies in place now, not later. AI has the potential to make them run more efficiently and more profitably. But it could also spawn costly mistakes, putting their businesses at risk. That makes safeguarding data a core competency for getting AI right.
The trade-off between privacy and functionality is fast becoming a key issue. In today’s ever-more-digital world, how much risk are we willing to tolerate in the name of new and improving technologies? And then there’s the flipside: What are we willing to give up in order to limit unintended consequences?
People have mixed feelings about the tech trade-off. And it is only going to get harder for consumers to separate their private behavior from the connected world. Voice-activated digital assistants and responsive visual recognition technology are changing the way we interact with the Internet, making it more like the air around us. The convenience is just too seductive for consumers to reject.
We found that telcos around the world trail only banks and credit card companies in the degree of trust consumers place in them. In emerging markets, they even top them. And although trust has declined across the board for all organizations that handle personal data, the decline in recent years has been smaller for telcos than for any other surveyed industry.
That makes privacy a tremendous opportunity for telcos. If they make it easier for consumers to take control of and maintain their privacy while engaging in the digital world, they could gain a competitive advantage going forward.