Are we facing an AI-generated boom in privacy breaches?
Just because you're paranoid, doesn’t mean AI’s not watching you.
Published 10 September 2023
From corporate and government surveillance to data breaches and cyberattacks, AI presents a plethora of security issues that product people need to anticipate.
Joan Tait has had a rough day at work. She laid off an employee, sat through a difficult therapy session, and ran into an ex. But things are about to take an unexpected turn. She returns home only to find that her existence is being continually monitored.
Her life is playing out for everyone she knows to see, almost in real time, on a streaming service not so different from Netflix. The computer-generated show Joan Is Awful is created on the fly using artificial intelligence, with a digital version of Salma Hayek as the protagonist.
That’s the storyline from a recent creepy episode of Black Mirror. With echoes of Nineteen Eighty-Four’s Big Brother, the episode takes aim at the idea of continual surveillance and the privacy challenges AI could pose in the future. While it might seem dystopian, as AI becomes more prevalent, there are increasing concerns about its impact on privacy.
Are we heading toward mass surveillance? 
A 2019 paper by the Carnegie Endowment for International Peace found that 75 countries already used AI surveillance. The AI-related activities included smart city platforms, facial recognition, and smart policing. While the author, Steven Feldstein, found that many countries were using AI in ethical ways, such as to keep their citizens safe, other nations weren’t.
“Some autocratic governments—for example, China, Russia, Saudi Arabia—are exploiting AI technology for mass surveillance purposes. Other governments with dismal human rights records are exploiting AI surveillance in more limited ways to reinforce repression,” he wrote. 
AI panopticon
The paper raised concerns about governments’ use of AI, stating that it could change patterns of governance and give states unprecedented power to monitor citizens and use their private information to influence elections.
More recently, French senators approved the use of AI-assisted video surveillance for the 2024 Paris Olympic Games. It’s something that Agnès Callamard, Secretary General of Amnesty International, has called “dangerous.”
“Re-stocking security apparatus with AI-driven mass surveillance is a dangerous political project which could lead to broad human rights violations,” she said. 
AI data breaches and cyberattacks 
Personal data issues are nothing new. Social media organizations have been under fire for years for their often unscrupulous data collection practices. TikTok, for example, has been found to perform “excessive data collection” that connects international data to China-based infrastructure. 
Meta’s privacy controversies are well known, too. The social giant was at the center of the Cambridge Analytica scandal, which came to light in 2018, when data harvested from millions of Facebook users was passed to the third-party firm and used for political profiling, a transgression that contributed to a $5bn fine from the US Federal Trade Commission. And more recently, in May 2023, Meta was fined $1.3bn for data misuse, specifically for transferring European users’ data to the US.
As AI progresses, there’s a risk that data breaches could become more widespread, even in the face of controls and regulation. If secure data-handling processes are not in place, personal data could fall victim to security attacks and be misused.
Seyedali (Ali) Mirjalili, Founding Director of the Centre for Artificial Intelligence Research and Optimization at Torrens University Australia, believes data breaches could occur wherever safeguards are insufficient.
“The absence of proper safeguards in AI systems enables their malicious use. This can lead to the development of various cyber-attacks targeting victims’ computers and infrastructure, with the goal of unauthorized data access or compromising computing systems,” he said. 
When developers handle data in new ways, new opportunities for breaches open up.
Dave Anderson, formerly of Dynatrace and DataRobot, has worked with global organizations on their data and AI strategies and hosts the AI podcast Tech Seeking Human. He shares the same sentiment.
“AI is designed by humans and fed by data from sources we determine. The most likely scenario isn’t that the AI will lead to a data breach. It’s more likely that the way in which we extract the data could lead to the data breach,” he said. 
Given that cybercriminals are becoming more sophisticated, this is an important point. The use of AI in hacking isn’t a future worry but a current reality. According to NATO, AI-enabled cyberattacks are already proving to be a “huge challenge.”
Privacy attacks may also become automated with highly trained tools: AI may be used to run relentless social engineering campaigns to obtain sensitive information. Detecting and preventing these will require even more sophisticated AI protection systems.
Take the case of the security firm Zscaler, which received a convincing scam call that appeared to come from its CEO, asking for a large sum of money to be transferred. Fortunately, the company discovered that scammers had used AI to recreate the CEO’s voice.
Further concerns surround data being sold to, or made accessible to, third parties, as was the case in the Cambridge Analytica scandal, where harvested data was used to target voters ahead of the 2016 US election.
In other cases, deep fakes—convincing but phony images or videos—could ruin reputations and spread misinformation. The use of deep fakes in pornography, for example, is of particular concern. AI-detection and monitoring company Sensity found that 95 percent of deep fake images are sex-related, noting, “...with increased sophistication, these AI-generated synthetic videos are also becoming increasingly difficult to detect.”
The global AI cybersecurity market is growing and is expected to reach $38.2 billion by 2025. But will it be enough?  
What can be done to protect privacy? 
For designers and developers of AI systems and those implementing AI into workflows, it’s critical to consider the risks.
According to Dave Anderson, the pressure on developers to produce ever more advanced AI systems presents a threat to the ethical use of data.
“In a race to get more powerful AI, developers are pushing the boundaries of ethical data capture, collating information, or pooling it, to get better outputs for the AI.” 
This is why it’s essential for those creating AI to do so ethically. Anderson recommends being transparent and ethical in data capture, transport, and storage. 
Ali Mirjalili also highlights the need for rigorous data collection processes.
“Anonymization, encryption, and access control are essential measures that can also assist AI developers and designers in enhancing data security and privacy,” he said. 
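To make those measures concrete, here is a minimal Python sketch of what field-level pseudonymization and encryption might look like in a data pipeline feeding an AI system. It assumes the widely used cryptography package and a hypothetical prepare_record helper; it illustrates the idea rather than prescribing a production-grade privacy control.

```python
import hashlib
import hmac
import os

from cryptography.fernet import Fernet  # pip install cryptography

# In practice, secrets would come from a key-management service, not the code.
PSEUDONYM_KEY = os.urandom(32)          # key for keyed hashing of identifiers
ENCRYPTION_KEY = Fernet.generate_key()  # key for encrypting sensitive fields
fernet = Fernet(ENCRYPTION_KEY)


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible hash."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()


def encrypt_field(value: str) -> bytes:
    """Encrypt a sensitive field so only key holders can read it."""
    return fernet.encrypt(value.encode())


def prepare_record(record: dict) -> dict:
    """Hypothetical pipeline step: strip or protect PII before a record
    is stored or used for training."""
    return {
        "user_id": pseudonymize(record["email"]),   # no raw email retained
        "notes": encrypt_field(record["notes"]),    # readable only with the key
        "age_band": record["age"] // 10 * 10,       # coarsen quasi-identifiers
    }


if __name__ == "__main__":
    raw = {"email": "joan@example.com", "notes": "session transcript", "age": 34}
    print(prepare_record(raw))
```

Access control, the third measure Mirjalili mentions, sits largely outside code like this: in practice it is enforced by the storage layer, the key-management service holding the keys above, and the permissions around them.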
Responsible development & deployment of AI
Design ethics must be front of mind for those who are developing, training, or onboarding AI tools. Socially responsible design, which considers the wider societal implications of design, is more important than ever. 
For developers, designers, researchers, and UX teams, this may be the most significant ethical challenge they will ever face. All AI tools should be created in alignment with socially responsible design ethics to ensure the privacy and safety of the wider community. 
While legislation will play a critical role in the sector, it’s important to recognize that waiting for guidelines may take too long. The onus may lie with those building these products—especially where personal data is collected and used for training. For those people and organizations, ethical decision-making must take center stage.
Public advocacy for those developing AI 
Elon Musk, Steve Wozniak, and more than 1,000 AI experts have signed Pause Giant AI Experiments: An Open Letter, advocating a six-month halt in the training of AI systems more powerful than GPT-4 until more is understood about the technology. The reason? The “profound risks to society and humanity.”
More voices are needed throughout the product community. Practitioners working with AI have a responsibility to learn about, discuss, and assess the risks of such models—especially given that the privacy of millions of people may be on the line.
Advocating for safe systems and speaking out when privacy may be at risk is essential as we move into unknown territory. 
Ali Mirjalili argues that developers must take ownership as they create AI models.
“I strongly recommend that AI developers actively educate themselves and raise awareness about data security best practices and privacy concerns. Involving cybersecurity experts during the development process can also help minimize risks by reinforcing AI models against cyberattacks,” he said. 
Legislation and standards 
Ultimately, legislation may be critical in protecting users against data breaches. Governments are responsible for regulating the use of AI to minimize potential risks. But while many nations are discussing legislation, the process is slow. 
In the US, 43 bills have been introduced to regulate AI development and deployment. To date, though, only some of those bills have become legally binding. Pending legislation exists in the EU, while China has enacted a degree of AI regulation. Many other countries are considering legislation.  
Ethical principles will play an important role, too. Australia has created eight AI Ethics Principles, one of which states that “AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.” Meanwhile, the European Data Protection Board (EDPB) has created a ChatGPT task force, and the Global Index on Responsible AI is currently being developed to support the safe implementation of AI globally.
As AI adoption accelerates, these measures may prove too little, too late. Controls ought to be put in place now if we are to avoid an uncertain future.
