
Unchecked AI Rollout Is a Privacy Rights Disaster


AI can do a lot of what people can do, only faster and cheaper. That makes it incredibly dangerous, and with companies adding AI-powered algorithmic tools to their products, the data privacy risks to consumers are only rising. Cybersecurity expert Scott Allendevaux argues for companies to cool their jets and for government leaders to consider a digital bill of rights.

Imagine being held in a store under suspicion of shoplifting; now imagine that not only did you do nothing wrong, but you learn you’re being detained because of a faulty algorithm. 

Unfortunately, this humiliating scenario has unfolded multiple times at Rite Aid stores, where a facial recognition system, powered by an artificial intelligence algorithm exhibiting inherent bias against minorities, compelled retail clerks to confront individuals flagged as shoplifting suspects.

The practice continued for nearly eight years in what Rite Aid said was an experimental program deployed in a “limited number of stores.” Nevertheless, the effects were wide-ranging, subjecting an untold number of innocent shoppers to embarrassment, often in front of friends and relatives, and forcing them to defend themselves against unfounded accusations.

“Rite Aid’s reckless use of facial surveillance systems left its customers facing humiliation and other harms,” said Samuel Levine, the director of the FTC’s Bureau of Consumer Protection, when announcing a 54-page complaint against the pharmacy chain.

Rite Aid stands as a noteworthy example illustrating the repercussions of deploying AI algorithms without rigorous testing and vetting to prevent unintended consequences. 

Many other enterprises are likely to experience similar issues if they adopt AI without meticulous review. The allure of the technology may distract them from recognizing potential pitfalls. It should go without saying: Do not allow your organization to fall into the same trap. My experience within the industry, however, gives me a sense of foreboding about its capacity to police itself.

Similar scenarios will play out in the future. Instances of failure, exemplified by the Rite Aid case, will become more common and will prompt regulatory authorities to take stringent measures in the near term as AI technology continues to encroach on fundamental privacy rights. As data collection grows exponentially (think Internet of Things), the threats only loom larger.

Baby steps

Most assuredly, we’ve come a long way from the scenario that reigned little more than five years ago, when widespread data breaches and a general lack of accountability were rampant. Individual privacy and security were a mere afterthought, as emerging technologies were embraced and introduced to the public with little regard for basic protections. One need only look back at the troubling Equifax breach, which exposed the sensitive credit data of nearly 150 million people.

With those episodes as a backdrop, a movement toward implementing stringent privacy safeguards is well underway. The FTC, which previously took action against Rite Aid, is now initiating measures to bolster privacy safeguards for children. Reports indicate that these efforts involve limitations on tracking by various services, including social media apps, video game platforms, toy retailers and digital advertising networks. This marks only the beginning of these initiatives.

Now, as AI takes hold and portends a vast shakeup in the way even the most mundane business tasks are conducted, it is time to redouble data protection efforts. There needs to be a sense of urgency behind a national data privacy initiative.

We see some baby steps. One year after the launch of ChatGPT, President Joe Biden issued an executive order aimed at ensuring the widespread introduction of AI is “safe, secure, and trustworthy.” The order is designed to strengthen government oversight of the technology to prevent adverse impacts, including requirements to report the results of “red team” tests that probe for hidden flaws.

As the deployment of technology accelerates, growing concerns about privacy have become evident. Recent measures taken by the European Union, including the European Data Protection Board’s restriction on Meta processing personal data for behavioral advertising, alongside efforts by state legislatures to protect personal privacy, underscore a rising apprehension surrounding the rapid advancement of AI.

Sounding the alarm

Much of the anxiety around AI stems from opacity among the major players about the training data used to inform their respective models. Indiana University researchers have already detected flaws in current systems: workarounds that bypass what developers describe as safeguards against access to private information.

The researchers’ revelations should set off ear-piercing alarms: personal information is at high risk, and these flaws, left unchecked, could become the largest threat to personal privacy ever witnessed.

Let’s be clear and firm: While data protection compliance is a legal requirement, it is, more importantly, an ethical imperative. There’s one surefire way to doom the reputation of your company: play fast and loose with personal data.

Just as the U.S. Constitution lays out certain fundamental civil rights and liberties, so, too, should a new digital bill of rights be codified to protect online activities and personal data. The European Union’s General Data Protection Regulation and California’s Consumer Privacy Act are good starts. But rather than a patchwork of 50 state privacy regulations (about a dozen states have enacted some degree of comprehensive protection), the nation needs a single national standard.

Assuring data privacy should not be a cat-and-mouse game of developers trying to stay one step ahead and outsmart regulators. In essence, people have the right to know what data is being collected, where it is stored and who has access to it. And not in some multi-page document couched in legalese. No, in the name of transparency, it should be easily digested, much in the same form and method by which a consumer can request their own credit report.

Stricter data protection rules could disrupt revenue streams at some social media concerns that build the sale of personal information into their business models. Some services now offered for free may have to charge modest fees in the name of data protection. That may have to be the ultimate trade-off for the public.

Just as developers must have a data privacy consciousness, so, too, should users. A degree of responsibility falls on the public to monitor what data they freely share and what is already exposed. That means employing virtual private networks and encrypted messaging apps, and demanding a level of transparency from those who collect data.

Ultimately, the method to protect personal data is a trust-based partnership between developer and user, with regulatory authorities behind the scenes setting the standards.


