
2019 Trends: AI Regulation Begins to Emerge

Roland Alston, Appian
April 1, 2019

(This is the second installment of our 5-part series, Winning the Future: 5 Digital Trends You Can't Afford to Miss in 2019 and Beyond. Read part 1 here.)

Artificial intelligence (AI) is one of the fastest-moving technologies of the digital revolution. Fans believe that the rise of AI will make most people better off.

But the critics? Not so much. They worry about the growing rate at which AI systems are making increasingly unaccountable automated decisions that already affect many aspects of our lives. So, the big question is this: Do we need more regulation to ensure that AI doesn't invade personal privacy or become a tool for discrimination and surveillance?

https://twitter.com/FHIOxford/status/1093872185721176064

Big Tech Calls for More AI Regulation

The research group AI Now says that the public doesn't have the tools to hold algorithms accountable, and called 2018 a year of "cascading scandals." Prompted by the growing chorus of public outcry, even big tech has called for:

    • Legislation that would require companies to provide documentation about what their technology can and can't do

    • Companies to prioritize opening their algorithms to auditing

    • AI companies to remove barriers that would prevent algorithmic accountability in the public sector

    • Governments and public institutions to understand and explain how and why decisions are made that affect our access to healthcare, housing, and employment

We've reached an inflection point in the call for AI regulation. In Europe, regulators are calling for more scrutiny of AI. The European Union's General Data Protection Regulation (GDPR) is a case in point. A tough new set of privacy rules, GDPR prioritizes transparency in how companies collect and use personal data in Europe.

https://twitter.com/AINowInstitute/status/1070679571178160128

GDPR Privacy Law Seeks to Regulate AI

This is significant because AI is fueled by data. So, although it doesn't specifically target AI, GDPR has enormous implications for companies that collect and use consumer data for business purposes in Europe. Google, for example, was recently fined $57 million by the French data protection authority for violating GDPR data privacy law.

In the US, lawmakers are following closely behind with calls for regulation based on the European approach. California recently introduced its own version of GDPR, the California Consumer Privacy Act, which gives consumers more control over how their personal data is collected and used. (The law takes effect in 2020.) And the momentum doesn't stop there. Based on recent news reports:

    • Amazon has called for regulation of facial recognition technology to protect individual civil rights, including letting people know when the technology is being used in public places

    • Apple has warned that flawed algorithms could magnify our worst human tendencies and has called on the U.S. to take a page out of the GDPR playbook

    • Google reportedly won't offer facial recognition through its cloud APIs until it can come up with policies to prevent people from misusing the technology

    • Microsoft has called for regulation to ensure that AI doesn't invade our personal privacy or become a tool for discrimination

    • 34% of consumers say the rapid evolution of machine learning will harm society, according to a new study from the Center for the Governance of AI and Oxford University's Future of Humanity Institute

    • 12% fear unregulated AI could lead to human extinction

To learn more about GDPR, check out:

https://youtu.be/hTZ1J0oQNkY

To learn how the Appian low-code application development platform supports GDPR compliance, check out:

For a more revealing look at the AI-regulation debate, check out:

(Tune in next week for the next installment of Winning the Future: 5 Digital Trends You Can't Afford to Miss in 2019 and Beyond.)