Passing the Turing test has long been considered the ultimate measure that machines have achieved truly human levels of conversational intelligence, at least within the computer science community. If instead you prefer legal proof of AI’s success, then look no further than California SB 1001 – a new state law requiring “clear and conspicuous” disclosure of any ‘bot-based conversations with humans. It’s broadly applicable, covering any “public-facing internet web site, web application, or digital application, including a social network or publication,” and obligates the host to “inform persons with whom the bot communicates or interacts that it is a bot.”
With this law’s passage, it appears Sacramento lawmakers are convinced that ‘bots not only pass the Turing test, but are now able to “knowingly deceive, incentivize and influence” residents of the Golden State to part with their hard-earned money or to vote in ways they otherwise may not have.
SB 1001 makes it unlawful “for any person to use a bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.”
After hearing about this new law, and confirming for myself that I wasn’t being spoofed (it becomes effective July 2019), I felt three contrasting reactions.
- First, as I head the Conversational AI practice at Cognizant, I proudly concluded it validated the work our industry is doing with bots and natural language – we now deliver lifelike AI interactions, indistinguishable from humans. Awesome us!
- This led to my next reaction: how insulting it is that these bureaucrats are forcing us to tarnish our elegantly coded creations with a chatbot version of the Surgeon General’s tobacco warning. Shameful them!
- Finally, I considered a third perspective: maybe this isn’t unchecked nanny-state overreach. It could be a thoughtful, preemptive effort to ensure transparency for an approaching reality where blended AI, always-learning algorithms, and hyper-personalized virtual agents intervene at every communication touchpoint. And that is worth considering.
We’ve long had disclosures telling us those television-commercial doctors are actually paid actors, fine print on magazine ads detailing product limitations or stipulations, and voiceovers on political ads letting us know the candidate “endorses this message.” We now find ourselves sorting through issues where big tech and far-reaching social platforms are being challenged for shaping news, not just sharing it, and centuries-old communication channels are morphing – or even vanishing – overnight.
Given these consequences of digital transformation, it’s worth thinking about next-generation policies such as SB 1001 in the context of the ethics of AI, not just its power. Considering the accelerating progress we’re seeing in natural language systems, emotion and sentiment analysis, computer vision, and virtual reality, the value of clear disclosure may not sound like such a radical idea a few years from now.
As always, I would love to hear from you. Is California SB 1001 overreach, overdue, or not enough? Thanks for reading!
About me: I run Cognizant’s Conversational AI practice and love sharing ideas with others in this space. Feel free to provide your thoughts on this post and let’s connect on LinkedIn or Twitter.