Strategies & Tactics

Human League: Exploring Artificial Intelligence in Communications

November 2, 2018


Last June in Barcelona, Spain, during the annual summit of AMEC (the International Association for the Measurement and Evaluation of Communication), a workshop was held on the role of artificial intelligence in public relations.

Speakers included my nephew Jake Rockland, co-founder of Phoenix-based Somatic Labs, which designs technology that delivers information to users through haptic, or tactile, patterns that move across their skin. Joining him onstage at the workshop was Allyson Hugley, an industry expert formerly with Weber Shandwick. I asked them both how artificial intelligence might affect the future of communications.


What do you do in your current jobs, and what role does — or could — AI play in your work?


Rockland: As head of product at Somatic Labs, I make sure the experience of using our haptic interfaces is as delightful as possible — from the hardware itself that sits on your body to the software-design tools that power our haptic animations. In contexts where someone’s sense of sight and sound may be overwhelmed, such as when piloting a fighter jet, communication through touch can be used to increase situational awareness. Artificial intelligence is a design tool for the future of haptic technology.


Hugley: As president of measurement and analytics at Weber Shandwick, I led a global team of research specialists, data scientists and data engineers. Accelerating adoption of data and analytics across the agency network was a core aspect of my remit. Machine learning and artificial intelligence have played increasingly prominent roles in the agency’s analytics work.  


In terms of AI, what should communications practitioners be thinking about?
 

Hugley: The development of artificial intelligence is not a passive process left to machines. It is, and for the foreseeable future will continue to be, a process that requires human focus, talent and time. While AI has the potential to make data processes exponentially more efficient, the development and training phases are time- and human-resource intensive. Communications practitioners must plan for and manage expectations for successful AI-development timelines.


Rockland: Communicators need to understand why AI selects the outcomes it does. Consumers should know whether they’re communicating with AI or a person, and when AI has customized content for them.


What are the potential benefits and costs of AI in communications?
 

Hugley: A benefit is that automated processes driven by machine learning and artificial intelligence will significantly reduce the time people have to spend analyzing communications. These processes will also increase the volume and variety of data that can be explored to quantify and improve the impact of communication, which should create greater strategic value for communicators. A cost or potential risk of artificial intelligence in evaluating communication is that human subject-matter expertise might be devalued. We could lose sight of the crucial role that sector and subject-matter expertise play in the accurate interpretation of, and response to, data patterns.


Rockland: Applying AI to communications could give companies more personal engagement with customers by making processes for analysis more efficient. But it comes with the risk of handing off our personal engagements to AI systems, which may have serious, unintended social consequences.


What kinds of unintended social consequences?
 

Rockland: What terrifies me is that our social fabric itself seems to be rewoven in ways we don’t yet fully understand. We haven’t taken the time to consider all the repercussions. When I think about how social networks are designed, with the help of psychologists, to be as addictive as possible — without consideration for effects like depression — it scares me, because these systems are being implemented at an incomprehensible magnitude.


Hugley: The emergence of nearly blind faith in undisclosed, proprietary algorithms keeps me up at night. With so much pressure to implement faster and better solutions rooted in data, much of what is being presented to communications agencies is being oversold, sometimes with an appalling lack of transparency. Fundamental best practices of research — transparency and replicability — should not be abandoned as we push toward more advanced analytics solutions.


There have been many movies, TV series and books about AI taking control of our lives or running amok. Are such scenarios possible in the real world?
 

Hugley: There are social and economic risks — not to mention human-safety risks — associated with artificial intelligence that we simply cannot ignore. From author Arthur C. Clarke’s and filmmaker Stanley Kubrick’s mutinous HAL computer in “2001: A Space Odyssey” to myriad tech-gone-awry scenarios in a series like “Black Mirror,” stories about the potential risks of technology — particularly AI — should not be dismissed as mere entertainment. These are cautionary tales, akin to the fairy tales of old, designed to entertain us, but also to educate us about the risks.


Rockland: I think the majority of movies, TV series and books that play out potential doomsday AI scenarios are great entertainment but not necessarily a great model for what we should be most concerned about. This is not to say that considering the ethical implications of hypothetical replicants isn’t an interesting or important intellectual exploration — but handing off a majority of our social interactions to the AI of companies like Facebook is a much more urgent concern.


Anything else to add?
 

Rockland: One thing I think is really important to this topic is the recent introduction of GDPR — the General Data Protection Regulation. It took effect in May, governs data protection and privacy for people in the EU, and will likely have significant effects on how AI is trained. The regulation introduces much more oversight over how data collected about consumers can and cannot be used without their explicit consent.

I am hopeful that GDPR will force companies to be more explicit about their use of AI.


Hugley: Weber Shandwick’s most recent research on the views of chief marketing officers about AI revealed that 55 percent expect the technology to have a greater impact on marketing and communications than social media has. The momentum toward more advanced and automated data solutions will only increase.

Successful communicators of the future will be those who most deftly navigate communications at the intersection of technology and data innovation. Proficiency in math and measures — even a solid grounding in statistics — is no longer enough.

David B. Rockland, Ph.D.

David B. Rockland, Ph.D., retired as CEO of KGRA in 2017, but continues as part-time chairman. He and his wife, Sarah Dutton, who recently retired from CBS News, have also started their own research and consulting firm to work with Ketchum and other clients at rocklanddutton.com.
