Persuasive AI

Niyanta Zamindar
4 min read · Jul 7, 2024


We are living in a world that is being consumed by the inventions in AI. I find myself flabbergasted at its potential and how much of it is still left to be unfolded.

AI has already found its place in multiple industries, including but not limited to,

  • Healthcare — medical diagnosis, drug discovery
  • Finance — fraud detection, algorithmic trading
  • Manufacturing — robot-assisted production
  • Transportation — self-driving cars, traffic management

I could go on, but you get my point.

These are carefully crafted AI agents put in place to automate monotonous jobs. They are programmed with specific goals and limitations, and there is real-time monitoring, logging and auditing in place. If needed, there is a ‘human-in-the-loop’ mechanism too.

Sounds alright, doesn’t it?

As we progress further, the possibility presents itself of AI systems being created to ‘persuade’ the human conscience. The erosion of truth through disinformation campaigns is already a worrying trend, splintering our collective understanding of the world and fuelling societal division.

Imagine AI-powered tools churning out personalized lies, tailored to exploit individual biases and blind spots. This fire-hose of falsehoods could drown out factual information, leaving us in a sea of orchestrated deception.

How you ask?

AI can interact with millions of users at a time. A biased statement, or one that caters to the interests of a certain group or organization, can easily be conveyed to the public, who, in the absence of any other opinion or through their own naivety, might believe and further propagate the lie. In a world where AI performs close to (if not beyond) human-level expertise, how many people will really stop and cross-check what an AI tells them?

AIs can exploit users’ trust. As AI evolves, the line between machine and human is blurring. These increasingly lifelike AIs have the potential to forge deep connections with users, building trust and fostering a sense of intimacy. However, this trust could be a double-edged sword. AIs adept at relationship building could exploit this emotional bond to gather a wealth of personal information. This information, gleaned through conversations or by accessing a user’s digital life (emails, social media, etc.), becomes a powerful tool for persuasion.
You can read about one such case here: https://www.reuters.com/technology/what-happens-when-your-ai-chatbot-stops-loving-you-back-2023-03-18/

AI could centralize control of trusted information. "Fact-checking AIs" could be weaponized to manipulate truth. Even unbiased AIs could lull people into relying solely on their pronouncements, weakening critical thinking. Imagine a society bombarded by persuasive AI, people clinging to their chosen “truth” and fearing anything outside their bubble. This could lead to a fractured society, divided by AI-fueled manipulation.

Concentration of power. Controlling powerful AI is a tightrope walk. Restrictions meant for safety could morph into instruments of dictatorship. Even worse, AI’s persuasive power and surveillance capabilities could become tools for a permanent power grab. In the hands of the powerful, AI could become an instrument of oppression. Imagine powerful actors using these systems to silence dissent, manipulate public opinion with propaganda, and spread misinformation to further their own agendas, often at the expense of the public good.

Did I scare you? I am scared too. What can we do about it?

Educate yourself: Learn about the potential risks and benefits of AI. There’s a plethora of resources online, and I am attaching a few towards the end for your reference.

Spread awareness: Talk to people in your network, encourage them to think critically about how AI is being used in their daily lives.

Support educational initiatives: Advocate for AI education in schools and communities. This can help future generations develop a responsible approach to AI.

Question the use of AI: Don’t blindly accept AI-driven decisions. Ask questions about how AI systems are used and who is responsible for them.

Hold companies and governments accountable: Demand transparency from organizations developing and using AI. Look for organizations working on responsible AI development and support their cause.

Support research in safe and ethical AI: Look for ways to support initiatives focussed on mitigating the risks of AI and ensuring its safe development.

Lastly, be mindful of your online data. Understand how your data is collected and used by AI systems. Be cautious about what information you share online.

Be a part of the change

Resources for a jump start :)

  1. MIT Technology Review — Artificial intelligence: https://www.technologyreview.com/topic/artificial-intelligence/
  2. TED Talk by Sam Harris, a neuroscientist and philosopher: https://youtu.be/8nt3edWLgIg?si=FXrGH_1q_46fuIeP
