Putting The Bot On The Other Foot: 3 Things Chatbots Can Teach Us About Conflict

Whether making a product inquiry or trying to resolve a customer service issue, most of us have interacted with AI or chatbots, usually with varying degrees of success. Anyone who uses Siri or Alexa is likely to have tales of queries gone wrong or frustrated attempts at seemingly simple requests.

But in dismissing these as simplistic machines, what we may not appreciate is that they are often sophisticated tools. And they’re getting smarter and more common by the day. Their complex programming is designed not only to solve a wide range of queries but also to emulate more complicated interpersonal skills, such as minimizing conflict, a skill essential in many customer service applications. By learning from bots, can you reprogram yourself to defuse conflict?

What’s it all a-bot? 

Businesses are increasingly turning to chatbots to manage basic interactions, notably lower-skill, repetitive processes. These targeted applications of conversational technology typically automate communications, via text or speech, that would otherwise require a human.

It’s easy to understand some of the organizational benefits of using bots. A bot can handle high numbers of inquiries at a fraction of the cost of its human equivalent. However, there are also hidden costs. A customer frustrated or misdirected by a bot may take longer to deal with subsequently or take their business elsewhere.

As the sector grows, bots are getting more sophisticated or ‘intelligent.’ There are two main types of chatbot. The first is programmed with a set of rules and completes scripted actions based on keywords, such as a series of questions with “yes” and “no” answers. The second is the increasingly common AI-powered chatbot, which uses machine learning to interact more naturally. One area of particular focus for these bots – especially critical in the customer service arena – is avoiding and reducing conflict during their human-AI interactions. Here are three things we can learn (as well as what we can’t) from chatbots.

  1. Communications skills 

Chatbots are usually programmed to ask a series of questions. Asking questions indicates interest, openness, and a desire to understand, all of which help to defuse a potentially difficult situation.

Generally, when having a challenging in-person conversation, open questions are best, as they allow the other person to give a freeform answer. Although some simpler, structured chatbots use closed questions – such as, ‘Are you looking to query an invoice: Yes / No’ – more sophisticated AI-based chatbots ask open questions and are programmed to analyze the answers.
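To make the distinction concrete, below is a minimal sketch in Python of the first, rule-based kind. The keywords, questions, and fallback line are all hypothetical; real deployments run to hundreds of scripted rules.

```python
# A minimal sketch of a rule-based chatbot: keyword matching drives
# scripted closed questions. All keywords and replies are hypothetical.

RULES = {
    "invoice": "Are you looking to query an invoice? (yes/no)",
    "delivery": "Is this about a late or missing delivery? (yes/no)",
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first scripted question whose keyword appears in the message."""
    text = message.lower()
    for keyword, question in RULES.items():
        if keyword in text:
            return question
    return FALLBACK

print(reply("I have a problem with my invoice"))  # scripted invoice question
print(reply("My account is locked"))              # fallback reply
```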

Bots are increasingly learning to understand the nuances of human language. There is a whole field of AI dedicated to natural language processing, or NLP, i.e., developing the ability of computers to understand text and spoken words. These programs listen for context, such as intent, motive, mood, and meaning. This is something we tend to do naturally in our own interactions, and it can be useful to maintain that same focus in a conflict situation.
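Real NLP systems rely on trained statistical models, but a toy lexicon-based scorer is enough to illustrate the idea of ‘listening for mood.’ In this sketch, the word lists and the messages are invented for illustration:

```python
import string

# A toy mood detector: count positive and negative words. Real systems use
# trained models; these hypothetical word lists only illustrate the idea.
NEGATIVE = {"frustrated", "angry", "ridiculous", "useless", "waiting"}
POSITIVE = {"thanks", "great", "helpful", "resolved"}

def mood_score(message: str) -> int:
    """Crude mood estimate: positive word count minus negative word count."""
    words = {w.strip(string.punctuation) for w in message.lower().split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

print(mood_score("I am frustrated and angry, this is useless"))  # -3
print(mood_score("Thanks, that was really helpful"))             # 2
```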

  2. Reaction and de-escalation

In a human-to-human conflict situation, it can be challenging to remain calm. We hear things that make us feel defensive or trigger an emotional response. Being in a stressful dispute can affect our ability to think clearly and lead us to respond in a way we might later regret. Bots, however, have none of this emotional baggage. They can wait for a response, analyze it, and then reply. I am not suggesting that you behave as emotionlessly as a bot. Still, we can practice effective calming techniques to manage our own emotional reactions and help defuse a tricky situation.

Bots can also draw on more advanced conflict management techniques. Some may go as far as emulating empathy, for example, ‘That must have been frustrating for you.’ Showing empathy in this way can help to de-escalate tensions during conflict and calm strong emotions.
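As a rough sketch of how such scripted empathy might be wired in, the snippet below assumes a mood score like the one above; the acknowledgement phrases and the threshold are hypothetical:

```python
import random

# Scripted empathy: if the detected mood is negative, lead with an
# acknowledgement before giving the answer. Phrases are hypothetical;
# production bots draw on much larger, tested response sets.
EMPATHY_LINES = [
    "That must have been frustrating for you.",
    "I'm sorry you've had this experience.",
]

def respond(answer: str, mood: int) -> str:
    """Prepend an empathetic acknowledgement when the user seems upset."""
    if mood < 0:
        return f"{random.choice(EMPATHY_LINES)} {answer}"
    return answer

print(respond("Let me re-send that invoice now.", mood=-2))
```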

  3. Continual learning

In the past, to learn and develop, chatbots would need further human input to analyze the success of questions and responses and reprogram accordingly. But now, by using machine learning, bots can learn from their inputs and adjust in real time. They may compare questioning approaches against the time taken to resolve a query, or ask directly for feedback: ‘How well did I do?’
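One simple way a bot might run that comparison is a bandit-style experiment: try each questioning approach, record the feedback, and gradually favor the one that rates better. The sketch below uses an epsilon-greedy strategy with hypothetical styles, ratings, and exploration rate; it illustrates the principle rather than any particular vendor's implementation.

```python
import random

# Epsilon-greedy comparison of two questioning styles, scored by simulated
# 'How well did I do?' ratings. Styles, ratings, and EPSILON are hypothetical.
scores = {"open_question": [], "closed_question": []}
EPSILON = 0.1  # fraction of the time we explore a random style

def pick_style() -> str:
    """Mostly exploit the best-rated style; occasionally explore."""
    if random.random() < EPSILON or not all(scores.values()):
        return random.choice(list(scores))
    return max(scores, key=lambda s: sum(scores[s]) / len(scores[s]))

def record_feedback(style: str, rating: float) -> None:
    """Store a user rating (e.g. 1-5) against the style that was used."""
    scores[style].append(rating)

# Simulate interactions where open questions happen to rate slightly higher.
for _ in range(200):
    style = pick_style()
    mean = 4.0 if style == "open_question" else 3.5
    record_feedback(style, random.gauss(mean, 0.5))

print({s: round(sum(r) / len(r), 2) for s, r in scores.items()})
```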

We can apply a similar approach to learning when dealing with our own conflicts. Often, once a conflict is resolved, we feel relieved and just want to move on. However, here we can take a leaf from the bot’s book. There is much value in taking time to reflect on what we have learned. We can ask for feedback and even go back to the other person after some time to check in with them. This builds conflict resilience and will help us manage a similar situation more effectively in the future.

What bots can’t teach us 

Of course, it’s probably not appropriate to model ourselves on WALL-E or even Alexa. To a certain extent, bots are only as good as their programming, especially script-based bots. AI can suffer from programming bias, with developers’ demographic or cultural backgrounds leading to gender bias, racial prejudice, and age discrimination. Bias also affects how we deal with interpersonal conflict. Our own beliefs taint our assumptions about other people’s intent, which often turn out to be way off the mark, and affect how we respond to an issue.

Many bots are also limited in their range of inputs, in that they analyze just text or verbal data without the benefit of visual cues. When we talk to a fellow human, we detect subtle changes in tone and volume, hear pauses or a frustrated sigh. When face-to-face, we take cues from movement and body language. This gives us much more information and context cues to ‘read’ mood, intent, and feelings. Bots are moving further in this direction with advances in facial recognition technology but still have a long way to go.

A critical limitation, especially when it comes to managing conflict, is that bots focus on giving answers. In a person-to-person conflict situation, the best approach can be just to listen and not give answers, opinions, or solutions. During workplace mediation, a plea commonly voiced is “I just want to be heard,” and giving people the time and space to talk is often the most powerful aspect of dispute resolution.

So, by taking some of the principles that bots use to avoid conflict, we can perhaps reprogram ourselves to defuse conflict more effectively. We can at least prompt ourselves to reflect and learn from our experiences and improve for the future. As AI continues to grow and more tasks are taken over by bot colleagues, advanced interpersonal skills, such as conflict resolution, are likely to become even more important for us humans to provide. Our preconceptions of bots might be negative, but we may be pleasantly surprised as they continue to develop.

But for the last word, let’s ask Siri. “Siri, are you the best chatbot?” Siri: “I’m sorry, I don’t have an answer for that.” Well, at least it’s got modesty pretty much sewn up.
