Round-the-clock ‘availability’ is one of the most important requirements of modern-day businesses, according to nearly 52% of respondents in a recent survey. This requirement can be met by artificial intelligence (AI)-powered chatbots, which can simulate human conversations and serve as the first point of interaction between organizations and customers. Apart from acting as the ‘virtual spokesperson’ of a business, chatbots also reduce expenses significantly: annual savings from the use of AI bots have been estimated to exceed $8 billion by 2022. However, like any other form of innovative technology, chatbots come with a few disadvantages and potential risks. We will focus on them here:
1. The problem of rogue chatbots
With advanced machine learning capabilities, chatbots are becoming increasingly adept at ‘imitating’ human conversation. That, however, can prove to be a double-edged sword – since hackers can easily create bots that pose as buyers or suppliers and strike up conversations with a company’s in-house personnel. Over the course of a chat, such a ‘rogue bot’ can convince users to share personal information and/or click through to malicious content. Hackers can launch other forms of phishing attacks through these bots as well. As a rule of thumb, users should avoid sharing confidential information (say, credit card details) with a bot until its security is verified. Links supposedly sent by vendors/buyers should be treated with caution as well.
Note: Users of the Tinder app have already been affected by a malware bot, posing as a female user.
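One simple precaution against rogue bots is to screen links in incoming messages against a list of verified partner domains before anyone clicks them. The sketch below illustrates the idea; the domain names are hypothetical placeholders, and a real deployment would maintain the allowlist in vetted configuration.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of verified vendor/buyer domains (placeholders).
TRUSTED_DOMAINS = {"supplier.example.com", "payments.example.com"}

URL_PATTERN = re.compile(r"https?://\S+")

def flag_untrusted_links(message):
    """Return any links in a chat message whose domain is not allowlisted."""
    suspicious = []
    for url in URL_PATTERN.findall(message):
        host = urlparse(url).hostname or ""
        if host not in TRUSTED_DOMAINS:
            suspicious.append(url)
    return suspicious

# A link from an unknown domain gets flagged for human review.
print(flag_untrusted_links("Invoice here: https://evil.example.net/pay"))
```

This is only a first line of defence – attackers can register lookalike domains – but it stops the laziest phishing attempts cold.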
2. Bots can be too mechanical
For all the advances in artificial intelligence and predictive behaviour (which make chats more intuitive and contextual) – chatbots remain, in essence, glorified mechanical robots. They are pre-programmed by developers, and can handle queries/comments from humans only as long as the overall conversation flows along the ‘expected path’. As soon as something that has not been fed into the bot program is asked, the chatbot’s performance suffers. In most such instances, the program tries to manage the situation by putting forth more qualifying questions, which can be: a) repetitive and b) irritating for the customer. To be of practical use, a chatbot has to be capable of handling different scenarios and resolving queries as quickly as possible.
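A toy sketch makes the ‘expected path’ problem concrete: the bot below matches queries against a fixed script, and anything off-script triggers the same generic qualifying question every time. The intents and replies are invented for illustration.

```python
# A fixed keyword-to-answer script, standing in for a pre-programmed bot.
SCRIPT = {
    "opening hours": "We are open 9am-6pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
}

def reply(user_message):
    text = user_message.lower()
    for keyword, answer in SCRIPT.items():
        if keyword in text:
            return answer
    # Unexpected input: fall back to a (potentially repetitive) qualifying question.
    return "Could you tell me a bit more about what you need?"

print(reply("What are your opening hours?"))   # on-script: direct answer
print(reply("My parcel arrived damaged"))      # off-script: generic fallback
```

The more often a customer lands in that fallback branch, the more ‘mechanical’ the bot feels.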
3. Risks of using standard web protocols
While the chatbot revolution definitely has more than its fair share of innovative features, there is one significant downside. These programs (built on platforms like Slack, Facebook Messenger, WhatsApp or SMS) typically use open Internet protocols that have been around for decades – and can be targeted by professional hackers relatively easily. In fact, chatbots have been referred to as the ‘next big cyber crime target’ precisely for this reason. Using chatbots over standard protocols is particularly risky in the financial sector (e.g., banks). To tackle these security threats and vulnerabilities, most financial institutions currently ensure that all data transmissions take place over HTTPS. Transport Layer Security (TLS) is the underlying technique that financial-sector chatbots rely on to enhance data security standards.
Note: Robust security is essential for bots that support speech recognition (voice technology) as well as the ones that are purely text-based.
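In practice, enforcing HTTPS from a chatbot back-end mostly means leaving TLS verification switched on. A minimal sketch with Python’s standard `ssl` module, whose secure defaults already require certificate validation and hostname checks (the API hostname in the comment is a placeholder):

```python
import ssl

# A chatbot back-end talking to a sensitive API (e.g., a bank) should use
# a TLS context with certificate and hostname verification left ON - which
# is exactly what the stdlib's default context provides.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS versions

print(context.verify_mode == ssl.CERT_REQUIRED)  # peer certificate is validated
print(context.check_hostname)                    # hostname must match the cert

# The context can then be handed to the HTTP layer, e.g.:
# conn = http.client.HTTPSConnection("api.bank.example", context=context)
```

The danger is usually the opposite move: code that disables verification ‘temporarily’ to silence certificate errors, which turns the bot into an easy man-in-the-middle target.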
4. Probable confusions affecting buying decisions
A big advantage of chatbots is that they allow buyers to check out products right inside the chatbox – doing away with the need to actually visit stores (or browse through the different item categories on online shopping portals). However, a closer examination suggests that confusion can crop up here. A person might ask a chatbot to show shirts of a particular size – and the program would show all items in that category. The customer then has to narrow down the displayed range by mentioning his/her preferred colour, sleeves, collar and material. The process can be time-consuming (somewhat defeating the very purpose of chatbots) – and it might well happen that the bot ultimately fails to show the product the customer is looking for. This, in turn, obviously affects the latter’s purchase decision. In some cases, actually browsing the available stock yields a more satisfactory result than simply dealing with a chatbot.
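The shirt-filtering exchange described above boils down to repeatedly narrowing a result set, one attribute per conversational turn – which is exactly why it can take several turns and still end with nothing to show. A sketch with an invented three-item catalogue:

```python
# Invented mini-catalogue for illustration.
CATALOGUE = [
    {"item": "Oxford shirt",  "size": "M", "colour": "blue",  "sleeves": "long"},
    {"item": "Linen shirt",   "size": "M", "colour": "white", "sleeves": "short"},
    {"item": "Flannel shirt", "size": "L", "colour": "blue",  "sleeves": "long"},
]

def narrow_down(products, **preferences):
    """Keep only products matching every preference mentioned so far."""
    return [p for p in products
            if all(p.get(k) == v for k, v in preferences.items())]

step1 = narrow_down(CATALOGUE, size="M")   # turn 1: two shirts remain
step2 = narrow_down(step1, colour="blue")  # turn 2: one shirt remains
step3 = narrow_down(step2, sleeves="short")  # turn 3: empty - the bot has nothing to show
print(len(step1), len(step2), len(step3))
```

Three turns in, the customer is told there is no match – the point at which many shoppers simply go and browse the store instead.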
5. Low-level job openings being eaten up
‘Intelligent chatbots’ are ideal for low-level, repetitive jobs at organizations. Since they are programmed and have the latest AI support, these chatbots can do such ‘menial’ jobs much faster than human workers. While that is great from the business productivity perspective, a serious problem raises its head: chatbots are likely to displace human workers from low-level positions in the future. The threat is particularly serious in developing countries like India, where nearly 1.5 lakh (150,000) new employees join the BPO sector every year. On the demand side too, bots seem to have the upper hand over humans – with 44% of users in the United States stating a preference for chatbots (instead of humans) for customer service. While those in senior-level positions are not at risk, openings for ‘online marketers’ and ‘customer relationship managers’ are likely to dry up over the long run.
Note: Chatbots are fairly easy to make for developers, thanks to the presence of the various bot development frameworks. The cost of making chatbots is not very high either.
6. Increased personalization can be a problem
Chatbots are becoming more ‘chatty’ than ever before. Ask Eva, the web chatbot made by Senseforth, whether she likes you – and it will shoot back a cool ‘am still learning’ response. There are bots that can relate typing in capital letters to greater urgency – and accordingly hand the chat over to a human employee. Personal food preferences, addresses, favourite dresses and a lot more are being shared with chatbots…at times without users even realizing that the conversation is taking place with a piece of manipulable software, and not a fellow human being. Deliberate impersonation can also be an issue. AI-powered chatbots typically store customer data for analysis and greater personalization in the future – and there remains a risk of this data being ‘stolen’ by a third-party attacker, and used against the concerned individuals/businesses. An intelligent, friendly chatbot need not necessarily be a good one!
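One way to shrink the blast radius of a future data theft is to redact obvious payment-card numbers before a transcript is ever stored for personalization. A deliberately simple sketch (the pattern matches 13–16 consecutive digits with optional spaces/dashes; production systems use much stricter PII detection):

```python
import re

# Naive card-number pattern: 13-16 digits, optionally separated by spaces/dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def redact(transcript):
    """Mask card-like digit runs before the transcript reaches storage."""
    return CARD_PATTERN.sub("[REDACTED CARD]", transcript)

print(redact("Please charge my card 4111 1111 1111 1111 for the order"))
```

The stored conversation then retains its value for personalization while the most dangerous field is gone for good.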
7. Fails the Turing Test
AI chatbot programs are supposed to simulate human behaviour closely. How good are they at it? While reports keep coming in about the witty replies and efficient responses of bots – the fact remains that most chatbots do not pass the famous Turing Test (used to gauge the ‘intelligence’ of machines). This brings up the risk of conversations being unfulfilling for potential buyers – inferior to what a traditional two-way conversation between humans would have been. To minimize these problems, experts recommend designing chatbots so that they can bring humans into the conversation as and when required (with a message like ‘I am your AI robot. Let me connect you with our executive’). Chatbots might be very, very ‘intelligent’…but they cannot think for themselves. At least, not yet.
Note: The relatively ‘meh’ performance of Facebook M has shown once again that chatbot technology still has a long way to go.
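The human-handoff pattern recommended above usually hinges on a confidence threshold: when the bot’s best intent match scores too low, it stops guessing and routes the chat to a person. The scores and threshold below are made up for the sketch; a real bot would take them from its NLP model.

```python
HANDOFF_MESSAGE = "I am your AI robot. Let me connect you with our executive."
THRESHOLD = 0.6  # illustrative cut-off; tuned per deployment in practice

def respond(intent, confidence):
    """Answer confidently-matched intents; hand everything else to a human."""
    if confidence < THRESHOLD:
        return HANDOFF_MESSAGE
    return f"Handling intent: {intent}"

print(respond("track_order", 0.92))  # confident match: bot proceeds
print(respond("unknown", 0.31))      # low confidence: human takes over
```

A graceful handoff message like this turns a Turing-test failure into a minor detour rather than a dead end.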
8. Can be manipulated through social engineering attacks
‘Hitler was right’. That was what Microsoft Tay – the ambitious ‘AI with zero chills’ bot – started to tweet within a day of its launch in March 2016 (in a canned ‘Repeat after me…’ series of tweets). The bot had to be suspended – and when it was re-launched a week later, there was trouble again, as Tay started to tweet ‘You are too fast, please take a rest’ multiple times per second. The entire Tay episode serves as a classic example of how AI chatbots can be manipulated into engines for spewing out racist, sexist and otherwise offensive content. Developers have to be very careful while designing the security of such programs: if any loopholes/bugs remain, things can go pear-shaped very quickly. Microsoft Tay (ironically, ‘Tay’ stood for ‘thinking about you’) was an unmitigated social media disaster.
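One guardrail Tay conspicuously lacked was screening user input before echoing it or feeding it back into the learning loop. The sketch below shows the shape of such a filter; the blocklist terms are placeholders, and a real system would combine classifiers with human review rather than a word list.

```python
# Placeholder blocklist - real filters use trained classifiers, not word lists.
BLOCKLIST = {"hitler", "racist_slur_placeholder"}

def safe_to_learn_from(message):
    """Crude screen: reject messages containing blocklisted terms."""
    words = message.lower().split()
    return not any(term in words for term in BLOCKLIST)

def handle(message):
    if not safe_to_learn_from(message):
        return "I'd rather not repeat that."  # never enters the training loop
    return message  # acceptable to echo / learn from

print(handle("Repeat after me: Hitler was right"))
```

Even a crude screen like this would have blunted the ‘Repeat after me’ attack; the deeper lesson is that anything a bot learns from the public must be treated as hostile input.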
9. Data handling on chatbot platforms
Although cloud security has become stronger than before, things are not quite foolproof yet. While using a chatbot, businesses have to be able to track the movement of customer-provided data – and follow a clear-cut policy on where, and for how long, that data will be stored. There cannot be any uncertainty over who will be able to access the information (importance has to be given to ‘authorization’ and ‘authentication’) – or over how it will be used. In the medical and financial sectors in particular, the volume of sensitive personal information shared is high, and the importance of due diligence cannot be overemphasized. People should be able to ‘trust’ the chatbot (and consequently, ‘trust’ the business) while interacting with it.
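The ‘authorization’ point above can be made concrete with a role-based gate in front of every read of stored chat data. The roles, permissions and record store below are invented for the sketch:

```python
# Hypothetical role-to-permission mapping for illustration.
ROLE_PERMISSIONS = {
    "support_agent": {"read_transcripts"},
    "analyst": {"read_transcripts", "read_aggregates"},
    "intern": set(),
}

def require_permission(role, permission):
    """Refuse the operation unless the caller's role grants it."""
    if permission not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role!r} may not {permission}")

def read_transcript(role, customer_id):
    require_permission(role, "read_transcripts")
    return f"<transcript for {customer_id}>"  # fetched from real storage in practice

print(read_transcript("support_agent", "C-1042"))
```

Authentication (proving who the caller is) sits in front of this; authorization (what that caller may do) is what the gate enforces – and every denied access should also be logged for audit.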
10. Lack of individuality and generic conversations
Natural language processing (NLP) is one of the pillars of AI chatbots – and there is no doubt that these programs can behave like humans while chatting with end-customers. However, most chatbots do not have a definite personality of their own – and hence come across as too generic and impersonal (the much sought-after ‘human touch’ is missing). In addition, chatbot programs do not (or cannot) factor in empathy and emotion – which are often critical while interacting with clients. Software developers should ideally give their chatbots a nice little backstory, along with a basic sense of humour (emojis, maybe?) – which will make them more relatable to end users. If a customer wants to know about the bot, the latter should not feel stumped.
Note: The CNN chatbot is a good example of a bot functioning like a machine. It fails whenever anything beyond its pre-programmed script comes up in the conversation.
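Giving a bot the suggested ‘backstory’ can be as simple as a dedicated persona layer that answers questions about the bot itself before the main script takes over. All persona details below are invented examples:

```python
# Invented persona answers; questions about the bot are handled here
# before falling through to the main intent script.
PERSONA = {
    "who are you": "I'm Robin, the virtual assistant here. Born in the cloud, raised on FAQs!",
    "do you like me": "I'm still learning, but you seem great so far \U0001F60A",
}

def persona_reply(question):
    """Return a persona answer, or None so the main script can handle it."""
    return PERSONA.get(question.lower().rstrip("?"))

print(persona_reply("Who are you?"))
```

A lookup table is obviously shallow, but even this much keeps the bot from being ‘stumped’ by the most predictable personal questions.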
11. Accuracy, trustworthiness, accountability
Chatbots are still at a nascent stage. Mistakes in speech recognition and NLP still happen rather frequently – and customer instructions are, as a result, not carried out properly. There are bots used to send out spammy, rejigged promotional content, which hurts the ‘digital trust’ factor of these tools. The onus is on chatbot developers to be fully transparent and frank about their AI programs – their features, capabilities and limitations. Developers/brands also need to be fully accountable for the performance (good or bad) of their chatbots. To its credit, Microsoft came out and accepted full responsibility after the Tay fiasco. In a nutshell, the quality of service (QoS) still requires considerable improvement.
12. The often-overlooked need for encryption
Encryption might be one of the first things that come to mind when it comes to digital data security – but many chatbots on public platforms (e.g., Facebook Messenger) are not secure enough in this regard. If a chatbot is deployed on a non-encrypted platform, data transmissions through it might be hijacked by unauthorized third parties. Access to company databases and other private information should not be given to such insecure chatbot platforms. Ideally, every conversation that takes place on a bot should be encrypted – and deployment should be done on a secure platform. In the absence of proper channel encryption, chatbots are soft targets.
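A small but effective habit is to refuse, at configuration time, any webhook or platform endpoint that is not served over HTTPS – so an unencrypted channel can never be wired up by accident. A sketch with placeholder endpoint URLs:

```python
from urllib.parse import urlparse

def assert_encrypted_endpoint(url):
    """Reject any chatbot endpoint that is not served over HTTPS."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"Refusing non-HTTPS endpoint: {url}")

assert_encrypted_endpoint("https://bot.example.com/webhook")  # accepted silently
try:
    assert_encrypted_endpoint("http://bot.example.com/webhook")
except ValueError as err:
    print(err)  # plaintext endpoint rejected at configuration time
```

Channel encryption (HTTPS/TLS) protects data in transit; encrypting stored conversations is a separate, equally necessary layer.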
Some chatbots also miss out on tasks over and above what they currently do (the Fandango chatbot, for instance, should be able to handle payments). Apart from being aware of the disadvantages and likely risks of automated bots, it is also important for end users to have reasonable expectations of the technology (after all, a chatbot is never going to replicate all the functionalities of a premium smartphone!).
The chatbot revolution is far from a fad. AI bots have already started to revolutionize the standard of customer communications, and things will become even ‘smarter’ in the foreseeable future. As discussed above, chatbots are not perfect yet – but these issues should be gradually ironed out. What remains to be seen is whether the ‘bots are the new apps’ prophecy will be fulfilled anytime soon!