    Chatbots Say Plenty About New Threats to Data

    Published by Gbaf News

    Posted on August 21, 2018

    8 min read

    Last updated: January 21, 2026

    Tags: Artificial Intelligence, Customer interaction, cybercriminals, payment card

    By Amina Bashir and Mike Mimoso, Flashpoint

    Chatbots are becoming a useful customer interaction and support tool for businesses.

    These bots are powered by artificial intelligence that lets customers ask simple questions, pay bills, or resolve disputes over transactions; they’re cheaper than hiring more call centre personnel, and they’re popping up everywhere.

    As with most other innovations, threat actors have found a use for them too.

    A number of recent security incidents have involved the abuse of a chatbot to steal personal or payment card information from customers, or to post offensive messages in a business’s channel, threatening its reputation. There is potential for worse: attackers may find inroads through chatbots by exploiting vulnerabilities in the code to sit in a man-in-the-middle position and steal data from an interaction as it traverses the wire, or by sending users links to exploits that reach the backend database where information is stored. Attackers may also mimic chatbots, impersonating an existing business’s messaging to interact with customers directly and steal personal information that way.

    It’s an array of risks and threats that can hide in an innocuous communication channel and is challenging to mitigate.

    Flashpoint analysts believe that as businesses integrate chatbots into their platforms, threat actors will continue to leverage them in malicious campaigns targeting individuals and businesses across multiple industries. Moreover, threat actors will likely evolve their methods as businesses move to enhance chatbot security.

     Few Chatbot Attacks Made Public

    Further complicating matters, many attacks go unreported. Those that are made public, however, offer useful insight into how attackers leverage chatbots.

    In June, Ticketmaster UK disclosed a breach of personal and payment card data belonging to 40,000 international customers. The threat actor group, identified as Magecart, targeted JavaScript built by service provider Inbenta for Ticketmaster UK’s chatbot. Inbenta said in a statement that a piece of custom JavaScript designed to collect personal information and payment card data for the Ticketmaster chatbot was exploited; the code had been supplied more than nine months earlier and was disabled immediately upon disclosure.

    Microsoft and Tinder have also experienced issues with chatbots. In Microsoft’s case, its AI chatbot Tay, released in 2016, was reportedly commandeered by threat actors who led it to spout anti-Semitic and racist abuse, an attack methodology classified as “pollution in communication channels.”

    On the popular dating app Tinder, cybercriminals used a chatbot to commit fraud, impersonating a woman who asked victims to enter their payment card information to become verified on the platform.

    Mitigations and Assessment

    Awareness of the risks chatbots pose is not yet high. For their part, attackers likely did not set out to exploit chatbot vulnerabilities; rather, in targeting the supply chain or scanning for bugs in code, they found a readily available and relatively new attack vector with direct access to users and their information. Beyond man-in-the-middle attacks in which chatbots can be mimicked, attackers can use chatbots in phishing and other social engineering scams, or to serve users links that redirect to malicious domains, steal information, or reach protected networks.

    Since most of these attacks are essentially attacks against software, tried-and-tested security hygiene goes a long way as a mitigation. This starts with requiring multi-factor authentication to verify a user’s identity before any personal or payment card data is exchanged through a chatbot.
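
    As a rough illustration of that gate, the sketch below refuses to act on sensitive intents until the user passes a TOTP check. It assumes the `pyotp` package and invents the session object, intent names, and handler stubs for the example; none of these specifics come from the article.

```python
# Minimal sketch: require MFA before a chatbot handles sensitive data.
# Assumes the `pyotp` package (pip install pyotp); session/intent names
# and handler stubs are hypothetical, invented for illustration.
import pyotp

SENSITIVE_INTENTS = {"pay_bill", "update_card", "view_statement"}

class ChatSession:
    def __init__(self, user_id: str, totp_secret: str):
        self.user_id = user_id
        self.totp_secret = totp_secret  # base32 secret set at enrolment
        self.mfa_verified = False

def answer_faq(intent: str, payload: dict) -> str:
    return f"(answer for {intent})"           # placeholder business logic

def process_sensitive(intent: str, payload: dict) -> str:
    return f"{intent} completed"              # placeholder business logic

def handle_message(session: ChatSession, intent: str, payload: dict) -> str:
    if intent not in SENSITIVE_INTENTS:
        return answer_faq(intent, payload)    # plain questions pass through
    if not session.mfa_verified:
        return "Please enter the 6-digit code from your authenticator app."
    return process_sensitive(intent, payload)

def submit_mfa_code(session: ChatSession, code: str) -> bool:
    # verify() checks the code against the current TOTP time window.
    session.mfa_verified = pyotp.TOTP(session.totp_secret).verify(code)
    return session.mfa_verified
```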

    Monitoring for and deploying software updates and security patches regularly is imperative. Organisations should also encrypt conversations between the user and the chatbot; this too is essential to warding off the loss of personal data.
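
    As one way to picture message-level encryption of a conversation, the sketch below uses the Fernet recipe from the `cryptography` package; how the key is shared between client and server is assumed to be handled elsewhere, since the article does not specify a scheme.

```python
# Minimal sketch: encrypt chat messages end to end with a symmetric key.
# Uses Fernet from the `cryptography` package (pip install cryptography);
# key distribution is assumed to be handled out of band.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, shared over a secure channel
fernet = Fernet(key)

# Client side: encrypt before the message enters the chat channel.
token = fernet.encrypt(b"My card ends in 4242 and the charge is wrong")

# Server side: decrypt on receipt; tampering raises InvalidToken.
print(fernet.decrypt(token).decode())
```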

    Companies may also consider breaking messages into smaller pieces and encrypting those pieces individually rather than encrypting the whole message; this makes offline decryption after a memory-leak attack much more difficult for an attacker. Appropriately storing and securing the data chatbots collect is equally crucial: companies can encrypt any stored data and set rules governing how long the chatbot retains it.
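
    A minimal sketch of that chunk-by-chunk idea follows, encrypting each fragment under its own fresh AES-GCM key so that a single leaked key exposes at most one fragment. The chunk size and key handling here are illustrative assumptions, not a design described by the authors.

```python
# Sketch: encrypt a message in independent fragments so a leaked key
# exposes one fragment, not the whole conversation. Uses AES-GCM from
# the `cryptography` package; chunk size and key handling are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK_SIZE = 64  # bytes per fragment (illustrative)

def encrypt_chunked(message: bytes):
    """Return (key, nonce, ciphertext) triples, one per fragment."""
    triples = []
    for i in range(0, len(message), CHUNK_SIZE):
        chunk = message[i:i + CHUNK_SIZE]
        key = AESGCM.generate_key(bit_length=256)  # fresh key per fragment
        nonce = os.urandom(12)                     # 96-bit GCM nonce
        triples.append((key, nonce, AESGCM(key).encrypt(nonce, chunk, None)))
    return triples

def decrypt_chunked(triples) -> bytes:
    return b"".join(AESGCM(k).decrypt(n, ct, None) for k, n, ct in triples)

msg = b"card ending 4242, dispute #1181, refund approved"
assert decrypt_chunked(encrypt_chunked(msg)) == msg
```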

    Finally, the rise in chatbot-related attacks reinforces the need for continuous end-user education to counter social engineering.
