CyberNews researchers discovered a new type of automated social engineering tool that can extract one-time passwords (OTPs) from users across the US, the UK, and Canada. The so-called OTP Bot can trick victims into sending criminals passwords to their bank accounts, email, and other online services – all without any direct interaction with the victim.
Getting a call from a scammer pretending to be a tech support agent isn’t fun. It’s certainly tedious for the potential victim listening to someone trying to rob them blind by exploiting their goodwill. It’s probably even tedious for the scammer – calling hundreds of people each and every day can make scamming seem like actual work.
Well, not anymore. Scammers have found a way to automate the drudgery, as a new type of bot-for-hire is taking the social engineering world by storm.
Meet OTP Bot: a new type of malicious Telegram bot designed to robocall unsuspecting victims and dupe them into giving up their one-time passwords, which scammers then use to access and empty their bank accounts. Worse still, the newfangled bot’s burgeoning userbase has been growing by the thousands in recent weeks.
- The bot can extract one-time passwords from victims in minutes.
- OTP Bot can steal OTPs for crypto exchanges, banks, and other online services like Gmail, Coinbase, Bank of America, Alliant, Chase, and more.
- CyberNews acquired a robocall recording of the bot, which reveals one of OTP Bot’s social engineering techniques.
- The OTP Bot Telegram channel is growing rapidly, with hundreds of new would-be scammers joining every day.
How OTP Bot works
According to CyberNews researcher Martynas Vareikis, OTP Bot is the latest example of the growing Crimeware-as-a-Service model where cybercriminals rent out malicious tools and services to anyone willing to pay.
Once purchased, OTP Bot allows its user to harvest one-time passwords from unsuspecting victims by entering the target’s phone number, as well as any additional information that the threat actor may have acquired from data leaks or the black market, directly into the bot’s Telegram chat window. “Depending on the service the threat actor wishes to exploit, this additional information could include as little as the victim’s email address,” says Vareikis.
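The one-time passwords the bot harvests are typically generated by the standard TOTP scheme (RFC 6238, built on RFC 4226 HOTP) used by authenticator apps, though some of the services named in this article may deliver codes over SMS instead. A minimal sketch of the algorithm shows why a stolen code is so valuable: anyone who has it within the 30-second window can pass the check.

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then "dynamic truncation"
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    # RFC 6238: TOTP is simply HOTP keyed to the current 30-second time step,
    # so the code is only valid briefly -- which is why the bot works in minutes
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // period)
```

Because the server computes the same value, possession of the code is treated as proof of identity, and nothing distinguishes a victim typing it into a real login page from a victim reading it to a robocaller.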
The bot itself is being sold on a Telegram chat room that currently boasts more than 6,000 members, netting its creators massive profits from selling monthly subscriptions to criminals. Meanwhile, its users openly flaunt their five-figure gains from ransacking their targets’ bank accounts.
Jason Kent, hacker in residence at Cequence Security, argues that bot-for-hire services have already commoditized the automated threat market, making it incredibly easy for criminals to get into social engineering.
“At one time, a threat actor would need to know where to find bot resources, how to cobble them together with scripts, IP addresses and credentials. Now, a few web searches will uncover full Bot-as-a-Service offerings where I need only pay a fee to use a bot. It’s a Bots-for-anyone landscape now and for security teams,” Kent told CyberNews.
“For consumers, it makes it doubly hard to know who is calling, or to be able to confidently buy your kids a new game console.”
Gift cards make the scam go round
The most popular defrauding technique employed by OTP Bot subscribers is called ‘card linking’. It involves connecting a victim’s credit card to the scammer’s own mobile payment app account, and then using it to purchase gift cards in physical stores.
“Credit card linking is a favorite among scammers because stolen phone numbers and credit card information are relatively easy to come by on the black market,” reckons Vareikis.
“With that data in hand, a threat actor can choose an available social engineering script from the chat menu and simply feed the victim’s information to OTP Bot.”
Using a spoofed caller ID, the bot will then automatically call the victim’s number posing as a support agent and will try to trick them into sending their one-time password, which is required to log in to the victim’s Apple Pay or Google Pay account.
Having logged in with the stolen one-time password, the threat actor can then link the victim’s credit card to the payment app and go on a gift card shopping spree in a nearby physical store.
Scammers typically use linked credit cards to buy prepaid gift cards for one simple reason: they leave no financial fingerprints. This is especially convenient during the pandemic as mask mandates are enforced in most indoor spaces, making it even easier for criminals to hide their identities throughout the entire process.
In the following example taken from the bot’s Telegram channel, an OTP Bot user brags about purchasing thousands of dollars worth of prepaid gift cards with their victims’ linked credit cards during a three-day period:
Here’s another example posted by a threat actor on the OTP Bot Telegram channel, showing how quickly the bot is able to extract a password from a target:
In just two minutes, OTP Bot managed to capture the code and link the victim’s Alliant credit card to the threat actor’s Apple Pay account. One can only imagine how many victims the bot can defraud in 24 hours.
However, credit card linking is not the only feature supported by OTP Bot. The creators of the automated social engineering tool boast of being able to extract one-time passwords for Gmail, Coinbase, Bank of America, Chase, and more.
Straight from the bot’s mouth
While it may be hard to believe that a robocalling app can con you into giving up sensitive information in minutes, OTP Bot is designed to sound as convincing as possible.
CyberNews managed to acquire an OTP Bot robocall recording where the bot pretends to be a support agent, warning a potential victim of an unauthorized party requesting access to their bank account. To block the request and secure the account, the victim is asked to dial in their banking PIN. After being provided with the PIN, the bot congratulates the victim on a job well done:
“Great! We’ve blocked this request and your account is now secure!”
OTP Bot then reassures the victim that any unauthorized transaction will be automatically refunded within 24-48 hours and cheekily refers them to a non-existent Action Fraud website for “community articles on how to keep your account safe.”
When listening to the recorded call in isolation, it’s relatively clear that OTP Bot’s voice has been generated using a text-to-speech program. That being said, we couldn’t blame someone for picking up the call while in a busy office and mistaking the bot for an actual support agent.
Then again, for some people, disclosing their personal information to a bot might not even be an issue. According to a 2019 study by Zingle, 20% of users trust customer support bots more than actual humans, while a whopping 42% trust both robots and human support agents equally.
A growing hive of scams and villainy
Since its launch on Telegram in April, the service appears to be rapidly growing in popularity, particularly in the past several weeks. At the moment of writing, the OTP Bot Telegram channel has 6,098 members – a whopping 20% increase in a mere seven days.
Some of the reasons behind this rapid growth appear to be the bot’s ease of use and the bot-for-hire model, which allow even inexperienced or first-time scammers to successfully defraud their victims with minimal effort and zero social interaction.
In fact, some of the OTP Bot users brazenly share their success stories in the Telegram chat, bragging about their ill-gotten gains to other members of the channel:
Based on the success of OTP Bot, it’s becoming clear that this new type of automated social engineering tool will only continue to grow in popularity.
Indeed, it’s only a matter of time before countless new copycat services appear on the market and attract even more scammers hoping to make a quick buck off unsuspecting targets. Katherine Brown, the founder of Spyic, warns that with more bots on the market, the possibilities for social engineering and its abuse are endless. “This year we’ve already seen bots emerge that automate attacks against political targets to drive public opinion,” says Brown.
According to Dr. Alexios Mylonas, senior cybersecurity lecturer at University of Hertfordshire, the rise of social engineering bots-for-hire is even more concerning because of the stricter limits on our social interactions imposed by the pandemic. “This is particularly true for those who are not security-savvy. Threat actors are known to use automation and online social engineering attacks, which enables them to optimise their operations, to achieve their goals and the CyberNews team has uncovered yet another instance of it,” Mylonas told CyberNews.
“What is even more worrying is that this know-how is being offered for hire in a cloud-based manner (crimeware-as-a-service), providing an easier entry point to “script-kiddie” scammers.”
Mylonas is of the opinion that users should be more willing to “educate themselves and be on the lookout for threats like this, enabling them to become a harder target for cybercriminals.”
Mikail Tunç, principal security engineer at Mettle, however, believes that companies should do more to educate users about digital safety as well. “Security is a constantly moving target and the old, Ivory Tower ways of security are falling incredibly short of coming anywhere near,” says Tunç.
“Banks need to do better at continuously educating their customers in the right way. The walls of text sent to customers by email might check a box internally but they just don’t work. Continuous education and awareness in the right way are key.”
At the same time, Tunç argues that security teams need to design applications with their customers’ idiosyncrasies in mind. “Even the design elements and copy-text is incredibly important as it could be the difference between a pensioner losing their life savings or not.”
Anti-robocalling protocols: a step in the right direction?
Thankfully, when it comes to the fight against scam calls and vishing (voice phishing), there’s some positive news as well. Major mobile carriers like Verizon and AT&T are beginning to implement anti-robocalling protocols like STIR/SHAKEN, making it harder for social engineers to spoof their caller IDs and appear as tech support.
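Under STIR/SHAKEN, the originating carrier signs each call with a PASSporT token (a compact JWT defined in RFC 8225, carried in the SIP Identity header per RFC 8224), whose `attest` claim records how confident the carrier is that the caller is entitled to the number (A, B, or C). As an illustrative sketch only, here is how the terminating side can decode such a token to read the attestation level; real verification also checks the ES256 signature against the certificate at the `x5u` URL, which is omitted here.

```python
import base64
import json

def decode_passport(token: str) -> dict:
    """Decode a SHAKEN PASSporT (compact JWT) WITHOUT verifying its
    signature -- illustration only. Returns the header and the claims,
    including 'attest' (A/B/C attestation level) and 'orig'/'dest'
    telephone numbers."""
    def b64url(part: str) -> bytes:
        # JWT segments are base64url without padding; restore it
        return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

    header_b64, payload_b64, _signature = token.split(".")
    return {
        "header": json.loads(b64url(header_b64)),
        "claims": json.loads(b64url(payload_b64)),
    }
```

A call carrying attestation level “A” means the carrier both knows the customer and confirms their right to use the calling number; spoofed calls can at best earn a “B” or “C”, which analytics engines can then flag or block.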
With that said, some experts argue that these measures won’t prevent scammers from calling their potential victims, so it might be a while before the robocall problem is solved or even considerably mitigated.
Travis Russell, director of cybersecurity at Oracle Communications, asserts that small telecom companies don’t have the resources to implement anti-robocalling protocols, which could leave some users at risk. According to Russell, a cloud service that supports STIR/SHAKEN would be the most elegant solution for smaller operators, as it would remove costly technical requirements for implementation.
“If this were offered as Software-as-a-Service, it would greatly reduce the cost for all operators, and may accelerate the implementation of STIR/SHAKEN. Couple this with a cloud-based analytics platform, and we could be well on our way to mitigating the scourge of nuisance calls ringing our phones,” says Russell.
Dr. Stephen Boyce, CEO & president of The Cyber Doctor, thinks that anti-robocalling protocols like STIR/SHAKEN are a step in the right direction. “However, a good number of robocalls still slip through the cracks. Continuous user education on spotting fraudulent robocalls is the best defence combined with the STIR/SHAKEN protocol,” Boyce told CyberNews.
In contrast, Jason Kent argues that anti-robocalling protocols are nothing more than a drop in the bucket. “STIR/SHAKEN are validation checks that are supposed to be implemented by June 30 of this year. You’ll note that robocalls are still a thing,” says Kent.
“Just yesterday, someone called me and asked if I knew why my number called them and told them their social security number was cancelled. I informed them it was a scam, and the scammers spoofed my phone number.”
“It’s past June 30, and still nothing has happened. The people behind these services have skirted the law for years, and will continue to do so,” Kent told CyberNews.
Don’t get duped: how to spot social engineering attacks
With all that in mind, knowing how to spot a social engineering attempt is still vital for keeping your money and personal information safe. Here’s how:
- Don’t answer calls from unknown numbers. If you do and someone you don’t know starts asking you for personal information, hang up immediately.
- Never give away personal data. This includes data like names, usernames, email addresses, passwords, PINs, or any information that can be used to identify you.
- Take it slow. Scammers often try to create a false sense of urgency in order to pressure you into giving up your information. If someone is trying to coerce you into making a decision, hang up or tell them you’ll call back later. Then call the official number of the company they’re purporting to represent.
- Don’t trust caller ID. Scammers can appear as a company or someone from your contact list by faking names and phone numbers. In fact, financial service providers never call their customers to confirm their personal information. In case of suspicious activity, they will simply block your account and expect you to contact the company via official channels to resolve the issue. As such, always stay alert, even if the caller ID on your phone screen looks genuine.