It’s no secret that robots are inching toward taking over the internet. For all the frontier’s good intentions for human interaction and discovery, it was only a matter of time before bots made their presence known. And they did: bot traffic now makes up around half of all internet traffic.
Detecting bot traffic has also come on in leaps and bounds. We can usually judge a bot’s intent and differentiate between those on the good side (legitimate crawlers, scrapers, and aggregators) and bad bots used for click fraud or financial crimes such as phishing. The problem is that digitalisation brings sophistication. As bots become more dangerous, there has to be a counterbalance to, at the very least, contain bad bot behaviour in a war of attrition. The saving grace is that our defensive mechanisms are shaping up well, too.
How have we got here?
Using bots in business to gain a competitive advantage, or to comb and aggregate data without manual effort, is commonplace. But malicious bots really earned their stripes in high-profile instances where resellers deployed them on commerce sites to buy up popular stock en masse, away from genuine buyers, and hike up resale prices. Bots are now evident in ‘the everyday’ online – from casual users seeing them respond to or share social posts, to analysts and marketers tracking their interactions with paid advertisements and websites. They’re agnostic about which browser they impersonate (Chrome is popular) and which jurisdictions they operate in: bad bots account for 71% of traffic in Ireland, 68% in Germany, and 34% in the United States.
E-commerce is an industry hard hit by bad bots, with unwanted traffic accounting for around 65% of the total. This is most likely due to the prevalence of pay-per-click paid advertising in the sector, which means poor return on investment for businesses paying for clicks that aren’t genuine. Other suffering industries include social media, gaming, tech and finance. In the latter’s case, DDoS and brute-force attacks are popular threats alongside ‘credential stuffing’, where stolen user details are replayed at scale to gain unauthorised access to accounts.
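As a simple illustration of how credential stuffing can be contained, a defence might count failed logins per source address within a sliding time window and flag addresses that exceed a threshold. This is a minimal sketch with purely illustrative threshold values, not a production control:

```python
from collections import defaultdict, deque

# Illustrative thresholds, not recommendations
WINDOW_SECONDS = 60
MAX_FAILURES = 10

failures = defaultdict(deque)  # source IP -> timestamps of failed logins


def record_failed_login(ip, timestamp):
    """Record a failed login attempt and return True when the IP
    exceeds the allowed failures inside the sliding window - the
    signature of automated credential stuffing rather than a human
    mistyping a password."""
    attempts = failures[ip]
    attempts.append(timestamp)
    # Discard failures that have aged out of the window
    while attempts and timestamp - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) > MAX_FAILURES
```

A flagged address could then be challenged with a CAPTCHA or temporarily blocked, raising the cost of each stolen credential tried.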
Where are we heading?
Such cybersecurity threats are getting more pronounced, more devastating and more stealthy. Bad bots are part of this problem, taking advantage of sophisticated digital systems. API usage is becoming more widespread through cloud computing (around 90% of developers use them), connecting servers, databases, and networks across all industries; financial services, for example, employ APIs to seamlessly connect services to core banking infrastructure. Flaws in their implementation, however, have allowed criminals to use bad bots to break in and steal sensitive financial or customer data.
Cloud computing uptake, while promoting safety, can only be robust if stronger security protocols and testing are built into API development. For financial services and affiliated providers, the burgeoning open banking concept creates interoperability for applications at the user level, but also clear targets for infiltration by bad bots.
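One baseline protocol worth building into any API is per-client rate limiting, which blunts the high request volumes that bad bots depend on. A minimal token-bucket sketch, with illustrative rate and burst values:

```python
import time


class TokenBucket:
    """Minimal per-client rate limiter (token bucket), a common first
    line of defence in API gateways against abusive bot traffic."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Spend one token per request; refuse when the bucket is empty."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice a gateway would keep one bucket per API key or source address, so a legitimate user’s steady traffic passes while a bot’s burst is throttled.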
Another popular trend in the online space is artificial intelligence; AI runs much of our efficient data processing and automation through good bots. Yet with data science capabilities growing by the day, the race to make AI ‘more human’ can be detrimental in the battle against bad bot traffic. The accuracy of machine learning has blurred the once-clear divide between good and bad bots, producing a new category of ‘grey bot’. While high volumes of clicks and page views are clearly not human, these threats are less obvious when a bot is scraping information, perhaps with malicious intent.
What the web-based world faces is a double-edged sword from digitalisation, AI included. Cybercriminals are exploiting the vulnerabilities of ill-tested cloud architecture and applications while developing automations to make their threats more widespread. For those fighting the good fight, AI can fight back: detecting suspicious patterns, testing systems and flagging weaknesses.
Countering bad bot traffic is a constant effort, and companies need to proactively invest in the technology and expertise to battle both familiar and evolving attacks. With those scalable safeguards in place, even an uncertain online future can be kept safe for all of us well-connected humans.