Hackers use botnet to scrape Google for vulnerable sites
Attackers recently compromised some 35,000 sites that use vBulletin, a popular website forum package, by taking advantage of files left over from the program's installation process, according to security researcher Brian Krebs.
The hack by itself is fairly standard, but the way in which it was carried out shows how search engines like Google can unwittingly become a party to such hacking.
Krebs' findings emerged in conjunction with work by security research firm Imperva, whose researchers believe the hacks are being carried out by a botnet. The botnet not only injects the malicious code into the target sites, but also scrapes Google in a massively parallel fashion, looking for vBulletin-powered sites that might make good targets.
Why scrape Google in parallel? As a workaround for Google's defense mechanisms against automated searches.
Such defenses work well against a single user scraping Google, since after a certain number of such searches from a single host, the user is presented with a CAPTCHA. This typically stops most bot-driven scrapes. But if a great many such searches are performed in parallel, it doesn't matter if each one of them eventually runs afoul of a CAPTCHA. Together, in parallel, they can still scrape far more than any one system alone can. (Krebs did not describe the size of the botnet used, however.)
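The arithmetic behind this workaround can be sketched with a toy model of a per-host rate limiter. The threshold here is a made-up illustration (Google does not publish its actual limits), but it shows why capacity scales linearly with the number of bots:

```python
from collections import defaultdict

CAPTCHA_THRESHOLD = 100  # hypothetical: searches allowed per host before a CAPTCHA

class PerHostLimiter:
    """Toy model of a per-source-address limit like the one described above."""

    def __init__(self, threshold: int = CAPTCHA_THRESHOLD):
        self.threshold = threshold
        self.counts = defaultdict(int)

    def allow(self, host: str) -> bool:
        """Return True if a search proceeds, False once a CAPTCHA would block it."""
        self.counts[host] += 1
        return self.counts[host] <= self.threshold

# A single scraper issuing 1,000 searches is cut off after the threshold...
limiter = PerHostLimiter()
allowed_single = sum(limiter.allow("1.2.3.4") for _ in range(1000))  # 100 get through

# ...but 1,000 bots issuing 1,000 searches each lose only the excess per host,
# so the botnet's total throughput is 1,000x that of the lone scraper.
limiter = PerHostLimiter()
allowed_botnet = sum(
    limiter.allow(f"bot-{i}") for i in range(1000) for _ in range(1000)
)  # 100,000 get through
```

In other words, a per-host CAPTCHA caps each bot individually but does nothing to cap the botnet as a whole, which is exactly the weakness the parallel scraping exploits.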
http://www.infoworld.com/t/hacking/hackers-use-botnet-scrape-google-vulnerable-sites-228799