Filed under: Internet · Date: Wed Apr 19 22:06:44 2006
I noticed a distributed trackback spam attack on my blog yesterday. Since I don't like to bother deleting dozens of fake trackbacks by hand, I started digging into the issue. It turns out that during this month I've received numerous GET and POST requests to my trackback links from a total of 100 different IP addresses.
All the fake trackbacks had almost the same content. Each trackback was titled "this is very good", it linked to either MSN, Yahoo!, or Google, and the body was some variation of the "this is related story" text.
The trackback attacker picked the user agent at random for each request, apparently from a largish database of user agent strings. And not all POST requests from an IP had a corresponding GET request. It would be natural for weblogging software to do a GET before a POST to determine the trackback link for a story, so detecting the missing GETs would require maintaining per-client state on story reading. Alternatively, one could simply discard all POST requests to the trackback URI that arrive with known browser user agents...
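The second, stateless idea could be sketched roughly like this (a hypothetical filter, not the code running on this blog; the function and the marker list are my own illustration). The reasoning: legitimate trackback pings are sent by weblog software, so a POST to a trackback URI claiming to come from a regular browser is a strong spam signal.

```python
# Hypothetical spam filter: discard POSTs to the trackback URI whose
# User-Agent string looks like an ordinary web browser. Real weblog
# software identifies itself differently (e.g. "MovableType/3.2").

# Substrings that appear in virtually all browser user agent strings.
BROWSER_UA_MARKERS = ("Mozilla/", "Opera", "MSIE")

def is_suspicious_trackback(method: str, path: str, user_agent: str) -> bool:
    """Return True if a request to a trackback URI looks like spam."""
    if method != "POST" or "/trackback" not in path:
        return False  # only trackback POSTs are of interest here
    return any(marker in user_agent for marker in BROWSER_UA_MARKERS)
```

This would not have caught every request in the attack described above, since the spammer rotated user agents, but it cheaply rejects the large share of pings that pose as browsers.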
When I integrated the trackback features from PSG to my blog, I initially thought of validating the trackbacks. Laziness won, and I left that protection out. Maybe I need to implement it anyway.
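The validation I skipped boils down to this: fetch the page behind the claimed source URL and check that it actually links back to the story being trackbacked. A minimal sketch of that check, using only the standard library (the class and function names are my own; fetching the remote page is left out of the sketch):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href values of all <a> tags found in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def page_links_back(html: str, my_post_url: str) -> bool:
    """True if the fetched source page contains a link to my post."""
    parser = LinkCollector()
    parser.feed(html)
    return any(my_post_url in link for link in parser.links)
```

A trackback whose source page never mentions the post would fail this check, which covers the fake "this is very good" pings above, since they point at search engine front pages.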
What puzzles me in this trackback spam is that the spammers are not linking to any pharmacy site, like the usual comment spam that I get... What is the point of linking to search engines with "this is very good" keyword?