Bayesian poisoning, perhaps: literally just flooding everything with crap so the statistics on what counts as spam get distorted, letting future spam slip through.
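Roughly, a naive Bayes filter scores each token by how often it appears in spam vs. ham, so padding spam with innocent-looking words drags the combined score toward ham. A toy sketch of that dilution effect (illustrative only, not any real filter's code; real filters like Graham's combine only the most extreme tokens rather than averaging everything):

```python
# Toy naive Bayes spam filter, to show how filler tokens dilute the score.
from collections import Counter

spam_tok, ham_tok = Counter(), Counter()
n_spam = n_ham = 0

def train(tokens, is_spam):
    global n_spam, n_ham
    (spam_tok if is_spam else ham_tok).update(set(tokens))
    if is_spam: n_spam += 1
    else: n_ham += 1

def p_spam(token):
    # Per-token spam probability with light smoothing.
    s = spam_tok[token] / max(n_spam, 1)
    h = ham_tok[token] / max(n_ham, 1)
    return (s + 0.01) / (s + h + 0.02)

def score(tokens):
    # Crude combined score: mean of per-token probabilities.
    return sum(p_spam(t) for t in tokens) / len(tokens)

for _ in range(50):
    train(["cheap", "pills", "buy", "now"], is_spam=True)
    train(["meeting", "tomorrow", "lunch", "notes"], is_spam=False)

print(score(["cheap", "pills"]))                    # ~0.99: clearly spam
print(score(["cheap", "pills", "meeting",
             "tomorrow", "lunch", "notes"]))        # ~0.34: diluted toward ham
```

And if the filter then trains on those padded messages as spam, the innocent tokens themselves start scoring spammy, so the poisoning distorts the stats in both directions.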
It could also be a mapping probe: first the bot spams every web form it comes across with a harmless-looking statement like "nice site!", but it also includes a unique string of characters. Later it crawls the site, searches for that unique string, and maps a connection between that form and the text being displayed. The goal is to build up a list of easily spammable sites that can be exploited later.
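The probe amounts to just two HTTP requests separated by time. A hypothetical sketch, where the URL and form field name are made up for illustration:

```python
import uuid
import requests

# Phase 1: post a harmless-looking comment containing a unique marker.
marker = f"nice site! ref-{uuid.uuid4().hex}"
requests.post("https://example.com/guestbook", data={"comment": marker})

# Phase 2 (later): crawl the page and check whether the marker got rendered.
page = requests.get("https://example.com/guestbook").text
if marker in page:
    print("form-to-page mapping confirmed; site goes on the spammable list")
```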
Since your filter has caught 107k messages, is it Bayesian poisoning? Or is it not that crazy to assume there are 107k individual spammers probing for vulnerabilities?
Until now, I never really knew there were established terms and methods for e-mail spamming.