I'm bidding on this project because I have a lot of experience with similar-scale web crawlers, and to be honest, I feel your pain :). I've been writing web crawlers, bots, and similar software for a couple of years now, using various languages and frameworks. The last site I worked on was also very "angry" about banning IP addresses, and I had to pull 16 million entries from a single website.
I have already developed several techniques for avoiding blocks, such as public proxy discovery and rotation, mimicking human behavior, and several others.
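To give you an idea of the approach, here is a minimal Python sketch of the proxy-rotation and human-timing idea. The proxy addresses and User-Agent strings are placeholders; in a real crawler they would come from a public proxy-list scraper and a pool of real browser headers.

```python
import random
import time

# Placeholder values for illustration only.
PROXIES = ["http://203.0.113.10:8080", "http://198.51.100.7:3128"]
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]

def pick_request_settings():
    """Choose a random proxy and User-Agent for each request,
    so no single IP/fingerprint pair hammers the target site."""
    return {
        "proxy": random.choice(PROXIES),
        "headers": {"User-Agent": random.choice(USER_AGENTS)},
    }

def human_delay(base=2.0, jitter=3.0):
    """Sleep a randomized interval so request timing looks human
    rather than machine-regular."""
    time.sleep(base + random.random() * jitter)
```

Each request would call `pick_request_settings()` before connecting and `human_delay()` between pages; the actual pool management (dropping dead proxies, re-scraping fresh lists) sits on top of this.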
Feel free to contact me before accepting my bid, or with any additional questions; I'd be happy to help :)
P.S. "ezinescrape", just to note :)
Best Regards,
Srdjan