Hello ... I have been hitting this terrible, never-ending issue while crawling a large directory made up of several million HTML, DOC, and PDF files. After a few days of successful crawling, the dashboard suddenly shows the error in the title of this topic. The only workaround is to stop the crawl and start it again from scratch, but after a few more days the error comes back, again and again, so the content can never be fully crawled.
On top of that, the message itself is strange ... "cannot crawl ... we are already crawling"? What does that mean? Does it mean a second crawl of the same content source was requested? And if so, why does the crawler get stuck after showing the warning? I waited over a month but nothing happened. I will try V3 now; maybe this strange issue won't appear there.
Note: a good search engine has to crawl the whole content source, regardless of file sizes and folder sizes ... if the information you are looking for is in your content source and your search engine is not able to index it and find it ... ahem ...
Best regards.