Google's new Search Console interface is still missing some features from the original version, but it also has some nice new ones. In particular, you can see the index status of each of your pages, including some clues as to why they aren't indexed.

For example, if I click 'Index coverage' in the sidebar, GSC tells me I have 107 valid pages and 44 'excluded' pages. When I click 'excluded' I get a breakdown of those pages: 'Crawled - currently not indexed' and 'Discovered - currently not indexed'. Does anybody have any insight into what either of these means and how to fix it?

I can click each reason to see a list of the URLs in that category, and from there I can 'test robots.txt', 'fetch as Google', and a few other things. As far as I can tell the robots.txt has no issues, Google can fetch the pages just fine, and the pages are not sending a 'noindex' directive. So why else would Google crawl some pages but not put them in the index?

thanks
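P.S. In case anyone wants to run the same check on their own pages, here is a minimal Python sketch of what I mean by "not sending a noindex directive": it looks for 'noindex' in both the X-Robots-Tag response header and the meta robots tag. The URL at the bottom is a placeholder, and it assumes the requests library is installed.

```python
import requests
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.robots_directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.robots_directives.append(attrs.get("content") or "")

def has_noindex(url):
    resp = requests.get(url, timeout=10)
    # 'noindex' can be sent in the X-Robots-Tag HTTP header...
    header = resp.headers.get("X-Robots-Tag", "")
    # ...or in a <meta name="robots"> tag in the page source.
    parser = RobotsMetaParser()
    parser.feed(resp.text)
    meta = " ".join(parser.robots_directives)
    return "noindex" in header.lower() or "noindex" in meta.lower()

# Placeholder URL; substitute one of the excluded pages from GSC.
print(has_noindex("https://example.com/some-page"))
```

One caveat: this only checks the raw HTML, so it won't catch a noindex that gets injected by JavaScript after rendering, which Google would still see.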