You will probably need a custom spider for this. Contact me if you need help with custom content extraction. Make sure you comply with Wikipedia's content license rules, though.
You can download a dump in XML and then pull out just the categories you are interested in. This way you can avoid spidering wikipedia.org.
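A minimal sketch of that filtering step, assuming a bzip2-compressed pages-articles dump and a hard-coded category marker (the filename and category below are placeholders, and the dump's XML namespace version varies, so the code matches tags by suffix/wildcard instead):

```python
import bz2
import xml.etree.ElementTree as ET

DUMP = "enwiki-latest-pages-articles.xml.bz2"   # assumed dump filename
CATEGORY_MARKER = "[[Category:Physics"          # assumed category of interest

def pages_in_category(dump_path, category_marker):
    """Stream the dump and yield (title, wikitext) for matching pages."""
    with bz2.open(dump_path, "rb") as f:
        for _, elem in ET.iterparse(f):
            # Tags carry the export namespace, e.g.
            # "{http://www.mediawiki.org/xml/export-0.10/}page",
            # so match on the local name only.
            if elem.tag.endswith("}page"):
                title = elem.findtext("{*}title") or ""
                text = elem.findtext("{*}revision/{*}text") or ""
                if category_marker in text:
                    yield title, text
                elem.clear()  # free memory as we stream

if __name__ == "__main__":
    for title, _ in pages_in_category(DUMP, CATEGORY_MARKER):
        print(title)
```

This streams the dump with iterparse rather than loading it whole, which matters because the uncompressed XML runs to tens of gigabytes.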
You can download dumps of the databases from http://download.wikimedia.org/. I don't think it's possible to download only specific pages, but you could quite easily set up a cron job to download the dump, extract it, and parse just the XML you want.
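A rough sketch of the download step such a cron job could run; the dump URL and output path are assumptions (the exact file names are listed on the dumps site), and the crontab line in the comment is just an example schedule:

```python
# Example crontab entry (assumed schedule, weekly at 03:00 on Mondays):
#   0 3 * * 1  /usr/bin/python3 /opt/wiki/fetch_dump.py
import urllib.request

DUMP_URL = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"
OUT_PATH = "/var/data/enwiki-latest-pages-articles.xml.bz2"

def fetch_dump(url=DUMP_URL, out_path=OUT_PATH):
    # Stream the dump to disk in chunks; it is far too large to hold in memory.
    with urllib.request.urlopen(url) as resp, open(out_path, "wb") as out:
        while True:
            chunk = resp.read(1 << 20)  # 1 MiB at a time
            if not chunk:
                break
            out.write(chunk)

if __name__ == "__main__":
    fetch_dump()
```

Once the file is on disk, the same job can hand it to whatever parser you use to extract the pages you care about.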