Just join Google Webmaster Tools and submit a sitemap. They will usually have at least your domain in the index within an hour; the subpages can take up to a week. If you have an RSS feed (WordPress gives you one) then add it to forums.digitalpoint.com, put a link in your signature, and another one in the "what is your website" field. You can also create a blogger.com account and put up 1-2 pages with a related link back to your website. Or do a search on "dofollow" on this forum, find a nice related blog, and post a comment with a backlink to your site. It just depends on how much "work" you're willing to do and how dedicated you are. Build up backlinks and it will be spidered and indexed. If you want to script the sitemap part, there's a rough sketch below.
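For reference, this is roughly what a bare-bones sitemap.xml looks like and how you can ping Google once it's uploaded. Just a sketch in Python (any language works), and the example.com URLs are placeholders for your own pages; the ping endpoint is the one Google documents for sitemap submission, though submitting through Webmaster Tools does the same job:

```python
# Sketch: build a minimal sitemap.xml and ping Google with its URL.
# The example.com addresses are placeholders -- swap in your own pages.
import urllib.request
from urllib.parse import quote

pages = [
    "http://example.com/",
    "http://example.com/about",
    "http://example.com/blog/first-post",
]

entries = "\n".join(
    "  <url><loc>{}</loc></url>".format(url) for url in pages
)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    "{}\n</urlset>\n".format(entries)
)

# Write the file; upload it to your web root next to robots.txt.
with open("sitemap.xml", "w") as f:
    f.write(sitemap)

# Ping Google so it knows the sitemap exists (or just submit the URL
# by hand in Webmaster Tools instead).
ping = "http://www.google.com/ping?sitemap=" + quote(
    "http://example.com/sitemap.xml", safe=""
)
urllib.request.urlopen(ping)
```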
I don't know how it happens, but I noticed that two of my domains were indexed by Google within 24 hours after I registered them. The only thing I did in those 24 hours was install WordPress and upload a template and some plugins. No content except for the standard WP installation. I didn't link to the domain from any of the other sites I own, I didn't request a link on any other site, and I didn't submit it to any search engine or directory. But when I added the domain to Google Webmaster Tools the next day, I got a message telling me there were already pages indexed, and when I looked in the Google results I saw that they were pages from my domain. Why? How? I really don't know. But I was pleased. Yes, it will.
More than likely someone owned that domain name before; that is probably what happened. And no... the idea that copied content hurts you is a fallacy, pure hocus pocus. There are webmasters who have demonstrated as much: they put up scrambled content, literally gibberish, and still got ranked in the SERPs. The only problem that would arise is copyright infringement.

But to be honest, spiders can't read, and they don't speak English either. They key in on search terms and keywords and compute relevance from a set algorithm; they don't understand any of the sentences or how the sentences are formed. So how in the hell could they tell whose content looks or smells like any other content on the Internet?

There are billions of websites on the Internet. For every one on a given topic, there are 50 million more on the same topic, so the chances that 10 million of them say the exact same crap are pretty high. There are no original thoughts under the Sun. We are all just copycats in one form or another.
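To make the "spiders can't read" point concrete, here's a toy sketch. This is not Google's actual algorithm, just the simplest possible keyword matcher (a bag-of-words scorer): it has no notion of word order or meaning, so a readable sentence and its scrambled version score identically for the same query, which is why gibberish can still rank on keywords:

```python
# Toy bag-of-words relevance scorer -- an illustration, NOT Google's
# real ranking algorithm. It only counts query-term occurrences.
from collections import Counter

def relevance(query, page_text):
    # Count how often each query term appears in the page text.
    words = Counter(page_text.lower().split())
    return sum(words[term] for term in query.lower().split())

readable  = "cheap web hosting with free domain registration"
scrambled = "registration free with cheap domain web hosting"

query = "cheap web hosting"
print(relevance(query, readable))   # 3
print(relevance(query, scrambled))  # 3 -- same score; order never mattered
```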