Yup, so how did this whole notion of "tomatoes cause cancer" start? Please note that this analogy is purely used to explain something I've picked up in research so often, namely: researching the right thing in the wrong way, or the wrong thing in the right way. Trivial as it may sound, you can actually get pretty screwed if you use the wrong techniques, the wrong methods of data collection, capturing and analysis, or choose the wrong samples and respondents.
If we look at the example of "tomatoes cause cancer", this becomes clear. We may ask the wrong questions of the wrong sample, in the wrong way, at the wrong time... or we may do all of the above perfectly well, but capture the data in the wrong way, capture the wrong data, or completely lose the plot when analysing it.
If we, for example, asked some cancer patients (please excuse the example) whether tomatoes formed part of their diet, there's a fairly good chance they might say yes. This will give us the "insight" that, "well, if all people with cancer eat tomatoes, or if the majority do, mmm, then tomatoes may just be a contributing factor." The trouble is that most people without cancer eat tomatoes too, so without a control group the answer tells us precisely nothing. In this example we're not only making the wrong assumptions, we've chosen a very select and completely wrong sample of respondents. Even worse, the format of the question posed to the respondents is just s-o-o-o wrong. And finally, the conclusions and assumptions drawn from the data... well, they're also wrong.
Besides the above, there are many other variables we haven't even considered. For example: have these guys we questioned had normal, natural tomatoes as part of their diet, or genetically modified ones? Maybe, for all we know, these people grew up close to Chernobyl or some or other nuclear plant where a leak or explosion occurred, and their genes, their DNA, were altered by the prolonged exposure to radioactivity. In that case, the fact that they have cancer may have nothing to do with the fact that they ate tomatoes at some point in their lives.
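For the statistically inclined, the missing control group can be shown in a quick sketch. The numbers here are entirely hypothetical, purely for illustration: we build a made-up population where eating tomatoes and having cancer are completely independent, then survey only the patients, exactly as in the flawed study above.

```python
import random

random.seed(42)

# Hypothetical population: ~90% of people eat tomatoes, and cancer
# status is assigned completely independently of diet.
TOMATO_RATE = 0.9
CANCER_RATE = 0.05

people = [
    {
        "eats_tomatoes": random.random() < TOMATO_RATE,
        "has_cancer": random.random() < CANCER_RATE,
    }
    for _ in range(100_000)
]

patients = [p for p in people if p["has_cancer"]]
controls = [p for p in people if not p["has_cancer"]]

def tomato_rate(group):
    """Fraction of a group that eats tomatoes."""
    return sum(p["eats_tomatoes"] for p in group) / len(group)

# Surveying only patients yields "the vast majority of cancer patients
# eat tomatoes!" -- true, but meaningless: the control group eats
# tomatoes at virtually the same rate.
print(f"tomato-eaters among patients: {tomato_rate(patients):.2f}")
print(f"tomato-eaters among controls: {tomato_rate(controls):.2f}")
```

Both rates come out near 90%, even though tomatoes play no role whatsoever in who gets cancer in this toy world. That, in one small script, is why the single-sample survey proves nothing.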
There's another rather touchy point, and that is the use of pre-designed software packages to measure certain traits, trends and data collected. Let's say you want to test your brand strength out there in the world of consumers. You buy a pre-designed software package which should test this for you; after all, you'd rather do the research yourself and save the cost of outsourcing it to a specialist or research company. What you may not be aware of is that this particular "brand strength testing software" is not necessarily customisable: it may not allow for some of the variables in YOUR particular country, industry, the channel and consumers you focus on, or the specific product you want to test within your brand.
I suppose we could carry on for many more pages about what to do and what not to do: deductive analysis, qualitative vs. quantitative, cohort groups, yackety-smackety, blah, blah.
A basic first rule I follow in doing any research is, 'begin with the end in mind'.
In other words, what do you want to know? What do you want to find out? Scientists and social researchers would call this the 'hypothesis'. Beware of "leading your research" towards what you want to find: stay as objective as possible when looking at the results, and try to accommodate as many variables as possible that might influence your findings... all for the sake of accuracy. It may sound daunting to the inexperienced, but it's really a simple matter of planning!