So, before I forget…. I wanted to commit to paper (eh, teh Internets) these random thoughts that I had before, during, and after research design class today…
- Do most grad students start out as “problem driven” researchers? That is, don’t we mostly come to graduate studies due to interests in real substantive (as opposed, usually, to analytic) problems that we want to better understand? Nearly every grad admission essay I’ve ever read is about substantive rather than analytical concerns. And, in part, the process of grad school is a bit of ~~indoctrination~~ training to adopt and use different analytic (theory and method) tools to answer substantive questions. Only then do we become methods-driven or theory-driven researchers.
- Are some scholars cognitively or psychologically more adept at certain types of research methodologies, or in their capacity to be analytically eclectic? For instance, are hedgehogs better at certain types of work than foxes? Can this explain some of the quantitative/qualitative divide? Or the positivist/post-positivist divide? Or, if interpretive methods are based on empathy and the ability of researchers both to know or understand in the common-sense way and to know as a stranger or expert (9-13, 19), then might some researchers be better able (due to personality) to empathize with their research subjects than others? I can think of some people whose ability to empathize with colleagues (not at my institution) suggests that they would be hard pressed to empathize with research subjects sufficiently to achieve verstehen. Isn’t it better, then, that they aren’t doing interpretive work? Or, is the ability to do meaningful interpretive work (pun intended) dependent on personality? Could we derive a model to predict this? Give researchers a personality test that measures ability to empathize, and then correlate that with some third-party coding/measure of the quality of their understanding as demonstrated by their research? Does this mean that Foucault was capable of incredible empathy? I wonder.
- For class, we read pieces that claimed that quantifying things runs counter to the interpretive turn…. that numbers cannot capture the intersubjectivity of a text (broadly understood to include acts and artifacts as texts) and so have no place in interpretive work. Sure, I get that. But then what of the digital humanities movement, where folks are using textual analysis (counting of words) to describe and interpret and uncover patterns in texts? Admittedly, I don’t know enough about MLA-style interpretivism or the digital humanities, but the glimpses I’ve seen suggest that they don’t view counting words as antithetical to interpretation of texts. So, why can’t political scientists use textual analysis tools for interpretive analysis of texts? (And, in some ways, it seems to me, that many of the presentations I’ve seen of textual analyses are more descriptive than causal anyway….)
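For the skeptics, the kind of word-counting I have in mind is genuinely minimal; something like the sketch below, where the counts aren’t the interpretation, just a prompt for one. The sample sentence is made up for illustration.

```python
# Minimal sketch of digital-humanities-style word counting: tally word
# frequencies in a text as a starting point for interpretation, not a
# substitute for it. The sample text is invented for illustration.
from collections import Counter
import re

text = "The state sees the citizen; the citizen sees the state differently."
words = re.findall(r"[a-z]+", text.lower())  # crude tokenization
counts = Counter(words)

print(counts.most_common(3))  # the most frequent words in the text
```

Nothing about printing that list forecloses an interpretive reading; it just points at where the reading might start.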