A Star to Steer By

Thinking about Project Management for the Humanities Researcher

John Laudun
Feb 28, 2024

This semester I am teaching a class on Project Management in Humanities Scholarship. I have seen enough graduate students stumble when shifting from the managed research environment of course papers to the unmanaged research environment of the thesis or dissertation that I thought it would be useful to try out some of the things we know about how best to manage projects in general, as well as offer what I have learned along the way. The admixture of "experts agree this works" and "this works for me" opens up, I hope, a space in which participants find themselves with a menu of options from which they feel free to choose and try. Keep doing what works. Stop doing what doesn’t.

We are a month into our journey together, and almost everyone has finally acceded to the course’s mantra that doing something is better than doing nothing (because the feeling of having gotten anything done can be harnessed to build momentum to get something more important done), but a few participants are still frozen at the entry door to the workshop where each of us, artisan-like, is banging on something or other.

All of them have interesting ideas, but some are struggling with focus. I think this is where the social sciences enjoy an advantage. They have an entire discourse, woven into their courses and their everyday work lives, focused on having a research question. What that conventionally means is that you start with a theory (or model) of how something works; you develop a hypothesis about how that theory applies to your data (or some data you have yet to collect, because science); and then you get your results, in which your hypothesis proves accurate to a greater or lesser degree.

Two things here: the sciences have the null hypothesis, which means they are (at least theoretically) open to failure.[1] The sciences also have degrees of accuracy. Wouldn’t it be nice if we could say things like “this largely explains that” or “this offers a limited explanation of that” in the humanities? Humanities scholars would feel less stuck because they would be less anxious about “getting it right.” We all deserve the right to be wrong, to fail, and we also deserve the right to be sorta right and/or mostly wrong. Science and scholarship are meant to be collaborative frameworks in which each of us nudges understanding just that wee bit further. (We’re all comfortable with the idea that human understanding of, well, anything will never be complete, right? The fun part is the not knowing.)

The null hypothesis works very clearly when you are working within a deductive framework, but it is less clear when you are working in an inductive fashion. Inductive research usually involves starting with some data that you find interesting, perhaps in ways you can’t yet articulate, and your “research question” really amounts to “why do I find this interesting?”, which you then have to translate into “why should someone else find this interesting?” Henry Glassie once explained this as the difference between having a theory and needing data to prove it, refine it, or extend it, and having data and needing to explain it.

There is also a middle ground, which might be called the iterative method, wherein you cycle between a theory or model, collecting data, and analyzing that data. Each moment in the cycle helps to refine the others: spending time with the data gives you insight into its patterns (behaviors, trends), which leads you to look into research that explores those patterns. Those theories or models then let you see new patterns in your texts that you had not seen before, or perhaps make you realize that, given your interest in this pattern, you need different texts (data) to explore that idea.

I see a lot of scholars, junior and senior, stuck in the middle of this iterative method without realizing it, not knowing which moment to engage first. What should they read … first? (I have seen the panic in their faces.) What I tell participants in this workshop is that it doesn’t matter. They can start anywhere, but, and this is important, they must start. No one cares whether you start by reading a novel (and taking notes) or reading an essay in PMLA (and taking notes). 99% of managing a project as an independent researcher is just doing something and not letting yourself feel like you don’t know where to start. Just start.

Will the outcome be the project they initially imagined? Probably not. But let’s be honest: that perfect project they initially imagined lived entirely in their heads, as it does for all of us. It was untroubled by anything like work. (That’s what makes it ideal!) It was not complicated by having to determine where they might publish the outcome, who might be interested, or to what domain they might contribute. It was also unavailable to anyone else, inaccessible to anyone else, and probably incomprehensible to anyone else. As messy and subpar as the things we do in the hours we have are, in comparison to that initial dream, they are at least accessible to others, who will probably find them interesting and/or useful.

To be clear, I usually press workshop participants and students to start with data collection/compilation (and not with a theory). Mostly that’s because I am a folklorist (and sometime data scientist), and I feel at my most driven when a real-world phenomenon demands that I understand it. To a lesser extent, as comfortable as I am with my own theoretical background, I find the current explosion in all kinds of theories a bit overwhelming. I prefer to let the data tell me what I need to go learn, else I might end up going down the rabbit hole of great explanations and never get anything done!

[1] The sciences are currently undergoing a pretty severe reconsideration of the “right to be wrong.” With the cuts in funding to so many universities — because, hey, the boomers got their almost free ride and shouldn’t have to pay for you — the American academy has shrunk, creating greater competition for the jobs that remain, which means that scientists often feel like they can’t fail. Failure must be an option in science and in scholarship. When it isn’t, we end up with data that has been, perhaps purposefully or perhaps unconsciously, misconstrued because the results need to be X.
