Academic decision data: the invisible work of faculty committees

Topics in Higher Education

Why don’t chief academic officers know about the faculty talent decisions their schools and departments are making? Or about the actual committee work that’s going into them? We throw a few possibilities out there.

In addition to building software that eases the actual workload involved in faculty talent decisions, at Interfolio we’re also pondering bigger questions. We ask, “How can new technology best aid the real work of higher education?”

One big umbrella of an answer, of course, is data. And under the data umbrella, one box that remains virtually unopened is the distribution and extent of faculty committee service.

It’s worth acknowledging that academic committee work is deliberately dispersed throughout the institution. It’s supposed to be up to a particular department’s faculty to choose their colleagues, review their professional character and contributions, and nominate them for the institutional investment of tenure and promotion.

But because faculty committees of all kinds are collaborating across the institution in concurrent, overlapping fashion, we know it’s challenging for provosts and deans to stay aware of patterns and inconsistencies in this substantial component of their colleagues’ work week.

For example, if you’re a provost at a university composed of six schools, on what are you currently basing any recruitment guidelines for tenure-track positions? Can you confidently compare the tenure committee involvement of the average associate professor in the School of Engineering with that of their counterpart in the School of Arts & Sciences?

And at the institutional level, are you aware of how many faculty members, total, are even up for tenure or promotion every year, so that you can plan your budget accordingly?

Or if you’re in faculty affairs at the School of Medicine, and you suspect that certain departments or divisions are handling their recruitment process more sloppily than others, how would you obtain evidence to assess that? Wouldn’t it give you a stronger basis for difficult conversations if you had access to factual information—like the composition of those committees, the number of candidates considered, or how often certain departments had run searches in the past few years?

Lest we seem to advocate some kind of Big Brother university full of secret scoreboards, consider a more universal benefit of better data on committee service: academic leaders who are better equipped to give talented, dedicated professors time to shine. Rather than going on anecdotal evidence, scattered observations, and general impressions, wouldn’t it serve the university’s mission if chief academic officers had clear metrics to let them know when faculty members are up to their ears in committee duties?