Recently, I found myself responding to one of those semi-regular “campus climate” surveys that universities (sometimes) like to do in order to take the proverbial temperature of the community with respect to diversity, equity, and inclusion. It’s a task that I routinely approach with a jumbled sense of necessity and cynicism.
The necessity, of course, stems from the fact that these really are important issues, that universities have a history of handling them poorly, and that it’s unlikely that things will improve significantly unless people speak up about the problems they see around them. A large-scale, campus-wide survey may be a very blunt instrument for measuring such concerns, but it’s also something that (in theory, anyway) administrators can’t fully ignore. If nothing else, they pay good money to conduct these surveys (since, generally, they outsource the actual work to an independent contractor as a way to help insulate the process from internal biases and manipulation, and to help keep responses as anonymous as possible), and it’s rarely a good look to spend millions on a big survey and then just ignore it.
The cynicism, on the other hand, stems from the fact that such surveys really are very blunt instruments, and that they seem designed to serve more as a kind of “diversity work theatre” than as actual diversity work. A big, visible survey shows that the university is “doing something” . . . even if all that it ever really does about diversity issues on campus is to conduct big, visible surveys. After all, it’s not as if such surveys commonly result in universities sending out press releases in which they acknowledge that (for instance) they struggle to attract students of color, or that their (handful of) BIPOC faculty feel isolated and undersupported, or some such.
And the bluntness of the instrument was in full effect in the specific survey I completed this past week. Much of this bluntness was the result of poorly designed questions. Lots of poorly designed questions. But one in particular jumped out at me.
In the demographic section of the survey, one of the questions was pretty much the standard US version of “the race question”: i.e., respondents were given 5-6 racial categories to choose from (plus a choice for “Other” that came with an additional option for whatever explanation someone might care to offer) and the instruction to check all the options that apply. So far, so good. As I have done for all of my adult life (even when told to check only one box), I checked 3 boxes, and then clicked the “continue” button at the bottom of the page . . .
. . . which led to a question I’d never been asked on a survey or official form before. This was one of those allegedly “smart” surveys, which pays attention to the answers you give and, for particular kinds of answers, asks tailored follow-up questions. And so, because I checked 3 different boxes to mark my racial identity, the “smart” survey asked me an astonishingly dumb question: Which of the 3 racial categories I checked is the one that I most identify with?
In other words, having acknowledged that the world contains people who self-identify in ways that will lead them to check multiple boxes, and then having given people the option to do precisely that, the survey authors then chose to say to those people (in effect), “Okay, but we know you didn’t really mean it when you checked multiple boxes, so please tell us which single box you really meant to check.”
The real face-palm part of all this, of course, is that this was a survey about diversity issues and campus climate. And what better way to undercut the ostensible purpose of such a survey than to fail to understand that people’s identities really are more complicated than discrete, database-friendly categories make them out to be, and to insist that the very people who actively resist such simplistic categorization submit to it anyway?
I left that question blank, and then wrote an angry email to the survey authors . . . which has, to date, not been answered. :/