Museums are reopening, but are they ready for tough questions about purpose, membership, and AI?
If you're reading this and not a subscriber to Museum Human, consider scrolling to the bottom and signing up now—it's free and is the only way to read the site's longer weekly post on the organizational culture of cultural organizations.
The New York Times ran its own series on museums reopening: read them all, but I'll point out this one, with various suggestions for late- and post-covid museums (including one similar to my post on museum collectives), and this one, on museums doing actual community-partnership work. A suggestion to museums: if you are already doing real, equal work with the neighborhoods around your institution, make sure your entire staff knows it's happening. If not, don't assume a position of superiority, treating the non-museum-going audience as deficient; instead, take a page from Mike Murawski's book Museums as Agents of Change and recognize that the community is in many ways superior to the museum, which can be just an institution with trustee wealth.
The annual American Alliance of Museums conference (an alliance for what? against what? do museums really feel aligned? do their workers?) started recently with its kickoff day. Read through Twitter (also here) for some good recaps, including this and this on the session called "Crafting A Different Organization: Non-profit Structure, Behaviors, and Leadership Models," which covered how power structures in museums only want change that doesn't affect their position and privilege, especially when boards and leaders don't really know what their staff do—and neither do visitors, really. This short Twitter thread says it all:
Create “bets,” not projects. Calling initiatives “bets” might seem like semantics, but it’s more honest about their predictability and risk—how often have you seen a project with targets that are pulled from thin air or completely divorced from reality? With bets, teams feel greater license to double down or fold, depending on the actual results they see in the market. Of course, it’s also important to distinguish bets from essential operations work. Your organization must do what’s necessary and do it well if it’s going to survive, but be extremely critical about what’s truly mandatory—it’s easy for this category to expand.
I agree: a bet brings just the right amount of pressure to bear, along with a willingness to tolerate failure as long as it produces learning. I've started noticing people using the word "pilot" when they expect something to fail. That doesn't make a pilot a bad thing, but it raises the question: do leaders high enough up in the org chart allow themselves failure too easily? It becomes an issue of accountability in the org's culture when failure at the top levels is so transparently fetishized through performative humility. You can't have servant leaders who are simply assigned their power, which is why orgs are so hostile to people who need any kind of accommodation—it puts leaders in the position of encountering vulnerability when the message is supposed to be strength.
Many forms of work are becoming shrouded in the term “artificial intelligence,” hiding the fact that people are often performing rote tasks to shore up the illusion that machines can do the work. Already millions of people are needed to prop up supposedly automated services: tagging, correcting, evaluating, and editing AI systems to make them appear seamless. Others lift packages, drive for ride-hailing apps, and deliver food. Rather than representing a radical shift from established forms of work, the encroachment of AI into the workplace should properly be understood as a return to older practices of industrial labor exploitation that were well established at the turn of the twentieth century. That was a time when factory labor was already “augmented” with machines and work tasks were increasingly subdivided and tracked. Indeed, the current expansion of labor automation continues the broader historical dynamics inherent in industrial capitalism. A crucial difference is that employers can now use AI to observe, assess, and modulate the work cycle and bodily data—down to the last micromovement—in ways that were previously off-limits to them.
I found this especially important in the return-to-workplaces moment, as technology will inevitably be part of the way organizations try to gauge productivity for their differently located groups of workers, whether on site or at home (using institutional tech or logging in through institutional VPNs—yes, it's about security, but still). Will the productivity metric be algorithmic or human? Is there a way to get the best of both? This piece from the forum, by Lara Nachman, is especially important for museums, because it argues that the digital can be humanized, and vice versa:
Systems that leverage the strengths of both can deliver compelling, efficient, and sustainable solutions in overall task performance, in training the AI system, in training people—or all of the above. But creating collaborative systems poses unique challenges we must invest in solving. These systems need to be designed to perceive, understand, and predict human actions and intentions, and to communicate interactively with humans. They need to be understandable and predictable—and to learn from and with users in the context of their deployed environments, in the midst of limited and noisy data.
Expect more remote work links and articles soon. Be well and safe!
If you're reading this and not a subscriber to Museum Human, consider signing up for a free subscription below—it's the only way to read the site's longer weekly post on the organizational culture of cultural organizations. Thank you for reading!