Links of the Week: June 3, 2021: Museums, Docs, and AI




Museums are reopening, but are they ready for tough questions about purpose, membership, and AI?

If you're reading this and not a subscriber to Museum Human, consider scrolling to the bottom and signing up now—it's free and is the only way to read the site's longer weekly post on the organizational culture of cultural organizations.


So maybe we've made it through, or maybe we're just failing a little more slowly and many of us can't decide how we feel about that.

The museum where I work was the subject of a three-part PBS series that went in an unexpected direction thanks to some crises. Tougher reviews said the documentary could have delved deeper into the operating model of this museum, or of others. (Read other reviews here, here, and here.)

The New York Times had its own series on museums reopening: read them all, but I'll point out this one with various suggestions for late-covid museums (including one similar to my post on museum collectives) and this one on museums doing actual community-partnership work. A suggestion to museums: if you are already doing real, equal work with the neighborhoods around your institution, make sure your entire staff knows it's happening. If not, don't assume a position of superiority, treating the non-museum-going audience as deficient; instead, take a page from Mike Murawski's book Museums as Agents of Change and recognize that the community is in many ways superior to the museum, which can be just an institution propped up by trustee wealth.

SuperHelpful Letters continues its series of great guest posts (ahem) with this one about museum membership by Rosie Siemer, CEO of FIVESEED (a museum audience and membership consultancy). The necessary questioning of why people come to the museum, and why they join, fits with this Harvard Business Review article on the need to define customer purpose before defining institutional purpose. Think of how a year ago museums were asking "how essential are we?" Now that it's time to get back to work, such queries have been subsumed into the rush to reopen workplaces (without asking "why in-person work?" or even "why so many meetings?").

The annual American Alliance of Museums (alliance for what? against what? do museums really feel aligned? do their workers?) conference started recently with its kickoff day. Read through Twitter (also here) for some good recaps, including this and this on the session called "Crafting A Different Organization: Non-profit Structure, Behaviors, and Leadership Models," and how power structures in museums only want change that doesn't affect their position and privilege, especially when boards and leaders don't really know what their staff do—and neither do visitors, really. This short Twitter thread says it all:

My org-culture-guru Stowe Boyd link of the week is his noting a recent newsletter from org design firm NOBL, about how good initiatives should be thought of as bets, not projects. Here's the reasoning from NOBL:

Create “bets,” not projects. Calling initiatives “bets” might seem like semantics, but it’s more honest about their predictability and risk—how often have you seen a project with targets that are pulled from thin air or completely divorced from reality? With bets, teams feel greater license to double down or fold, depending on the actual results they see in the market. Of course, it’s also important to distinguish bets from essential operations work. Your organization must do what’s necessary and do it well if it’s going to survive, but be extremely critical about what’s truly mandatory—it’s easy for this category to expand.

I agree: a bet applies just the right amount of pressure while making room to tolerate failure, so long as the failure brings learning. I've started noticing people using the word "pilot" when they expect something to fail. That doesn't make pilots a bad thing, but it raises the question: do leaders high enough up in the org chart allow themselves failure too easily? It's an issue of accountability in the org's culture, when failure at the top levels is transparently fetishized with performative humility. You can't have servant leaders who are simply assigned their power, which is why orgs are so hostile to people who need any kind of accommodation—it puts leaders in a position of encountering vulnerability when the message is strength!

I'll end with this forum on artificial intelligence from Boston Review, and what will keep AI from furthering our dystopia. For perpetually under-resourced museums, AI holds the easy promise of smoother processes, but can we avoid simply scaling up more inequality? The responses to the main forum piece are all worth reading, but I'll point out this one from Kate Crawford on the impact of AI on the nature of work, worth quoting at length, with my emphasis added:

Many forms of work are becoming shrouded in the term “artificial intelligence,” hiding the fact that people are often performing rote tasks to shore up the illusion that machines can do the work. Already millions of people are needed to prop up supposedly automated services: tagging, correcting, evaluating, and editing AI systems to make them appear seamless. Others lift packages, drive for ride-hailing apps, and deliver food. Rather than representing a radical shift from established forms of work, the encroachment of AI into the workplace should properly be understood as a return to older practices of industrial labor exploitation that were well established at the turn of the twentieth century. That was a time when factory labor was already “augmented” with machines and work tasks were increasingly subdivided and tracked. Indeed, the current expansion of labor automation continues the broader historical dynamics inherent in industrial capitalism. A crucial difference is that employers can now use AI to observe, assess, and modulate the work cycle and bodily data—down to the last micromovement—in ways that were previously off-limits to them.

I found this especially important in the return-to-workplaces moment, as technology will inevitably be part of how organizations try to gauge productivity across differently located groups of workers, including those at home (using institutional tech or logging in through institutional VPNs—yes, for security, but still). Will the productivity metric be algorithmic or human? Is there a way to get the best of both? This piece from the forum, by Lama Nachman, is especially important for museums, because it argues that digital can be humanized, and vice versa:

Systems that leverage the strengths of both can deliver compelling, efficient, and sustainable solutions in overall task performance, in training the AI system, in training people—or all of the above. But creating collaborative systems poses unique challenges we must invest in solving. These systems need to be designed to perceive, understand, and predict human actions and intentions, and to communicate interactively with humans. They need to be understandable and predictable—and to learn from and with users in the context of their deployed environments, in the midst of limited and noisy data.

After all, institutions can go in on tech without being creepy.

Expect more remote work links and articles soon. Be well and safe!


If you're reading this and not a subscriber to Museum Human, consider signing up for a free subscription below—it's the only way to read the site's longer weekly post on the organizational culture of cultural organizations. Thank you for reading!


cover image by Tamanna Rumee on Unsplash [description: a circle of paper clips on a blue background]


Creative Commons License

Links of the Week: June 3, 2021: Museums, Docs, and AI by Robert J Weisberg is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

I work on a bit of everything in museum content. I find human solutions to tech problems. I geek out on workflow. No, really. I learn and teach and write everything down.