Links of the Week: March 25, 2022: AI yai yai …

Artificial intelligence is supposed to do everything, but should AI do anything without people taking the lead?

Same as the old boss? (Photo by Maximalfocus / Unsplash)


If you're reading this and not a subscriber to Museum Human, consider scrolling to the bottom and signing up now—it's free and the only way to read the site's longer weekly post on the organizational culture of cultural organizations and to hear about special subscriber-only events.

If you read enough of the business press, like Harvard Business Review (never mind tech-centered pubs like Wired), you'll be flooded with breathless articles about the importance of data and AI. I'm not one to disparage the importance of data, but it's not as simple as collect-learn-act-improve. While some of the articles are properly skeptical, not enough of the media coverage asks just what processes (and people) are being replaced by, or outsourced to, machines and their learning. Decisions about what to test, and by whom, are value-laden. Are we automating because we're trying to do too much? Not ending enough old projects? Collecting too much data?

You won't necessarily get those questions answered here (sounds like a thesis for a future post!), but some of the articles below do a good job of making sure that people are elevated, not the AI. So read on.

One: Let's start with this Wired piece I've mentioned before, about improving AI by allowing uncertainty in its work. (Here's my post from last year about museum-field uncertainty.)

Two: This HBR article emphasizes the human role in training AI for automation, but are orgs ready for the human commitment in terms of TMPR—Time, Money, People, and Resources? You can't just amass data, and the procedures to process them, without considering maintenance and sustainability.

Three: This piece from Forbes asks how AI will affect strategic plans. It's an important read if you think about strategy. (I wrote about strategy in late 2020 here.) From the article:

A Harvard Business School report outlines that 85% of executive leadership teams spend under one hour per month discussing strategy, and 50% spend no time at all. Strategy planning is primarily restricted to an annual exercise with participation from a few senior leaders.
It’s no wonder that 95% of a company’s employees don’t understand its strategy. Ultimately, 90% of businesses fail to meet their strategic targets.
Superminds can transform this dated process of corporate strategic planning … . Today, machines’ involvement in this process is restricted to automating a few computations or tracking metrics. [Experts envision] a new approach that leverages greater human-machine collaboration.

Okay, "superminds" is a term for AI-aided collective organizational planning. Two of this article's four concepts—increase human involvement and inject objectivity—stem from gamification and marketizing decision-making, though it's unclear if employee engagement in the form of a game with badges and awards, or a stock market that workers "bet" on with funny money, is what workers need right now. The other two use AI to find patterns in human decision-making and scale it across the organization.

If there's a humanist advantage in this approach, it might be this:

This blueprint for a cyber-human strategy machine could enable organizations to respond continuously and rapidly to ongoing changes in their environment.

At least the article recognizes the following:

However, the approach does pose some inherent challenges. Culturally, it calls for organizations to switch from a hierarchical, closed planning process to one that’s more flat and inclusive. From a technical perspective, machine learning stumbles when predicting scenarios with few historical examples or those involving nuanced decision-making, as they involve fewer digital trails.

I don't know if museums as organizations know how to respond to challenges in a flat, inclusive way. Is it possible that a collective project in absorbing and working with machine learning might help?

Four: Here's another HBR piece on How Data Can Make Better Managers. Check out what it had to say about the role of computational leadership science (CLS) in employee experience:

A recent survey of 1,500 CEOs found that morale was their greatest challenge. Fortunately, there are CLS resources to co-create solutions with your employees. You can use open-ended survey questions infused with “natural language processing” to gain a better understanding of 1) the primary topics associated with morale in your organization and 2) how your employees feel you are addressing them. Then, you can use “collective intelligence” technologies to innovate morale-boosting solutions. This form of group decision-making increases engagement and grows your value as a leader.

OK—but do organizations already know how to do this without machines? If not, then what are they feeding the systems? And what about the pitfalls of creating an environment of constant morale monitoring?
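To make that first step less abstract: here's a minimal sketch of what "open-ended survey questions infused with natural language processing" might look like in practice. This is my illustration, not the article's tooling; it assumes scikit-learn, and the survey responses are invented.

```python
# A toy sketch (not the article's tooling): surfacing the dominant
# topics in open-ended morale-survey responses with basic NLP.
# Assumes scikit-learn; the responses below are invented stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

responses = [
    "I never hear back about my exhibition proposals",
    "Too many meetings, not enough time for collections work",
    "Leadership shares no updates on the strategic plan",
    "Meetings eat the whole day and decisions still stall",
    "No feedback on proposals, and priorities keep shifting",
]

# Turn free text into weighted word frequencies, dropping stopwords.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

# Factor the matrix into two latent "topics."
model = NMF(n_components=2, random_state=0, max_iter=500)
model.fit(X)

# Print the top words driving each topic.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(model.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Topic {i + 1}: {', '.join(top)}")
```

Nothing magical is happening: the model clusters word patterns, and a human still has to interpret and act on them. The article continues: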

Another concern is remote working and tracking productivity. Here, increasing CLS intelligence reduces hasty decisions like implementing excessive employee monitoring systems. You will learn that surveillance tech is a slippery slope only to be used with extreme caution. A healthy CLS alternative is transforming virtual environments into fruitful spaces for motivating your employees. For example, I am co-creating an AI-driven system that 1) visually maps who knows what and who is working with whom in organizations and 2) rapidly assigns the right people to the right job. The former provides a clear picture of existing relationships and how to lead de-siloed community-building while the latter assigns tasks better aligned with employee competencies — something proven to increase motivation. This helps you reduce employee dissatisfaction while increasing trust, commitment, and other outcomes indicative of great leadership.

Hmm. I'm still concerned that orgs are just giving up being able to figure this out without machines. Is HR really so tuned in the wrong direction that these employee issues are such mysteries?
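As for the system the author describes, mapping "who knows what" is at bottom a graph problem. Here's a toy sketch, entirely my own invention (it assumes the networkx library, and the names and skills are made up), of matching people to a task by shared skills:

```python
# A toy sketch (my invention, not the author's system): model
# "who knows what" as a graph of people and skills, then rank
# candidates for a task by how many needed skills they hold.
import networkx as nx

G = nx.Graph()
skills = {
    "Ana": ["cataloguing", "metadata"],
    "Ben": ["metadata", "web"],
    "Chloe": ["web", "visitor-research"],
}
for person, known in skills.items():
    for skill in known:
        G.add_edge(person, skill)

# A hypothetical task needs metadata and web work.
task = ["metadata", "web"]
candidates = {}
for skill in task:
    for person in G.neighbors(skill):
        candidates[person] = candidates.get(person, 0) + 1

# Ben covers both skills; Ana and Chloe cover one each.
print(sorted(candidates.items(), key=lambda kv: -kv[1]))
```

The hard part isn't the graph; it's whether the underlying data about people's skills and relationships is accurate, consented to, and kept well clear of surveillance territory.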

The article continues, about hiring and DEIA:

Many organizations struggle with DEI in hiring, retention, and promotion. Certain individuals are better at landing top jobs than others — there is a bias against introverts even though they can add more value — and leaders frequently select people they want rather than people they need, subconsciously selecting individuals like themselves based on factors such as race, education, and socioeconomic background. Making matters worse, the majority of employers are using “totally meaningless” tools such as the Myers-Briggs Type Indicator or biased algorithms for processes such as recruitment.

Again, AI would only matter here if leaders were committed to rooting out "culture fit" hiring—and not just because they want diversity (whatever they think that is) but because these leaders are willing to question their own conception of their institution's culture—indeed, of museum-field culture.

Five: As a bit of a break, check out this Medium piece called What Are We Going to Do With the Internet?

How do we address the fundamental flaws of the internet?
The current version of the internet that we use is neither wholly awful nor wholly terrific, but more and more it feels like a terrible mistake, even at the best of times. While the fix can’t be a massive global scheme to enrich a crowd of aggressive tech bros — we have that already — something has to change. The era of global connectivity as it was imagined by Facebook, Twitter, et al. might not be over, but it certainly no longer holds the promise it once did. Mass connection, linking ourselves to everyone everywhere, hasn’t created the promised utopia — in many ways, it’s created the opposite. …
Maybe it would simply mean we use the internet less, more sparingly and carefully — and, in particular, that we would rely on it less to validate ourselves. Perhaps we might make our lives more our own again — in a word, more personal. In this scenario, the last thing we’d do with the internet is interact with everyone on Earth. That would be crazy.

An interesting take.

Six: Back to AI, and HBR has more to say:

Given the complexity involved here, the first step to making AI scale is standardization: a way to build models in a repeatable fashion and a well-defined process to operationalize them. … But with AI, many companies struggle with this process.
It’s easy to see why. Bespoke processes are (by nature) fraught with inefficiency. Yet many organizations fall into the trap of reinventing the wheel every time they operationalize a model. … One-off processes … can spell big trouble once research models are released into production. …
… AI is a multi-stakeholder initiative. … [tools] must make it easy for data scientists to work with engineers and vice versa, and for both of these personas to work with governance and compliance. In the year of the Great Resignation, knowledge sharing and ensuring business continuity in the face of employee churn are crucial. …
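For a concrete (if greatly simplified) sense of what "a repeatable fashion" can mean, here's a minimal sketch, assuming scikit-learn and joblib and using synthetic data: preprocessing and model travel together as one versioned artifact, instead of living in someone's one-off notebook.

```python
# A minimal sketch of repeatable model-building: one pipeline
# object, saved with a version tag, instead of ad hoc scripts.
# Assumes scikit-learn and joblib; the data is synthetic.
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, random_state=0)

# Preprocessing and model are bundled, so every retrain and
# every deployment runs exactly the same steps in order.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X, y)

# Persist with an explicit version so research and production match.
joblib.dump(pipeline, "visitor_model_v1.joblib")
restored = joblib.load("visitor_model_v1.joblib")
print(restored.predict(X[:5]))
```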

The article has a lot of lingo, but I think the key takeaway is the importance of sharing AI knowledge around the organization. (I wrote about "digital for the rest of us" last year.) Read on about governance:

With AI and ML [machine learning], governance becomes much more critical than in other applications. AI Governance is not just limited to security or access control in an application. It is responsible for ensuring that an application is aligned with an organization’s ethical code, that the application is not biased towards a protected group, and that decisions made by the AI application can be trusted. As a result, it becomes essential for any MLOps tool to bake in practices for responsible and ethical AI including capabilities like “pre-launch” checklists for responsible AI usage, model documentation, and governance workflows.

The ideas are in the right place, but are leaders invested in making the necessary time for workers to implement these more enlightened policies?
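At its simplest, a "pre-launch checklist" can be a literal gate in the release process. A toy sketch (mine, not any particular MLOps product's API):

```python
# A toy sketch of a responsible-AI release gate: deployment is
# blocked until every governance check passes. My illustration,
# not any real MLOps product.
from dataclasses import dataclass

@dataclass
class PreLaunchCheck:
    name: str
    passed: bool

checks = [
    PreLaunchCheck("Model documentation written and reviewed", True),
    PreLaunchCheck("Bias audit run across protected groups", True),
    PreLaunchCheck("Governance/compliance sign-off recorded", False),
]

failed = [c.name for c in checks if not c.passed]
if failed:
    # Refuse to ship until the open items are resolved.
    raise SystemExit(f"Launch blocked; unresolved: {failed}")
print("All governance checks passed; the model may ship.")
```

The checklist itself is trivial; the commitment to honor it when it says "no" is the hard, human part.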

Seven: Another break here, from one of my favorite doomscrolling writers on Medium, Umair Haque. Here he explains how dirty oil money circles the globe, making the point that our Western consumption lifestyle means that Russian oil is in everything we buy. It's important to keep these kinds of interrelationships in mind, not to keep us from acting but to spur us to delve into how deeply our structures and behaviors are intertwined.

Eight: These next two pieces aren't strictly about AI, but I wanted to add some of the humanity behind surveillance capitalism with stories about the decidedly-non-tech-titan Tumblr. Here's one piece from the New Yorker on How Tumblr won despite, or because of, itself. I was going to start a Tumblr back in 2020, a sci-fi-oriented blog called "The Museum at the End of the World." I never posted there, putting pieces into Museum Human instead, such as here and here. I like the question: can museums have systems, even for visitors, that are popular because they're clunky? Must everything be optimized? (And, if so, why? Because we're too busy?)

Nine: Here's more on Tumblr from the Atlantic.

In the beginning—in the aughts—Facebook was Palo Alto and Tumblr was New York. …
For a lot of Millennials like me, this blend produced a political coming-of-age. In 2011, Occupy Wall Street organizers used Tumblr to publish thousands of selfies paired with individual stories about student loans, medical bills, and home foreclosures. “We wanted whoever was reading the blog to feel as overwhelmed as the people who were submitting the stories, and that’s why we published, like, up to 100 a day,” Priscilla Grim, one of those organizers, told me recently. The plan worked. Ezra Klein, then at The Washington Post, noted, “It’s not the arrests that convinced me that ‘Occupy Wall Street’ was worth covering seriously … It was a Tumblr called ‘We Are the 99 Percent.’”

I like the vibe.

Ten: Finally, back to HBR, as we consider whether AI is the pathway to a more nimble platform ecosystem (I wrote about ecosystem models for museums here and here). Try this out:

There’s evidence that more traditional firms that employ AI aggressively are adopting an ecosystem (and perhaps eventually a platform-based) approach. In the Deloitte 2021 State of AI in the Enterprise survey, the two highest-achieving groups of AI users in the survey were substantially more likely to have two or more ecosystem relationships (83% among the two highest groups, versus 70% and 59% among the two lowest groups). Companies with more diverse ecosystems were 1.4 times as likely to use AI in a way that differentiates them from their competitors. In addition, organizations with diverse ecosystems were also significantly more likely to have a transformative vision for AI, to have enterprise-wide AI strategies, and to use AI as a strategic differentiator.
These firms may not have full-fledged platform business models, but creating broader ecosystem relationships is a first step toward AI-enabled platforms. Beyond that step, here’s how companies are turning themselves into platforms with AI.

I guess for museums the question would be: how could AI help the website user journey? Could AI help with workflows? Or even assist in the visitor journey, making suggestions onsite? According to the article, orgs need to:

Identify the key decisions that AI needs to make, and gather the data to train models.

Well, again, who's making the decisions?  

I hope you've enjoyed these links. I'd love to get a mixed perspective on AI in museums, with one non-digital staffer for every colleague from Digital. Perhaps a future article.

Liberation

I won't claim that AI is a path to liberation—in the current organizational world of museums, AI could do real harm. But if we commit to a less hierarchical, more broadly humanist museum structure, AI could help us gather and distill data (as long as the process is collaborative and open). Liberation is finding new ways to work with the tools that are otherwise being siloed in the organization.

If you're reading this and not a subscriber to Museum Human, consider signing up for a free subscription below—it's the only way to read the site's longer weekly post on the organizational culture of cultural organizations and learn about upcoming subscriber-only events. Thank you for reading!


cover image by Maximalfocus / Unsplash [description: a robotic head with a blank metallic faceplate facing the viewer]



Links of the Week: March 25, 2022: AI yai yai … by Robert J Weisberg is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.