Links of the Week: July 22, 2021: Tech Race to Space

Are museum workers really waiting on spacefaring tech billionaires to save us?

All you links, gather round (image by Tamanna Rumee on Unsplash)

If you're reading this and not a subscriber to Museum Human, consider scrolling to the bottom and signing up now—it's free and is the only way to read the site's longer weekly post on the organizational culture of cultural organizations.

Museum workers are waiting for the billionaires on their boards (I use the term loosely; they're often only rich because of stock valuations they themselves manipulate) to save their institutions, but maybe the billionaire class has other ideas, as this piece in Boston Review describes:

The notion that private corporations ought to achieve something that states have been able to do since the 1960s—fly to space—is a peculiarly U.S. one. It combines domestic libertarianism and its idolatry of private individuals’ entrepreneurship with the more global ethos of neoliberalism and government outsourcing. However, despite these motivating philosophies, private companies have launched their schemes for space colonization using massive amounts of public money through government contracts.
… Today’s wave of space exploration, however, is being led by tech billionaires’ private space corporations for financial gain—and, if we believe Bezos and Musk, for the betterment of human civilization. But the rhetoric and history of celestial exploration reveal how the logics of capitalism, colonialism, and corporations have always been intimately, and violently, intertwined. And, as history shows us, allowing corporations the power to colonize space may result in outcomes that even states cannot control.

Zing! Of course, billionaire wealth is only possible because of public investment in infrastructure and support in the form of tax breaks and incentives (some inheritance doesn't hurt, either). These companies act like governments and go to war with their own ostensible regulators. Which communities and workers will be exploited to build the next launch pads?

Will there be museums on Mars or on the space station for the elites? What can museums be doing to prevent this future—and can they do so if they depend on this private wealth? Boston Review concludes:

For his part, Bezos looks at this as a utilitarian calculation, a numbers game. If humanity expands into space, he urges, “trillions of humans” can prosper, “which means thousands of Einsteins or Mozarts.” He fails to acknowledge that the genius of those future Einsteins and Mozarts exists now, on Earth, but unrealized and unrecognized in the very cycles of poverty Bezos dismisses as a short-term problem. Furthermore, and more importantly, the value of human life should not be based on some arbitrary utilitarian calculation of humans’ intellectual contribution to “civilization” or their ability to replicate the legacies of two white men.

Right now, however, there's a problem with the technologies managing our knowledge on Earth, as described in this article in The Atlantic about "link rot":

It turns out that link rot and content drift are endemic to the web, which is both unsurprising and shockingly risky for a library that has “billions of books and no central filing system.” Imagine if libraries didn’t exist and there was only a “sharing economy” for physical books: People could register what books they happened to have at home, and then others who wanted them could visit and peruse them. It’s no surprise that such a system could fall out of date, with books no longer where they were advertised to be — especially if someone reported a book being in someone else’s home in 2015, with an interested reader seeing that 2015 report in 2021 and trying to visit the original home mentioned as holding it. That’s what we have right now on the web.
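
If you run a blog, a wiki, or a research guide, the rot is easy to measure for yourself. Below is a minimal, hypothetical sketch in Python (standard library only) that checks a list of saved URLs and flags the ones that no longer resolve; it catches dead links, not content drift, where a page still loads but no longer says what it once said. The URLs are placeholders.

```python
# Minimal link-rot audit: flag saved URLs that no longer resolve.
# This detects dead links only; content drift needs a comparison of page text over time.
import urllib.request
import urllib.error

SAVED_LINKS = [
    # Hypothetical examples; substitute your own bookmarks or citations.
    "https://example.com/a-2015-blog-post",
    "https://example.org/museum-report.pdf",
]

def check(url: str, timeout: float = 10.0) -> str:
    """Return a short status string for one URL."""
    # Some servers reject HEAD requests; a fuller tool would fall back to GET.
    request = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "link-audit-sketch/0.1"}
    )
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return f"OK ({response.status})"
    except urllib.error.HTTPError as err:
        return f"BROKEN ({err.code})"          # server answered with an error code
    except (urllib.error.URLError, TimeoutError) as err:
        return f"UNREACHABLE ({err})"          # DNS failure, refused connection, timeout

if __name__ == "__main__":
    for url in SAVED_LINKS:
        print(f"{check(url):<22} {url}")
```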

Do you have servers and folders and subfolders with deity-knows-what in them? Duplicates of duplicates where maybe one of the dupes is just a little different? In a museum world of scarcity, digital stuff is like the only thing we have in abundance. I have a long wish list of mostly manual digital clean-up projects. Who decides which of these will get done, or what the next must-have digital format will be?
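
At least the exact duplicates are findable by machine. Here's a hypothetical sketch, again plain Python, that walks one folder tree and groups byte-identical files by content hash; the copies that are "just a little different" still need a human eye or fuzzier matching. The folder path is a placeholder.

```python
# Group byte-identical files by SHA-256 so exact duplicates surface for review.
import hashlib
from collections import defaultdict
from pathlib import Path

def hash_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large assets never have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        while chunk := handle.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Map content hash -> every file under `root` sharing that exact content."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            groups[hash_file(path)].append(path)
    return {digest: paths for digest, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    # "/path/to/shared-drive" is a stand-in for whichever server share you dread opening.
    for digest, paths in find_duplicates("/path/to/shared-drive").items():
        print(digest[:12], *paths, sep="\n  ")
```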

But I digress:

Into that gap has entered material that’s born digital, offered by the same publishers that would previously have been selling on printed matter. But there’s a catch: These officially sanctioned digital manifestations of material have an asterisk next to their permanence. Whether it’s an individual or a library acquiring them, the purchaser is typically buying mere access to the material for a certain period of time, without the ability to transfer the work into the purchaser’s own chosen container. This is true of many commercially published scholarly journals, for which “subscription” no longer means a regular delivery of paper volumes that, if canceled, simply means no more are forthcoming. Instead, subscription is for ongoing access to the entire corpus of journals hosted by the publishers themselves. If the subscription arrangement is severed, the entire oeuvre becomes inaccessible. …
Purchased books can be involuntarily zapped by Amazon, which has been known to do so, refunding the original purchase price. For example, 10 years ago, a third-party bookseller offered a well-known book in Kindle format on Amazon for 99 cents a copy, mistakenly thinking it was no longer under copyright. Once the error was noted, Amazon — in something of a panic — reached into every Kindle that had downloaded the book and retroactively deleted it. The book was, fittingly enough, George Orwell’s 1984.

The link between the tech titans and disappearing content isn't a bug; it's a feature, just like the lack of interest in maintenance and sustainability. And many of our productivity programs run on subscription models; even if the programs don't vanish, they can still be rendered inoperable, or at least obsolete.

What's to keep today's tech titans from becoming obsolete, like Kodak, the master company of its time? Kodak's last hope might have been trying to get nearly a billion dollars of government funding to retrofit its infrastructure to make Covid vaccines. The plan failed.

Rochester, Kodak's home, eventually re-invented itself like many rust-belt cities, with health care and university employment leading the way. But inequality, poverty, eroded trust in institutions, and problems with public education persist. Can the institutions of these cities, like Rochester's George Eastman Museum, chart a way forward without the munificence of the wealthy?

After my tour of the business park, I went back to the Eastman Museum, which was in the process of building a large new entrance. I wanted to see if it matched my memory. The house itself looked smaller and less grand, and the elephant head in the main room—a reproduction of the taxidermied one Eastman had hung, which, decades ago, mysteriously disappeared—looked goofy. But there were still a few wonders: the sprawling gardens, the pristine library, and, in that low-ceilinged room on the second floor, the suicide letter. The display around it included a handwritten note from Eastman requesting to be cremated, a duplicate of his death certificate, and a small pile of metal. Unlike many of the objects in the museum, the metal pieces weren’t bequeathed by Eastman or donated by his family. The fragments, metallic bits from his coffin that survived cremation, had been tucked away for decades. According to the museum curator, a police officer had scooped them up and saved them, the same way you might save a newspaper from the day of some spectacular event, or a sock left behind by a pop star.
The museum curator also provided me with a map for a self-guided driving tour of everything in Rochester that might not exist without George Eastman: the art gallery, the music school, the hospital, the parks, the bridge, the YMCA, the children’s center, the college my dad graduated from, the college my sister was currently studying at. That wasn’t the whole list, but at this point I’m repeating myself. Okay, okay, I thought.

We're worried about the decisions that AI is making about our lives (also here with more in-depth questions about the ethics of AI, or here, where AI is just writing snippets of code) or about the effects that social media has on our humanity. What if the latter is actually a civilization-destroying plague? Read on, from Vox:

What we’re concerned about is the fact that this information ecosystem has developed to optimize something orthogonal to things that we think are extremely important, like being concerned about the veracity of information or the effect of information on human well-being, on democracy, on health, on the ecosystem. Those issues are just being left to sort themselves out, without a whole lot of thought or guidance around them.

We know that marginalized communities have found some solace in social media, but the damage that misinformation has wrought on the world for profit is there to be seen:

The vast majority of Covid-19 anti-vaccine misinformation and conspiracy theories originated from just 12 people, a report by the Center for Countering Digital Hate (CCDH) cited by the White House this week found. …
Among the dozen are physicians that have embraced pseudoscience, a bodybuilder, a wellness blogger, a religious zealot, and, most notably, Robert F Kennedy Jr, the nephew of John F Kennedy who has also linked vaccines to autism and 5G broadband cellular networks to the coronavirus pandemic.

But here's the blockchain (also here) to the rescue, claims Harvard Business Review:

Traditionally, the publisher was the primary source of a piece of content’s reputation. You’re more likely to believe an article is accurate if you find it in the New York Times (or the Harvard Business Review) than on a website you’ve never heard of. However, reliance on institution-based reputation alone comes with some significant limitations. Trust in mainstream American media is lower than ever, with a recent poll finding that 69% of U.S. adults say their trust in the news media has decreased in the past decade. To make matters worse, in a digital media landscape driven by click-based ad revenue, even reputable publications are increasingly incentivized to prioritize engagement over clarity. When readers are largely getting their news from social media headlines, it can seriously impede their ability to distinguish credible journalistic outlets from interest-driven propaganda machines.

So the tools we need for marketing are the tools that actually reduce public trust in us (in the aggregate)?

That’s where blockchain can help. A blockchain-based system can both verify the identity of a content creator and track their reputation for accuracy, essentially eliminating the need for a trusted, centralized institution.
For example, one recent paper outlined a proposal for a system in which content creators and journalists could cultivate a reputation score outside of the specific outlets for which they write, adopting a decentralized approach to the verification of sources, edit history, and other elements of their digital content. Additionally, blockchain can be used to track the distribution of content, giving both consumers and publishers greater visibility into where disinformation comes from and how it moves throughout the digital ecosystem.
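
Strip away the buzzword and the verification half of that claim is ordinary cryptography: publish a fingerprint of a piece of content, and anyone holding a copy can later confirm it hasn't been altered. Here's a minimal, hypothetical sketch in plain Python, with a dict standing in for the shared ledger; a real system would also need digital signatures for identity and a tamper-evident chain of records.

```python
# Content-integrity sketch: register a fingerprint of an article, then verify later copies.
import hashlib
from datetime import datetime, timezone

LEDGER: dict[str, dict] = {}  # stand-in for a shared, append-only ledger

def fingerprint(text: str) -> str:
    """SHA-256 of the normalized text; any edit produces a different fingerprint."""
    return hashlib.sha256(text.strip().encode("utf-8")).hexdigest()

def register(author: str, text: str) -> str:
    """Record who published what, and when. Returns the content fingerprint."""
    digest = fingerprint(text)
    LEDGER[digest] = {
        "author": author,
        "registered": datetime.now(timezone.utc).isoformat(),
    }
    return digest

def verify(text: str):
    """Return the registration record if this exact text was registered, else None."""
    return LEDGER.get(fingerprint(text))

if __name__ == "__main__":
    original = "Museums depend on public trust."
    register("example-journalist", original)        # hypothetical author handle
    print(verify(original))                          # record found: the copy is unaltered
    print(verify(original + " now with edits"))      # None: the content has drifted
```

The reputation-scoring half is the part that actually needs a governance story, and that's where my doubts begin.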

I don't know, sounds awfully close to a social-credit system for institutions to me:

Of course, as with any reputation tracking system, there are important questions to consider about who sets the standards, who contributes to the ratings, and who manages disputes (as well as the mechanisms for doing so). Moreover, any system designed to track and verify personal information will need to incorporate privacy and security best practices to meet both local and international regulatory requirements. That said, the decentralized nature of a blockchain solution can likely help to address many of these concerns, since it eliminates the need for a single, trusted institution to make these critical decisions.

The article even backtracks:

On the policy front, we’ve already seen a number of important initiatives at the local, state, and national levels. In 2019, the DEEP FAKES Accountability Act was introduced in the U.S. House of Representatives, proposing the use of blockchain to verify sources, watermarks, content creator identity, and other relevant information. The EU proposed a number of regulations governing how firms can use AI earlier this year, and several states already have laws regulating the use of deepfakes, largely in relation to elections and deepfake pornography (although none to date have seen case law to test their effectiveness). Of course, any policy-driven efforts must balance regulation with privacy and freedom of expression — recent internet blockages and outcries over censorship are just the latest illustrations of how an authoritarian regime may use policies ostensibly meant to prevent disinformation to instead silence whistleblowers and opposition. Given these concerns as well as the rapid pace of technological development, practical policy solutions will have to focus on regulating malicious behavior and mitigating harm rather than creating blanket technology regulations.

Uh, sure. What could go wrong?

This is all taking place against a backdrop of private-tech/public-sector spying (also a series of articles in The Guardian) and the idea of data taking over the world, as described in this (paywalled, so I quote at length) article in the Financial Times:

A couple of years ago, staff at a Google “tech incubator” called Jigsaw made an important breakthrough: they realised that while their company has come to epitomise the power of technology, there are some problems that computers alone cannot solve. Or not, at least, without humans.
Jigsaw, wrestling with the problem of online misinformation, quietly turned to anthropologists. These social scientists have since fanned across America and Britain to do something that never occurred to most techies before: meet conspiracy theorists face-to-face — or at least on video platforms — and spend hours listening to them, observing them with the diligence that anthropologists might employ if they encountered a remote community in, say, Papua New Guinea.
“Algorithms are powerful tools. But there are other approaches that can help,” explains Yasmin Green, director of research and development at Jigsaw, which is based in an achingly cool, futuristic office in Manhattan’s Chelsea district, near the High Line. Or, as Dan Keyserling, Jigsaw chief operating officer, puts it: “[We’re using] behavioural science approaches to make people more resilient to misinformation.”
The results were remarkable. Previously, groups such as anti-vaxxers seemed so utterly alien to techies that they were easy to scorn — and it was hard to guess what might prompt them to change their minds. But when the Jigsaw team summoned anthropologists from a consultancy called ReD Associates, who listened with openminded curiosity to people, it became clear that many of the engineers’ prior assumptions about causation in cyber space were wrong.
For example, the techies had assumed that “debunking” sites needed to look professional, since that was what they associated with credibility. But conspiracy theorists thought that “smart” sites looked like they were manufactured by the elite — something that matters if you want to counter such theories.
So these days Google’s staff is trying to blend anthropology with psychology, media studies and, yes, data science to create tools that might “inoculate” more internet users against dangerous misinformation. “We can’t do this just based on what we assume works. We need empathy,” says Beth Goldberg, Jigsaw research project manager, who was trained in political science but has now also acquired anthropology skills.

What's in this for museums? Institutions need people (I won't use the term "people tools") not just to manage tech but to stand above it. Data is always going to be hard for users to control, and that is by tech-titan design. I use a VPN for privacy and security, and I get shut out of Google and other sites all the time. Can we see the collection of user data as a transaction in which users actually take part, rather than just a handover of data in exchange for ease of use? That exchange is the equation that built every tech-titan empire.

The museum world's Seb Chan wrote about (soft) tech as a "community garden." We need these kinds of relationships because we're in a tech world where fascism has become easily commodified. What is the role of the tech titans in contributing to evil? What is the role of tech consumers in resisting evil? And can museums resist the siren song of "easy" tech in favor of authentic human relationships? Holding on to their people and making their workplaces equal and inspiring for all is a start.

So on the one hand we hope data can save us from climate collapse, even as we acknowledge that green jobs alone can't save the economy. Meanwhile, people have either decided not to believe in climate collapse or are already treating it as just another business reality.

The tech titans are all about making us panic on the one hand and soothing us on the other. What's wrong with some panic, when we have crises to deal with? Here's historian John Ralston Saul on the subject of panic, in his 1992 book Voltaire's Bastards, from which I'll quote at length because I think it has a great deal to say about the return to workplaces and the dismissal of the continued, trauma-born anxiety that many workers are feeling now:

The control of emotions and the prevention of panic is always presented by technocrats as a sign of their professionalism. To this day one of the first ploys used by professionals, when caught in public debate with nonprofessionals, is to suggest that the amateurs have panicked and that it is ignorance which leads to panic.
But a reexamination of the argument of professional cool over amateur panic, which was first used to great effect during World War I, leads one to question its value. Wouldn’t it have been better for the staffs of the various armies to have panicked, instead of duly carrying on their mutual and pointless murder of the men under their command? Is not the inability to panic a sign of stupidity or of some serious character flaw?
The ability not to panic has been turned into one of the great virtues of the last hundred years. Not only military, but all sectors of leadership are judged on this ability. Everywhere we hear businessmen, bankers, bureaucrats, politicians and generals calming us with expert tones; indicating that we may relax and follow their expert lead. The rational method has become the cool approach of the insider.
What is this air of superiority based upon? Where are the examples to prove that cool knowledge advances the cause of civilization? In reality the ability to panic has always been one of the great strengths of those in positions of command.
To panic doesn’t necessarily mean to turn and run. Intelligence and a sense of dignity usually allow the maintenance of external composure. Self-doubt combined with dignity is central to competent leadership. A man or an organization, even a society, capable of profound, internal panic is able to recognize when he or it is on the wrong track and perhaps to identify the error by giving in to the need for complete reevaluation. Out of that reevaluation may come the right track.
The man of reason, as we currently understand him, is incapable of this panic. He carries about within himself such expertise and structure that he has absolute assurance. Thanks to his intellectual tools, he can always prove, even when surrounded by self-generated disaster, that he is right. If on the field of battle military or civil things do not work out, then the circumstances are at fault. The commander of reason is equipped with sufficient self-confidence to persist no matter how wrong he is.
Sooner or later—he can prove it—reality will see the light. The ability to respond to circumstances—Sun Tzu’s key to strategy—is only possible, of course, if the leader is able to scramble his preconceptions. The internal strength required to let oneself panic lies at the heart of that ability. Not only has twentieth-century military training ignored that strength. It has concentrated actively on stamping out any signs of such individual intelligence in the professional officer.

I don't think this is what the anti-vax and anti-science people are doing with their smug and loud claims of "personal freedom" and "it's my body." They are actively proclaiming that the public good is not their problem. Saul is describing a more general obsession with expertise and smarts, one that ironically takes the ability to think and decide away from the vast majority of workers.

Museums can't leave all this humanity in the hands of a few billionaires, especially the ones using tech to manufacture the wealth they'll use to leave the rest of us behind; if the current museum model relies on that wealth, then we have a lot more work to do to forge our own future.

Enjoy the links, see you next week!

If you're reading this and not a subscriber to Museum Human, consider signing up for a free subscription below—it's the only way to read the site's longer weekly post on the organizational culture of cultural organizations. Thank you for reading!


cover image by Tamanna Rumee on Unsplash [description: plastic paper clips arranged in a circle against a blue background]



Links of the Week: July 22, 2021: Tech Race to Space by Robert J Weisberg is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.