Planet Code4Lib

May 2026 Early Reviewers Batch Is Live! / LibraryThing (Thingology)

Win free books from the May 2026 batch of Early Reviewer titles! We’ve got 247 books this month, and a grand total of 2,956 copies to give out. Which books are you hoping to snag this month? Come tell us on Talk.

If you haven’t already, sign up for Early Reviewers. If you’ve already signed up, please check your mailing/email address and make sure they’re correct.

» Request books here!

The deadline to request a copy is Tuesday, May 26th at 6PM EDT.

Eligibility: Publishers do things country-by-country. This month we have publishers who can send books to the US, the UK, Canada, Ireland, Germany, Italy, Australia, Spain, Poland, Sweden and more. Make sure to check the message on each book to see if it can be sent to your country.

The Stars That Fell: PoemsNo More PatientsMad Dogs & Englishmen: A Tale of the BarghestHeroes of PALMAR: How One IDF Unit Revolutionized Combat Medicine in GazaWhen Eichmann Knocked on Our Doorאיש כפי נחלתו: שנים-עשר שבטי ישראל בנחלות אבותיהםBeast BallerzOwl KingA Door Is to OpenTom's Wild RideWhen I'm a MoshomIsis of Egypt: Goddess of ThronesDeath at King's CrossThe Most Dangerous ManNightberriesFlowers for GaiaGrayduckThe Red Jack SocietyLove Letters to the Dirty SouthDames, Dishes, and Degrees: Faculty Wives in AmericaDames, Dishes, and Degrees: Faculty Wives in AmericaBunnies in the Berry RowA Child With No NameFaces in the FlamesThe Blind Woman of SorrentoRemember the Sweetness: PoemsL. Ron Hubbard Presents Writers of the Future Volume 42The Designer Shoe Shop By The SeaManufacturing a DuchessThe Great American Medical Show: The Good, the Not-So-Good, the Bad, and the UglyThe Trumpster Fire Escape Almanac: Facts to Plan Your Expat LifeMjede: The Three DaysObjects of Desire: StoriesSeaflame: The Key SparkThe Trail CutterShooting Up: A Memoir of Love, Loss, and AddictionRunning Wild Press Short Story Anthology, Volume 9Invisible GirlsZombies of the Upper East SideIncident in RomaniaDebtThe Three-Cornered HatCute FACTopia!: Follow the Trail of 200 Super-sweet FactsTake Me To Your ParadiseSandy Writes Her StoryNicolina GatsbyJonathan's JournalHeracles: The Lion of NemeaPurity in Peril: Religious and Civil PersecutionAll the Right FavorsPeculiar Perspectives: Life Viewed Through a Mellow Side-EyeBeyond the Edge of the Known WorldLifeguard: A Love StoryUnleashed: How to Bring Out the Best in Your DogBand on the Run: Xenophon and the First Great Mercenary Army's Epic Escape from PersiaWalrus: The Remarkable Life of Eco-Warrior David GarrickThe Mark of EternityA Fight for Justice: The Compelling Story of Temporary Foreign Workers and Human RightsPhantom of the GalleriaThe Highlands of YoreAutistic Ghost Stories and Other Chilling SituationsLittle Voices Big Futures, 
Baby Beginnings: A Parent's Guide to Infant Speech Milestones, Early Signs of Delay, and Simple Habits to Help Your Baby Communicate with ConfidenceWe Want So Much to Be OurselvesLearning and Development Essentials: A Practical Guide to Designing Learning Programs, Driving Business Impact, and Achieving Organizational ExcellenceBusiness Sustainability Essentials You Always Wanted to KnowStakeholder Management for Project ManagersOrganizational Development Essentials You Always Wanted to KnowGraph Machine Learning Essentials: Foundations, Hands-On Implementation, Graph Neural Networks, PyTorch Geometric, and Applied Use CasesLiving With Spirit: My ExperiencesCaptured by the Vampire KnightLabyrinthineSeal of RomeThe Romance LoopBeatrice and the Dirty DiggersBeatrice and the Dirty DiggersDr. GnollDr. GnollBrutal Country: Ten Short StoriesThe Constellation of Forgotten ThingsI Am JoeyInherent BoundWhat We Carry Forward: What Endures Across Borders of Family, Faith and TimeDon't Find Love. Let Love Find You.The Shallows of AvalonLetters from the Ruins: Mercy for Every Wounded HeartThe Stone LotusAfter Cambodia, There Was Us: A Mother-Daughter Story Across TimeWords Were The EnemyMama Said: An Angels of Darkness AnthologyNew Life for a Dead ManOrphans: My Life in an Outlaw Motorcycle Club and BeyondViktor & Delphineea: The Story of an Unusual FriendshipShadows AwakeningStill Standing Tall: A 3-Time Retiree’s Guide to Conquering Anxiety in Today’s World, Finding Your DNA, and Working with PurposeStained Glass: A Reflective History of AntisemitismSketches of AliceKiera and Lamby: TokyoGod and the First Families: Parenting, Trauma, and Healing in the Book of GenesisGreenInfluence God's Way with Us(Extra)Ordinary: 35 Men and Women of the Bible Whose Faith Changed EverythingThe EndAEQUALIS: If Women Rule the WorldStone and FleshThe Final RevolutionWhispers on FlowersWhispers on FlowersThe Rental: A Cosmic Horror TetralogyThe Luminous DarknessBroadway for Beginners: A 
Tourist's Guide to Broadway and off-Broadway in New York CityThe Best of Broadway (and Beyond): A 2026 Review of Last Year's Standout ShowsThe Emperor of SevilleScaredy Cats Scratch BackHope Verdad Presents Short Stories about LoveShibby MageeThe Heavenly Father: A Biblical PerspectiveGod TycoonRook's SongbirdDune QueenThe WindowBeckham Bumblebee Can't Do It Alone: A Story for Young Kids About Teamwork, Listening, and Pollinating a GardenMath Heals: On the Gift and Weight of Being HumanThe Invisible TrailProud Jenny JayWaves Toward the Pebbled ShoreWonderful HalfCatamorphosisIn the Queen's ServiceThe Inheritance KillerTactical Intimacy: The TIS Method. The Science of Lasting Longer, Confident Performance, and Deep Intimate Connection for MenThe Paper PrincessThe Complete Expert-To-Author Guide: Plan, Write, and Publish Your Nonfiction BookWhere Worlds PartThe Ivory PinionThe Coin of ForeverEmberglow Falls Academy: The Legacy of MagicEmberglow Falls Academy: The Rising StormTurning to the Dark Side: What Star Wars Teaches Us About How a Good Person Turns BadAm AI Human: A NovelFrom Burnout to Breakthrough: A Jesus-led Journey from Exhaustion to RenewalNyxalath: Heirophant of VeilsPMP® Fast Track Study Guide: Crack the Exam in 30 Days or Less: The Starter Guide - Everything You Need to Know Before You Start StudyingThe Wedding StoppersThe Zionists Who Hate JewsMatelda: In Silence We ForgiveNotes on HopeLift Off: Omnibus 1 - Grampa Was an AlienWarp Speed: Omnibus 2 - Grampa Was an AlienSchooled in LoveShift ItThe Echo She Left BehindThe Question of When: A Practical Guide to Knowing When It's Time for Assisted Living, Memory Care, or Skilled NursingInfernal Tramps: Tales of Weird TerrorChatGPT for Genealogists: From First Prompts to Advanced WorkflowsInto the DaxNurs und Maryams Abenteuer: Zweisprachiges Kinderbuch Deutsch-Arabisch: Geschichten über Ehrlichkeit und Herzlichkeit | Mit 14+ interaktiven AktivitätenMonna’s Grand Adventure: A Storybook & Coloring Journey: 
18 Illustrated Tales, Adventure Map, 18 Single-Sided Coloring Pages & Achievement Certificate!The Family LiarStriking JusticeMagical Elemental Atoms: Count the Protons and ElectronsThe Boardman WatchesThe Devil of Tarsyn ForestWho Is Singing?Ghost Hauler: Fifty TeethThe Pesach Diaries: A Hilarious Journey Through Passover Cleaning, Chaos, and Family SurvivalNothing New under the Sun: Why Modern Systems Keep Recreating Ancient Power StructuresLegends of Mexico: Quetzalcoatl Coloring Book for Kids Ages 5-9 : A Fun Color and Learn Activity Book with Stories, Drawing Pages, and Educational Activities Inspired by Mexican LegendsRepatriated: Re-RootsCountry Club SummerSix Thousand Years Ago Today: One Day in the Largest City on EarthIn the Serpent's Shadow: Where Power Breeds PoisonThe K Age Vol. IThe Chronicles of NlogoniaThe God-Imprinted AI Playbook: Guidelines for Flourishing Amid Artificial IntelligenceDeadly GroundNever ForgiveMurder at the Boxing MatchThe Archivist's WarThe Girl Who Collected Moths: All the Ways She Stayed, and the Love I Did Not Leave With40 Miles to Happy: The Love Story of a Rancher and His Wife4th Man Surf Club: Jesus at Walmart Season #2The Slow Path to Wellness: How Slow Travel Heals at Every AgeThe Frog Who Missed the BugThe Boy Who Cried SkunkThe Tale of the Bamboo Cutter: A Japanese FolktaleMalevolentConvergence of the StarbornNotes from My Teddies: The Shadow of Zahhak24 Hours to ForgetEncoded Minds: A Biological ThrillerThe Caspian AmuletA Perfectly Normal Childhood (and other lies I tell myself)Sire, Oleander Isn't Dead! 
(Yet)Gone for a Soldier365 Ways I Love You from Your Wife: A 5 Minute Guided JournalFunny Things HappenCold VowsMythos: A Simulacrum 4.6 NovelDear AI, I Killed Her: 16 Sessions About the Dead Girl in a Blue DressNo Winning This War: Purpose UnearthedWhatever It Takes To Keep From Losing My Wife To Alzheimer's: A Husband’s Journey Through Love, Loss, and Unwavering DevotionThe Hogman's Homunculi and the Angelwing MassacreTunguskaMakerbornMoon Shadowork JournalThree and Thirty Pieces of InsanityTwins: A Coming-of-Age NovellaThe Girl in the PipesSpydr M Cee: Gods of the CypherLife with Less of MeThe Great Bathroom Humor Cover-Up: An Investigation into the Lost History of Bodily Function ComedyFart, Laugh, and Be Happy: Inspiring Bathroom Humor Stories to Uplift Your SpiritDragon's BetrayalFriendliesOcean Superheroes: How Ocean Animals Help Protect Our PlanetBorealisHer Runaway LadyTurquoise Soul: Whispers in the MindThe Alphabet LoversPittedBreak the Stillness TrapHow to Stay Disciplined Without Motivation: A Practical Guide to Showing up Every Day—Even When You Don't Feel Like ItThe Cave of Past and PresentSpeak of the DevilThe CommuteNature AgainstRetirement Planning Simplified: The Complete Step-by-Step Guide to Retirement Income, Social Security, and Medicare - Cut Taxes, Avoid Mistakes, and Retire WellTogether is a Distant StarBlind ItemConsumptive CurThe Prince's MagicianThe Prince's MagicianThe Florist's Budding DesireStarling & The Moon BladeOdysseyBreaking the Simulation: An Ancient Path Back to RealityEternalA Bride for GriffinThe FallWalking Along the Ancient Tokaido Road - A Pilgrim's Path: Adventures and Transformations (Vol. 1: Departure)The CrossingTabletop Toolkit: The Game Master's Guide: Build and Run Memorable Adventures for Any Tabletop RPGWalking Along the Ancient Tokaido Road: A Pilgrim's Path: Adventures and Transformations (Vol. 
1: Departure)Strategic Insights for AI Governance and Leadership 2026Compromised: How America’s Computer Superstore Sold It’s Soul and Lost It’s WayThirty Days: The Story of NVIDIA's Survival and the AI RevolutionA Tale of Two Chinas: A Fifteen-Year Odyssey Through China's Cultural Heartlands

Thanks to all the publishers participating this month!

  • Alcove Press
  • American Taboo Press
  • Autumn House Press
  • Bellevue Literary Press
  • City Owl Press
  • CMU Press
  • Crooked Lane Books
  • Cynren Press
  • Entrada Publishing
  • eSpec Books
  • Espresso Publishing House
  • Flat Sole Studio
  • Galaxy Press
  • Gefen Publishing House
  • Hawthorn Quill Publishing
  • Heritage Books
  • Hybrid Sequence Media
  • Identity Publications
  • Inferno Books
  • Infinite Books
  • It’s Alive! Books
  • Kinkajou Press
  • LaPuerta Books and Media
  • NeoParadoxa
  • OC Publishing
  • Picket Fire
  • Pocketbook Press
  • Prolific Pulse Press LLC
  • PublishNation
  • RIZE Press
  • Ronsdale Press
  • Rootstock Publishing
  • Running Wild Press, LLC
  • Shadow Dragon Press
  • Shilka Publishing
  • Silent Clamor Press
  • Simon & Schuster
  • Tundra Books
  • Type Eighteen Books
  • University of Nevada Press
  • University of New Mexico Press
  • Vibrant Publishers
  • W4 Publishing, LLC
  • What on Earth!

DLF Digest: May 2026 / Digital Library Federation

A monthly round-up of news, upcoming working group meetings and events, and CLIR program updates from the Digital Library Federation. See all past Digests here.

Hello DLF Community!

There’s growing momentum as the Call for Proposals for the 2026 Virtual DLF Forum officially opens, inviting contributions that reflect this year’s focus on practical strategies, community-grounded work, and shared challenges across the field. The DLF Committee for Equity and Inclusion (CEI) continues to advance inclusive and equitable practices across the GLAM community through open monthly meetings and a new Zotero resource library. Drawing on recommendations from CEI members and the Forum Program Committee, we have expanded this year’s CFP to more intentionally include and center participation from Historically Black Colleges and Universities (HBCUs), Tribal Colleges and Universities (TCUs), Hispanic-Serving Institutions (HSIs), and Minority-Serving Institutions (MSIs).

We’re also introducing a new digital storytelling format that encourages partnerships between librarians, archivists, and community collaborators to share not just project outcomes but the relationships and processes behind the work, and to help attendees imagine how these approaches can be adapted in their own contexts (learn more about the new format on the DLF blog). We hope you’ll consider submitting a proposal and sharing the CFP across your networks! 

Warmly,

-Shaneé

This month’s news

This month’s open DLF group meetings:

For the most up-to-date schedule of DLF group meetings and events (plus conferences and more), bookmark the DLF Community Calendar. Meeting dates are subject to change. Can’t find the meeting call-in information? Email us at info@diglib.org. Reminder: Team DLF working days are Monday through Thursday.

  • AIG Metadata Assessment Group: Friday, 5/1, 2pm ET / 11am PT.
  • DLF Born-Digital Access Working Group (BDAWG): Tuesday, 5/5, 2pm ET / 11am PT.
  • DLF Digital Accessibility Working Group (DAWG): Tuesday, 5/5, 2pm ET / 11am PT.
  • DLF AIG Cultural Assessment Working Group: Monday, 5/11, 1pm ET / 10am PT.
  • AIG User Experience Working Group: Friday, 5/15, 11am ET / 8am PT.
  • AIG Metadata Assessment Group: Friday, 5/22, 2pm ET / 11am PT.
  • DLF Climate Justice Working Group: Tuesday, 5/26, 3pm ET / 12pm PT.
  • DLF Open Source Capacity Resources Group: Wednesday, 5/27, 1pm ET / 10am PT.
  • DAWG Policy & Workflows: Friday, 5/29, 1pm ET / 10am PT.

DLF groups are open to ALL, regardless of whether you’re affiliated with a DLF member organization. Learn more about our working groups on our website. Interested in scheduling an upcoming working group call or reviving a past group? Check out the DLF Organizer’s Toolkit. As always, feel free to get in touch at info@diglib.org.

Get Involved / Connect with Us

Below are some ways to stay connected with the digital library community and us: 

Contact us at info@diglib.org.

The post DLF Digest: May 2026 appeared first on DLF.

Five for Friday – AI policy examples for libraries / Artefacto

The dust is settling, the bubble has yet to burst and more libraries than ever have their AI policy in place, for users, for staff or hopefully for both. If you’re still working out where to start, we hope this post can help. Many of these exist at an institutional level, where, for example, a [...]

Continue Reading...

Source

A Framework for Books and AI in the Public Interest / Dan Cohen

Bookshelves in a library receding into the background. “Color of Reading” by MarLeah Cole, CC BY 2.0.

Two years ago, Dave Hansen, the Executive Director of Authors Alliance, and I wrote “Books Are Big AI's Achilles' Heel,” a piece on how the leading AI companies may have unimaginable sums of money and vast data centers, but are badly in need of what humble libraries have in abundance: books. Those companies, of course, understood this weakness and were trying to fill in the gap in any way they could. There are now dozens of lawsuits by authors and publishers against these tech firms for downloading and storing digitized books from the sketchier corners of the internet.

Dave and I proposed an alternative pathway, spearheaded by libraries and oriented not toward commercial uses but toward the public good:

A library-led training data set of books would diversify and strengthen the development of AI. Digitized research libraries are more than large enough, and of substantially higher quality, to offer a compelling alternative to existing scattershot data sets. These institutions and initiatives have already worked through many of the most challenging copyright issues, at least for how fair use applies to nonprofit research uses such as computational analysis. Whether fair use also applies to commercial AI, or models built from iffy sources like Books3, remains to be seen.

Library-held digital texts come from lawfully acquired books — an investment of billions of dollars, it should be noted, just like those big data centers — and libraries are innately respectful of the interests of authors and rightsholders by accounting for concerns about consent, credit, and compensation. Furthermore, they have a public-interest disposition that can take into account the particular social and ethical challenges of AI development.

Thanks to the Mellon Foundation, this planning project was funded, and we held workshops across the United States with librarians, scholars, technologists, authors, and publishers to imagine what such an initiative might look like, how it might function, and what it would take to bring it into existence. We’re delighted to release the final report from that yearlong study, The Public Interest Corpus: A Framework for Implementation, co-authored by Dave, Thomas Padilla, Giulia Taurino, and myself.

From the introduction:

The rapid advancement of artificial intelligence represents one of the most significant technological transformations of the twenty-first century, with profound implications for research, education, creativity, and civic life. Yet the development and deployment of AI systems is increasingly concentrated among a small number of well-resourced technology companies. This concentration stems not merely from access to capital and resulting computational infrastructure advantages, but also from asymmetric and unregulated access to training data.

While access to large-scale datasets is the main prerequisite of state-of-the-art language models, scholars and researchers have drawn attention to the importance of data quality in textual corpora used for AI training. Many have pointed to the need for curated, high-quality datasets, especially from library collections, which contain humanity’s most comprehensive and editorially refined record of knowledge, culture, and expression.  

Currently, many academic researchers are denied access to this data for their own AI research due to a variety of legal, technical, and financial constraints. Our work on this project demonstrated a need for publicly accessible, research-oriented, computation-ready textual corpora to support academic work and non-profit AI development. The Public Interest Corpus initiative responds to this existing imbalance and pressing need by leveraging the unique position of research libraries to expand access to books data for academic and nonprofit AI training and computational research, thus ensuring that less-resourced institutions and individuals can gain equitable access to valuable data sources.

The report outlines our sense of how to move forward with books and AI, and seeks to address some hard issues that emerged from in-depth conversations we held, such as copyright questions and the needs of different users.

Given more recent technical developments, such as the Model Context Protocol, we also believe that the Public Interest Corpus will be able to serve not just noncommercial AI researchers, but also a broader audience among the public, students, and scholars. For instance, as I have noted in this space over the last year, AI shows great potential for creating a new digital entryway to the library, improving access and discovery by locating relevant books better than current library systems. For many, their interaction with AI will end after this phase of discovery and access; these library patrons will go on to read the books they have found rather than train new AI models with them. We should be enabling and encouraging these lighter uses of vectorized books as well as the heavier, more complex applications. Additional use cases emerged over the last year involving not one book or a million books, but collections at intermediate scales — what one might do with ten, a hundred, or a thousand books as part of a course, thesis, or research topic.

In the report, we also map out how the Public Interest Corpus should:

  • provide a secure technical environment for accessing data and provide the means to authenticate users and mitigate potentially infringing user behaviors

  • continually refine its data in order to increase the quality of the data we have about our books

  • encourage users to attribute books in their research through social and technical means

  • seek an environmentally sustainable infrastructure and mode of operations for its services

The team hopes that our report is a starting point rather than an endpoint, and we are currently working to make further progress toward implementation. My thanks again to the Mellon Foundation for generously supporting our work, and to Dave, Thomas, and Giulia, our helpful advisory board, and the many people we spoke to in 2025 for collaborating and advancing the Public Interest Corpus idea.


Dormant Digital Assets / David Rosenthal

PsiQuantum's computer
Four and a half years ago I wrote The $65B Prize about the potential reward for developing a "sufficiently powerful quantum computer" capable of cracking Bitcoin's encryption. It was based on work by Aggarwal et al, who were then projecting it would happen between 2029 and 2044. The $65B was the notional value of the wallet containing the million Bitcoin that Satoshi Nakamoto originally mined. But I noted that:
Chainalysis estimates that about 20% of all Bitcoins have been "lost", or in other words are sitting in wallets whose keys are inaccessible. That is around another 3.6 million stranded Bitcoin or at the current "price" about $234B.
So the potential prize was almost $300B.

Nearly a year ago I followed up with The $740B Prize. There are two reasons why the prize was then bigger but is now smaller than that:
  • Bitcoin's "price" had then increased from about $65K to around $107K, but it is now around $76K.
  • Because the "market cap" of Michael Saylor's Strategy was 1.6 times the "market cap" of its stash of Bitcoin, it was possible to use Saylor's algorithm to amplify the prize. But the factor has decreased from 1.6 to 0.81, so the algorithm no longer works.
But the threat to Bitcoin, and other cryptocurrencies, is far worse than I described in either of these two posts. The date is closer and the range of threats much broader. Follow me below the fold for the details.

Ryan Babbush et al's 57-page Securing Elliptic Curve Cryptocurrencies against Quantum Vulnerabilities: Resource Estimates and Mitigations is a comprehensive overview of, and an improvement to, the state of the art in Cryptographically Relevant Quantum Computers (CRQCs), that is, quantum computing applied to breaking the Elliptic Curve Discrete Logarithm Problem (ECDLP) that underlies the cryptography used by most cryptocurrencies:
This whitepaper seeks to elucidate specific implications that the capabilities of developing quantum architectures have on blockchain vulnerabilities and potential mitigation strategies. First, we provide new resource estimates for breaking the 256-bit Elliptic Curve Discrete Logarithm Problem over the secp256k1 curve, the core of modern blockchain cryptography. We demonstrate that Shor’s algorithm for this problem can execute with either ≤ 1200 logical qubits and ≤ 90 million Toffoli gates or ≤ 1450 logical qubits and ≤ 70 million Toffoli gates. ... On superconducting architectures with 10⁻³ physical error rates and planar connectivity, those circuits can execute in minutes using fewer than half a million physical qubits. We introduce a critical distinction between “fast-clock” (such as superconducting and photonic) and “slow-clock” (such as neutral atom and ion trap) architectures. ... We survey major cryptocurrency vulnerabilities through this lens, identifying systemic risks associated with advanced features in some blockchains such as smart contracts, Proof-of-Stake consensus, and Data Availability Sampling mechanism, as well as the enduring concern of “abandoned” assets.
They identify three classes of attacks that such a CRQC would enable:
  • On-Spend Attacks: Attacks targeting transactions in transit. When a blockchain user broadcasts a transaction, an attacker must derive the private key within the window of time allowed before the transaction is recorded on the blockchain. This requires a quantum computer fast enough to solve ECDLP within the transaction settlement time of the target blockchain which ranges from hundreds of milliseconds to a few minutes (e.g., about 400 milliseconds for Solana, about 12 seconds for Ethereum, about 10 minutes on average for Bitcoin). On-spend attacks are also known as “short-range” or “just-in-time” attacks
  • At-Rest Attacks: Attacks targeting public keys that remain exposed onchain or offchain for long periods of time, such as dormant wallets with reused keys. The attacker has days (or more) to derive the private key. At-rest attacks are also known as “long-range” or “long-exposure” attacks
  • On-Setup Attacks: Attacks targeting fixed public protocol parameters that produce a universal reusable backdoor into a cryptographic protocol. The backdoor is created by means of a one-time off-line quantum computation on a CRQC and subsequent attacks utilizing it are executed on a classical computer. For example, an on-setup attack may involve the use of Shor’s algorithm to recover the so-called “toxic waste” discarded in a powers-of-tau trusted setup ceremony. While the Bitcoin blockchain is immune to on-setup attacks, some scaling solutions, such as Ethereum’s Data Availability Sampling mechanism, and privacy protocols, such as Tornado Cash, are vulnerable to this especially insidious attack mode.
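The on-spend feasibility test in the first bullet can be sketched numerically. This is a toy illustration, not a calculation from the paper: the settlement windows are the approximate figures quoted above, and the CRQC time-to-solution is an assumed parameter.

```python
# Toy model: can a CRQC derive a private key inside a chain's settlement
# window? Settlement times are the approximate figures quoted in the paper;
# the solve time passed in is an assumption, not a measurement.

SETTLEMENT_SECONDS = {
    "Solana": 0.4,    # ~400 ms
    "Ethereum": 12,   # ~12 s
    "Bitcoin": 600,   # ~10 min average
}

def on_spend_feasible(crqc_solve_seconds):
    """Map each chain to whether an on-spend attack fits its window."""
    return {chain: crqc_solve_seconds <= window
            for chain, window in SETTLEMENT_SECONDS.items()}

# At the ~9-minute (540 s) solve time the paper assumes for first-generation
# fast-clock CRQCs, only Bitcoin's window is wide enough, and only marginally.
print(on_spend_feasible(540))
```

The sharp dependence on the window explains the paper's fast-clock/slow-clock distinction: a machine two to three orders of magnitude slower misses every window, leaving only At-Rest and On-Setup attacks.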
Some quantum computer architectures will be capable of all three, but some will not be fast enough for On-Spend attacks:
The resource estimates we describe below indicate that superconducting, photonic, and silicon spin qubit CRQCs, with their fast gates and short quantum error correction cycles, will be able to solve ECDLP in the span of a few minutes and thus, to launch on-spend attacks. By contrast, the elementary operations on neutral atom and ion trap devices are about two to three orders of magnitude slower. As a consequence, we do not expect CRQCs in these slower architectures to be able to launch on-spend attacks. We will refer to the former as fast-clock CRQCs and to the latter as slow-clock CRQCs
Babbush et al Fig. 1
There have been major improvements in both hardware and software since the previous estimates. In particular, software:
We are reporting here that our team has developed logical circuits to break ECDLP on elliptic curves over finite fields with n-bit prime modulus and n-bit group order requiring approximately 4.5n space. ... At n = 256 bits, the circuits use either 1200 logical qubits and 90 million Toffoli gates or 1450 logical qubits and 70 million Toffoli gates. In terms of the spacetime volume (a key resource which in particular drives the quantum error correction overhead), these estimates represent roughly an order of magnitude improvement over the most efficient prior work when applied to a single ECDLP instance. ... Our findings apply directly to ECDLP on secp256k1 — an elliptic curve widely used in digital signatures on popular blockchains, such as Bitcoin and Ethereum.
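As a rough sanity check on the reported scaling (my arithmetic, not the paper's), the ~4.5n space figure at n = 256 lands near the first quoted design point:

```python
# Sanity check: logical-qubit count of roughly 4.5 * n for an n-bit curve,
# against the two quoted n = 256 design points. "Spacetime volume" here is
# just the naive product qubits * Toffoli gates.

def approx_logical_qubits(n_bits, factor=4.5):
    return factor * n_bits

design_points = [
    (1200, 90_000_000),  # fewer logical qubits, more Toffoli gates
    (1450, 70_000_000),  # more logical qubits, fewer Toffoli gates
]

print(approx_logical_qubits(256))  # 1152.0, close to the 1200-qubit point
for qubits, toffolis in design_points:
    print(qubits, toffolis, qubits * toffolis)
```

Note that the second design point trades extra qubits for a smaller gate count and a slightly smaller naive spacetime volume, which is what drives the error-correction overhead.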
And hardware design:
The physical resource estimates we have discussed here (e.g., half a million physical qubits) assume relatively benign hardware capabilities, such as a planar architecture with degree-four connectivity and 10⁻³ physical gate error rates (i.e., consistent with a scaled up version of Google’s quantum processors that have been demonstrated experimentally). More aggressive hardware assumptions — such as the “bicycle” architecture used for 2-gross qLDPC codes — could drop qubit counts closer to one hundred thousand physical qubits, but this approach requires non-local degree-seven connectivity that has yet to be demonstrated in actual superconducting qubit devices.
There are now many companies trying to turn the designs into working fast- and slow-clock hardware:
Google Quantum AI, IBM Quantum, Amazon, D-Wave, Rigetti and IQM are developing superconducting qubit architectures; PsiQuantum and Xanadu are building photonic quantum computers while Diraq and Intel are working on spin qubit devices. ... Simultaneously, many companies, including IonQ, Quantinuum (a subsidiary of Honeywell) and Alpine Quantum Technologies are pursuing ion trap quantum processors while others, such as QuEra, Infleqtion, Atom Computing, Pasqal, and Logiqal are developing neutral atom devices.
Babbush et al thus argue that, if the first CRQC is fast-clock, all three attack types will arrive simultaneously:
These facts imply that a superconducting CRQC capable of performing at-rest attacks against static holdings recorded on the blockchain would likely also be capable of executing on-spend attacks against active transactions. As we discuss in more detail later on, we do not expect meaningful scaling challenges between a quantum computer with 1200 logical qubits and one with 1450, so, in order to focus and simplify subsequent discussion, we assume that first-generation fast-clock CRQCs may be able to solve ECDLP on secp256k1 and similar elliptic curves in about 9 minutes on average.
A major problem with current techniques for stealing cryptocurrency is that the proceeds need to be rapidly laundered because the thefts are detectable. But if the contents of a vulnerable wallet move to an invulnerable one, it is likely that the "owner" of the private key was taking a sensible precaution, not that some CRQC cracked the key. This is especially true of dormant assets; no-one is watching the wallet.

Although the paper's analysis of On-Spend and Setup attacks is fascinating and important, much of this post will focus on the At-Rest attacks on Bitcoin that my previous posts discussed. Babbush et al summarize the problem:
Dormant digital assets, including those abandoned or inaccessible due to lost private keys, pose a distinct and critical challenge. We highlight the example of Bitcoin’s Pay-to-Public-Key (P2PK) locking scripts, which secure over 1.7 million BTC. The total amount of dormant quantum-vulnerable bitcoin may reach 2.3 million BTC when all script types are considered. Unlike active wallets that can migrate to new standards, dormant assets cannot be “fixed” via forks that enable PQC protocols for future transactions. They represent a fixed target — tens or hundreds of billions of dollars in value that will eventually become accessible to a quantum attacker. The community will soon face difficult, unprecedented decisions regarding the fate of these assets, forcing tradeoffs between the immutability of cryptographic property rights and the economic stability of the network.
Babbush et al Fig. 4
Bitcoin wallets are vulnerable to an At-Rest attack if their public ECDSA key is visible on the blockchain. Over time, the way transactions are encoded on the blockchain, via "scripts", has evolved. Babbush et al's Figure 4 shows this evolution.

A transaction contains an unlocking script, proving that the private key owns the wallet, and a locking script that transfers coins to the recipient. Some script types reveal the public key and are thus vulnerable to an At-Rest attack, some reveal only its hash and are thus immune unless the script is re-used.
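The distinction can be illustrated in a few lines. This is a simplified sketch: Bitcoin's real HASH160 applies SHA-256 followed by RIPEMD-160, and the key below is a placeholder, but the asymmetry is the point: a hash-locked script presents no ECDLP instance until the key is revealed by spending.

```python
import hashlib

def lock_p2pk(pubkey):
    # P2PK: the locking script embeds the public key itself, so the key
    # sits exposed on-chain, an At-Rest target for any CRQC.
    return pubkey

def lock_hash_style(pubkey):
    # Hash-locked style (like P2PKH): only a one-way digest of the key is
    # on-chain, so there is no ECDLP instance to attack yet. (Simplified:
    # plain SHA-256 stands in for Bitcoin's SHA-256 + RIPEMD-160.)
    return hashlib.sha256(pubkey).digest()

pubkey = bytes.fromhex("02" + "11" * 32)  # placeholder compressed-key bytes

assert lock_p2pk(pubkey) == pubkey         # key visible on-chain
assert lock_hash_style(pubkey) != pubkey   # only the digest is visible
# Re-use breaks the shield: the first spend reveals the key, and any funds
# still behind the same hash become an At-Rest target too.
```

This is why the figures below split the vulnerable coins into "exposed regardless" and "exposed only on re-use" populations.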

Babbush et al Fig. 5
Babbush et al's Figure 5 shows the numbers of Bitcoin secured by the various script types. The shaded areas represent the script types that are vulnerable to At-Rest attacks as soon as any type of CRQC exists. A little over 1.7M BTC (~$130B) are vulnerable even if the script has not been re-used, around another 5.2M BTC are vulnerable if the script has been re-used. Thus the total at risk is currently "worth" around $525B.

As I have been writing, the part of the problem that cannot be solved by upgrading to post-quantum cryptography is what Babbush et al call "Dormant Digital Assets":
Inevitably, some vulnerable assets will not migrate to post-quantum protocols in time or possibly ever, perhaps because their owners do not learn of the threat until it is too late or perhaps because they have lost their private keys. The Ethereum blockchain’s contract accounts present similar long-tail migration issues. Thus, in addition to planning and executing upgrades to cryptographic protocols, each cryptocurrency community also faces challenges regarding quantum-vulnerable assets and smart contracts that may linger on public blockchains for an extended or indefinite period of time.

Despite lack of unambiguous precedent, many jurisdictions could classify accessing abandoned cryptographic assets, such as the P2PK coins, without authorization as theft. However, we maintain that if protocol changes are not made, vulnerable assets will eventually be cracked by quantum computers and taken irrespective of the law. In the absence of a clear resolution, these assets are likely to become a lucrative target for bad actors. We quantify the scale of some of the dormant assets at stake in Figure 13.
Babbush et al Fig. 13
After all, "code is law". The total is about 2.3M BTC "worth" about $175B. It might take months (fast-clock) or years (slow-clock) for a single CRQC to compromise the wallets with the 1.7M BTC. Of course, the attackers would choose the wealthiest wallets first, working left-to-right across Figure 13, and there is no reason to assume that they would only have a single CRQC, so the bulk of the loss would happen more quickly.
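The richest-first drain dynamic can be illustrated with a toy model. All parameters here are invented for illustration (the paper gives no cracking rate); the point is only that value falls much faster than wallet count, and parallel CRQCs compress the timeline.

```python
# Toy model: attackers crack the wealthiest exposed wallets first,
# n_crqcs machines working in parallel. days_per_wallet is a made-up
# cracking rate, not a figure from the paper.
def value_drained(wallet_values, days_per_wallet, n_crqcs, days):
    targets = sorted(wallet_values, reverse=True)
    cracked = min(len(targets), int(days / days_per_wallet) * n_crqcs)
    return sum(targets[:cracked])

# Hypothetical Zipf-like wallet distribution: 1000 wallets whose
# values sum to roughly 7.5M units, most of it in the top few.
wallets = [1_000_000 / (rank + 1) for rank in range(1000)]

one = value_drained(wallets, days_per_wallet=30, n_crqcs=1, days=365)
ten = value_drained(wallets, days_per_wallet=30, n_crqcs=10, days=365)
assert ten > one  # more CRQCs -> the bulk of the value goes much sooner
```

With one slow machine the attacker still captures a large share of the total in the first year, because the distribution is so top-heavy.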

Bitcoiners have identified three responses to the problem that they could take, if it were possible to achieve consensus on which:
  • Do Nothing: accept that the 2.3M BTC would be stolen and become part of the circulating supply, thus putting downward pressure on the "price".
  • Burn: implement a soft-fork that renders the content of vulnerable wallets unspendable after a certain date. Provided the date is before the first CRQC, this removes them from the circulating supply and avoids downward pressure on the "price". It does conform to the "not your keys, not your coins" mantra.
  • Hourglass: accept that the 2.3M BTC would be stolen but mitigate the effect on the "price" by limiting the rate at which these assets could be spent and thus enter the circulating supply.
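One reading of the Hourglass option can be sketched as a per-block cap on spends from quantum-vulnerable scripts. The cap, the amounts, and the queueing discipline below are all invented for illustration; actual proposals differ in detail.

```python
from collections import deque

def hourglass_release(requested_spends, per_block_cap):
    """Toy Hourglass: queue spends from quantum-vulnerable scripts and
    release at most per_block_cap BTC per block, smoothing the supply
    shock instead of letting stolen coins flood the market at once."""
    queue = deque(requested_spends)
    released = []
    while queue:
        block_total, block = 0.0, []
        while queue and block_total + queue[0] <= per_block_cap:
            amount = queue.popleft()
            block.append(amount)
            block_total += amount
        if not block:                      # a single spend exceeds the cap:
            block.append(queue.popleft())  # let it through alone
        released.append(block)
    return released

blocks = hourglass_release([50, 30, 40, 10, 5], per_block_cap=60)
assert sum(len(b) for b in blocks) == 5  # everything eventually spends
assert len(blocks) >= 3                  # ...but spread over multiple blocks
```

The economic effect is to spread the supply shock over time rather than prevent it, which is why Hourglass is a mitigation rather than a fix.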
As usual, consensus in the Bitcoin community is likely to be hard to achieve, giving an advantage to Do Nothing:
Those who consider digital property rights fundamental tend to have strong objections to the Burn proposal. Large Bitcoin holders are likely concerned about a potential supply shock and its effect on Bitcoin price. Miners may welcome Do Nothing and Hourglass proposals due to potential increase in transaction fees and volumes. The diversity and complexity of the Bitcoin community makes the ultimate outcome of these ongoing debates hard to predict. Indeed, an informal poll at 2025 Presidio Bitcoin Quantum Summit in San Francisco saw roughly equal support for each of the three categories of solutions.
Babbush et al add a fourth option, one or more sidechains to which public-spirited CRQC operators could, for a fee, send the contents of wallets they compromised, and other sidechains holding cryptographic proofs of ownership of the dormant assets in question. These sidechains would form a somewhat complex and thus risky ecosystem, and would be costly. Operating an early CRQC will be expensive, so the public-spirited operators would need to charge significant fees. Figure 13 shows that the bulk of the value is in the first 1000 wallets, so it is likely that this solution would leave something like 100K dormant wallets uneconomic to compromise.

The authors also review threats to other cryptocurrencies. One obvious threat is:
the objective of preserving transaction confidentiality on privacy-preserving blockchains, such as Zcash and Monero, cannot be fully achieved due to retroactive degradation of ECDLP-protected privacy of known addresses by quantum-capable adversaries.
Ethereum is by far the largest of the systems that are already taking proactive quantum-proofing steps. This is important because Ethereum is far more exposed than Bitcoin to quantum attacks, as Babbush et al recount:
The account model uses vulnerable elliptic curves as a core component of onchain identity, putting all accounts that have carried out transactions at risk including high value accounts, such as exchange hot wallets. Smart contracts with exposed admin keys that cannot be easily rotated (without draining and replacing the contracts themselves) create a logistical bottleneck for security upgrades that puts “low ether, high leverage” accounts and contracts responsible for tokenized real-world assets, oracles, bridges, guardians, etc. at risk. Moreover, the potential compromise of validators threatens the integrity of the Proof-of-Stake consensus mechanism itself, creating an existential risk to the chain’s continued operation. Finally, the vulnerability of Data Availability Sampling mechanism opens it up to on-setup attacks that can be launched without a quantum computer using a reusable exploit created once on a CRQC.
For example, Tornado Cash is a smart contract whose administrative public key is 0x0000, which indicates that administrative control has been relinquished. Presumably it will continue to function unless and until Ethereum decides to stop executing contracts with this key. Or, possibly, a CRQC could find the private key for 0x0000. Tornado's wallets have exposed keys, so could be drained unless each user removes their funds before the attack.

Babbush et al have a section addressing Public Policy Options for the Challenge of Dormant Assets. They start by arguing that government action to protect the "price" of BTC and similar cryptocurrencies by mandating the Burn option would be highly unlikely to succeed. They then argue that one approach would be to use existing laws on lost, abandoned and unclaimed assets:
if an owner of dormant coins has known for years that their assets are at risk and has failed to transact them to a post-quantum address, then they may be deemed to have failed to assert their rights through inaction.
But they point out many difficulties with this approach. For example, in the US the Revised Uniform Unclaimed Property Act (2016) is a model for relevant state laws on abandoned property, allowing it to be transferred to the custody of the state. The law assumes that the assets are in the custody of a "holder", a business such as a bank, but:
no party involved in the operation of the Bitcoin blockchain clearly meets the legal requirements to be the “holder” of the dormant coins. Indeed, none of them possess or control the coins since none of them know the private key.
They also discuss the:
spectre of dormant assets falling to rogue actors as a national security risk
and suggest that some governments will decide to use CRQCs to grab and maybe burn dormant assets. What they mean is the US is worried that the North Koreans might acquire a CRQC. They are the masters of stealing cryptocurrency via conventional techniques, as we apparently see with the recent compromise of Kelp DAO.

As regards Bitcoin, the authors recommend that governments establish a legal framework for dormant digital assets similar to that for conventional abandoned assets, and that the Bitcoin community decide to implement the Burn option. Given the current difficulty of passing stablecoin legislation and the history of consensus in the Bitcoin community, I would expect that neither will happen in time.

As regards Ethereum, the more sophisticated technology and governance, combined with the absolutely catastrophic effects a CRQC could have on the ecosystem, give some confidence that timely mitigations are possible.

New OCLC Research Report: The Library Beyond the Library / HangingTogether

Having recently released a Data Insights briefing on the Italian presence in the global published record, I’m inspired to introduce our latest OCLC Research report with a quote from Machiavelli:

Men in general judge more from appearances than from reality. All men have eyes, but few have the gift of penetration. (Niccolò Machiavelli, The Prince)

What’s the connection? Read on …

Our new report, The Library Beyond the Library: Recasting the Library Value Proposition for Visibility and Impact, begins with the observation that academic libraries are taking on important new roles throughout the research lifecycle: from publishing, to research data management, to impact assessment. In doing so, their value proposition to the rest of the institution is evolving, and at the same time, becoming more complex and potentially more opaque to campus stakeholders.

Libraries have a long-standing, well-understood value proposition centered around collections—a perception that has persisted even as libraries have developed new offerings across a wide range of emerging areas of research support. Fixed ideas about library roles and impact create a challenge: despite significant library investments, institutional stakeholders often don’t understand, recognize, or are simply unaware of library service offerings in these new areas. Instead, the traditional collections-centric view of the library endures.

Collection stewardship remains a vital aspect of the library mission, but increasingly, academic libraries face a disconnect between their evolving services and institutional perceptions. As Machiavelli observes, appearances often overshadow reality. Academic libraries offer valuable capacities and expertise well-calibrated to meet institutional research needs and priorities, but the perception that library impact is limited to its traditional role of collections steward will nonetheless prevail if nothing is done to correct it.

This has real consequences. Perceptions of the library’s value proposition based on fixed ideas of its role and impact make it difficult for the true scope of library capacities and expertise—the “reality”, as Machiavelli expressed it—to filter through to institutional stakeholders. This makes it hard for the library to get a seat at the table for institution-wide discussions and policy-making on topics like data governance, research metrics, or open research practices; moreover, it can lead to diminished influence, and ultimately, reduced funding.  

This is a challenge, but also an opportunity for libraries to clarify their continued relevance to the institutional research enterprise. And that brings us back to OCLC Research’s new report, The Library Beyond the Library: Recasting the Library Value Proposition for Visibility and Impact. Based on in-depth interviews with international research library leaders, desk research, and accumulated insights from our previous studies of research support services, The Library Beyond the Library helps libraries navigate these trends by providing a framework and insights that support strategic planning aimed at elevating the library’s visibility and impact within its parent institution.

While the report findings were derived in the context of research support services in academic libraries, we believe they apply equally to many other areas of strategic importance to academic libraries, such as institutional priorities for student success, as well as to public and other types of libraries.

What do we mean by “the library beyond the library”? It’s an operational principle that emphasizes engagement “beyond the library” with the broader institutional environment, in support of the institutional research and learning mission. In our report, we argue that this operational principle increasingly shapes libraries’ ability to fulfill their mission, retain influence, and demonstrate impact and value.

The library beyond the library principle focuses on engagement through three channels:

  • Strategic Alignment—Aligning library priorities with institutional goals
  • Collaboration—Partnering with other institutional units to advance shared priorities
  • Storytelling—Communicating the library’s evolving value proposition to stakeholders

These channels of engagement translate into important strategic questions for libraries:

  • How can library services and expertise support, or in some cases evolve in response to, institutional priorities?
  • What partnership opportunities exist with other institutional units, and how can libraries structure them effectively?
  • How can libraries construct and communicate compelling narratives about their evolving value and impact to stakeholders?

The reality of the academic library on campus has expanded well beyond its traditional role of collections steward, but its appearance to many stakeholders—its perceived value proposition—often has not kept pace. This creates risks, because visibility combined with a clear stakeholder understanding of impact drives influence, inclusion in institutional decision-making, and funding.

The library beyond the library is not a slogan, but a practical response to mitigate these risks as part of a process of updating and communicating the library’s evolving value proposition. By investing in intentional strategic action across all three framework channels—strategic alignment (tying services and expertise to institutional priorities), collaboration (building relationships and shared commitments with key units), and storytelling (making impact clear to stakeholders)—the library can demonstrate it is and will continue to be a dynamic partner in shaping the future of scholarship and research at its parent institution.

We invite you to read the Library Beyond the Library report and consider how you might use the framework at your institution to assess current services and expertise, identify cross-institutional partnership opportunities that showcase library capacities, and reimagine narratives about the library value proposition.

The post New OCLC Research Report: The Library Beyond the Library appeared first on Hanging Together.

Weekly Bookmarks / Ed Summers

These are some things I’ve wandered across on the web this week.

🔖 Talk Talk - Live at Montreux 1986

From Aquarium Drunkard:

By July 1986, Talk Talk were still a functioning live unit touring behind The Colour of Spring. But something had already shifted as evidenced by this set from that summer’s Montreux Jazz Festival. Listen closely and you can hear the architecture beginning to loosen: tempos breathe, arrangements open, and familiar material begins to drift toward something less fixed, less performative.

This would be their final tour. Within a year, Mark Hollis and company would retreat into the studio to begin work on Spirit of Eden, a record that all but rejects the idea of live translation. As such, this Montreux performance exists as a kind of threshold document, one that captures the band onstage one last time before the music folds inward on itself.

🔖 The West Forgot How to Make Things. Now It’s Forgetting How to Code

The skills you need to be effective now are different. Technical expertise alone isn’t enough anymore. You need people who can take ownership, communicate tradeoffs, push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.

🔖 There Will Be a Scientific Theory of Deep Learning

In this paper, we make the case that a scientific theory of deep learning is emerging. By this we mean a theory which characterizes important properties and statistics of the training process, hidden representations, final weights, and performance of neural networks. We pull together major strands of ongoing research in deep learning theory and identify five growing bodies of work that point toward such a theory: (a) solvable idealized settings that provide intuition for learning dynamics in realistic systems; (b) tractable limits that reveal insights into fundamental learning phenomena; (c) simple mathematical laws that capture important macroscopic observables; (d) theories of hyperparameters that disentangle them from the rest of the training process, leaving simpler systems behind; and (e) universal behaviors shared across systems and settings which clarify which phenomena call for explanation.

🔖 EHRAG: Bridging Semantic Gaps in Lightweight GraphRAG via Hybrid Hypergraph Construction and Retrieval

Graph-based Retrieval-Augmented Generation (GraphRAG) enhances LLMs by structuring corpus into graphs to facilitate multi-hop reasoning. While recent lightweight approaches reduce indexing costs by leveraging Named Entity Recognition (NER), they rely strictly on structural co-occurrence, failing to capture latent semantic connections between disjoint entities. To address this, we propose EHRAG, a lightweight RAG framework that constructs a hypergraph capturing both structure and semantic level relationships, employing a hybrid structural-semantic retrieval mechanism. Specifically, EHRAG constructs structural hyperedges based on sentence-level co-occurrence with lightweight entity extraction and semantic hyperedges by clustering entity text embeddings, ensuring the hypergraph encompasses both structural and semantic information. For retrieval, EHRAG performs a structure-semantic hybrid diffusion with topic-aware scoring and personalized pagerank (PPR) refinement to identify the top-k relevant documents. Experiments on four datasets show that EHRAG outperforms state-of-the-art baselines while maintaining linear indexing complexity and zero token consumption for construction.

🔖 The Night Manager (British TV series)

The Night Manager is a British spy thriller television serial based on the 1993 novel by John le Carré and adapted by David Farr. The six-part first series, directed by Susanne Bier and starring Tom Hiddleston, Hugh Laurie, Olivia Colman, Tom Hollander, David Harewood and Elizabeth Debicki, began broadcasting on BBC One on 21 February 2016.

🔖 What Will It Take to Get A.I. Out of Schools?

Immordino-Yang told me that the ultimate goal of any school assignment is not the finished project itself but the experience of having done it—an experience that A.I. tools are intended to abbreviate or obviate. With their prettifying intrusions and impatient, lurking presence, they block and reroute a young person’s natural, gradual progression toward cognitive maturity, “especially one who is still developing the neuropsychological substrate for creating narratives and thinking through arguments over time,” Immordino-Yang said. “It’s a fragile process, and it’s being interrupted.”

🔖 iocaine

This software is not made for making the Crawlers go away. It is an aggressive defense mechanism that tries its best to take the blunt of the assault, serve them garbage, and keep them off of upstream resources. Even though a lot of work went into making iocaine efficient, and nigh invisible for the legit visitor, it is an aggressive defender nevertheless, and will require a few resources - a whole lot less than if you’d let the Crawlers run rampant, though.

Before you deploy it, be sure you understand that iocaine does not make the bots go away. It tries to poison them, so they’d go away forever in the long run. If you’re looking for a way to return the favour, to “reward” these crawlers for their relentless assault, this is the tool you’re looking for.

🔖 AI as a Fascist Artifact

“AI” is being introduced increasingly into government processes: “AI” is promised to bring more efficiency into the administration, is supposed to “reduce bureaucracy”. But bureaucracy is not just an annoyance but one of the central tools that democratic societies have established to realize the core idea of democracy: Transparency in the application of power in order to be able to control said power. Democracy is not just about voting but about ensuring that all power – especially by the state – is used in accordance with the law and in a fair way. Stochastic “AI” systems break that promise. The “AI” just says that you do not get the support you need. No idea why, might be a bug or a deeply racist training data set or something else. Nobody knows. Now it is on you to prove that you are in the right, it is on you to fight for your right because the processes that were supposed to protect your rights are hollowed out in order to make them faster: We are forcing marginalized, disenfranchised people to fight against a black box trained on the data that already contains their disenfranchisement

🔖 You’re about to feel the AI money squeeze

Investors have poured hundreds of billions of dollars into companies like OpenAI and Anthropic to help them scale and build out their compute. Now, they’re expecting returns. After years of offering cheap or totally free access to advanced AI systems, the bill is starting to come due — and downstream, users are beginning to feel the pinch.

🔖 RustFS

Instantly replace MinIO & S3. Zero GC, maximum throughput.

🔖 Finishing Things

Exactly! Left behind? You can’t leave me behind fast enough. I’ve never wanted to be left behind so bad in my life. I’m utterly incapable of FOMO about this stuff. Do I have vague future concerns about my career and what must (surely) be a coming economic crash? Sure, but there is absolutely nothing that has convinced me that I’m “missing out” on anything.

At this point, all efforts by boosters and sloppologists just make me feel more defiant.

🔖 Go Ahead And Use AI. It Will Only Help Me Dominate You.

The tepid, conformist nature of your AI-assisted prose will only make my unexpected bons mots stand out more sharply. While you lean on a technological crutch of grammatical mediocrity to drag your essays over the finish line, I’ll be metaphorically zipping past you on my “magic carpet” of words emerging directly from my own declining and unpredictable brain. Over time, the intellectual box into which AI has seduced your creative process will suffocate you, leaving your bereft readers little choice but to drift into my subscription base.

🔖 Friends Don’t Let Friends Use Ollama

Ollama gained traction by being the first easy llama.cpp wrapper, then spent years dodging attribution, misleading users, and pivoting to cloud, all while riding VC money earned on someone else’s engine. Here’s the full history, and why the alternatives are better

Metastablecoin Fragmentation (updated) / David Rosenthal

A fundamental problem for decentralized systems like permissionless blockchains is that their security depends upon the cost of an attack being greater than the potential reward from it. Various techniques are used to impose these costs, generally either Proof-of-Work (PoW) or Proof-of-Stake (PoS). These costs have implications for the economics (or tokenomics) of such systems, for example that their security is linear in cost, whereas centralized systems can use techniques such as encryption to achieve security exponential in cost.

Shin Figure 3
Now, via Toby Nangle's Stablecoin = Fracturedcoin we find Tokenomics and blockchain fragmentation by Hyun Song Shin, whose basic point is that these costs must be borne by the users of the system. For cryptocurrencies, this means through transaction fees, inflation of the currency, or both. The tradeoff between cost and security means that there is a market for competing blockchains making different tradeoffs. In practice we see a vast number of competing blockchains:
Tether’s USDT sits on 107 different ledgers. ... USDC sits on 125.
The chart shows Ethereum losing market share against competing blockchains.

Shin's analysis uses game theory to explain why this fragmentation is an inevitable result of tokenomics. Below the fold I go into the background and the details of Shin's explanation.

Background

In 2018's Cryptocurrencies Have Limits I discussed Eric Budish's The Economic Limits Of Bitcoin And The Blockchain, an important analysis of the economics of two kinds of "51% attack" on Bitcoin and other cryptocurrencies based on PoW blockchains. Among other things, Budish shows that, for safety, the value of transactions in a block must be low relative to the fees in the block plus the reward for mining the block.
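Budish's safety condition can be caricatured in a few lines of Python. The `margin` parameter is my stand-in for the attack-duration and discount constants his model actually derives; the reward and fee figures are merely illustrative.

```python
def block_is_safe(tx_value, fees, block_reward, margin=1.0):
    """Rough paraphrase of Budish's condition: a double-spend attack
    is unprofitable only while the value at stake in a block is small
    relative to what honestly mining it earns. 'margin' folds in the
    attack-specific constants of the full model (an assumption here)."""
    return tx_value <= margin * (fees + block_reward)

# With a 3.125 BTC subsidy and ~0.1 BTC of fees, a block settling
# 1000 BTC of transactions violates the condition badly.
assert block_is_safe(tx_value=3.0, fees=0.1, block_reward=3.125)
assert not block_is_safe(tx_value=1000.0, fees=0.1, block_reward=3.125)
```

The implication is uncomfortable: security scales with mining income, not with the value being secured, so high-value settlement needs proportionally high block rewards.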

In 2019's The Economics Of Bitcoin Transactions I discussed Raphael Auer's Beyond the doomsday economics of “proof-of-work” in cryptocurrencies, in which Auer shows that:
proof-of-work can only achieve payment security if mining income is high, but the transaction market cannot generate an adequate level of income. ... the economic design of the transaction market fails to generate high enough fees.
Source
Bitcoin's costs are defrayed almost entirely by inflating the currency, as shown in this chart of the last year's income for miners. Notice that the fees are barely visible.

It has been known for at least a decade that Bitcoin's plan to phase out the inflation of the currency was problematic. In 2024's Fee-Only Bitcoin I wrote:
In 2016 Arvind Narayanan's group at Princeton published a related instability in Carlsten et al's On the instability of bitcoin without the block reward. Narayanan summarized the paper in a blog post:
Our key insight is that with only transaction fees, the variance of the miner reward is very high due to the randomness of the block arrival time, and it becomes attractive to fork a “wealthy” block to “steal” the rewards therein.
So Bitcoin's security depends upon the "price" rising enough to counteract the four-yearly halvings of the block reward. In that post I made a thought-experiment:
As I write the average fee per transaction is $3.21 while the average cost (reward plus fee) is $65.72, so transactions are 95% subsidized by inflating the currency. Over time, miners reap about 1.5% of the transaction volume. The miners' daily income is around $30M, below average. This is about 2.5E-5 of BTC's "market cap".

Let's assume, optimistically, that this below-average daily fraction of the "market cap" is sufficient to deter attacks and examine what might happen in 2036 after 3 more halvings. The block reward will be 0.39BTC. Let's work in 2024 dollars and assume that the BTC "price" exceeds inflation by 3.5%, so in 12 years BTC will be around $98.2K.

To maintain deterrence miners' daily income will need to be about $50M. Each day there will be about 144 blocks generating 56.16BTC or about $5.5M, which is 11% of the required miners' income. Instead of 5% of the income, fees will need to cover 89% of it. The daily fees will need to be $44.5M. Bitcoin's blockchain averages around 500K transactions/day, so the average transaction fee will need to be around $90, or around 30 times the current fee.
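The thought-experiment arithmetic can be checked mechanically. The ~$65K 2024 "price" is back-solved from the post's $98.2K figure and is my assumption; everything else comes from the text.

```python
# Reproduce the 2036 fee arithmetic from the thought experiment.
price_2024 = 65_000                       # assumed; back-solved from $98.2K
price_2036 = price_2024 * 1.035 ** 12     # ~ $98.2K in 2024 dollars
block_reward = 3.125 / 2 ** 3             # ~0.39 BTC after 3 more halvings
blocks_per_day = 144
subsidy_per_day = blocks_per_day * block_reward * price_2036  # ~ $5.5M

required_income = 50_000_000              # daily income needed for deterrence
fees_per_day = required_income - subsidy_per_day              # ~ $44.5M
tx_per_day = 500_000
fee_per_tx = fees_per_day / tx_per_day    # ~ $90 per transaction

assert 95_000 < price_2036 < 101_000
assert 85 < fee_per_tx < 95
```

The subsidy covers only about 11% of the required income, so fees must carry the remaining 89%, hence the roughly 30-fold fee increase.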
Average fee/transaction
Bitcoin users set the fee they pay for their transaction. In effect they are bidding in a blind auction for the limited supply of transaction slots. Miners are motivated to include high-fee transactions in their next block. If there were an infinite supply of transaction slots miners' fee income would be zero. In practice, much of the time the supply of slots exceeds demand and fees are low. At times when everyone wants to transact, such as when the "price" crashes, the average fee spikes enormously.
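The auction dynamic can be sketched as follows; the bid values are invented, and real fee selection also weighs fee-per-byte, but the shape of the outcome is the same: slack capacity keeps fee income near zero, congestion makes it spike.

```python
def clearing_fees(bids, slots):
    """Toy blind auction for block space: miners fill the limited
    slots with the highest-fee transactions first and collect the
    total fees of the transactions they include."""
    included = sorted(bids, reverse=True)[:slots]
    return sum(included)

quiet = [1, 1, 2]             # demand below capacity: everyone gets in cheap
panic = [50, 40, 30, 20, 10]  # everyone transacting at once: fees spike
assert clearing_fees(quiet, slots=4) == 4
assert clearing_fees(panic, slots=4) == 140
```

This is why fee income is such an unreliable substitute for the block subsidy: it only becomes large exactly when demand exceeds the fixed supply of slots.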

There was thus a need for a consensus mechanism that did not depend upon inflation. In 2020's Economic Limits Of Proof-of-Stake Blockchains I discussed a post entitled More (or less) economic limits of the blockchain by Joshua Gans and Neil Gandal in which they summarize their paper with the same title. The importance of this paper is that it extends the economic analysis of Budish to PoS blockchains. Their abstract reads:
Cryptocurrencies such as Bitcoin rely on a ‘proof of work’ scheme to allow nodes in the network to ‘agree’ to append a block of transactions to the blockchain, but this scheme requires real resources (a cost) from the node. This column examines an alternative consensus mechanism in the form of proof-of-stake protocols. It finds that an economically sustainable network will involve the same cost, regardless of whether it is proof of work or proof of stake. It also suggests that permissioned networks will not be able to economise on costs relative to permissionless networks.
Source
In 2022 Ethereum switched from Proof-of-Work to Proof-of-Stake, reducing its energy consumption by around 99%. This chart shows that, like Bitcoin, until the "Merge" the costs were largely defrayed by inflating the currency. After the "Merge" the blockchain has been running on transaction fees.

Shin's Analysis

Here is a summary of Shin's analysis.

Notation

  • There is a continuum of validators i.
  • For validator i ∈ [0;1], the cost of contributing to governance is ci > 0.
  • The blockchain needs at least a fraction κ̂ of the validators contributing to be secure. Shin writes:
    There are two special cases of note: κ̂ = 1 (unanimity, corresponding to full decentralisation where every validator must participate for the blockchain to function) and κ̂ = 0 which corresponds to full centralisation, where one validator has authority to update the ledger.
    κ̂ = 1 is impractical, lacking fault tolerance. κ̂ = 0 is much more practical; it is the traditional trusted intermediary.
  • If the blockchain is secure, each contributing validator earns a reward p > 0. A non-contributing validator earns zero.
  • The validators share a common cost threshold c*. If ci < c*, validator i contributes, if ci > c* validator i does not.

Argument

Each validator will want to contribute only if at least a fraction κ̂ of the other validators contribute, which poses a coordination problem. The case of particular interest is the validator with ci = c*. Shin writes:
Intuitively, even though the marginal validator may have very precise information about the common cost c*, the validator faces irreducible uncertainty about how many other validators will choose to contribute. It is this strategic uncertainty — uncertainty about others' actions — that is the central feature of the coordination problem.
This "strategic uncertainty" is similar to the attacker's uncertainty about other peers' actions that is at the heart of the defenses of the LOCKSS system in our 2003 paper Preserving peer replicas by rate-limited sampled voting.

Shin Figure 6
Because the marginal validator's ci = c*, the decision whether or not to contribute makes no difference. Shin's Figure 6 explains this graphically. Rectangle A is the loss if k < κ̂ and rectangle B is the gain if k > κ̂. Setting them equal gives:
c*κ̂ = (p - c*)(1 - κ̂)
which simplifies to:
c* = p(1 - κ̂)
Shin and Morris earlier showed that this is the unique equilibrium no matter what strategy the validators use.

Result

What this means is that successful validation depends upon the reward p being large enough so that:
p ≥ c / (1 - κ̂)
Shin writes:
Note that the required reward p explodes as κ̂ → 1. This is the central result of the paper: the more decentralised the blockchain (the higher the supermajority threshold), the higher must be the rents that accrue to validators. In the limiting case of unanimity (κ̂ = 1), no finite reward can sustain the coordination equilibrium.
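A quick numeric check (writing κ̂ as `k`) confirms both that c* = p(1 - κ̂) satisfies the marginal validator's indifference condition and that the reward needed to sustain a given cost explodes toward unanimity:

```python
# Shin's equilibrium threshold: the marginal validator is indifferent
# between contributing and not, i.e. c*·k = (p - c*)(1 - k).
def c_star(p, k):
    return p * (1 - k)

p, k = 10.0, 0.8
c = c_star(p, k)
assert abs(c * k - (p - c) * (1 - k)) < 1e-9  # indifference condition holds

# Reward needed to cover a validation cost c at threshold k: p = c/(1 - k).
def required_reward(cost, k):
    return cost / (1 - k)

# As k approaches 1 (unanimity) the required reward diverges.
for k_, expected in [(0.5, 2.0), (0.9, 10.0), (0.99, 100.0), (0.999, 1000.0)]:
    assert abs(required_reward(1.0, k_) - expected) < 1e-6
```

Reading off the last loop: raising the threshold from 0.9 to 0.999 multiplies the required validator rents a hundredfold, which is Shin's "security is expensive" result in miniature.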
Shin Figure 1
This is yet another result showing that a reasonably secure blockchain is unreasonably expensive. The complication is that, much of the time, transactions are cheap because the demand for them is low. Thus most of the time validators are not earning enough for the risks they run. But:
When many users want to transact at the same time, they bid against each other for limited block space, and fees spike — much as taxi fares surge during rush hour. Figure 1 shows how Ethereum gas fees exhibited sharp spikes during periods of network congestion, such as during surges in decentralised finance (DeFi) activity or spikes in the minting of non-fungible tokens (NFTs). These spikes are not merely a reflection of excess demand; they are the mechanism through which the blockchain extracts the rents needed to sustain validator coordination.
Note that these spikes mean that the majority of the time fees are low but the majority of transactions face high fees. It is this "user experience" that drives the fragmentation that Shin describes:
When demand for block space is high, fees rise and validators are well compensated. But high fees deter users, especially those making small or routine transactions. These users are the first to migrate to competing blockchains that offer lower fees — blockchains that can offer lower fees precisely because they have lower coordination thresholds (and hence less security). The users who remain on the more secure blockchain are those with the highest willingness to pay: institutions, large DeFi protocols, and transactions where security and censorship resistance are paramount. This sorting of users across blockchains is the essence of fragmentation.
Shin notes that:
The fragmentation argument is the flipside of blockchain's "scalability trilemma," as described by Vitalik Buterin, who posed the problem as the impossibility of attaining, simultaneously, a ledger that is decentralised, secure, and scalable.
Source
It is worth noting that Buterin's trilemma is a version for PoS of the trilemma Markus K Brunnermeier and Joseph Abadi introduced for PoW in 2018's The economics of blockchains. See The Blockchain Trilemma for details.

Shin's focus is primarily on the effects of fragmentation on stablecoins. He notes that:
Rather than converging on a single platform, stablecoin activity is scattered across many chains (Figure 4). As of late 2025, Ethereum held the majority of total stablecoin supply but was facing competition from Tron and Solana, each of which had attracted tens of billions of dollars in stablecoin balances. Each chain serves different geographies and use cases: Ethereum for institutional settlement, Tron for low-cost remittances, Solana for retail payments and DeFi activity.
This fragmentation among blockchains would not matter much if stablecoins were interoperable between them, but they are confined to the blockchain on which they were minted:
A USDC token on Ethereum is not the same as a USDC token on Solana — they exist on separate ledgers that have no native way of communicating with each other. Transferring between chains requires the use of bridges: specialised software protocols that lock tokens on one chain and issue equivalent tokens on another. These bridges introduce additional risks, including vulnerabilities in the smart contract code — bridge exploits have accounted for billions of dollars in cumulative losses — and they impose costs and delays that undermine the seamless transferability that is the hallmark of money. The result is a landscape in which stablecoins from the same issuer exist in multiple, non-fungible forms across different blockchains, fragmenting liquidity and undercutting the network effects that should be the strength of a widely adopted payment instrument.

Discussion

As I've been pointing out since 2014, very powerful economic forces mean that Decentralized Systems Aren't. So the users paying for the more expensive transactions because they believe in decentralization aren't getting what they pay for.

Source
As I wrote in 2024's It Was Ten Years Ago Today:
The insight applies to Proof Of Stake networks at two levels:
  • Block production: over the last month almost half of all blocks have been produced by beaverbuild.
  • Staking: Yueqi Yang noted that:
    Coinbase Global Inc. is already the second-largest validator ... controlling about 14% of staked Ether. The top provider, Lido, controls 31.7% of the staked tokens,
    That is 45.7% of the total staked controlled by the top two.
Source
In addition all these networks lack software diversity. For example, as I write the top two Ethereum consensus clients have nearly 70% market share, and the top two execution clients have 82% market share.
Shin writes as if more decentralization equals more security even though this doesn't happen in practice, but this isn't really a problem. What the users paying the higher fees want is more security, and they are probably getting it because they are paying higher fees. As I discussed in Sabotaging Bitcoin, the reason major blockchains like Bitcoin and Ethereum don't get attacked is not because the (short-term) rewards for an attack are less than the cost. It is rather that everyone capable of mounting an attack is making so much money that:
those who could kill the golden goose don't want to.
Shin Figure 3
In any case what matters for Shin's analysis isn't that the users actually get more security for higher fees, but that they believe they do. Like so much in the cryptocurrency world, what matters is gaslighting. But what the chart showing Ethereum losing market share shows is that security is not a concern for a typical user.

Update 22nd April 2026

Shin warned about the risks introduced by bridges:
These bridges introduce additional risks, including vulnerabilities in the smart contract code — bridge exploits have accounted for billions of dollars in cumulative losses
Sidhartha Shukla covered the latest example of what Shin was warning about in a trio of articles. First, Crypto Hack Worth $290 Million Triggers DeFi Contagion Shock:
Hackers exploited a cross-chain bridge on Saturday, draining nearly $300 million from a key piece of decentralized finance infrastructure and setting off a ripple effect across multiple crypto platforms.

The attacker siphoned about 116,500 rsETH — a token issued by Kelp DAO that represents “restaked” Ether — by targeting a bridge built using LayerZero, a system that allows different blockchains to communicate. The total losses are estimated at roughly $293 million, making it the largest DeFi exploit of 2026.
Siphoned isn't quite the right word. The attackers were able to mint tokens purporting to represent staked ETH, but the ETH didn't exist. Second, Crypto Hack Sparks $9 Billion Outflows From Top DeFi Lender:
The hackers deposited about $200 million of the tokens they stole on Aave as collateral for borrowing another cryptocurrency, according to cybersecurity researcher PeckShield. That move sparked fears among depositors about possibly worthless collateral on Aave, causing a rush for the exit, crypto portfolio manager Pratik Kala said.

All told, Aave has recorded some $9 billion of net outflows since Saturday, when news of the heist first emerged, data from industry tracker DefiLlama shows. Total value locked on the platform — a measure of assets held there — plunged by more than a third to $17.5 billion.
It is the equivalent of a bank run caused by missing or devalued collateral. Third, Hackers Behind $300 Million Crypto Theft Now Laundering Loot:
Wallets linked to the roughly $300 million hack of Kelp DAO — a decentralized-finance protocol — have begun moving funds through services designed to obscure their trail, according to blockchain security firm Cyvers. About $175 million worth of stolen assets was shifted into two new wallets and is being routed through platforms including THORChain, Umbra and BitTorrent, Cyvers said.

The activity picked up on Tuesday, shortly after Arbitrum, a network running on the Ethereum blockchain, froze around $75 million of the stolen assets. Arbitrum described the measure as an emergency action taken following input from law enforcement.
This illustrates a fundamental problem with "tokenization", which was once the next big thing. The existence of the backing depends upon the security of the mint. This must be a "smart contract" with (a) no bugs and (b) secure private key(s) owned by (c) humans immune from social engineering. What could possibly go wrong?

Update 24th April 2026

I just figured out an implication of Shin's fragmentation mechanism. Suppose you are a stablecoin user on blockchain A. You encounter a fee spike and decide to move to cheaper, less secure blockchain B. You need to pay a bridge to move your stablecoins to blockchain B, but you decide this investment in cheaper transactions is worth it. You are never going back to blockchain A, not just because it is too expensive, but also because you now have sunk costs of the move, and also because you would have to pay the bridge to move back. So Shin's mechanism implements a ratchet; users will flow from more expensive to cheaper blockchains but will not flow in the opposite direction. Thus the security afforded the median user will decay over time.
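The ratchet can be sketched as a toy simulation (my construction, not Shin's model; the willingness-to-pay values and bridge cost are arbitrary illustrative numbers):

```python
def migrate(users_on_a, fee_spike, bridge_cost):
    # A user abandons chain A when the fee spike exceeds their
    # willingness to pay by more than the one-time bridge cost;
    # having paid that sunk cost, they never migrate back.
    return [wtp for wtp in users_on_a if fee_spike - wtp <= bridge_cost]

users = [5, 10, 20, 40, 80]   # willingness to pay for chain A's security
for spike in (15, 30, 60):    # successive congestion episodes
    users = migrate(users, spike, bridge_cost=2)
print(users)  # → [80]: only the highest-willingness users remain
```

Each spike strips off the lowest-willingness tier and no flow ever runs the other way, so chain A's user base monotonically shrinks toward the institutions Shin describes.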

Open Refine: Repeating Groups of Operations (the Easy Way) / Library | Ruth Kitchin Tillman

I’m currently working on a project which involves a lot of spreadsheets exported from our ILS using the same process. To get started with each in OpenRefine, I load the spreadsheet as a project and apply the same 7 manipulations, including the steps for blanking down within records I previously described. These are OpenRefine-specific, not something I could do during export or before adding to OpenRefine.

After doing the same 7 operations on the 20th spreadsheet, I started wondering about reproducibility. Surely, there’s a way to do the same set of actions without all those clicks?

This question led me on a 3-hour journey learning how to manipulate projects with the API… only to realize in the process that there was a much, much simpler way. Today I’ll be sharing the simpler way, but I will write a post about the API, since when I brought it up in a group chat with plenty of regular OpenRefine users, no one else was familiar with it either.

Short and Sweet: Reusing Operations in OpenRefine

OpenRefine logs all of your actions in the “Undo/Redo” tab on the left sidebar. Despite visiting this area many times to undo and redo my actions, I had never experimented with the “Extract” or “Apply” buttons above the list of operations performed. They’d faded into the background.

Yeah, it turns out those are really useful.

When you click Extract, you’re presented with some options: you can select which functions you want to export, see and copy the code, or export it to a text file. Apply is the inverse: just paste in the code or upload an exported text file and run the result.

For a lot of power users, this info may be enough to get you oriented and experimenting. But if you’re interested in more (we’re still on the simple stuff) or want a walkthrough with screenshots and sample code, carry on!

Operational Background

For some reference, I’m going to describe the data I have and the operations I was performing. I’ll also include some sample data for you to practice with below.

My spreadsheets have the following columns:

  • Catalog Key
  • Title
  • 856
  • Barcode
  • Item Type
  • Library
  • Home Location

They are based around the Item represented in the second half, so the first half just repeats bibliographic data for each entry. As described in the previous post I needed to get this data into the shape of an OpenRefine record by “blanking down” the Catalog Key. Then, for easier visual processing, I did the same to the Title and 856 fields.

Multi-value 856s are represented by multi-line entries in the cell, so I can blank down the field without losing any data. So I wanted to apply cell transformations to both those fields to make them unique within a record. Then blank that down. Then re-transform to set them back to the original data. Each of those actions involves navigating a menu. If you try to do this manually, you’ll see why I started wondering about duplication.
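For concreteness, the JSON that Extract produces for a couple of blanking-down steps looks roughly like this (a hand-written sketch, not verbatim output; the exact fields can vary by OpenRefine version, and the column names are from my data):

```json
[
  {
    "op": "core/blank-down",
    "engineConfig": { "facets": [], "mode": "record-based" },
    "columnName": "Catalog Key",
    "description": "Blank down cells in column Catalog Key"
  },
  {
    "op": "core/blank-down",
    "engineConfig": { "facets": [], "mode": "record-based" },
    "columnName": "Title",
    "description": "Blank down cells in column Title"
  }
]
```

Because operations reference columns by name, this is exactly why the column names must match across projects.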

Longer Reuse Walkthrough With Screenshots

Until you’re very familiar with the syntax of operations or have some to reuse and update, the first step will be creating a project in OpenRefine. Operations are performed based on column name, so you will need to be sure that the column names in this data source are the same as the ones you plan to reuse it on.

Next, perform the series of actions you plan to repeat on other projects. While you can also perform actions you don’t plan to export and reuse, the operations shouldn’t depend on anything you’re not exporting. If you need to perform a manual action in the middle, like something which needs visual review, you’ll need to export the sets of steps on either side of it into two phases.

Once you’ve completed the actions you wish to repeat, click the “Undo/Redo” subsection of the left sidebar.

Click “Extract.”

Screenshot of the left sidebar with the word Extract up at the top circled in red.

In the window that pops up (modal), select which functions you want to export. The preview on the side will update with the relevant code. Use the Export button to generate a text file and save it somewhere appropriate.1

Screenshot of the popup window showing JSON of the functions I’ve previously performed

The easiest way to test how these functions work is to simply use the Undo to bring your project back to the last step before the sequence you exported.

Screenshot of rolled back steps grayed out in the sidebar

Now click the “Apply” button in that same area.

screencap of the Apply button circled

That opens a popup where you can either upload the same text file you exported or paste raw JSON.

screencap of popup with browse circled

Load or paste to see the operations:

screencap of popup with an actual set of operations in it and Run operations circled

Click “Run operations”

Screenshot of the OpenRefine project, with the sidebar showing that all operations have been run. There are 28 records in record view.

Voila!

As long as the other projects for which you’re using it have the same column names and data that can be parsed the same way by these operations, you should be able to save time in the future. In my case, I am running the same report against different sets of record IDs in the system and getting an item-focused export, so my data should always look as consistent as any data from the ILS does2.

Much of the time, my OpenRefine projects are different enough that knowing this wouldn’t have been useful. But with an ILS migration on the horizon, I suspect I’m going to be doing many OR-based analyses of different sets of records for the same things. I’m hoping this can make it a little less tedious.

Sample Data

If you’d like to test on a sample project, I’ve made one for you!

Simply:

  1. import the CSV into OpenRefine with its standard defaults,
  2. go to the Undo/Redo, where you’ll see “0. Create Project”,
  3. click Apply,
  4. Browse and upload the history.json file,
  5. Run Operations

and you should see something that looks like this:

Screenshot of the OpenRefine project, with the sidebar showing that all operations have been run. There are 28 records broken into a record view.


  1. You could also copy the text and save it somewhere, like Joplin, if that fits your workflows better. ↩︎

  2. This is a joke. But in this case, the operations are consistent and simple enough that even an 856 with no subfield coding and no indicators won’t cause a problem here. ↩︎

Reflection: My eighth year at GitLab and working on Product Operations reporting to the Chief of Staff to CPMO / Cynthia Ng

It’s about 2 months early for my 8 year work anniversary reflection. However, my work situation has changed, and it feels like the right time to be reflecting on the past (almost a) year. It’s interesting that 8 years sounds like a long time, but since I’ve changed divisions a couple of times, it also … Continue reading "Reflection: My eighth year at GitLab and working on Product Operations reporting to the Chief of Staff to CPMO"

2026-02-03: Detecting and reconstructing trustworthy edit histories using web archives / Web Science and Digital Libraries (WS-DL) Group at Old Dominion University

 

The CDC ACIP vaccine recommendations page includes a human-readable (left) and machine-readable (right) last updated date. 

In early 2025, US federal websites were undergoing rapid changes with the new presidential administration. In February 2025, we analyzed the rate of page deletions across administrations, and further analyzed the content of changed CDC pages in summer 2025. One surprising thing that we found was that while CDC pages contained a “last updated” date, many of the dates were incorrect: pages contained silent, unannounced updates, generally related to relevant executive orders.

How does trustworthiness change over time?

Silent, unannounced changes that do not match the last modified date of a webpage reduce the trustworthiness of the page. We can study three different change markers: the HTTP last-modified property, meta tags that contain last updated information, and text on the page that is viewable to the user containing update information. These markers are listed in order from most machine readable with standard syntax to most human readable with flexible syntax. The CDC webpages do not have the HTTP last-modified property, but have the other two markers. While the server should automatically update the HTTP last-modified property, the other two properties could have varying levels of automation, from being linked to updates in a content management system to being manually updated by a human. A page with content edits that are always reflected in its last updated markers is trustworthy. A page that has content edits that do not match its last updated markers is less trustworthy. Since we can use web archives to find these changes, we can analyze the trustworthiness gaps of the last updated dates, how often the gaps occur, and if that rate changes over time. 
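The core comparison, content changed but the declared date did not, can be sketched as follows (a hypothetical sketch: the meta tag name and sample HTML are illustrative, not CDC's actual markup, and a real pipeline would fetch archived captures):

```python
import hashlib
import re

# Flag a silent update by comparing a capture's declared last-updated
# meta tag against a fingerprint of the whole capture.
def declared_date(html, tag="last_updated"):
    m = re.search(rf'<meta name="[^"]*{re.escape(tag)}[^"]*" content="([^"]+)"', html)
    return m.group(1) if m else None

def is_silent_update(old_html, new_html):
    """True when the capture's content changed but its declared date did not."""
    changed = (hashlib.sha256(old_html.encode()).hexdigest()
               != hashlib.sha256(new_html.encode()).hexdigest())
    return changed and declared_date(old_html) == declared_date(new_html)

old = '<meta name="last_updated" content="2025-01-08"><p>original wording</p>'
new = '<meta name="last_updated" content="2025-01-08"><p>quietly edited</p>'
print(is_silent_update(old, new))  # → True: edited without a new date
```

The same check generalizes to the other two markers by swapping in the HTTP last-modified header or the human-readable date text.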

In February 2025, we analyzed the changing rate at which webpages were deleted on US government websites. We found that different presidential administrations have completely different webpage deletion rates, with higher deletion rates correlating with Republican presidential administrations since 2008. Similarly, Tsoukaladelis et al. analyzed silent, unannounced changes on news article webpages in 2022 and detected a correlation between the Allsides media bias score (both left and right) and amount of silent changes by publisher.

Future work in this area will (1) determine a baseline for silent changes on government websites by administration, (2) determine baselines for the news publishers identified by Tsoukaladelis et al. to examine how the silent update rate changes over time, and (3) identify a third type of webpage exhibiting this phenomenon to analyze the change over time as well.

What features contribute to trustworthiness, and how can web archives currently be used to further support or refute trustworthiness?

Figure 2: Change Presentation Continuum. CC BY-NC-SA, adapted from Wikimedia Commons

Figure 2 shows our change presentation continuum. Each website can be categorized based on both its initial properties presented on the live web, as well as from additional captures available on web archives. Table 1 shows an example of each type of change on the continuum along with a description of each type of change presentation.

Change presentation continuum, from most trustworthy to least:

  • Past versions: All past versions of the webpage are available to view, giving the user the highest level of trust. Example: Wikipedia, where all past versions of every page are available.
  • Updates summary: The most recent version of the webpage is available to view, along with a dated list of updates. Example: https://web.archive.org/web/20040128055949/http://immortalised.net/lupdates.html, a list of changes with the dates they were made.
  • Update summary: The most recent version of the webpage is available to view, along with a dated description of the most recent update. Example: https://www.merriam-webster.com/dictionary/quixotic, a last update date and a sentence describing the change.
  • Update date correct: The webpage contains a date representing the most recent update, but no update summary. Example: https://www.cdc.gov/winter-weather/safety/index.html, a last update date but no information on what was updated; a web archive can be used to verify the correctness of the date.
  • Update date incorrect: The webpage has been changed more recently than the update date, negatively affecting trustworthiness.
  • Copyright date: The webpage contains a copyright date, which could be used to infer a most recent change date. Example: https://www.fairfaxcounty.gov/topics/copyright-privacy, where the only date on the webpage is the copyright date; since it differs from the current year, it suggests no changes since then.
  • No date: There is no date information anywhere on the webpage, giving the user no information about any changes that have occurred. Example: https://info.cern.ch/hypertext/WWW/TheProject.html.

Table 1: Change presentation continuum examples based on live web presentation

As shown in the two examples below, web archives can be used to either improve or deteriorate a page’s trustworthiness rating. This means the rating of any site can change more towards either of the extreme ends of the continuum, by using web archives to provide additional captures.

Example 1: more trustworthy: Rakuten Viber Messenger’s Terms of Service includes both a last updated date and a summary for the most recent update. Based on these live web characteristics, it would be labeled “Update summary” on the continuum. The Wayback Machine contains a few captures each month of this webpage. After examining captures in 2025, we can conclude this page is updated a few times a year. Using the additional information from the captures in web archives, we could increase the trustworthiness level from “Update summary” to “Updates summary” for this webpage.

The current change presentation wording (update summary level) is “Last updated: October 21, 2025. We’ve recently updated our terms and policies. View the summary of changes here.” Using web archives, this could be expanded to include more change information, specifically an updates summary list, which is a higher level of trustworthiness.

  • Updated October 21, 2025 (diff). View the summary of changes here.
  • Updated May 22, 2025 (diff). View the summary of changes here.
  • Updated March 24, 2025 (diff). View the summary of changes here.

Example 2: less trustworthy: CDC webpages contain a last updated date, as shown in Figure 1. The initial level for these webpages based on the live web would be “Update date correct.” In our work, "Coming Back Differently: An Exploratory Case Study of Near Death Experiences of Webpages," we showed that the last updated dates on CDC webpages were inaccurate: words on the pages not in compliance with early 2025 executive orders were removed without updating the last updated date. Therefore, by using web archives, the trustworthiness of these webpages has decreased to the “Update date incorrect” category.

Figure 3: This CDC webpage experienced changes between January 24 and February 10, as shown in the Wayback Machine Changes Tool, but the last updated meta tag date for both pages is January 8. From Frew et al., Coming Back Differently, Figure 4.

How could web archives be further used to detect and reconstruct trustworthy edit histories?

In the continuum shown in Figure 2, the most trustworthy level is categorized as past versions, and an example website meeting this level is Wikipedia. In order to guide our work of using web archives to amplify the trustworthiness (or lack thereof) of a page’s change presentation, we surveyed the features of the edit histories shown to users on Wikipedia and how researchers used those features to guide their work. We examined peer-reviewed publications from 2018 - 2025 that contained the phrase “wikipedia edit history.” 

  • Authors: Researchers used authorship information to see which articles the same author edited, to parse edit conflicts on a page, to count edits, to track the frequency of an author's edits over time, and to verify the trustworthiness of individual authors; they also used IP addresses as a proxy for author data for anonymous editors.
  • Edit properties: Researchers used page edit counts, the “minor” flag, and the time of the edit in their work.
  • Filtering: Researchers filtered the data (every edit on Wikipedia since inception) by single article, article subject, time of edit, and tag.
  • View: Researchers viewed the data as a graph, tuple, time series, or as text (for natural language processing or large language model training).
  • Researchers also used the edit history data to follow redirects and to identify vandalism.
  • Data: about half of researchers used a derived, cleaned data set and the other half used either Wikipedia dumps or other raw downloads.

Clearly, Wikipedia edit history is extremely useful to researchers who are looking for examples of a variety of edit types. So, why would researchers need web archives when so much Wikipedia edit history data is already available to them? The answer is that not every researcher would need web archives, but some would. It depends on what type of changes the researcher needs examples of for their work. There are three pathways: cases where Wikipedia edit history contains information not found in web archives, such as when author information is needed; cases where either Wikipedia or a web archive could suit the needs of the researcher, in which case the cleaned and semantic data from Wikipedia would probably be more suitable; and finally, cases where the data in a web archive would be more suitable than Wikipedia. Faruqui et al., and others, have shown that the language on Wikipedia is different from that in other contexts, so this is a good starting point for coming up with additional web archive-preferred uses.

Conclusion

We found that websites are communicating inaccurate last updated dates, which affects their trustworthiness. We and other researchers have found that the change rates of silent updates changes over time. We enumerated levels across a change presentation continuum on the live web, and showed how web archives can be used to provide further evidence for or against a webpage’s trustworthiness in this manner. We conducted a literature review of Wikipedia edit history use cases, and used that to start informing how web archives can be used to detect and reconstruct edit histories in a way that will be useful to researchers.

References

  1. Frew et al. Establishing a Baseline by Administration for the Takedown of US Government Webpages using Web Archives.
  2. Frew et al. Coming Back Differently: An Exploratory Case Study of Near Death Experiences of Webpages.
  3. Tsoukaladelis et al. The Times They Are A-Changin’: Characterizing Post-Publication Changes to Online News.
  4. Faruqui et al. WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling Language and Discourse.
-Lesley

2026-04-21: Was Snopes.com making silent updates to its articles before 2021? [Rating: TRUE] / Web Science and Digital Libraries (WS-DL) Group at Old Dominion University

Figure 1: Snopes.com has no public captures on the Internet Archive’s Wayback Machine prior to October 2021.

Snopes.com is a well-known fact checking website. It has been rated trustworthy by multiple rating organizations [1]. However, in 2021, BuzzFeed broke a story that, in fact, Snopes was plagued by plagiarized articles, and that authors were being told to plagiarize and then silently make updates to their articles after publication in the name of SEO.

One of the byproducts of the BuzzFeed investigation is that there are no publicly available captures of Snopes.com in the Wayback Machine prior to 2021. Snopes’s official statement regarding this is that the founder, who was named responsible for the plagiarism, had a policy against Wayback captures; but, now that he was removed from duty, the board was going to change the policy going forwards. In fact, having no captures before 2021 means that the archives available give a more trustworthy historical view of Snopes.com than exists in reality. This kind of manipulation is a form of data craft that misuses web archives [2]. Regardless, users do not have a way to access snopes.com captures before 2021 in the Wayback Machine.

Finding archived copies of Snopes.com before 2021

The Wayback Machine is not the only web archive. In addition, the Wayback Machine accepts donated crawls from other organizations. One of these organizations is Common Crawl, which started crawling in 2008. It seems almost certain that Common Crawl would include snopes.com captures, though probably not with enough frequency to show the silent edits on individual article pages. Searching in a random month (July 2016) within the plagiarism window shows that there are over 12,000 captures of snopes.com pages and resources during that month. Only 453 of these pages were archived a single time during this month, so future analysis with this approach may be possible.
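Distinguishing single-capture from multi-capture pages is a simple frequency count over the index results. A sketch (the Common Crawl CDX index returns one JSON object per line; the sample lines and their field values below are illustrative, not real index output):

```python
import json
from collections import Counter

# Count per-URL capture frequency from CDX-style index lines.
sample = '''\
{"url": "http://www.snopes.com/politics/example.asp", "timestamp": "20160705010203"}
{"url": "http://www.snopes.com/politics/example.asp", "timestamp": "20160719040506"}
{"url": "http://www.snopes.com/science/other.asp", "timestamp": "20160712070809"}
'''

counts = Counter(json.loads(line)["url"] for line in sample.splitlines())
single = [url for url, n in counts.items() if n == 1]
print(single)  # pages captured only once cannot show within-month edits
```

Pages with two or more captures in the window are the candidates for detecting silent edits.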

Figure 2: In July 2016, there were over 12,000 captures of snopes.com pages via Common Crawl, with most having 2 or more captures. Source: index.commoncrawl.org


Another organization that has captures of snopes.com before 2021 is Archive-It. Specifically, one of the very first public captures of Snopes.com in the Wayback Machine belongs to an Archive-It organization which turns out to be Mark Graham, the Director of the Wayback Machine. Mark’s personal collections have been instrumental in analyses about news websites [3] and websites in other countries [4]. His collection includes over 300 Snopes articles archived frequently starting around 2018.

We also used MemGator to investigate if there are pre-2021 captures of snopes.com in additional archives; we found that there are over 13,000 captures of the main page pre-2021 in archives including the Icelandic Web Archive, the Australian Web Archive, Archive Today, the Portuguese Web Archive, the Government of Canada Web Archive, along with Archive-It. Over 12,000 of the captures are on Archive-It.
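Counting pre-2021 captures from an aggregated TimeMap is a matter of parsing the memento datetimes. A sketch over the link-format TimeMaps that Memento aggregators like MemGator return (the two-entry TimeMap below is illustrative, not real archive data):

```python
import re
from datetime import datetime

# Count mementos earlier than a cutoff in a link-format TimeMap.
timemap = '''\
<http://archive.example/web/20161101/http://snopes.com/>; rel="memento"; datetime="Tue, 01 Nov 2016 00:00:00 GMT",
<http://archive.example/web/20220301/http://snopes.com/>; rel="memento"; datetime="Tue, 01 Mar 2022 00:00:00 GMT"
'''

def mementos_before(timemap, cutoff):
    dates = [datetime.strptime(d, "%a, %d %b %Y %H:%M:%S %Z")
             for d in re.findall(r'datetime="([^"]+)"', timemap)]
    return sum(1 for d in dates if d < cutoff)

print(mementos_before(timemap, datetime(2021, 1, 1)))  # → 1
```

Run against the real aggregated TimeMap for snopes.com, this is the kind of tally that produced the 13,000-plus pre-2021 count above.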

Looking for Silent Updates

Previously, we found silent updates occurring on federal websites in February 2025 [5]. Other researchers also found silent updates on news websites in 2022 correlated with political bias of the news organization [6]. We are also researching how websites announce their changes and how trustworthy the announcements are [7]. Since Snopes.com is regarded as a trustworthy website, we would not have expected silent updates prior to reading the BuzzFeed article.

First, let’s take a moment to observe the current change presentation of Snopes.com. Snopes.com includes a last updated date, and a list of all changes. This would mean that the live web version of Snopes is presenting itself as “Updates summary,” the second most trustworthy level, according to the change presentation gradient [7].

Figure 3: Example of updates summary presentation on a snopes.com article.


However, using Mark Graham’s captures, we have found a number of pages with silent updates.

Example 1: Missing last updated information

The news story “White House Press Secretary Blasted for Sharing Infowars Video to Bar Reporter” states it was published on November 8, 2018. However, a quote from the article states, “On 11 November 2018, White House Counselor Kellyanne Conway admitted…” Based on the published date of November 8, this doesn’t seem possible. Examining a capture from Archive-It, this article used to include a manual last-updated date of November 12. It doesn’t appear Snopes included an updates summary in 2018 like it does in 2026. Sometimes, examining the machine-readable headers gives additional information; in this case, however, the x-archive-orig-last-modified header is the same as the capture time. The Snopes sitemap for that month indicates the page has been updated as recently as March 2025.

Figure 4: This snopes.com article originally stated it was updated, as shown above in an Archive-It capture. On the live web, as shown below, there is no mention of an update on this article anymore.


Example 2: Removal of AP articles

Snopes.com used to include exact text versions of Associated Press articles. Now, however, they redirect to the actual AP article. Given the plagiarism history, this is not surprising. A capture on Archive-It shows the direct text of an AP article on snopes.com. The article link on snopes.com today redirects to the AP article rather than duplicating the text.

Example 3: Silent Update - Word-switching

The most interesting silent update we found was an article in which many words had been replaced with synonyms after the fact. The article, “Did Obama Admin Build Cages That House Immigrant Children at U.S.-Mexico Border?”, was published July 2, 2019 and includes no update date. However, the capture from July 30 contains completely different text, and searching the web for this original text turns up multiple citations of it. By August 9, the text had changed to the version that matches the live web.
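Word-switching of this kind can be surfaced mechanically by diffing two captures word by word. Here is a rough sketch using Python's difflib; the function is ours, not part of the study, and a real comparison would first strip markup and boilerplate from each capture's HTML:

```python
import difflib

def word_replacements(old_text, new_text):
    """Return (old_run, new_run) pairs where a later version of a page
    replaced a run of words from an earlier capture -- the
    'word-switching' pattern described above."""
    old_words = old_text.split()
    new_words = new_text.split()
    matcher = difflib.SequenceMatcher(a=old_words, b=new_words, autojunk=False)
    return [
        (" ".join(old_words[i1:i2]), " ".join(new_words[j1:j2]))
        for tag, i1, i2, j1, j2 in matcher.get_opcodes()
        if tag == "replace"
    ]
```

For example, comparing "the photos show detained children" against "the images show detained minors" yields the pairs ("photos", "images") and ("children", "minors"), the synonym-swap signature.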

Figure 5: Differences between the July 2, 2019 and live web snopes.com article “Did Obama Admin Build Cages That House Immigrant Children at U.S.-Mexico Border?” show that the article was edited with a word-switching pattern. 


Example 4: Updates declaring SEO

David Mikkelson’s 2007 article on Mr. Rogers had an SEO update in 2022, which was disclosed in its updates summary. The original title is widely referenced, as determined by searching the web. If you search the web for “Updated SEO” on snopes.com, you can find other articles where the site has noted updating the title as well. One reason Snopes has been rated more trustworthy and less biased by the organization Media Bias / Fact Check is that its titles are questions, which are interpreted as more neutral than titles that support one conclusion over another.

Figure 6: Google search for “Updated SEO” on snopes.com showing multiple articles with titles changed.


Outlook

Originally, we thought that a trustworthy website like Snopes.com would not have silent updates. However, after reading the BuzzFeed article about the rampant plagiarism on the site, we decided to investigate. We found evidence of silent updates using web archives beyond the Wayback Machine. This reduces Snopes.com’s trustworthiness level. Future work could determine if their plagiarism problem has truly ended, or if silent updates are still occurring post-2021.

References

[1] Yang et al. Are Fact-Checking Tools Reliable? An Evaluation of Google Fact Check. https://arxiv.org/abs/2402.13244v1

[2] Acker et al. The weaponization of web archives: Data craft and COVID-19 publics. https://misinforeview.hks.harvard.edu/article/the-weaponization-of-web-archives-data-craft-and-covid-19-publics/ 

[3] Weigle et al. Right HTML, Wrong JSON: Challenges in Replaying Archived Webpages Built with Client-Side Rendering. https://doi.org/10.1109/JCDL57899.2023.00022

[4] Ben-David et al. The Internet Archive and the socio-technical construction of historical facts.  https://doi.org/10.1080/24701475.2018.1455412

[5] Frew et al. Coming Back Differently: An Exploratory Case Study of Near Death Experiences of Webpages. https://digitalcommons.odu.edu/computerscience_fac_pubs/404/ 

[6] Tsoukaladelis et al. The Times They Are A-Changin’: Characterizing Post-Publication Changes to Online News. https://doi.org/10.1109/SP54263.2024.00033

[7] Lesley Frew. Detecting and reconstructing trustworthy edit histories using web archives. https://ws-dl.blogspot.com/2026/02/2026-02-03-detecting-and-reconstructing.html 


--Lesley

Neoliberal Time and the Promise of Slow Librarianship / Meredith Farkas

I’ve meant to post about this for a while, but one of the book chapters I wrote back in 2023-24 during my sabbatical has finally come out (the other is estimated for early 2027, <sigh…>). The book Slow Librarianship: Reflections and Practices edited by the wonderful Ashley Rosener has finally been published by Library Juice Press! I deposited a copy of my chapter, “Neoliberal Time and the Promise of Slow Librarianship” in Knowledge Works Commons so that anyone can freely access it. I’d love to hear your thoughts about it! I know a lot of people writing about slow librarianship have cited my blog posts because there isn’t a lot out there on slow librarianship and I’m not a big scholarly publishing type, so hopefully this will be useful to those folks looking for a more traditionally-published source to cite.

I’m also really honored to be keynoting the Conference on Academic Library Management (CALM) next month, talking about slow management in a fast world. Ironically, I had thought of submitting a proposal to give a talk on slow librarianship at the conference but talked myself out of it because I’m not currently a manager. It’s amazing that after 20 years in this profession, I still suffer from impostor syndrome. CALM has consistently been my favorite library conference to attend, and it’s a conference that has embraced slow practices in everything they do, so I’m especially honored to have been asked to speak. The conference is free to attend and my talk will be recorded, so I’ll be sure to share it once it’s up on the web.

The Role of a New Machine / Dan Cohen

A photograph of half of a beige and blue computer keyboard. (Marcin Wichary, Data General keyboard, CC BY-NC 4.0)

Stop me if you’ve heard this one before:

A crack team of hardware and software engineers, inspired by breakthroughs in computer science and electrical engineering, is driven to work 18-hour days, seven days a week, on a revolutionary new system. The system’s capabilities and speed will usher in a new era, one that will bring transformative computing to every workplace. The long hours are necessary: the team knows that every major computer company sees what they see on the horizon, and they too are working around the clock to take advantage of powerful new chips and innovative information architectures.

The team is almost entirely men, men whose affect and social skills cluster in a rather narrow band, although they are led by a charismatic figure who knows how to persuade both computer engineers and capitalists. This is a helpful skill. Money, big money, is flowing into the sector; soon it will overflow. Engineers are constantly poached by rival companies. Hundreds of new competitors arise to build variations on the same system, or to write software or build hardware that can take advantage of this next wave of computing power. Some just want to repackage what the computer vendors produce, or act as consultants to the companies that adopt these new machines.

The team solves one problem after the next, day and night, until the machine is complete. They focus, overfocus, block out the other concerns of the world. Their wives are ignored, as are the kids. The work is too important.

* * *

Such is the story of Data General and the group that built the computer system code-named “Eagle,” which would be successfully marketed as the Eclipse MV/8000. My summary above comes from Tracy Kidder's wonderful book The Soul of a New Machine, published in 1981. It’s about the rise of minicomputers, a now-amusing name for machines the size of double-wide refrigerators, which were considered a major advance during the 1970s, when gargantuan IBM mainframes still roamed the earth and were possessed only by the largest companies and bureaucracies. Minicomputers used new CPUs and memory that made computing accessible to a much wider range of applications and locations, and were relatively cheap. They flourished.

The Soul of a New Machine has much to recommend it — it won the Pulitzer Prize for its propulsive narrative and crisp explanations of complex technology — but I’m writing about it now, following Kidder’s recent passing, because the book helpfully dislodges you from your presentist perspective and asks, “Look what happened before — sound familiar?”

* * *

A half-century after it was published, The Soul of a New Machine does a better job challenging AI hype than most current criticism. (Also, there are probably writers working on books about AI who are shaking their fists at Kidder for beating them to that memorable title.) It's hard to read The Soul of a New Machine in 2026 without wondering whether all this AI hype is really so new. Is AI truly more revolutionary than a previous wave of computer technology that offered, for the first time, to put screens on every desk of every company? The Data General team helped to bring about a transition not from existing software and hardware to incredibly intelligent software and hardware, or from powerful computers to superpowerful computers, but literally from paper to digital files and high-speed processing. Now that is a transition. The millions of companies that could not afford an IBM mainframe could afford a Data General Eclipse or a DEC VAX system or a minicomputer from another competitor. They could, for the first time, give every employee the power of computers. Is having Microsoft Copilot help your accountants with their spreadsheets more revolutionary than moving those accountants from physical spreadsheets to electronic ones?

Amazingly, the final chapters of The Soul of a New Machine tackle exactly the same profound questions we are struggling with today regarding the impact of artificial intelligence, and Kidder records Data General engineers expressing concerns that sound straight out of the mouths of engineers working at OpenAI or Anthropic. The team’s excitement upon the completion of the Eagle leads to reflections and bigger worries than beating their competitors. What if the Pentagon wants to use the Eagle for war or other destructive purposes? Should the team object or build back doors into the machine? What will the new computer system mean for employment, since it will replace many functions of work with software, and do those tasks faster than anyone can imagine, in nanoseconds? What if their work culminates in true artificial intelligence, and the machines take over and destroy us?

Spending so much time with the team, Kidder begins to ponder these questions himself — and has his own unsettling encounter with the technology. An engineer introduces Kidder to Adventure, one of the engrossing text games of early computing. He is sucked into its digital world, playing nonstop for hours. The computer suddenly feels alive, intelligent.

But Kidder pulls back.

It was the time of night when the odd feeling of not being quite in focus comes and goes, and all things are mysterious. I resisted this feeling. It seemed worth remembering that Adventure is just a program, a series of step-by-step commands stored in electrical code inside the computer.

How can the machine perform its tricks? The general answer lies in the fact that computers can follow conditional instructions.

Kidder turns to one of the Data General engineers, Carl Alsing:

I asked Alsing how he felt about the question — twenty years old now and really unresolved — of whether or not it's theoretically possible to imbue a computer with intelligence — to create in a machine, as they say, artificial intelligence.

Alsing stepped around the question. “Artificial intelligence takes you away from your own trip. What you want to do is look at the wheels of the machine and if you like them, have fun.”

Alsing’s focus on the role of the new machine in your life or work, rather than its purported soul, instantly dispels the mythology surrounding this emerging technology. Even after an all-nighter building a revolutionary computer, Alsing is lucid about what he is making: a tool that might be helpful for some people and some purposes, but not for others.

* * *

In the 1980s, most of the minicomputer companies, launched with such excitement in the late 1970s, failed. Data General was acquired for a fraction of the billions it was once worth. The minicomputer, however, was broadly adopted, was transformative, became routine, and then was surpassed by a new new machine, the personal computer.

Later, Data General’s domain name, DG.com, was sold to a chain of discount stores, Dollar General.


Angels in America / David Rosenthal

I have wanted to write this post for a long time, but I was waiting until I could visit the invaluable Royal National Theatre Archive to check my memory of their early productions. It doesn't look like I'll be in London any time soon, and I have the time now to write a long post about a long play, so here goes.

Growing up in London meant that theatre has always been an important part of my life. I have seen a great many plays including some legendary performances and magnificent productions, such as Royal National Theatre's 2014 King Lear. One of my particular theatrical interests is long-form plays. Highlights of this genre have included:
[Table of plays ("Play" / "Text") not reproduced here.]
But there is one such play that is very special to me, Tony Kushner's 7+ hour Angels in America. It is clearly among the greatest plays of the 20th century. I was there at the beginning, and I have seen many productions since. Below the fold I recount my history with this masterpiece.

Introduction

Anyone interested in this play should read both the text of the two halves, Millennium Approaches and Perestroika, and Isaac Butler and Dan Kois' magisterial and comprehensive oral history, The World Only Spins Forward: The Ascent of Angels in America. Because my story starts in 1991, I have used both to refresh my memory. Below, the many quotes without links are from Butler and Kois, to whom I owe a debt of gratitude. I also viewed the National Theatre's 2017 production on the National Theatre at Home streaming service.

When I moved to the Bay Area in 1985, it had been a decade since I'd lived in London and I was starved of theater. So I went a bit nuts and over the next few years subscribed to American Conservatory Theater, Berkeley Repertory Theatre, the Magic Theatre and the Eureka Theatre.

Eureka Theatre (1991)

The story of the play starts with a $50,000 grant from the National Endowment for the Arts for Tony Kushner to write a "two-hour play, with songs" for "five gay men and an angel" that the Eureka would produce. In 1989 the play was developed and in 1990 workshopped at the Mark Taper in Los Angeles.
KUSHNER: I wrote the part of Harper for Lorri Holt, Hannah for Abigail Van Alyn, Sigrid [Wurmschmidt] was the Angel. And Jeff King, I wrote the part of Joe for him. And that took care of the Eureka company. My first year at NYU, I became friends with Stephen Spinella. I thought then, as I think now, that he was one of the most remarkable actors I'd ever met, and I loved writing for him, and so I wrote Prior Walter for him.
As a subscriber to the Eureka I had responded to their call for donations to stage Angels in America in their next season, so I was anxious to see it. By the time it arrived at the Eureka it had evolved into two long plays with five gay men, two women, an angel and no songs.

I believe I saw Millennium Approaches the weekend after it opened, and Perestroika the following weekend. The cast was different from that at the Mark Taper. Rick Frank (Roy) and Sigrid Wurmschmidt (Angel) had both died, and Lorri Holt had a new baby. It was:
  • Hannah: Kathleen Chalfant
  • Roy: John Bellucci
  • Joe: Michael Scott Ryan
  • Harper: Anne Darragh
  • Belize: Harry Waters Jr.
  • Louis: Michael Ornstein
  • Prior: Stephen Spinella
  • Angel: Ellen McLaughlin
The Eureka was staging Millennium Approaches, a four-hour play full of scene changes and magic, with almost no money. So another abiding memory is that they got this enormous impact with an incredibly stripped-down production:
[Ellen] McLAUGHLIN: Not that many people saw the Eureka version of it, but it was very important to those who did. I think there was a kind of beauty to the hammer and nails and spit and Scotch tape quality of that first version. It was moving because we had nothing.
In some ways it reminded me of the San Francisco Mime Troupe's annual free shows in parks around the Bay Area. The same quality of conspiring with the audience's imagination:
KATHLEEN CHALFANT: It was in some ways the most beautiful version of the play, and the most Poor Theater version of the play.
[Dennis] HARVEY: They basically had a giant shower curtain in front of the stage. For scene transitions they would just whip the shower curtain across, one actor at the front and one at the back, and when they got to the other side it would be a new scene.
KUSHNER: To this day no one has ever done better with the magic. David [Esbjornson] is incredibly clever designing and building gizmos, so every magic trick in the play, David figured out a way to do it. There was no money or anything. He built all this shit — it was incredible.
DEBORAH PEIFER: That sense of amazement of a book popping up out of the floor in flames, all done with lighting.
KUSHNER: He did it all with bungee cords.
My most abiding memory of that first part was walking out of the theater to my car after midnight realizing I had seen the birth of a masterpiece. Theater critic Deborah Peifer sums up my reaction:
PEIFER: I have never in my life seen a situation in which people did not leave the theater during the intermission unless they had to. And I'm not talking about Can I get a cup of coffee? but Can I make it through the next act without a bathroom break? People could not bear to be out of that theater while this thing was happening.
To call this a brilliantly realized, profoundly funny, wickedly thoughtful piece of theater is to discover the severe limitations of language. I find myself wanting to say simply, it's more than I ever imagined. This is an experience in the theater you will remember for your whole life.
Deborah Peifer, Bay Area Reporter, May 30 1991
Perestroika was even more stripped-down, little more than a staged reading:
KUSHNER: Originally, every act of the five acts of Perestroika started with a clown scene set in the Soviet Union. These ended up being the first five scenes of my play Slavs! [1994].
ESBJORNSON: I used the five Bolsheviks as curtain raisers. I made the actors hold the scripts in hand while they moved around. And then at one point in each act, they laid down their scripts and acted out what I considered to be the central point of that act.
It wasn't just that there were five acts, but each of them was rather long. Butler and Kois' description of the first night matches my later recollection of how long it was:
[Brian] THORSTENSON: It got to the scene between Hannah and Prior where Prior's in the hospital and Prior says "I've always depended on the kindness of strangers." They finished the scene and the audience erupted into this ... applause ... I think it lasted a good five minutes. Kathleen and Stephen looked out at the audience, like, What is going on?
McLAUGHLIN: I came out late into the evening as the Angel wearing the wings and the whole get-up, stood in front of the curtain and said, Act 5: Heaven, I'm in Heaven.
And the woman in the front row said "Act FIVE?! Oh my GOD! DO YOU KNOW WHAT TIME IT IS?!"
And I said "No". Because I honestly had no idea. It's not like I was wearing a watch.
And she said "It's MIDNIGHT, for God's sake! What's going on with the playwright? ACT FIVE? How long is it?"
And I said, "We've never done it so I don't know, maybe forty-five minutes?" And she said, "The buses aren't even running anymore! How are we supposed to get HOME?" And she turns to the rest of the audience and says, "Are we going to stay?" And people sort of nodded and mumbled and she says "Well, I guess we'll stay, but I mean really ..."
And then she said, "But that's the end, right? There isn't an Act 6 or something?"
And I said, "Well, there's an epilogue."
And she said, "Oh my GOD, is he NUTS? An EPILOGUE? How long is THAT?"
And she said, "Well, apparently we HAVE TO STAY, but this is RIDICULOUS. TELL HIM HE HAS TO CUT!"
And then I said "Well, the longer we keep talking here ..."
Millennium Approaches was a real play and, despite being over four hours, had the audience in the palm of its hand with rapt attention. Perestroika was really different. Because it was clearly a work-in-progress, the audience felt that they were part of the process of creation, willing the show into existence.

Sometimes at the Berkeley Rep's Ground Floor residency program for new work the teams show their work — an example was Julia Cho's Aubergine, which I saw both as a work-in-progress at the Ground Floor and the next year in the Rep's season. Even as works-in-progress these shows are way shorter and way more polished than this Perestroika, and there was none of that show's unique, intense audience involvement. Of course, as the Angel notes, this was heightened by the show's length:
McLAUGHLIN: And then after the show, as the actors were basically limping to the dressing rooms, Tony, looking sort of glassy-eyed, came over to us and said, "You know, a really interesting thing happens after an audience has been in the theater for a really long time, they start to lose their bearings and become very malleable. They, like, forget what they think they believe about things and what they do for a living and their names and where they live and ..."
And we were like, "Yeah, Tony, and you really have to cut it."
It was magnificent but it killed its host. Butler and Kois quote the Eureka's business manager:
ANDY HOLTZ: That was the end of the Eureka Theatre as a producing company. The play that cemented the Eureka's place in the history of American theater was also the play that was too epic for such a small company. It's, like, the mom died giving birth to this amazing baby.

Royal National Theatre (1992)

Perhaps the most astonishing thing in the play's whole history is that, apart from a workshop at Juilliard, the next production of Millennium Approaches was at the National Theatre in London. At the time, the National Theatre's productions on their two big stages, the Olivier and the Lyttelton, were pretty conservative, as befits the national flagship. But they also had the Cottesloe (now the Dorfman). It is essentially an empty cube, with tiers of seats on two sides. It can be configured in many different ways. For example, for Sing Yer Heart Out For The Lads most of the floor was arranged with tables and chairs, with the audience there being some of the patrons of the pub.

The National Theatre has a history of more adventurous productions in the Cottesloe; it opened with Ken Campbell's Science Fiction Theatre of Liverpool's Illuminatus Trilogy featuring drugs, satanic rituals, blasphemy and nudity. The trilogy later moved to The Roundhouse, which is where I saw this marathon. My main memory was that between the plays meals were served in the lobby. The actors ate with the audience, staying in character.

Nevertheless, Richard Eyre, the artistic director, took a huge risk:
RICHARD EYRE: Gordon Davidson sent me the play and said, "I think you'd be interested in this". By page 2, I'd decided I wanted to do it.
He chose Declan Donnellan of the Cheek by Jowl theatre company to direct it, and Nick Ormerod, Donnellan's partner, to design it. I'd seen several Cheek by Jowl productions at the National Theatre. They did classical plays, so Kushner took them to New York:
DONNELLAN: Sometimes when you see images of New York, you think Oh, it's not authentic New York. It's performed New York, from movies and television. But when you get to New York, you find that New York is performing itself. Everybody is ready for their close-up.
ORMEROD: In delis and diners and whatever, they act like New Yorkers they've seen in the movies.
The cast was:
  • Hannah: Rosemary Martin
  • Roy: Henry Goodman
  • Joe: Nick Reding
  • Harper: Felicity Montagu
  • Belize: Joseph Mydell
  • Louis: Marcus D'Amico
  • Prior: Sean Chapman
  • Angel: Nancy Crane
NT's 1993 Angel
David Milling was the stage manager:
DAVID MILLING: The staging was incredibly simple. It was a shiny black floor and a giant American flag as the backdrop. And then in the center of the flag there were small doors for pieces of scenery to run through. Only at the end of the play did the flag split, half going left, half going right, and the Angel tracked through in a cloud of smoke.
I'm sure that the first thing everyone who saw the show remembers is the shock at the end of the Angel bursting through the flag with a huge noise, lots of smoke and a blinding light then announcing:
ANGEL: Greetings, Prophet;
 The Great Work begins;
 The Messenger has arrived.
(Blackout.)
But the start was almost equally memorable:
JON MATTHEWS: It opened with this image, there was nothing on the stage, and the furniture is on the sides, and they're sitting along the sides, and there was this balloon globe, and it had this light inside it, and they all put their hands on it, and then the play began.
Donnellan said "My production was very much about the maintenance of tension", and I remember the production as a headlong charge forward:
KUSHNER: Caryl Churchill saw one of the early performances and came up to Declan afterwards and said, "Well congratulations, you've solved the short, choppy scene problem." When you do a play with short scenes, the scene ends, the audience has to disengage from where they've just been, and open themselves up to the next thing. That's hard to do because it involves stopping and starting over and over again. What Declan did is he dovetailed the ends of almost every scene in Millennium. He took the penultimate and the ultimate line, separated them, took the first line of the next scene and put it between the two. So you'd already be in the next scene. He wove them all together.
Donnellan could do this because the staging was so sparse that it needed no time for scene changes. The actors carried in whatever props were needed for the next scene, and carried off those from the preceding scene.

It is important to understand both the risk the National Theatre was taking, as an institution supported by the government, and why it was so important, especially to the theatre community:
GARSIDE: The politics of it hit on the right moment. We were having our side of the conservative 1980s with Thatcher and the special relationship with Reagan. There was a kind of resentment of America, a dislike of their politics and how it intersected with our politics. And then there was an audience who hadn't seen a play about gay men and AIDS on a large scale, for whom the play was a revelation.

The big legal fight in gay rights at the time was against something called Section 28, which was in effect between 1988 and 2003 and barred the "promotion" of homosexuality.

Royal National Theatre (1993)

The next year both parts opened on Broadway and the National Theatre revived Millennium Approaches and added Perestroika in repertory. For the first time, I saw both parts in one day.
MYDELL: So we opened at the National, and you could see Part 1 and Part 2 in one day. That was seven and a half hours. People did it! We did it, and people came to see it! It didn't seem like — it felt like it was an event more than a play.
The cast was:
  • Louis: Jason Isaacs
  • Belize: Joseph Mydell
  • Angel: Nancy Crane
  • Joe: Daniel Craig
  • Hannah: Susan Engel
  • Harper: Clare Holman
  • Prior: Stephen Dillane
  • Roy: David Schofield
Part 1 was familiar, but it was the first time I'd seen Part 2 staged. First, seeing them as a seven and a half hour marathon was a revelation. Millennium ends with the mother of all cliff-hangers as the Angel arrives. Resuming the story after a quick meal is completely different from resuming it a week later. Second, Perestroika was very different from my memory of the Eureka. Kushner had done massive rewrites after the Eureka and the 1992 workshop at the Taper in LA:
KUSHNER: I know I haven't got it right yet. I'm not saying I don't think it's good — I think it's always been a good play, Perestroika — but it's never been a finished play and it never ever will be completely finished.
Many people compare the two parts and rate Perestroika as inferior, citing that it's a lot more difficult and that Kushner keeps changing it. But this is likely because they have seen it as two separate plays, which is a mistake. I'm pretty sure that people like me who have seen it in a marathon see it as a single play that changes once the Angel arrives. Change is one of its major themes, after all. And it is very Kushner-esque to have the Angel, whose message is to stop change, be the cause of change in the structure of the play as she is in Prior.

Next time I'm in London I plan to visit the Archive and expand these two sections.

American Conservatory Theater, San Francisco (1994)

ACT Program
I saw ACT's production of both halves, I think on successive weekends, but I remember very little about it. It was directed by Mark Wing-Davey, who played the two-headed Galactic President, Zaphod Beeblebrox, in the radio (my favorite) and TV versions (forget it) of The Hitchhiker's Guide to the Galaxy, written by Douglas Adams. I'd been impressed by his production of Caryl Churchill's Mad Forest at Berkeley Rep.

The cast was:
  • Hannah: Cristine McMurdo-Wallis
  • Roy: Peter Zapp
  • Joe: Steven Culp
  • Harper: Julia Gibson
  • Belize: Gregory Wallace
  • Louis: Ben Shenkman
  • Prior: Garret Dillahunt
  • Angel: Lise Bruneau
Dennis Harvey's review noted that:
the director throws action all over the Marines Memorial stage. Kate Edmunds’ set design is dominated by rolling scaffold bridges and graph-patterned backdrops. Their severity suggests a societal infrastructure stripped bare. Huge curtains (one a rather too-obvious American flag), one hydraulic ramp, fully exposed flying rig for the “Angel” (Lise Bruneau), fog, film projection, etc. add to the sensory overload.
This may be one reason it didn't stick in my memory. After the stripped-down productions at the Eureka, basically a warehouse, and in the National Theatre's flexible Cottesloe space, the traditional proscenium stage, the more fleshed-out, much flashier staging, and the somewhat distant seating would have been jarring. Indeed, soon after this I stopped subscribing to ACT, only visiting for their excellent productions of Tom Stoppard's plays.

Royal National Theatre (2017)

By dint of waking up very early and standing in line for a long time I got day seats for a marathon of Marianne Elliott's sold-out, extraordinarily impressive production. It was a complete contrast to the earlier version. The cast was:
  • Hannah: Susan Brown
  • Roy: Nathan Lane
  • Joe: Russell Tovey
  • Harper: Denise Gough
  • Belize: Nathan Stewart-Jarrett
  • Louis: James McArdle
  • Prior: Andrew Garfield
  • Angel: Amanda Lawrence
Joe and Hannah
Elliott's staging was a fascinating way to use the National Theatre's huge resources and the Lyttelton's vast proscenium stage to simulate the original's sparse aesthetic. She used multiple revolves and mostly skeletal scenery that flowed in and out to create small patches of light in the darkness to show, for example, the phone call between Joe and Hannah. Occasionally, as for Harper and Mr. Lies in Antarctica, the whole stage was lit but bare. There was only one scene with the kind of lavish scenery one often sees in the Lyttelton. It was the Council Room of the Hall of the Continental Principalities. Kushner's stage directions for this scene fill multiple pages, and the set needs to contrast Heaven with Earth, so this choice made sense.

One of the most striking and memorable things in Elliott's production was her vision for the Angel:
ELLIOTT: Every image you see of this play involves a lovely angel in a white dress on a wire. I didn't want that.
Ben Power, the National's deputy artistic director, explains the Angel's entrance:
POWER: Prior's standing on his bed, as in other productions. The lights are changing. The sound of the approaching object is getting louder and louder. It's extremely loud in the auditorium. The lights change around him and he says, "Very Steven Spielberg".

Everyone's eyes are on him and they're also going up to the flies. We know what's about to happen. They're going to fly in a woman with wings. As we're looking, it's all building to a point of climax. At that point of climax there is a sense of a drop and a full blackout, which is very disorienting.

The lights come up. Everyone's eyes are looking up, looking for what object is coming in through the broken roof. Andrew's looking up there. And there's nothing there. As his eyeline comes down, there, strewn on the floor, among the rubble, is this thing. It's a sort of creature mess in browns and blacks. And then it rises from the floor — it's clearly been dropped from a great height — and coalesces into one body.
Lyra and Armored Bears
The National Theatre has resources that few other theaters do. One is a long history and deep expertise in stage puppetry. This reached a peak with His Dark Materials because in the play's world:
humans' souls naturally exist outside of their bodies in the form of sentient "dæmons" in animal form which accompany, aid, and comfort their humans.
Each actor was accompanied by a puppet of their dæmon, manipulated by one or more puppeteers in head-to-toe black. It didn't take long for audience members to stop seeing them. At the end of the second part, all 28 actors came out for their curtain call. And then suddenly the puppeteers all pulled off their black headdresses, and you saw there were more of them than there were actors. And then the backdrop vanished and you saw all the way to the rear wall of the enormous Olivier stage. Standing there were all the stagehands. There were more of them than the actors and puppeteers combined. It was an amazing display of the vast resources the National Theatre can command for a major production.

Angel and Prior
Elliott's Angel was accompanied by a set of black-clad "shadows" like the dæmons'. Except when she and Prior were wrestling, the Angel wasn't on a wire but was carried by the shadows. They would scurry around on all fours, sometimes converging on her to lift her up or sweep her massive wings, and sometimes heading off to the back of the set.

It wasn't just the physical resources the National Theatre devoted to the production; it was also the time:
KUSHNER: I've never seen a director work as long or as hard on a production. A year of preparation. And you can see that degree — the depth of involvement, it's reflected in the design and in many of the choices she's made.
ELLIOTT: We spent about a year and a half on the design. Not every day, but we touched in a lot. And I wished I had longer!
...
ELLIOTT: We had eleven weeks, longer than anyone else has had.
KUSHNER: In a way it's the first adequate rehearsal period we've had for these plays.
When the production transferred to Broadway, it won the Tony for Best Revival of a Play, Andrew Garfield won Best Actor, and Nathan Lane won Best Featured Actor. Both performances richly deserved their awards.

Berkeley Repertory Theatre (2018)

Program
Berkeley Rep's production was directed by Tony Taccone, who co-directed the Mark Taper workshop with Oskar Eustis, and starred Stephen Spinella, for whom Prior was written and whom I had seen at the Eureka, as Roy. So I have seen him play both of the play's AIDS victims; his portrayal of sickness is remarkable, as was the contrast between how his Prior and his Roy fought the disease.

The cast was:
  • Hannah: Carmen Roman
  • Roy: Stephen Spinella
  • Joe: Danny Binstock
  • Harper: Bethany Jillard
  • Belize: Caldwell Tidicue
  • Louis: Benjamin T. Ismail
  • Prior: Randy Harrison
  • Angel: Francesca Faridany, Lisa Ramirez
Again, I saw both parts in a single day, starting as I recall at 1pm and ending at 11pm. It was astonishing how well the Rep's production, with a regional theater's resources, stood up to the National Theatre's massively resourced one. This huge play can use huge resources, but it does not need them.

Angel and Hannah
Taccone's Angel was clearly influenced by Elliott's, but lacked the shadows. Despite this, the Angel's flying, always the most difficult thing to stage, was very well done.

For the first time I got to see the "Roy in Hell" scene, which almost every production omits. It isn't in the published text. Omitting it means Roy's last appearance is when his ghost encounters Joe, a meeting between the play's two doomed characters. Including it, with Roy bargaining for something to do, is a sort of tribute to his drive, and contrasts with Joe's spinelessness.

The Berkeley Rep's program had an interview with Spinella, who was initially reluctant to play Roy:
I got a text from Kushner saying — all I really remember is one word — "vital". That Roy is incredibly vital. I had already gone back and read all the Roy scenes, and it really hit me. That's the fun of playing this guy who is dying. He is fighting it tooth and nail. It's this knockdown, drag-out fight with this person who has this incredible will to live. It's different than Prior, who in a way is running away from his own death. Roy is just trying to get his ducks in a row and he's fighting the disease. He loses constantly, yet he keeps coming back. He is unrelenting, and that appeals to me. I'm not going to be in that hospital bed until I am ready to die. The hospital bed is going to have to grab me and pull me into it.
This could have been a quote from Nathan Lane.

The production garnered glowing reviews from, among others, the LA Times and the SF Chronicle. For me, on my sixth viewing, seeing the play return to the Bay Area over a quarter-century after it started here, in a single day, with such a grown-up staging, was a delight.

Weekly Bookmarks / Ed Summers

These are some things I’ve wandered across on the web this week.

🔖 Why I ask AI coders to ask me questions

Next, I will tell the AI to ask me 10 to 20 questions about architectural choices that I haven’t thought about, that it needs to pin down before writing code. Invariably, the questions that I’m asked are detailed, insightful, and pivotal to the design. Sometimes the AI will give me choices for answers; sometimes it will ask the question and make a specific recommendation.

I answer the AI’s questions, after which I ask the AI if it has any follow-up questions for me. After a few rounds of this back-and-forth, I ask the AI to create high-level documentation for different aspects of the solution. I then review the documentation. This documentation lets me review the entire design and make corrections, but it also serves as a record that this or another AI will be able to use in the future when it comes back to do software maintenance.

🔖 Gottstein Shoes

Sustainable felt slippers by Gottstein • Made of undyed wool • Comfortable felt slippers • Manufactured in Tyrol, Austria • Feel nature on your feet!

🔖 Baabuk Shoes

Baabuk wool footwear made responsibly and sustainably. Designed in Switzerland, crafted in Nepal and Portugal, built for comfort. The most comfortable wool sneakers, wool slippers, and boots.

🔖 Mythos and Cybersecurity

In the short term, we need something simpler: greater transparency and information sharing with the broader community. This doesn’t necessarily mean making powerful models like Claude Mythos widely available. Rather, it means sharing as much data and information as possible, so that we can collectively make informed decisions.

We need globally co-ordinated frameworks for independent auditing, mandatory disclosure of aggregate performance metrics and funded access for academic and civil-society researchers.

This has implications for national security, personal safety and corporate competitiveness. Any technology that can find thousands of exploitable flaws in the systems we all depend on should not be governed solely by the internal judgment of its creators, however well intentioned.

🔖 MNT Pocket Reform

Pocket-sized, repairable, and fully open source: Our iconic MNT Pocket Reform mini laptop is as versatile as you need it to be. Whether you’re traveling, attending classes, programming your tools at the cafe, or working at a data center—Pocket Reform fits nearly any space. And if you want it stationary, hook it up to a big monitor and play a video game or browse the web. Or get creative and use Debian-compatible software such as LibreOffice, FreeCAD, or Krita to express yourself.

🔖 A Philosophy of Software Design / John Ousterhout

Writing computer software is one of the purest creative activities in the history of the human race. Programmers aren’t bound by practical limitations such as the laws of physics; we can create exciting virtual worlds with behaviors that could never exist in the real world. Programming doesn’t require great physical skill or coordination, like ballet or basketball. All programming requires is a creative mind and the ability to organize your thoughts. If you can visualize a system, you can probably implement it in a computer program.

This means that the greatest limitation in writing software is our ability to understand the systems we are creating. As a program evolves and acquires more features, it becomes complicated, with subtle dependencies between its components. Over time, complexity accumulates, and it becomes harder and harder for programmers to keep all of the relevant factors in their minds as they modify the system. This slows down development and leads to bugs, which slow development even more and add to its cost. Complexity increases inevitably over the life of any program. The larger the program, and the more people that work on it, the more difficult it is to manage complexity.

🔖 FAR OFF TOWN - DUNEDIN TO NASHVILLE

Napier film-maker Bridget Sutherland made the 82-minute doco about one of Dunedin’s favourite musical sons, David Kilgour, and his trip to Nashville to make the album ‘Frozen Orange’ with the band Lambchop. Kilgour and Lambchop are close friends and have often toured together. They both record for Merge, and the merging of Kilgour with Nashville and its homogeneous Music City product was an ideal subject of intrigue for a film.

But Kilgour did not record with normal mainstream Nashville, he went to the city’s indie underground, and worked with people he knew and liked, and who definitely liked him, like Lambchop’s Mark Nevers, who produced the record in his own studio. Nashville’s front window is a mixture of Old Country - the never-quite-made-its who play on the hour every hour in the bars for tips - and far more lucratively, New Country, which is manicured market-aimed singers who are young, slick, attractive, bland, radio-friendly, and often extremely successful.

🔖 A eulogy for Vim

Vim is important to me. I’m using it to write the words you’re reading right now. In fact, almost every word I have ever committed to posterity, through this blog, in my code, all of the docs I’ve written, emails I’ve sent, and more, almost all of it has passed through Vim.

My relationship with the software is intimate, almost as if it were an extra limb. I don’t think about what I’m doing when I use it. All of Vim’s modes and keybindings are deeply ingrained in my muscle memory. Using it just feels like my thoughts flowing from my head, into my fingers, into a Vim-shaped extension of my body, and out into the world. The unique and profound nature of my relationship with this software is not lost on me.

🔖 easyaligner: Forced Alignment Made Easy

easyaligner is a forced alignment library for aligning text transcripts with audio. It is designed with a focus on ease of use, flexibility, and performance. The library can be used for a variety of applications, including:

  1. Aligning e-texts with audiobook recordings to create interactive reading experiences (see the interactive demo below).
  2. Aligning podcast transcripts to enable features like chapter navigation and keyword search.
  3. Aligning protocols and recordings of parliamentary debates for research and accessibility purposes.
  4. Fixing misaligned subtitles in videos, or creating new subtitles from transcripts.
  5. Creating large-scale speech recognition and speech synthesis datasets for AI model training.
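Most of these applications come down to consuming word-level timestamps. As an illustration of use case 4, here is a small, self-contained sketch that turns word-level alignments, represented as (word, start, end) triples in seconds, into SRT subtitle cues. The triple format and the grouping logic are illustrative assumptions, not easyaligner's actual API:

```python
def srt_timestamp(t: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    ms = round(t * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def words_to_srt(words, max_words: int = 7) -> str:
    """Group (word, start, end) triples into numbered SRT cues."""
    cues = []
    for i in range(0, len(words), max_words):
        chunk = words[i:i + max_words]
        start, end = chunk[0][1], chunk[-1][2]
        text = " ".join(w for w, _, _ in chunk)
        cues.append(f"{len(cues) + 1}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(cues)

print(words_to_srt([("Hello", 0.0, 0.4), ("world", 0.5, 0.9)]))
# 1
# 00:00:00,000 --> 00:0000 ->  see test; prints one cue: "Hello world"
```

The same triples could equally drive keyword search (use case 2) by mapping query hits to their start times.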

🔖 The Midnight Sky

The Midnight Sky is a 2020 American science fiction film directed by George Clooney, based on the 2016 novel Good Morning, Midnight by Lily Brooks-Dalton, with a script by Mark L. Smith. Clooney plays the leading role, an aging scientist who must venture across the frigid Arctic Circle to warn off a returning interplanetary spaceship following a global catastrophe on Earth. Felicity Jones, David Oyelowo, Tiffany Boone, Demián Bichir, Kyle Chandler, and Caoilinn Springall also star.

🔖 Quo Vadis, Crawlers? Progress and what’s next on safeguarding our infrastructure

Readers, contributors, responsible bots, and abusive bots all share the same access points to our websites and infrastructure. We have therefore orchestrated our work with maximum care to minimize impact on our reading and editing community, with the ultimate goal of not impeding any person from accessing our projects. As a result of this work, we’re currently blocking or throttling about 25% of all automated requests that are coming from crawlers that don’t adhere to our policies (up to billions of requests per day). As we continue to improve our detection mechanisms, we expect this number to increase. Earlier this month, we also began rolling out global rate limits for API traffic, with a second rollout phase planned for April 2026.

🔖 Wikimedia Attribution Framework

The Wikimedia Attribution Framework provides guidelines that data reusers can follow to ensure that sources remain clear, recognizable, and consistent in external contexts. Attribution is essential for fair acknowledgment and active awareness of Wikimedia’s community-driven content, and it’s also a key factor in the continued growth and sustainability of the free knowledge ecosystem. The framework exists to:

🔖 HTTP Message Signatures Directory

HTTP-MESSAGE-SIGNATURES allow a signer to generate a signature over an HTTP message, and a verifier to validate it. The specification assumes verifiers have prior knowledge of signers’ key material, requiring out-of-band key distribution mechanisms. This creates deployment friction and limits the ability to dynamically verify signatures from previously unknown signers.

This document defines:

  1. A standardized key directory format based on JWKS for publishing HTTP Message Signatures keys.
  2. A well-known URI location for discovering these key directories.
  3. A new HTTP header field enabling in-band key directory location discovery.
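For concreteness, the key directory in item 1 is a JWKS document (RFC 7517). A minimal sketch of what such a directory might contain, assuming an Ed25519 signing key; the exact required fields and the well-known path are defined by the draft and not reproduced here:

```json
{
  "keys": [
    {
      "kty": "OKP",
      "crv": "Ed25519",
      "kid": "example-signer-key",
      "use": "sig",
      "x": "<base64url-encoded Ed25519 public key>"
    }
  ]
}
```

A verifier that fetches this document can match the `kid` from an incoming signature's parameters to a key without any out-of-band exchange.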

🔖 Web Bot Auth - Cloudflare

Web Bot Auth is an authentication method that leverages cryptographic signatures in HTTP messages to verify that a request comes from an automated bot. Web Bot Auth is used as a verification method for verified bots and signed agents.

🔖 Fragments: April 14 / Martin Fowler

Understanding how to think about a problem domain by building abstractions (models) is my favorite part of programming. I love it because I think it’s what gives me a deeper understanding of a problem domain, and because once I find a good set of abstractions, I get a buzz from the way they make difficulties melt away, allowing me to achieve much more functionality with fewer lines of code. Cantrill worries that AI is so good at writing code, we risk losing that virtue, something that’s reinforced by brogrammers bragging about how they produce thirty-seven thousand lines of code a day.

🔖 Claude Code Running Claude Code in 4-Second Disposable VMs

Running Claude Code with full permissions inside a Docker container is a terrible idea. I did it anyway for about a week, then built something better.

Anthropic has an internal platform — people have been calling it Antspace since it got reverse-engineered from the Claude Code source — that runs AI coding tasks in isolated environments. It’s part of a vertical stack they’re building internally: intent goes in, code comes out, and the agent never touches the host machine.

I wanted that. Not the whole platform-as-a-service thing, just the core idea: give Claude Code a prompt, let it run with zero permission restrictions, stream the output back, grab any files it created, and destroy everything when it’s done. On a single Linux box sitting in my office.

🔖 On recognizing the handiwork of AI

As AI-generated images and texts proliferate, people have developed techniques for identifying them using clues like misshapen hands in images or distinctive words in text. This commentary situates these emerging practices within what Carlo Ginzburg called the “conjectural paradigm”: a mode of knowing that links contemporary AI detection to older traditions of medical symptomatology, art historical connoisseurship, and detective work. Yet unlike the stable or slowly evolving clues of earlier conjectural practices, the signifiers of AI involvement are rapidly shifting. This instability has consequences not only for how texts are read but also for how they are written. Authors now navigate a landscape of suspicion where their words may be misrecognized as machine generated. Rather than resolving into stable literacies, our efforts to recognize AI’s handiwork reveal the deeper uncertainties of authorship and interpretation.

🔖 We Found a Ticking Time Bomb in macOS TCP Networking - It Detonates After Exactly 49 Days

Every Mac has a hidden expiration date. After exactly 49 days, 17 hours, 2 minutes, and 47 seconds of continuous uptime, a 32-bit unsigned integer overflow in Apple’s XNU kernel freezes the internal TCP timestamp clock. Once frozen, TIME_WAIT connections never expire, ephemeral ports slowly exhaust, and eventually no new TCP connections can be established at all. ICMP (ping) keeps working. Everything else dies. The only fix most people know is a reboot. We discovered this bug on our iMessage service monitoring fleet, reproduced it live on two machines, and traced the root cause to a single comparison in the XNU kernel source.
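That oddly precise figure is the wrap-around period of a 32-bit unsigned counter ticking in milliseconds (an assumption, but one the numbers confirm): 2^32 ms comes to exactly 49 days, 17 hours, 2 minutes, 47 seconds, and 296 ms. A quick check of the arithmetic:

```python
# A 32-bit unsigned millisecond counter wraps after 2**32 ms.
WRAP_MS = 2**32  # 4_294_967_296 ms

total_s, ms = divmod(WRAP_MS, 1000)
days, rem = divmod(total_s, 86_400)   # seconds per day
hours, rem = divmod(rem, 3_600)
minutes, seconds = divmod(rem, 60)

print(f"{days}d {hours}h {minutes}m {seconds}s +{ms}ms")
# 49d 17h 2m 47s +296ms
```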

🔖 Stanford Claude Code

Instructions for setting up claude-code to talk to the Stanford AI Playground API.

🔖 The peril of laziness lost

Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters. As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don’t want to waste our (human!) time on the consequences of clunky ones. The best engineering is always borne of constraints, and the constraint of our time places limits on the cognitive load of the system that we’re willing to accept. This is what drives us to make the system simpler, despite its essential complexity.

🔖 GPD Pocket

Most successful business people have a MacBook or Surface, because these machines not only have a stylish and gorgeous appearance but also are light and thin. Yet their disadvantage is that they are not truly portable. We believe that future laptops should be not only thin but also small. GPD Pocket is such a product. It is not only gorgeous, ultra-light, and ultra-thin like a MacBook, but also very small and can be carried in a pocket at any time, like a cell phone!

DLF Webinar: Content Authenticity and Provenance in the Age of Artificial Intelligence / Digital Library Federation

On Thursday, April 15, 2026, DLF hosted Joshua Sternfeld, Independent Scholar, and Kate Murray, Digital Projects Coordinator at the Library of Congress, for a discussion about content authenticity, provenance, and the future of trust in libraries, archives, and museums, building on their report, Content Authenticity and Provenance in the Age of Artificial Intelligence: A Call-to-Action for the LAMs Community. The conversation explored the challenges and opportunities presented by generative AI for cultural memory institutions and considered how LAM professionals can apply the report’s call-to-action pillars to emerging real-world examples across the field.

Recording | Slides (Call to Action Presentation Slides, Claude-Generated Report: Jenny Lind Reviews)

Content Authenticity and Provenance in the Age of Artificial Intelligence: A Call-to-Action for the LAMs Community (Report)

The post DLF Webinar: Content Authenticity and Provenance in the Age of Artificial Intelligence appeared first on DLF.

Another Way of Knowing: Resisting Eugenic Propaganda Through Community Archiving / In the Library, With the Lead Pipe

In Brief: How do information workers resist the creation of archival “deathworlds”? With rising eugenicist rhetoric in the United States, sites of cultural memory face devastating impacts. These consequences are particularly felt by Disabled and multiply-marginalized communities. This article draws on Disability Justice principles and necropolitical framings to investigate how processes of erasure can be interrupted through active collaboration and critical reevaluation of power-sharing. By supporting alternative forms of knowledge sharing and honoring the lived experience of historically marginalized communities, especially those who have faced forced institutionalization, we hope to craft alternative methodologies that center community involvement and self-determination.

By Jess Petrazzuoli-Gallagher and Ashten Vassar-Cain

Introduction

As early-career community archivists based in the United States, we are entering the archival profession at a time of fracture, lack of funding, communication breakdown, and heightened awareness of our field’s interconnectedness with policy and power. Recently, most of the news centering on Libraries, Archives, and Museums (LAMs) has revolved around fear and censorship—a new list of banned books, another exhibit removal, and persistent retaliation from the Trump administration in the form of resource cuts and smear campaigns targeting institutions that refuse to bend to its will. Like many of our colleagues, we feel an overwhelming sense of urgency, guided by the weight of unanswered questions.

Reflecting on our positionality, both authors of this article are Queer and Disabled. We come to this work from research backgrounds, studying the American Eugenics movement and the use of medicalization to justify violence against marginalized bodies. Our work places us between multiple streams of knowledge. On the one hand, we are students in an ALA-accredited library program, meaning we have the support and formalized training that come with proximity to an institution. However, most of our professional work exists outside of academia. As community archivists and activists, our work is inherently relational. It is an iterative series of mistakes, reinvention, and stories shared around tables. Our work also carries with it the lived experience of navigating ableism and violence in our daily lives, including our own experiences of institutionalization and abuse. In our practice, we reject prioritizing knowledge gained in a classroom over knowledge gained through active listening, experience, and engagement. We recognize that Disabled people have had their authority as “knowers” and knowledge producers challenged (Fricker 2007). Susan Wendell describes how “disabled people’s knowledge is dismissed as trivial, complaining, mundane (or bizarre)” (Wendell 120). Our ability to physically access knowledge is often similarly disregarded, as inaccessible buildings further restrict the ability of people with disabilities to participate as full contributors to knowledge. Because of this, we turn toward “cripistemology,” coined by Merri Lisa Johnson and Robert McRuer (2014), as an alternative to academic forms of knowledge.

Before we begin examining the process of eugenic violence and the ways it is continually recreated in our political and professional lives, we want to acknowledge that confronting this violence is far from impersonal. It is often a painful journey, especially for practitioners and community members who have been historically targeted, and those who continue to suffer harm. We struggle with the popular notion that the challenges we are currently experiencing are unprecedented. Rather, we are seeing a reinvigorated commitment to eugenicist rhetoric and policy, which have always been part of the American landscape and are enshrined in our social politics. This article examines our role as community memory workers in bearing witness and interrupting harm.

Our work is guided by Disability Justice principles, scholarship, and activism that moves beyond “rights-based” framings and toward collective action and rejection of all forms of oppression, domination, and exploitation. Because we see Disability as a dynamic axis of politics and identity, we choose to capitalize it in this article when it is used to refer to an identity category rather than as a descriptor. As Leah Lakshmi Piepzna-Samarasinha states in Care Work: Dreaming Disability Justice, “I don’t want to be fixed, if being fixed means being bleached of memory, untaught by what I have learned through this miracle of surviving. My survivorhood is not an individual problem. I want the communion of all of us who have survived, and the knowledge” (Piepzna-Samarasinha 239).

Our positionality informs our ways of documenting memory. It has also led us to imagine and create interventions that challenge power structures within our own archival practice. Through our work with the Pennhurst Memorial & Preservation Alliance’s (PMPA) Community Archive and Special Collections, we are undertaking efforts to document narratives from the self-advocacy movement led by individuals with Intellectual and Developmental Disabilities in a way that prioritizes original voice, honors lived experience, and expands access.

The Modern Eugenics Movement

In the late nineteenth century and early twentieth century, eugenics was seen as a “scientific” approach to control human genetics by limiting reproduction, resulting in forced sterilization, segregation, and systemic abuse and neglect. Eugenics and its connection to scientific racism were used to justify mistreatment on the basis of “perceived impairment.” Eugenics is one way that white supremacy violently asserts itself, claiming a scientific basis for settler colonialism and the expansion of empire. Sociologist Irving Kenneth Zola explained how medical authorities participate in enforcing state violence. In his 1972 essay “Medicine as an Institution of Social Control,” he writes that

the labels health and illness are remarkable ‘depoliticizers’ of an issue. […] By the very acceptance of a specific behaviour as an ‘illness’ and the definition of illness as an undesirable state, the issue becomes not whether to deal with a particular problem, but how and when. Thus the debate over homosexuality, drugs or abortion becomes focused on the degree of sickness attached to the phenomenon in question or the extent of the health risk involved. And the more principled, more perplexing, or even moral issue, of what freedom should an individual have over his or her own body is shunted aside. (500)

The understanding that medicine can be used as a tool to promote settler colonialist aims is sometimes referred to as Medical Imperialism (Schreier). The eugenics projects enacted by the United States resulted in over 70,000 forced sterilizations in the 20th century, though this number is likely larger, as many sterilizations have been performed without the informed consent of the individuals who had been subjected to the procedures. This did not happen overnight. It began with extensive campaigns by members of various scientific communities that captured the interest of policy makers, physicians, educators, and the American public. Eugenics bounced amongst America’s intellectual circles, ingraining itself in medical practice and scholarship. Detailed by Edwin Black in War Against the Weak, the saying “the taint is in the blood” became a prominent precept of the early eugenicists, who claimed that eradication of “undesirable traits” would result in a collectively superior “race,” and thus, collective peace and safety (Black 25). Scholar Marius Turda emphasizes a similar sentiment that capitalizes on the self-styled scientific theory of human betterment and planned breeding that eugenicists embraced. In posing biological purity as the nation’s responsibility, “eugenicists dissolved aspects of the private sphere, by scrutinizing and working to curtail reproductive, individual, gender, religious and indigenous rights. The boundary between the private and public spheres was blurred by the idea of public responsibility for the nation and the race, which came to dominate both” (2471). Such widespread influence on the “biological deterioration” of the human race captured politicians, doctors, scientists, lawmakers, and educators around the globe, and inspired horrific campaigns of genocidal violence.

The American obsession with surveillance and censorship weaponizes an idealized nuclear family, just as proponents of the early American Eugenics movement did. While libraries, archives, and museums contend with the removal of exhibits, Disability communities fear removal from public life, citing escalations that target community living protections.

In the latest iteration of American Eugenics, the Trump administration has waged a multipronged attack against Disabled Americans. In addition to the dismantling of Diversity, Equity, Inclusion, and Access (DEIA) initiatives, the article “The Trump Administration’s War on Disability” from the Center for American Progress outlines an accelerated erosion of civil rights for Disabled Americans, including:

  • Weakening the government’s ability to enforce civil rights protections and investigate discrimination cases
  • Threatening access to benefits, affordable healthcare, and resources such as social services and community-based living supports
  • Divesting from public health infrastructure amidst an ongoing pandemic that disproportionately affects Disabled people
  • Decreasing employment protections related to disability
  • Attacking public education services offered to Disabled children
  • Robert F. Kennedy Jr.’s determination to “fight against Autism,” and proposed surveillance of Autistic individuals.

Actions taken by the Trump administration are most prominently seen in the July 24th executive order titled “Ending Crime and Disorder on American Streets,” which calls on the Attorney General and Health and Human Services Secretary to

enforce, and where necessary, adopt, standards that address individuals who are a danger to themselves or others and suffer from serious mental illness or substance use disorder, or who are living on the streets and cannot care for themselves, through assisted outpatient treatment or by moving them into treatment centers or other appropriate facilities via civil commitment or other available means, to the maximum extent permitted by law. (Trump, Executive Order 14321)

In combination, these measures constitute a return to “Ugly Laws,” a series of policies spanning from the 1880s to the 1970s that removed Disabled people from public life through incarceration on the basis of “disfigurement.” Historical ugly laws and eugenics legislation reveal two intersecting dimensions of marginalization that encompass the visceral discomfort of a viewing public and the pathologization of “subnormality.” Ugly laws were then used as one metric for policing disabled and poor people and people of color for being in public, heavily relying on their being perceived as “dangerous,” “immoral,” or “unsightly.” Under Ugly Laws (1867-1974), almshouses acted as an alternative sentencing for “unsightly beggars” and “physically unable persons”—marking both people in these categories as unworthy of participating in public life, and instead subject to management by the state. State institutions for people with disabilities acted as an expansion of this carceral system, funneling individuals between its iterations, where commitment could extend to the end of that individual’s lifetime (Schweik).

Eugenic rhetoric is also employed in Executive Orders that target LAMs, invoking language of “sanity” and likening reparative descriptions to cognitive distortions (Trump, Executive Order 14253). As information workers, our choices in language, curation, and collaboration have the potential to accelerate the process of erasure and create lasting consequences for the communities depicted in the records we steward.

The Necropolitical Landscape of Memory and LAMs

Achille Mbembe, an anti-colonial Cameroonian scholar, coined the term “necropolitics.” Necropolitics expands on Foucault’s “biopolitics” through a Fanonian lens, anchoring it in opposition to apartheid and occupation. Necropolitics explores notions of oppression and mortality, while emphasizing the role of sovereignty as “the capacity to define who matters and who does not, who is disposable and who is not” (Mbembe 2003 27). Mbembe critically examines sovereignty in relation to biopower, stating that war is how nations exercise sovereignty, enact subjugation and uphold colonialism. He imagines politics as a “form of war” and asks “what place is given to life, death, and the human body (in particular the wounded or slain body)? How are they inscribed in the order of power?” (Mbembe 2003 12). Mbembe describes the creation of “deathworlds,” or “new and unique forms of social existence in which vast populations are subjected to living conditions that confer upon them the status of the living dead” (Mbembe 2003 39-40).

Some deathworlds have physical boundaries, identified by checkpoints and walls. They may take the form of state institutions that house people with disabilities, immigration detention centers, or whatever new construction of the carceral imagination best serves the goals and aims of those in power. They can also be less visible to the naked eye, consumed in the form of “small doses” of death that slowly erode and constrict our personhood. Necropolitics is wielded as “the power to manufacture an entire crowd of people who specifically live at the edge of life, or even on its outer edge — people for whom living means continually standing up to death” (Mbembe 2003, 37-38).

Museums and archives occupy critical positions in this necropolitical landscape, serving as repositories of historical evidence and active creators of cultural memory. Our institutions have the power to enact “archival death” through our curatorial choices. Archival life exists in the tension between preservation and interpretation, between fixed materiality and fluid meanings attributed across time. Archival death becomes particularly insidious through its apparent neutrality—the removal of exhibits, deaccessioning of materials, and reframing of narratives—while achieving the same erasure as more overt forms of violence.

Prominent discussions in the field of digital stewardship and archival processing ask whether we are ready to confront the fact that our professional practices have upheld and facilitated the under-documentation and erasure of Black, Indigenous, Immigrant, Disabled, and Queer communities from the historical record, and what inclusive forms of archiving might look like within our field (Duff 124).

“Archival Death” and Curatorial Power

Archival literature and practice relating to disability has historically focused on medical narratives, accessible practices, and histories authored by those in power rather than the ways in which disability and marginalized populations are documented. These framings often result in a narrow, medicalized representation of people within our collections that fails to capture the complex, intersectional lives of disabled people. Primarily residing in medical libraries and government archives, practices related to archiving disability tend to focus heavily on the interpretation of disability in the context of science and medicine, relying on labels prescribed by medical authorities rather than the social and embodied experience of disability. Kelvin White’s overview of the genesis of the field of archival preservation in “Promoting Reflexivity and Inclusivity in Archival Education, Research, and Practice” provides valuable insight into the field’s standardized practices, which have been shaped by people in positions of power—mainly those who were not from historically marginalized communities themselves (K. White 117).

The intersectional lives of people with disabilities and their interactions with medical violence and eugenics are erased, and the physical archives remain largely inaccessible to disabled patrons and disabled professionals. The record may technically survive, but the life within it does not. Archival death, then, is not only about what is discarded or deaccessioned; it operates equally through what remains, how it is described, and whether it is actively hidden.

According to Tobin Siebers, disability can be seen as an “elastic social category” that changes depending on the social context, making a singular definition for archival usage difficult. Without a proper theoretical framework like complex embodiment—which Sara White explains as evaluating disability as an experience—archivists and political actors risk inserting their internal biases about disability, eugenics, and state-sanctioned violence against marginalized communities (S. White).

Beth Linker’s “On the Borderland of Medical and Disability History: A Survey of the Fields” expands on White’s historical discourse and links the rise of the American medical system to the documentation of disease-centric medical history that often neglects disability. The academic study of medical history in the U.S. began in the early 1930s—much like the beginning of the professional archival field—largely due to émigré scholars who had been trained in medicine and the humanities in German-speaking Europe. These scholars, including Henry Sigerist, Owsei Temkin, and Erwin Ackerknecht (all of whom spent time at Johns Hopkins), brought with them a deep commitment to continentally-infused theories and ideas, particularly concerning disease. As a result of this intellectual background and the research interests of these influential figures, disease history became the central research aim of the newly established field of medical history in the United States. This focus was not predetermined but rather a product of the time, as the individuals who shaped the discipline were medical practitioners. The relative neglect of disability within medical history contributed to the emergence of disability history as a distinct field in the late 1990s. “New” disability historians explicitly defined their discipline in contrast to medical history, arguing that the divergences between the two fields defined disability history’s parameters. Disability historians argued that the medical model defined disability solely as a consequence of biological factors—such as congenital or chronic illness, injury, or deviation from a perceived “normal” biomedical structure or function—and sought to “fix” or cure its effects at the individual level.

When disability materials are collected solely through a medical lens—cataloged under disease categories, described in clinical language, stripped of the political and social contexts that gave them meaning—the person is rendered a faceless patient rather than an agent. When it comes to records and legacies of state-run institutions, such as the Fernald State School and Hospital in Massachusetts or the Pennhurst State School and Hospital in Pennsylvania, sketches of the institutions themselves are widely available. The institutions are venerated, retaining a life of their own even after the buildings have been shuttered. However, former residents are reduced to gaps in the historic record. Family members and researchers requesting information about their lives are met with barriers such as incomplete or lost records, restricted access to records, or lack of resident perspectives. In a way, the archive may act akin to the institution itself, dressing harm and isolation in the language of “care,” and participating in epistemic injustice against people with disabilities and their many “ways of knowing.”

In “Documenting Disability History in Western Pennsylvania,” Bridget Malley highlights Helen Samuels’ documentation strategy as “well designed to address gaps in the historic narrative by ‘provid[ing] a useful framework for discussing selection issues,’” particularly with respect to marginalized communities (17). Documentation strategy as methodology “guides selection and assures retention of adequate information about a specific geographic area, a topic, a process, or an event that has been dispersed throughout society” (17). Documentation strategy emphasizes structuring the inquiry and examining the form and substance of the available documentation. This involves actively seeking to understand the documentary universe related to a topic, identifying where documentation exists and, crucially, where it doesn’t exist.

Traditional appraisal practices can inadvertently contribute to archival gaps by reflecting the biases and perspectives of the appraisers, leading to the erasure of marginalized voices like those of people with disabilities. Documentation strategy aims to move away from subjectivity by incorporating a deeper understanding of the topic and the perspectives of those involved. For Malley, the collaboration between Western Pennsylvania Disability History Action Consortium (WPDHAC) members (community experts) and archivists from the Heinz History Center allowed for a merging of archival and community knowledge, leading to more nuanced appraisal decisions and potentially filling gaps based on community-identified needs in the archives.

Archival death was built into the profession’s foundations: the same apparatus that preserved evidence of national progress systematically failed to collect, or actively destroyed, evidence of the state’s violence against its most marginalized citizens. As uncomfortable as it may be to confront, this was not an oversight. It was a feature of archives designed to construct a cohesive national narrative by using seemingly neutral representations of history that avoided critical insights into the past, and something that we must take active steps to remedy. The records relating to institutionalized individuals that do survive from this era—asylum logs, sterilization orders, commitment papers—were authored by perpetrators, not survivors. Even when disabled people appear in the archive, they appear as objects of intervention rather than witnesses to their own lives. When disability materials are appraised, arranged, and described according to the medical model, the archive reproduces the very framework that justified confinement and cure. The disabled person is preserved as a case, not a life; a diagnosis, not a history. This is erasure through categorization—effective in determining who the ‘dead’ and the ‘living’ might become to future researchers, advocates, and communities seeking to understand what was done and what survived.

Community-Led Collecting: Alternative Methods for Protecting Shared History

Community-based archives have emerged as crucial alternatives to mainstream institutions, particularly for documenting the histories of marginalized groups. These archives are often created by and for the communities they serve, prioritizing agency, self-determination, and the preservation of narratives that challenge dominant frameworks. Community archives intentionally subvert the “neutrality” of institutional preservation by centering the values, priorities, and privacy concerns of their communities. Community archives may employ participatory appraisal and curation strategies, working directly with community members to identify, select, and describe materials. This approach seeks to capture the richness and diversity of lived experience, rather than reducing disability or other markers of identity to a social problem. It also reflects the interconnectedness of “ways of knowing.” 

In a 2022 interview, Achille Mbembe explains “the epoch we have entered into is one of indivisibility, of entanglement, of concatenations. Times of concatenation presuppose that our bodies have become repositories of different kinds of risks” (Mbembe 2022). Risk is very present in discussions of disability politics. “Risky bodies,” as described by Hi‘ilei Julia Kawehipuaakahaopulani Hobart and Tamara Kneese, are subjected to coercive forms of care (Hobart & Kneese), under the assumption that those living in the intersections of positionality cannot be trusted as knowers. One path away from paternalistic and colonial imaginings of care and knowledge is community-led intervention.

Some notable sites of community intervention we have encountered in our work include the Living Archives on Eugenics (LAE), The Anti-Eugenics Collective at Yale, and the From Small Beginnings Collective. The Living Archives on Eugenics in Western Canada (LAE) provides a realistic vision for the future of community archival practices centered on disability and survivors of the eugenics movement. Working directly with survivors, the project “raised awareness of historical and contemporary manifestation of eugenics [by capturing and disseminating] survivor’s stories.” Interactive collections provide historical context to the eugenics movement in Western Canada and emphasize the bond that was created between curators, archivists, and survivors. The Anti-Eugenics Collective at Yale situated Yale’s campus as the former headquarters of the American Eugenics Society, and its related collections as a site of harm and opportunity for reparation. The collective engages this troubled history through workshops with K-12 educators, students, medical professionals, and the general public. The global collective From Small Beginnings is a group of anti-eugenics activists that helps educators and researchers learn of ongoing efforts to disrupt eugenics in action, and combats isolation by building a network of committed individuals and organizations. These projects meaningfully balance confrontation and collaboration through outreach and information organization.

These initiatives, coupled with the work done by the Disability Archives Lab on centering critical disability studies in archival research and practice, and the recent publication Preserving Disability: Disability and the Archival Profession (Brilmyer & Tang, 2024), inspired us to think about potential places of intervention in our own archival process. We established the Pennhurst Memorial & Preservation Alliance Community Archives in 2024 with support from the Pennsylvania Historic & Archival Records Care (HARC) grant, organizing materials collected by the organization over a span of approximately five decades. Despite being a volunteer-run organization, we wanted to ensure that we could make our records, largely generated by self-advocates with Intellectual and Developmental Disabilities, accessible to the public. Of particular significance is the archive’s Speaking for Ourselves (SFO) collection, which documents the critical role of self-advocacy organization Speaking for Ourselves in the disability rights movement emerging from state institutions in Pennsylvania. By centering self-advocates’ political struggle, agency, expertise, and vision for the future, the archive challenges dominant narratives of Disability, medicalization, and victimhood.

Our process required that we confront power imbalances and collective trauma surrounding documentation and consent. In our previous archival research and practice, we noticed that academic and state archives privileged the voices of medical and institutional authorities. Disabled individuals, especially people with Intellectual and Developmental Disabilities, were excluded from authoring knowledge, instead cast as subjects of research. Our collections contained a unique perspective that was absent from more “traditional” state and academic archives, acknowledging People with Intellectual and Developmental Disabilities as originators of our records and contributors to social change.

After over a year of sitting in on community board meetings and listening to requests from the community for long-term preservation and digital accessibility, we applied for funding through the Council on Library and Information Resources’ “Digitizing Hidden Collections: Amplifying Unheard Voices” grant program, and were awarded funds to run a two-year digitization project. As grantees of this program, we seek to digitize and make accessible over 9,000 items within our collections.

In order to keep ourselves accountable to our mission and the community’s requests, we have implemented a series of strategies to aid us in our intervention. To us, this means prioritizing audio and visual material for digitization. We recognize that many of the self-advocates who originated the materials in our collections did not participate in traditional forms of communication or written record keeping, instead relying on dictation, audio and video recording, and other accessible modes of knowledge sharing and creation. In an attempt to preserve original voice, we are engaging in consultation with surviving advocates depicted in our materials. We are also planning for quarterly access consultations with users across disability communities. These consultations will allow us to continually integrate feedback and ensure that the open access metadata and digital exhibits generated through this project are accessible to the widest possible user base.

Planning this project required us to reimagine what our role as archivists might look like. We understand that the self-advocates who originated and are depicted in the collections have had much of their lives documented without consent in the form of institutional medical records. To avoid replicating similar harm, we have decided to forgo a traditional “donor” model, in favor of a “stewardship agreement.” The terms of this agreement allow our archive to take actions necessary for preservation, make collections publicly available, and provide open access to associated metadata, with the understanding that the physical materials are property of Speaking for Ourselves.

We are documenting each step of our process so that we can contribute findings from this project to the larger Disability archival community, so that it may be replicated and expanded upon. Additionally, we designed the project to prioritize intergenerational collaboration, bringing younger members of our community in conversation with the historical context for our current struggles and learning from the voices of Disabled elders. Over the next two years, we are engaging in an iterative learning process, in hopes of fulfilling the request for digital access and making the perspectives and activism of self-advocates known to a wider audience.

Interrupting Erasure: How Archivists Protect the Future

Critically rethinking power and ownership requires institutions and practitioners to develop new methodologies that recognize diverse community members as primary stakeholders, and archives as sites of evolution and growth. This involves moving beyond inclusion toward genuine power-sharing and decolonial praxis. The archive becomes a site of active creation and a path to alternative “ways of knowing,” which in turn resist the creation of “deathworlds.” In “Out of the Dark Night: Essays on Decolonization,” Achille Mbembe affirms that

humanity is to be made to rise [faire surgir] through the process by which the colonized subject awakens to self-consciousness, subjectively appropriates his or her I, takes down the barrier, and authorizes him- or herself to speak in the first person. This awakening and appropriation aim not only at the realization of the self, but also, more significantly, at an ascent into humanity, a new beginning of creation, the disenclosure of the world. (62)

Many Disabled self-advocates have communicated distrust toward record-keeping solely dictated by medical and legal authorities. Disability community archives can demonstrate one avenue for grassroots preservation efforts that maintain disabled people’s ownership of and autonomy over their narratives of survival and liberation, challenge dominant medical narratives that have historically justified confinement, and resist archival death.

The formation of the archival profession and of many LAM institutions is tied to colonial wealth and power. Even after constant re-evaluation of practices and strategic upheaval, LAMs come in contact with a different type of “deathworld” on a daily basis, and participate in decisions that determine what, and who, is remembered. As librarians, archivists, and museum professionals, we are responsible in part for making difficult choices with the collections we steward. We are also responsible to the communities depicted in those records. Writing on medicalization, Zola cautions that “not only is the process masked as a technical, scientific, objective one, but one done for our own good” (502). Though not medical professionals, information workers risk modeling a similar attitude toward disability in our work, and participating in archival erasure under the guise of objective processes. Even as the risk landscape changes, we must recognize that our roles are not neutral. To combat erasure, we must take an active role in interrupting the weaponization of memory against the most marginalized. Interruption can take many forms, and requires ongoing reflection and adaptation.

Drawing on the practical work of building and sustaining the PMPA Community Archive, we are energized in imagining how LAM professionals can engage in active forms of resistance. The sustainability of grassroots archival work depends on the active partnerships between individuals, communities, larger institutions, and solidarity networks across libraries, archives, and museums. Without webs of mutual aid and professional collaboration, we all remain vulnerable to the same political pressures that have compromised larger institutions.

Dealing with histories of medical violence mandates space to grieve. Leah Lakshmi Piepzna-Samarasinha invites grief as an active part of the process, arguing

that feelings of grief and trauma are not a distraction from the struggle. For example, transformative justice work—strategies that create justice, healing, and safety for survivors of abuse without predominantly relying on the state—is hard as hell! What would it be like if we built healing justice practices into it from the beginning? (42)

The stakes of this work extend beyond professional practice. It challenges all of us to consider whose lives matter in the American memory. The PMPA Community Archives imagines community-controlled historical preservation as a form of active survival and maintains originators’ ownership over their narratives of resistance and liberation—narratives that are urgently felt when policies seek to industrialize age-old structures of abuse and reintroduce the ‘legal removal’ of disabled people from the public eye.


Acknowledgements

We are deeply grateful to our Lead Pipe editors, Jess Schomberg and Pam Lach, whose feedback and support helped shape this article. We are especially grateful to our reviewer, Gracen Brilmyer, whose work we greatly appreciate and respect. This work emerged from and belongs to the community of self-advocates and activists who have shaped the Pennhurst Memorial & Preservation Alliance. Their decades of organizing, documenting, and demanding recognition created the conditions for this scholarship to exist. Their lived expertise, activism, and commitment to truth-telling made this research possible. We are grateful to be entrusted with carrying forward their labor of memory and resistance. Any insights here reflect their collective work.


Works Cited

Assistant Secretary for Public Affairs (ASPA). “Secretary Kennedy Appoints New Interagency Autism Coordinating Committee to Advance Fight Against Autism.” HHS.Gov, 28 Jan. 2026, www.hhs.gov/press-room/hhs-kennedy-appoints-new-interagency-autism-coordinating-committee.html.

Black, Edwin. “America’s National Biology.” War Against the Weak: Eugenics and America’s Campaign to Create a Master Race, Dialog Press, 2012, pp. 21–42.

Brilmyer, Gracen, and Lydia Tang, editors. Preserving Disability: Disability and the Archival Profession. Library Juice Press, 2024.

Disability Archives Lab, disabilityarchiveslab.com.

Duff, Wendy, et al. “Investigating the Impact of the Living Archives on Eugenics in Western Canada.” Archivaria: The Journal of the Association of Canadian Archivists, 2019, archivaria.ca/index.php/archivaria/article/download/13701/15099.

Eugenics and its Afterlives, www.antieugenicscollective.org.

Fricker, Miranda. Epistemic Injustice: Power and The Ethics of Knowing, Oxford University Press, 2007.

FromSmallBeginnings, www.fromsmallbeginnings.org.

“Hidden Collections.” CLIR, 28 Oct. 2025, www.clir.org/hiddencollections.

Hobart, Hi‘ilei Julia Kawehipuaakahaopulani, and Tamara Kneese, editors. Radical Care: Survival Strategies for Uncertain Times. Duke University Press, 2020.

Ives-Rublee, Mia, and Casey Doherty. The Trump Administration’s War on Disability. Center for American Progress, www.americanprogress.org/article/the-trump-administrations-war-on-disability.

Johnson, Merri Lisa, and Robert McRuer. “Cripistemologies: Introduction.” Journal of Literary & Cultural Disability Studies, vol. 8, 2014, pp. 127–147.

Linker, Beth. “On the Borderland of Medical and Disability History: A Survey of the Fields.” Bulletin of the History of Medicine, vol. 87, no. 4, Dec. 2013, pp. 499–535, https://doi.org/10.1353/bhm.2013.0074.

Living Archive on Eugenics, www.eugenicsarchive.ca.

Malley, Bridget. “Documenting Disability History in Western Pennsylvania.” The American Archivist, vol. 84, no. 1, 1 Mar. 2021, pp. 13–31, https://doi.org/10.17723/0360-9081-84.1.13.

Mbembe, Achille. “Necropolitics.” Public Culture, vol. 15, no. 1, 1 Jan. 2003, pp. 11–40, https://doi.org/10.1215/08992363-15-1-11.

Mbembe, Achille. Necropolitics. Duke University Press, 2019.

Mbembe, Achille. Out of the Dark Night: Essays on Decolonization. Columbia University Press, 2021.

Mbembe, Achille. “Achille Mbembe: Planetary Politics for All Creation.” Interview. Noema Magazine, 11 Jan. 2022.

NPR. “The Supreme Court Ruling That Led to 70,000 Forced Sterilizations.” Fresh Air, 7 Mar. 2016, www.npr.org/sections/health-shots/2016/03/07/469478098/the-supreme-court-ruling-that-led-to-70-000-forced-sterilizations.

Pennhurst Memorial & Preservation Alliance, preservepennhurst.org.

Pennsylvania State Archives. Historical & Archival Records Care Grant Program, Commonwealth of Pennsylvania, www.pa.gov/services/phmc/apply-for-the-historical—archival-records-care-grant-program.

Piepzna-Samarasinha, Leah Lakshmi. Care Work: Dreaming Disability Justice. Arsenal Pulp Press, 2021.

Schreier, H., and L. Berger. “On Medical Imperialism” (letter). The Lancet, vol. 1, 1974, p. 1161.

Schweik, Susan M. The Ugly Laws: Disability in Public. New York University Press, 2010.

Sins Invalid. “10 Principles of Disability Justice.” sinsinvalid.org/10-principles-of-disability-justice.

Speaking For Ourselves, speaking.org/.

Turda, Marius. “Legacies of Eugenics: Confronting the Past, Forging a Future.” Ethnic and Racial Studies, vol. 45, no. 13, 2022, pp. 2470–77, https://doi.org/10.1080/01419870.2022.2095222.

United States, Executive Office of the President [Donald Trump]. Executive Order 14253: Restoring Truth and Sanity to American History, 28 Mar. 2025, www.whitehouse.gov/presidential-actions/2025/03/restoring-truth-and-sanity-to-american-history/.

United States, Executive Office of the President [Donald Trump]. Executive Order 14321: Ending Crime and Disorder on America’s Streets, 24 July 2025, www.whitehouse.gov/presidential-actions/2025/07/ending-crime-and-disorder-on-americas-streets/.

Weinberg, Hannah. “Tracking the Trump Administration’s Attacks on Libraries.” American Libraries Magazine, 1 May 2025, americanlibrariesmagazine.org/2025/03/19/tracking-the-trump-administrations-attacks-on-libraries.

Wendell, Susan. “Toward a Feminist Theory of Disability.” Hypatia, 1989.

White, Kelvin L., and Anne J. Gilliland. “Promoting Reflexivity and Inclusivity in Archival Education, Research, and Practice.” The Library Quarterly, vol. 80, no. 3, July 2010, pp. 231–248, https://doi.org/10.1086/652874.

White, Sara. “Crippling the Archives: Negotiating Notions of Disability in Appraisal and Arrangement and Description.” The American Archivist, vol. 75, no. 1, Apr. 2012, pp. 109–124, https://doi.org/10.17723/aarc.75.1.c53h4712017n4728.

Zola, Irving Kenneth. “Medicine as an Institution of Social Control.” The Sociological Review, vol. 20, 1972, pp. 487–504, https://doi.org/10.1111/j.1467-954X.1972.tb00220.x.

2026-04-15: ACM Capital Region Celebration of Women in Computing (CAPWIC 2026) Trip Report / Web Science and Digital Libraries (WS-DL) Group at Old Dominion University



This year the ACM Capital Region Celebration of Women in Computing (CAPWIC 2026) was held from March 27–28 as an in-person event. The conference took place in Alexandria, Virginia, and was hosted by Virginia Tech's Institute for Advanced Computing (IAC). CAPWIC is all about bringing together women in computing and their peers to support each other and grow in the field. The conference connects students, faculty, and industry professionals from across the Capital Region, from Pennsylvania to Virginia, to share ideas, discuss research, and build a strong, supportive community.


The conference featured workshops, technical talks, flash talks, research shorts, and poster presentations, as well as panel, birds-of-a-feather, and keynote sessions. I was the only participant from Old Dominion University's Web Science and Digital Libraries (WS-DL) research group this year, and I presented a research short. The event included parallel sessions across various categories and topics, and I attended sessions from each category.


Conference Venue: Institute for Advanced Computing (IAC), Virginia Tech, Alexandria, Virginia


Day 1: March 27, 2026 


The first day of the conference began with a campus tour and a graduate/career fair; Day 2 also offered both for those who had missed them. This was followed by opening remarks and dinner. Next, the first keynote was delivered, and the day concluded with closing remarks.


Campus Tour: Drone Display and Immersive Visualization Lab Visit


The Institute for Advanced Computing (IAC) of Virginia Tech is a research institute located in Alexandria, Virginia. The institute offers hands-on learning opportunities for graduate students in computer science and computer engineering. Specialized labs are available at the institute for research in immersive visualization, drone technology, wireless, quantum, and brain-inspired computing systems. We got the opportunity to visit the Drone Lab and Immersive Visualization Lab.


The Drone Lab featured an indoor drone cage used to conduct flight experiments in a controlled and safe environment. The lab team introduced us to the fundamentals of unmanned aerial system technology and shared insights into their ongoing research. One of the interesting discussions was about how they are trying to detect commercial off-the-shelf (COTS) drones, which can be used for attacks or unauthorized surveillance. They also gave us a chance to fly drones inside the cage, which was a really fun and thrilling experience.


The Immersive Visualization Lab provided immersive projection on three walls and the floor, allowing users to be fully immersed in visual representations of data and other phenomena. We had the opportunity to experience a virtual walk through a beautiful garden, which felt truly magical. It was amazing to see how visual design and 3D modeling can bring environments to life and let us explore places we would not normally be able to experience in person.


Graduate/Career Fair

The sponsors of the event organized a graduate/career fair for the attendees. There were representatives from ACM Women in Computing (ACM-W), Virginia Tech’s Computer Science Department, Virginia Tech’s Sanghani Center for Artificial Intelligence & Data Analytics, University of Mary Washington's College of Business, Northeastern University’s Khoury College of Computer Sciences, and Women in CyberSecurity (WiCyS). They shared information about graduate programs and career opportunities in research and academia, and also provided valuable feedback on resumes. I had the opportunity to interact with several representatives, which helped me better understand potential career paths in academia and research.


Opening Remarks

After the campus tour and graduate/career fair, the conference began with opening remarks from the organizers. The head of Virginia Tech’s computer science department, Christine Julien, welcomed everyone to the conference. The organizing chairs, Sehrish Basir Nizamani from Virginia Tech and ODU alumna (PhD, 2004) Mona Rizvi from James Madison University, provided an overview of the tracks for each category and the conference schedule. The program included 2 panels, 5 workshops, 8 technical talks, 12 flash talks, 22 research shorts, 43 posters, and 1 birds-of-a-feather session, all conducted across parallel sessions.


Keynote #1: Tools in Your Toolbox: What I've Learned as a Professional Female Computer Scientist 

Christine Julien introduced the first keynote speaker, Laurian Vega, a Senior System Engineer at Booz Allen Hamilton. She shared the skills and expertise she developed throughout her career as a female computer scientist. One of the key takeaways from the keynote was that no effort is ever wasted as long as you learn something from it. The speaker talked about how important it is to invest in soft skills and to build strong networks. She also encouraged us to choose workplaces that align with our values and treat us well. A point I found especially meaningful was that mental health is just as important as physical health. The speaker also emphasized caring about the work we do and using our skills to give back to the community. Finally, she highlighted that a PhD is not about becoming an expert in everything, but about continuously learning and growing. Overall, the talk highlighted that success in computing is not just about technical ability, but about developing a balanced toolkit that supports both personal and professional growth.


Day 2: March 28, 2026 


Day 2 started off with breakfast, followed by the second keynote. Before the lunch break, two parallel sessions were held. During the first session, I attended a workshop and a technical talk; in the second, I attended flash talks and another workshop. After the lunch break, there were two more parallel sessions: I attended a panel and a poster session during the third, and research shorts during the final one.


Keynote #2: Goodbye Imposter, Hello Winner: Overcoming Perceptual Expectations to Reclaim Excellence

Erika Olimpiew from Virginia Tech introduced the second keynote speaker, Candace Aku, a Senior Technical Program Manager at Google Public Sector. She shared her journey in the tech industry and the process of creating a professional identity beyond others’ expectations. The speaker emphasized the importance of not limiting one’s potential before even starting a career, and of reflecting on whether actions are driven by personal goals or others’ expectations. She also highlighted how constantly chasing the “next” can lead to burnout, reminding us of the need to prioritize well-being. The discussion on the weight of expectations such as intelligence, imperfection, fear of failure, and judgment was particularly insightful, as these factors often contribute to imposter syndrome. The keynote was highly motivating, encouraging individuals to embrace their identity, challenge limiting beliefs, and grow without sacrificing their well-being.


Workshop: Debugging Your Resume

Aubrey Baker, an eCommerce Web Developer from Red Van Workshop, and Holly Wilsey, a Video Game Engineer from Purple Basil Games, organized a hands-on workshop on preparing resumes. The session was chaired by Nguyen Ho from Loyola University Maryland. They shared best practices on customizing resumes for various purposes, such as academic work, internships, employment, and volunteer experiences. Participating in this workshop gave me a better understanding of how Applicant Tracking Systems (ATS) screen resumes before they are seen by a human. I learned how small details in formatting and wording can impact visibility, and how to avoid common mistakes that can weaken a resume. The breakout sessions were especially helpful, as we got to review and improve our resumes in a group while receiving useful feedback.


Technical Talk: Education & Inclusion


Denise D'Angelo, a Transformation Technology Leader at DynamicD Enterprises, presented a technical talk, “Designing the AI-Ready Workforce,” as part of the Education & Inclusion track. The session was chaired by Mohammed Farghally from Virginia Tech. The speaker offered valuable insights into how to approach AI-enabled work more thoughtfully, especially in the context of hiring. As AI becomes part of our everyday work, traditional ideas about roles and performance are evolving, influencing both opportunities and trust. She explained that being “AI-ready” is not just about knowing how to use AI tools, but about understanding how people and AI systems work together. The talk was very helpful for understanding how to prepare for interviews in an AI-driven hiring landscape.


Flash Talk: Trust, Fairness, and Societal Impact of AI


Mona Rizvi chaired the flash talk session on the Trust, Fairness, and Societal Impact of AI track. 

As the first presenter, Sadia Afrin Mim from George Mason University presented “LLM-Guided Input Generation for Causal Fairness Testing.” Current fairness testing methods in machine learning systems often create unrealistic test cases by ignoring how features relate to each other in real-world situations. To address this limitation, the presenter introduced a new approach that uses large language models (LLMs) to generate more meaningful and context-aware test inputs. 

Next, Arshnoor Bhutani and Mahi Sanghavi from University of Maryland, College Park presented “Data-Driven Exploration of Physiological Factors Perpetuating Bias in Pulse Oximetry Readings for At-Home Use.” They examined bias in pulse oximeters, which are used to measure blood oxygen levels, and found that skin tone remains an important contributor to this bias. They analyzed the BOLD data set and identified a clear pattern showing that errors increase as skin tone gets darker. Their work aims to better understand these patterns so that corrections can be developed to help reduce health disparities.

Next up, Khoulood Alharthi from Virginia Tech presented “Gender, Culture, and Privacy: Navigating Social Media Concerns in Saudi Arabia.” She explored how privacy concerns on social media are shaped by culture and gender, focusing on users in Saudi Arabia. She emphasized that privacy is not only determined by platform features, but deeply influenced by users’ social and cultural values such as modesty, reputation, and social boundaries. Her work provided valuable insights into how social media platforms and privacy settings can be designed to better align with users’ cultural expectations.

Christopher Parham from Virginia State University presented the next flash talk, “A Trust-Aware, Biometrically-Secured Social Network Using Decentralized Identity Protocols and the Analytic Hierarchical Process for Collaboration.” His talk focused on improving security and trust in online systems by addressing human factors in cybersecurity. He proposed a novel decentralized method that creates one-time biometric features to prevent attacks like replay or misuse of credentials. His work aims to create a more secure, user-friendly authentication framework that supports reliable and trust-based collaboration.

Saanvi Shashikiran from Georgetown University presented the last flash talk, “Understanding State-Level AI Readiness Policy.” She explored how prepared different U.S. states are for adopting AI, focusing on the role of policies, infrastructure, and support systems. She examined five states and analyzed government documents to understand how policies regulate AI use. She discussed how text-based search methods can be used to identify policies relevant to AI readiness, and found that a specific scoring method (BM25) performed most effectively.
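The talk did not go into implementation details, but the core of BM25 scoring is compact enough to sketch. The snippet below is my own generic illustration of Okapi BM25 ranking (not the speaker's actual pipeline), scoring a set of hypothetical policy-document snippets against a query:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    # document frequency: how many documents contain each term
    df = Counter()
    for d in tokenized:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            # term frequency saturation (k1) and length normalization (b)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

# Hypothetical document snippets; higher score = more relevant
docs = ["state ai policy for education",
        "weather report for today",
        "ai readiness policy framework"]
print(bm25_scores("ai policy", docs))
```

Documents sharing no terms with the query score zero, which is why BM25-style keyword retrieval works well for surfacing policy text that explicitly mentions AI.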

 

Workshop: Cyber Hygiene That Sticks: Research in the K-12 Space on Cybersecurity


Deborah Kariuki, an Assistant Teaching Professor at University of Maryland Baltimore County, led a workshop on how cybersecurity in K-12 education can be improved through more interactive and human-centered approaches. She emphasized that relying only on rules and one-time sessions is not enough to strengthen cyber hygiene. She demonstrated how interactive activities based on real-world scenarios can help students recognize threats like phishing and build safe habits around passwords and online data-sharing. She also talked about the broader efforts of organizations like WiCyS in actively promoting cybersecurity education and awareness, helping students build confidence and lasting digital safety habits.


Panel: Navigating the Path to Grad School: Discuss, Reflect, and Make an Informed Decision


Mohammed Seyam from Virginia Tech moderated a panel session that provided a reflective perspective on what it is truly like to pursue an advanced degree and how to decide whether it is the right path. The panel featured Madison Barton, a graduate admission counselor at Northeastern University, Mohammed Farghally, a collegiate assistant professor at Virginia Tech, Promise Owa, a graduate student at Northeastern University, and Chandani Shrestha, an Assistant Professor at James Madison University. The panelists shared their own journeys, including their uncertainties, key turning points, and lessons learned, while addressing common questions such as:


  • Does taking a year off or working before grad school have any impact?

  • What are the key factors in choosing a grad school? Funding? Research facilities?

  • How much does research interest matter for sustaining yourself throughout grad school?

  • What are some challenges for women in CS grad school?

  • How do you deal with imposter syndrome during grad life?


The session encouraged participants to think intentionally about their goals, interests, and readiness for grad school.


Posters: Cybersecurity, Privacy & Responsible AI


Jessica Zeitz from University of Mary Washington chaired the poster session on the Cybersecurity, Privacy & Responsible AI track.


Fairuz Nawer Meem from George Mason University presented two posters. The first, titled “Hope or Hype? Understanding Vibe Coding through Software Practitioner Discussions,” presented an analysis of online discussions to understand how developers’ opinions about “vibe coding” changed over time. The second, “Well-Being in AI-Assisted Software Development,” presented an experimental study of how using AI tools affects developers’ stress, emotions, and overall well-being while coding, compared to coding without AI. Sadia Afrin Mim, also from George Mason University, presented the poster “Towards Practical Discrimination Testing for Software Systems,” discussing a user study evaluating whether fairness-specific tools, along with AI support, help developers find and understand bias in software more easily.


In her poster “PrivacyR1: Privacy-Preserving Collaborative Reasoning in Multi-Agent Systems,” Min Zhang from Virginia Tech proposed a way for smaller local AI models to get help from powerful remote models while protecting sensitive data. Jennifer Alexandra Thompson, also from Virginia Tech, presented the poster “Exploring Socioeconomic Status Narratives of Computer Science Students,” exploring how a student’s socioeconomic background affects their access to technology and success in computer science education. Another Virginia Tech student, Kimberly Giordano, presented the poster “Beyond the Android Manifest: Analyzing Native Libraries and Eye-Tracking Use in Virtual Reality Applications,” an analysis of how VR apps use eye-tracking data based on Android Manifest evaluation and native code inspection. She found that some apps may access this data without clearly informing users.

In her poster “Benchmarking DAOS Filesystem on Aurora,” Rebecca George from the College of William & Mary presented a performance evaluation of a new storage system (DAOS) on a large supercomputer and identified the best ways to optimize data reading and writing for faster performance. Zahra Rizvi, also from the College of William & Mary, presented the poster “Bridging the AI Education Gap: A Self-Funded AI Awareness Initiative in Cocoa-Farming Villages of Ghana,” introducing an initiative that teaches basic AI concepts to students in rural Ghana and aims to expand access to AI education in underserved communities.

Susan Zehra, a PhD student and senior lecturer from Old Dominion University’s CS department, presented her poster “Securing Vehicular Ad Hoc Networks (VANETs) Against Cyber Threats,” proposing a decentralized security system to protect vehicle communication networks and showing it can effectively detect and prevent cyber attacks.


Research Shorts: Cybersecurity, Trust & Resilience

Nareman Hamdan from James Madison University chaired the research short session on the “Cybersecurity, Trust & Resilience” track.

The first presenter, Stephanie Travis from Virginia Tech, presented “Identifying Human Factors in Red Teams for Cyber Exercises.” She focused on making cybersecurity training more realistic by considering how real attackers think and behave. She studied existing work and gathered insights from experts to create a set of human behavioral factors to incorporate into simulated cyber defense exercises. She found that by improving how red teams simulate attacks, the training can better reflect real-world situations.

Next, I presented “Framework for Finding Attribution of Social Media Screenshots.” Sharing screenshots on social media platforms is now common. I pointed out legitimate reasons why people share screenshots, such as enabling cross-platform sharing or preserving evidence of deleted posts. Then, I showed how easily people can create fake tweets and share such screenshots on social media platforms. Next, I demonstrated different ways the live web and web archives can be used to find the attribution of screenshot content. I emphasized using web archives to find the attribution of deleted posts, since they cannot otherwise be found on the live web. Lastly, I shared evaluation results for an automated process that finds the attribution of a screenshot using the Wayback Machine.
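As a small illustration of the web-archive side of this kind of workflow, the Wayback Machine's public CDX API can list archived captures of a URL. The snippet below is a generic sketch (not the framework presented in the talk) that builds such a query; the tweet URL is hypothetical:

```python
from urllib.parse import urlencode

def cdx_query_url(page_url, limit=5):
    """Build a Wayback Machine CDX API request that lists archived
    captures (timestamp, original URL, HTTP status) of a page."""
    params = {
        "url": page_url,
        "output": "json",  # JSON rows instead of plain text
        "limit": limit,    # cap the number of captures returned
        "fl": "timestamp,original,statuscode",
    }
    return "https://web.archive.org/cdx/search/cdx?" + urlencode(params)

# Hypothetical example: list captures of a tweet URL
print(cdx_query_url("twitter.com/example/status/123"))
```

Fetching that URL returns the capture timestamps, which can then be used to retrieve archived copies for comparison against the screenshot content.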


Next up, Xinyi Zhang from Virginia Tech presented “From Vulnerable to Resilient: Examining Parent and Teen Perceptions on How to Respond to Unwanted Cybergrooming Advances.” Cybergrooming is a harmful online behavior that can affect teens’ mental health and physical safety. The presenter studied how teens and parents react to different scenarios and identified behaviors that can either increase risk or help protect against harm. By analyzing these responses, she developed patterns of both vulnerable and protective actions to better support teens through education and tools that would encourage safer online behavior. 

Yeana Bond from Virginia Tech was the last presenter and discussed improving how metadata-related bugs are detected in Java applications in “Towards Large Language Model-Powered Automation of Detecting Metadata Related Bugs.” Misuse of metadata can cause severe issues in Enterprise Applications written in Java, so her goal was to make debugging metadata problems more efficient with the help of AI. By comparing different AI models, she found that newer models produce more accurate and complete rules. 


Keynote #3: Human-Centered Automation: A Journey through HCI, AI, and the Future of Robotics

Jessica Zeitz from University of Mary Washington introduced the final keynote speaker Meg Dickey-Kurdziolek, a UX Lead/Senior Staff UX Researcher at Intrinsic.ai. She shared her journey from her PhD at Virginia Tech to her current role at Intrinsic.ai, and how her understanding of human-centered design has changed along the way. She provided valuable insights into the evolving role of UX in the age of AI and robotics. She discussed the challenges of making complex robotic systems more user-friendly. She talked about how Explainable AI (XAI) helps people better understand and trust these systems as AI becomes part of our everyday life. In summary, the speaker highlighted how HCI principles can guide the future of automation and provided useful insights into navigating this rapidly evolving field. 


Closing Remarks and Award Ceremony


The conference concluded with acknowledgments to the sponsors and a vote of thanks to all participants and organizers, followed by an award ceremony. The organizing committee delivered closing remarks and announced that CAPWIC 2027 will be held at James Madison University in Harrisonburg, Virginia. They also introduced the organizing chairs for the upcoming conference. Next, the awards for ‘Best Research Short’, ‘Best Flash Talk’, and ‘Best Poster’ were announced in both graduate and undergraduate categories, along with honorable mentions for each category.

I was delighted to receive the ‘Best Research Short’ award in the graduate category for “Framework for Finding Attribution in Social Media Screenshots.” 


The awardees for Best Flash Talk and Best Poster (graduate category) are listed below:

  • Best Flash Talk – “Benchmarking and Advancing Generative Models for Calorimeter Shower Simulation” by Farzana Yasmin Ahmad from the University of Virginia
  • Best Poster – “Well-Being in AI-Assisted Software Development” by Fairuz Nawer Meem from George Mason University


Wrap-up


CAPWIC indeed provides a supportive and encouraging platform for sharing ideas while fostering meaningful opportunities for both personal and professional growth. This was my first time attending the CAPWIC conference in person. It was a great opportunity for me to connect with researchers, students, and professionals across different areas of computing. I would like to express my sincere gratitude to ODU ACM-W for providing travel support to attend this conference. I also had the wonderful opportunity to stroll through one of the oldest areas in the U.S., Old Town Alexandria. I was mesmerized by the brick sidewalks, cobblestone streets, historic townhouses, cherry blossoms in bloom, and the beautiful sunset views along the waterfront. It was refreshing to relax after a full day at the conference.



Previous trip reports for CAPWIC by WS-DL members: 2025, 2015.


---- Tarannum Zaki (@tarannum_zaki)

 

Finding AI Learning resources for Library Professionals / Artefacto

We’ve recently added a new starter curriculum for AI to our Libraryskills.io platform – a space dedicated to highlighting and signposting great, free learning resources for and by library professionals.  AI is one of the most talked about topics in libraries right now. And it has particular relevance for library and information professionals for a [...]


2026-04-13: The Contemplative Sciences Center (CSC) Sensemaking Symposium 2025: Trip Report. / Web Science and Digital Libraries (WS-DL) Group at Old Dominion University



Introduction – The CSC Sensemaking Symposium 2025

From October 9–11, 2025, the University of Virginia’s Contemplative Sciences Center (CSC) brought together artists, scientists, scholars, and contemplatives from across the world for the Sensemaking Symposium, a two-and-a-half-day immersion into how humans perceive, interpret, and create meaning in an age defined by complexity.

Hosted in the newly opened Contemplative Commons, the event dissolved traditional boundaries, merging research with lived experience. Through sound installations, musical performances, guided contemplations, and cross-disciplinary conversations, participants entered a space where inquiry became sensory, where knowledge was not only discussed but embodied.

The symposium framed sensemaking as both an inner and collective act, linking neuroscience and mysticism, technology and ritual, sound and silence. Across four thematic sessions, Sensemaking, Hearing, Seeing, and Extrasensory, presenters invited the audience to explore the full range of human perception.

Michael R. Sheehy – Director of Research, Contemplative Sciences Center (UVA)

Dr. Michael R. Sheehy delivered the opening keynote, Contemplative Technologies of Human Sensemaking. As Director of Research at the CSC and Research Associate Professor of Religious Studies, Sheehy bridges lived contemplative traditions, especially Tibetan Buddhism, with modern scientific inquiry. His work spans lucid dreaming, Dzogchen meditation, cognitive illusion, and the cultural ecologies of contemplative practice.

Old Dominion University – Mindfulness and Data Class Participants

The Mindfulness and Data class at Old Dominion University examines how contemplative practices intersect with data-driven ways of understanding the world. Under the guidance of Dr. Nicole Willock, students learn to connect inner awareness with analytical thinking by exploring how attention, emotion, and perception shape the interpretation of information. The course blends reflection, discussion, and hands-on engagement with technologies such as physiological sensors, EEG devices, and digital tracking tools. Through this interdisciplinary approach, students gain insight into how mindfulness can enhance critical thinking, empathy, and more ethical, human-centered uses of data.

As part of their learning journey, students from the class attended the Contemplative Science and Art Conference, where they experienced firsthand how contemplative practice, cutting-edge technology, and creative expression come together. By interacting with tools such as heart-rate monitors, EEG headsets, eye-tracking systems, and immersive art installations, students were able to connect classroom concepts with real-world applications and deepen their understanding of how science can illuminate the inner dimensions of human experience.




Dr. Nicole Willock is a Professor at Old Dominion University whose teaching bridges mindfulness, religion, and cultural studies. She leads the Mindfulness and Data class and guided her students to the symposium to experience how science, art, and contemplative practice converge.


Lawrence Obiuwevwi is a doctoral student in Computer Science at Old Dominion University. His research focuses on emotion sensing, physiological signals, and spectrum analysis. He supported the learning experience by helping interpret technologies used at the symposium such as EEG, heart-rate sensors, and eye-tracking systems.


Cora Morgan is a Communication major whose interests center on storytelling, culture, and mindfulness. The symposium expanded her view of how contemplative science connects inner awareness with social expression.


Alexis R. Morel is a Psychology major interested in emotional balance and empathy. Her participation deepened her understanding of how the mind and body interact through attention and awareness.


Araceli Gordus Huizar is majoring in Women’s and Gender Studies with minors in Spanish and Media Studies. She explores identity, culture, and creativity, and found the symposium rich with interdisciplinary insight.


Dabre Ali is an undergraduate with a growing interest in human-centered technology. The symposium exposed him to tools that illuminate invisible dimensions of human experience, emotion, rhythm, and inner stillness.

Session I – Sensemaking

Opening the symposium, this session explored how human perception, art, and science coalesce into new modes of understanding. Led by eco-artist Wolfgang Buttress, philosopher Jelena Markovic, Buddhist studies scholar James Gentry, and techno-artist David Glowacki, the session wove ecological awareness, embodied knowledge, and the aesthetics of complexity.

Moderated by Devin Zuckerman, the conversation grounded the symposium’s theme: that sensemaking is both analytical and aesthetic, bridging data and devotion, science and spirituality.

Wolfgang Buttress – Eco-Artist, United Kingdom


Dr. Wolfgang Buttress brought an artistic and ecological dimension to Session I: Sensemaking, illuminating how sound, light, and structure can translate the intelligence of the natural world into human experience. A celebrated British sculptor and installation artist, Buttress is internationally known for creating large-scale, data-driven works that merge art, science, and environment. His most renowned installation, The Hive, originally commissioned for the UK Pavilion at Expo 2015 and now a permanent piece at Kew Gardens, uses live sensor data from a real beehive to generate light and sound, allowing viewers to feel the hum of a living colony.

His recent UVA installation, NINFEO, continued this exploration by immersing participants in a responsive landscape of light and resonance. Through collaborations with scientists, architects, and musicians, Buttress transforms empirical data into sensorial art, revealing the hidden rhythms of ecosystems and reminding audiences that perception itself is an act of ecological relationship. His work stands as a meditation on interconnection, an artistic invitation to listen to the world’s subtle frequencies and rediscover the harmony between human sensemaking and the planet’s living pulse.

Jelena Marković – Philosopher, Université Grenoble Alpes, France


Philosopher Dr. Jelena Marković offered a deeply introspective counterpoint to the artistic and scientific discussions, inviting participants to reflect on how thought itself becomes embodied. A scholar whose work bridges philosophy of mind, cognitive science, and affective experience, Marković explores how attention, emotion, and grief transform the self and shape our perception of the world.

Currently a post-doctoral fellow at Université Grenoble Alpes and a member of the Centre for Philosophy of Memory, she examines transformative experiences, moments such as loss or wonder that reorganize our sense of being, and how affect biases attention and meaning-making. Her research also extends into performance and art-based philosophy, using creative forms to investigate cognition and embodiment. At the symposium, her contribution grounded the dialogue in phenomenology and emotional depth, revealing that sensemaking is not only a cognitive act but a lived process through which feeling, memory, and awareness co-construct reality.

James D. Gentry – Buddhist Studies Scholar, Stanford University


Dr. James D. Gentry offered a historical and contemplative lens on how meaning emerges through embodied ritual and sensory practice. An Assistant Professor of Religious Studies at Stanford University and a leading scholar of Tibetan Buddhism, Gentry studies the material, ritual, and visual cultures that shape Buddhist experience. His acclaimed book, Power Objects in Tibetan Buddhism: The Life, Writings, and Legacy of Sokdokpa Lodrö Gyeltsen, explores how objects such as relics, amulets, and ritual implements become vehicles of perception and transformation.

His broader research investigates how sound, sight, and touch function as technologies of enlightenment within Himalayan traditions, and how these sensory frameworks can dialogue with contemporary understandings of consciousness and materiality. At the symposium, Gentry’s reflections bridged ancient contemplative knowledge and modern philosophical inquiry, reminding the audience that sensemaking, whether through ritual or research, is always an embodied and relational act of seeing, hearing, and touching the sacred in everyday life.

David Glowacki – Techno-Artist & Scientist, Intangible Realities Laboratory (Spain)


Dr. David Glowacki expanded the boundaries of perception by merging physics, philosophy, and mystical imagination. Founder of the Intangible Realities Laboratory (IRL), Glowacki is a scientist-artist whose work transforms data into immersive, contemplative experience. With a Ph.D. in molecular physics and a background spanning chemistry, literature, and philosophy, he creates multi-sensory VR installations that invite audiences to feel energy, form, and consciousness as living presences.

At the symposium, Glowacki spoke about his ongoing exploration of Tara, the Buddhist embodiment of compassion and luminous awareness, and how archetypes reveal the human capacity to perceive interconnection beyond the material. Through projects like Isness, which allows participants to merge as glowing fields of light in shared VR space, Glowacki bridges scientific and contemplative traditions, where atoms and awareness coexist in the same luminous field.

Session II – Hearing: Sound & Silence


Sound became the medium through which participants listened to the world anew. JoVia Armstrong drew on her expertise as a percussionist to show how rhythm and reverb shape emotion and meaning. Patrick Finan, a clinical psychologist, described how sonic vibration and music therapy alleviate pain and restore balance to the nervous system. Kythe Heller blended poetry, performance, and theology, while Adam Lobel invited stillness as the acoustic of awareness.

Moderated by sound artist Matthew Burtner, the panel revealed that hearing, whether through silence or resonance, is both a physical and contemplative act, tuning the self to the wider harmonics of life.

JoVia Armstrong – Percussionist, Composer & Assistant Professor of Music, University of Virginia


Dr. JoVia Armstrong transformed the symposium space into a living instrument, reminding us that sound is not merely heard, it is felt. A percussionist, composer, and sound artist from Detroit, Armstrong blends Afro-diasporic rhythmic traditions with modern electronic experimentation to explore the emotional and spatial dimensions of listening.

At the symposium, she spoke about the importance of reverb in musical composition, not just as an acoustic effect but as a metaphor for resonance, echo, and memory. In her words, reverb gives sound a body; it situates the listener within space and time, allowing emotion to linger like breath in a room.

Currently an Assistant Professor of Music at the University of Virginia, Armstrong holds a Ph.D. in Integrated Composition, Improvisation, and Technology from UC Irvine. Her performance ensemble, Eunoia Society, experiments with drones, loops, and multichannel environments to create immersive sonic meditations.

Through her work, she bridges rhythm and reflection, tradition and technology, inviting audiences to experience sound as a contemplative process of becoming aware of one’s own presence in the auditory world.

Patrick H. Finan – Clinical Pain Psychologist & Professor of Anesthesiology, University of Virginia


Dr. Patrick Finan invited listeners to consider sound as medicine, a bridge between psychology, physiology, and the inner landscape of pain. A clinical pain psychologist and Harold Carron Professor of Anesthesiology at the University of Virginia, Finan’s work explores how sleep, emotion, and reward systems shape the experience of chronic pain.

At the symposium, he discussed the healing potential of sound and music, describing how rhythm and resonance can modulate emotional states and neural activity, providing moments of relief and restoration.

Drawing from research in his lab, including fMRI, sensory testing, and ecological momentary assessment, Finan explained that music’s ability to soothe pain is grounded in psychophysiological synchronization: the body literally entrains to patterns of calm.

His talk framed listening as an act of empathy and self-regulation, suggesting that the future of pain management may rely not only on medication but on cultivating deep, attentive sonic relationships with one’s own body.

Kythe Heller – Interdisciplinary Artist, Poet & Scholar, Harvard University


Dr. Kythe Heller wove poetry, philosophy, and performance into a meditation on the spiritual and sensory dimensions of listening. An interdisciplinary artist and Doctor of Theology from Harvard University, Heller’s work bridges creative practice and contemplative inquiry, exploring how sound, silence, and language become conduits for transformation.

At the symposium, she reflected on the voice as a vehicle of revelation, tracing how resonance and vibration carry meaning beyond words, invoking both the mystical and the material. As founder and director of Vision Lab at Harvard Divinity School, Heller convenes artists, scientists, and contemplatives to explore imagination as a force that reshapes consciousness.

Her poetry collection Firebird and multimedia performances investigate illumination, grief, and transfiguration. Through her presence, Heller invited participants to experience silence not as emptiness but as a vibrant medium, a threshold where self and world meet in shared reverberation.

Adam Lobel – Contemplative Teacher, Ecophilosopher & Founder of 4F Regeneration


Dr. Adam Lobel invited participants to listen beyond the human, to tune into the soundscape of the Earth itself. A contemplative teacher, ecophilosopher, and founder of 4F Regeneration, Lobel works at the intersection of Buddhist practice, ecological awareness, and collective transformation.

He spoke about sound as a bridge between consciousness and the living world, encouraging listeners to recognize hearing as both an ethical and ecological act. Drawing on his background in Buddhist philosophy and decades of contemplative teaching, Lobel suggested that awareness practices can heal the rift between human perception and planetary systems.

His teachings blend meditation, ritual, and ecological activism, creating spaces for embodied reflection. In Charlottesville, his words rang like a dharma bell, reminding participants that every sound, from wind to breath to silence itself, is a pulse in the shared heartbeat of life.

Kelsey Johnson – Astronomer & Professor of Astronomy, University of Virginia


Dr. Kelsey Johnson guided participants to look outward, and inward, through the lens of the night sky. A renowned astronomer at the University of Virginia and founder of Dark Skies, Bright Kids!, she studies the birth of galaxies and the formation of stars hidden within cosmic dust.

Johnson spoke about the loss of the natural night sky due to light pollution, reminding the audience that the glow of cities is dimming humanity’s oldest connection to the cosmos. For millennia, humans oriented their stories, rhythms, and sense of humility by the stars.

She argued that regaining dark skies is both an ecological and contemplative act, an invitation to rediscover our place in the vastness of space. When we lose the stars, she reflected, we risk losing sight of our own smallness and wonder. Her talk blended scientific insight with existential reverence, making the night sky a mirror for meaning and fragility.

Andrew Holecek – Contemplative Teacher, Author & Scholar of Dream Yoga


Dr. Andrew Holecek invited participants to journey beyond ordinary perception, to explore the “luminous darkness” of the mind itself. A renowned teacher of Tibetan Buddhist meditation and lucid dreaming, Holecek has spent decades studying how awareness continues through waking, dreaming, and dying.

He spoke about the transformative power of darkness, drawing on Tibetan dark retreat practices where total darkness reveals the inner light of consciousness. He described how the night, both literal and psychological, can become a field for insight rather than fear, showing that seeing is not only optical but spiritual.

Author of works including Dream Yoga: Illuminating Your Life Through Lucid Dreaming and the Tibetan Yogas of Sleep, Holecek emphasized that cultivating awareness in darkness dissolves boundaries between seer and seen. His reflections reminded the audience that light and darkness are partners in perception, and that embracing the unseen helps us see more clearly within.

Jesse Fleming – Media Artist & Assistant Professor of Emerging Media Arts, University of Nebraska–Lincoln


Dr. Jesse Fleming explored how technology, light, and presence shape perception. A filmmaker, media artist, and Assistant Professor at the University of Nebraska–Lincoln’s Carson Center for Emerging Media Arts, Fleming bridges consciousness studies, design, and immersive art in his work.

Fleming spoke about how mediated seeing can become a contemplative practice, reflecting on how screens, reflections, and moving images influence attention and empathy. Drawing on projects such as The Shared Individual and Nuclei, he demonstrated how immersive environments can expand awareness rather than fragment it. His research asks what happens when the media stops entertaining and starts awakening, when pixels, photons, and human perception synchronize to reveal the subtle boundary between observer and observed. Fleming reframed “seeing” as participation in a living network of light, bodies, and consciousness.

Session IV – Extrasensory

The final session expanded perception beyond the five senses, merging science, spirituality, and technology. Mikey Siegel introduced bio-sensor experiences that transform heartbeats and breath into shared light and sound fields. Eve Ekman illuminated the emotional body as a sensory organ guiding compassion and resilience. Michael Lifshitz traced the neuroscience of hypnosis, meditation, and psychedelics to show how consciousness continually remakes reality. Oludamini Ogunnaike offered a luminous account of Sufi and Islamic practices in West Africa, where chant, rhythm, and beauty serve as portals to divine knowledge. Moderated by Casey Forgues, the discussion synthesized art, science, and spirituality into one realization: sensemaking begins where the measurable meets the mystical.

Mikey Siegel – Technologist & “Consciousness Hacker”, Stanford University


Dr. Mikey Siegel brought frontier thinking to the table, showing how technology and collective physiology become tools of sense-making. A former robotics engineer (MIT Media Lab) now based at Stanford University and working with his initiative BioFluent Technologies, Siegel designs immersive systems (such as his renowned platform GroupFlow) that measure participants’ heart rate and breath and convert these into shared audio-visual experiences.

At the symposium he spoke about how sense-making isn’t just a solo act of cognition, but a field phenomenon, a resonant space where bodies, devices, sounds and attention interweave. He urged us to ask not only what our technologies do, but who they enable us to become.

Eve Ekman – Contemplative Social Scientist & Emotion Researcher, University of California, Berkeley


Dr. Eve Ekman turned attention inward, guiding participants to consider emotion itself as a sensory organ, a compass for meaning and human connection. A contemplative social scientist and Senior Fellow at the Greater Good Science Center at UC Berkeley, Ekman’s research explores emotional awareness, empathy, and resilience.

She spoke about how emotions shape perception, emphasizing that sensemaking is not limited to intellect or sensation but is deeply informed by the body’s internal signals. Drawing from her work on the Atlas of Emotions with the Dalai Lama and her Cultivating Emotional Balance program, she illustrated how mindfulness and compassion training help individuals transform reactivity into clarity.

Her reflections revealed that emotional literacy is a contemplative technology, one that allows people to feel more deeply, connect more authentically, and perceive the subtle vibrations of the human heart. Ekman reminded participants that the future of awareness depends not only on sharper tools of observation but on gentler capacities for feeling.

Michael Lifshitz – Neuroscientist & Assistant Professor of Psychiatry, McGill University


Dr. Michael Lifshitz bridged neuroscience, anthropology, and contemplative practice to examine how the human mind constructs, and transcends, ordinary perception. An Assistant Professor of Psychiatry at McGill University and Director of the Psychedelics and Contemplation Lab, Lifshitz investigates how meditation, hypnosis, and psychedelics alter consciousness and the sense of self.

At the symposium, he spoke about how non-ordinary states of awareness reshape the boundaries of the senses, describing them as experiments in human possibility. Drawing from neuroimaging and ethnographic research, he explored how spiritual and contemplative experiences can transform the brain’s perception of agency and embodiment.

His talk emphasized that what we call “extrasensory” may not be supernatural at all, but an expanded form of sensemaking that includes the body, culture, and consciousness in continuous dialogue. Through this lens, Lifshitz offered a scientific and deeply human reminder: to understand the mind, we must study not just what it perceives, but how it learns to see itself.

Oludamini Ogunnaike – Associate Professor of African Religious Thought & Democracy, University of Virginia


Dr. Oludamini Ogunnaike explored how sensory experience and spiritual knowledge fuse in the Sufi and Islamic traditions of West Africa, inviting participants to listen for the hidden frequencies of sacred sound, poetry, and devotion. At the University of Virginia, he teaches African religious traditions, Islamic philosophy and art, and the intellectual history of Sufism and Ifá.

His research examines the aesthetic, philosophical, and sensory dimensions of West African Islamic and indigenous traditions, particularly how devotional recitations, mystical poetry, and ritual practices function as forms of sense-making. At the symposium he described how the chants of the Tijāniyya order, the rhythms of madīḥ poetry, and the oracular Ifá tradition reveal the senses as conduits of knowledge, not just passive receptors.

Through works such as Deep Knowledge (2020) and Poetry in Praise of Prophetic Perfection (2020), he reframes sense-making as a poetic, embodied, and spiritual act, one in which the boundaries between listener, liturgy, and divine presence dissolve.

Conclusion – Integrating the Senses, Integrating the Self

The Contemplative Sciences Symposium revealed the power of interdisciplinary inquiry, where artists, scientists, philosophers, physicians, and contemplative practitioners came together to examine how humans perceive, interpret, and make meaning. Across sessions on hearing, seeing, extrasensory awareness, and the nature of sensemaking itself, the symposium showed that understanding the world requires more than intellect alone; it requires the full participation of the senses, the body, and the imagination. This gathering demonstrated how deeply interconnected the contemplative, scientific, and creative disciplines truly are, and how each contributes a vital perspective to the study of awareness and human experience.

A significant part of this success is also reflected in the structure and philosophy of Dr. Willock’s Mindfulness and Data course, which seamlessly integrates contemplative practice with physiological measurement, emotional awareness, and data-driven inquiry. In her classroom, students learn to read heart rate, breath, and affective signals not merely as metrics, but as reflections of lived, embodied processes. By guiding students to unite mindfulness with analytic rigor, Dr. Willock creates a learning environment in which theory becomes experience and data becomes self-knowledge. Her approach shows that education can be contemplative, scientific, and personal all at once, inviting students to think critically, feel deeply, and cultivate attention as a tool for understanding.

The participation of her students in the Contemplative Sciences Symposium further exemplifies this integration. Engaging directly with leading scholars, artists, and contemplative researchers allowed them to witness interdisciplinary collaboration in action and to situate their own learning within a broader landscape of inquiry. Through both classroom practice and conference immersion, students experienced firsthand how mindfulness, physiology, art, culture, and neuroscience converge to expand human understanding. This synergy, between curriculum and community, between inner practice and academic exploration, highlights the transformative potential of contemplative education and the essential role it plays in shaping thoughtful, reflective, and compassionate learners.

Lawrence’s Special Thanks to His Supervisors:

I would like to express my sincere gratitude to my supervisors, Dr. Erika Frydenlund, Research Associate Professor at Old Dominion University and a member of the Storymodelers Lab; Dr. Krzysztof J. Rechowicz, Assistant Professor at Old Dominion University and a member of the Storymodelers Lab and the Virginia Digital Maritime Center (VDMC); and Dr. Sampath Jayarathna, Associate Professor at Old Dominion University and a member of the Web Science and Digital Libraries Research Group and the NIRDSLab, for the continued opportunities they have afforded me to be part of impactful research and meaningful academic endeavors. Their guidance, support, and mentorship have been invaluable to my growth.

About the Author:
Lawrence Obiuwevwi is a Ph.D. student in the Department of Computer Science, a graduate research assistant with The Center for Secure and Intelligent Critical Systems (CSICS), and a proud student member of the Web Science and Digital Libraries (WS-DL) Research Group and NIRDSLab at Old Dominion University.


Lawrence Obiuwevwi
Graduate Research Assistant
Virginia Modeling, Analysis, & Simulation Center
Department of Computer Science
Old Dominion University, Norfolk, VA 23529
Email: lobiu001@odu.edu
Web: lawobiu.com

 


Digital Storytelling in Practice: A New Session Format for the DLF Forum / Digital Library Federation

Digital Storytelling in Practice: A New Session Format for the DLF Forum

Team DLF is introducing a new session format in the Call for Proposals for the 2026 Virtual DLF Forum: Digital Storytelling Presentations. This format is designed to deepen collaboration, center relationships, and create space for shared learning across roles, institutions, and communities.

The new 40-minute Digital Storytelling (DS) Presentation format is designed as an interactive session that highlights digital storytelling projects developed through collaborative partnerships. These DS Presentations center on installation-inspired projects, such as exhibits, platforms, or collections, that offer immersive, experiential engagement for participants. We encourage presenters to incorporate demonstrations whenever possible to help attendees engage more fully with the tools, platforms, or storytelling approaches being shared. 

Rather than focusing solely on a single presenter or project overview, the format should feature a minimum of two (2) presenters and no more than three (3) presenters. For example, a digital librarian or archivist might pair with a community partner, student, artist, or scholar whose work is represented in, or inspired by, the digital project. Together, presenters will explore not only the final product but also the collaborative process, relationships, and ideas that shaped the work, to show attendees how this work might be imagined, adapted, and implemented within their own institutions.

Presentations will emphasize the broader significance of digitization, why access matters, how collections are used, and the impact beyond the institution. Examples of proposals might include:

Archive to Art: A digital archivist and artist show how digitized protest materials inspired a multimedia installation, emphasizing workflow and creative impact. Example: Women’s March on Washington and Atlanta March for Social Justice and Women Collection (January 21, 2017), Women and Gender Collections, Georgia State University Library Digital Collections

Community Memory in Motion: A librarian and historian built a neighborhood digital archive through collaboration, now used in schools and local programs.  Example: Folded Map Project 

Teaching with Data: A librarian and student used a digitized collection to create a data visualization project, linking the technical process to student research.  Example: Students Turn College Fight Songs into Award-Winning Data Visualization | News | Northwestern Engineering

This format is included in the 2026 Call for Proposals, and we look forward to seeing how presenters bring collaborative digital storytelling to the Forum. If you’d like to talk through your idea or learn more about the format, please email us at forum@diglib.org.

The post Digital Storytelling in Practice: A New Session Format for the DLF Forum appeared first on DLF.

Call for Proposals: 2026 Virtual DLF Forum / Digital Library Federation

CLIR’s Digital Library Federation (DLF) invites proposals for the virtual 2026 DLF Forum, to be held online October 14-15, 2026. Learn more about who we are and who attends the DLF Forum.

Please note: This Call for Proposals (CFP) is for the October virtual event. There is no in-person event in 2026. We are committed to making this online conference accessible to all through consistent use of captioning in all sessions and the provision of accessible presentation materials, screen-reader-friendly documents, and clear communication of accommodation options. For accessibility-related questions or concerns, please contact forum@diglib.org.

The submission deadline is Monday, May 11, at 11:59 pm ET.

We invite proposals for live virtual presentations on all topics related to digital libraries, encompassing case studies, “show and fails,” practical applications, methods, projects, ethics, research, and learning in any area, such as: 

  • Collections & Stewardship: Digitization, digital preservation, digital asset management systems (DAMS), born-digital materials, and format conversions.
  • Community & Advocacy: Partnerships, community archives, outreach, and professional advocacy.
  • Digital Research & Pedagogy: Digital humanities, scholarship, music, art, creative expression, and digital pedagogy.
  • Ethics, Justice, & Society: Race and technology, accessibility, AI/Machine Learning, copyright, and environmental sustainability. 
  • Infrastructure: Platforms, workflows, project management, and assessment.

This list of content topics is intended as a starting point and is not exhaustive; we welcome additional ideas and approaches that align with the spirit of the Forum.

Session Formats

All sessions will take place live in a meeting-style or webinar-style Zoom room, and breakout rooms will be available upon request for all formats except lightning talks. Sessions are invited in the following lengths and formats:

  • 90-minute Workshops: Guided training sessions on a specific tool, technique, workflow, or concept. Up to five (5) facilitators are allowed per submission.
  • 50-minute Working Sessions: Open sessions for community organizers, creative problem solvers, and existing or prospective DLF working groups to begin or get feedback on in-progress projects, collaborate on addressing challenges, and discuss thought-provoking questions. Up to five (5) facilitators are allowed per submission.
  • 40-minute Panels: Discussions of up to five (5) presenters on a unified topic, with an emphasis on community discussion. Proposals with diverse and inclusive speaker involvement will be favored by the committee. Panels will be slotted into 50-minute sessions, leaving a minimum of 10 minutes for Q&A and discussion at the end of each session.
  • 40-minute Presentations: A single topic or project presented by up to three (3) presenters. Presentations will be slotted into 50-minute sessions, leaving a minimum of 10 minutes for Q&A and discussion at the end of each session.
  • NEW! 40-minute Digital Storytelling (DS) Presentations: Interactive sessions highlighting digital storytelling projects—such as exhibits, platforms, or collections—developed through collaborative partnerships that offer immersive, experiential engagement. They should feature a minimum of two (2) and no more than three (3) presenters in conversation, such as a digital librarian or archivist paired with a community partner, student, artist, or scholar whose work is represented in, or inspired by, the digital project. Demos are encouraged. DS Presentations will be slotted into 50-minute sessions, leaving a minimum of 10 minutes for Q&A and discussion at the end of each session. Read more about this new format here.
  • 5-minute Lightning Talks: High-energy talks on any topic held in succession in a single session, presented by up to two (2) presenters. There is no formal Q&A for lightning talks, but we encourage presenters to share contact information with attendees for follow-up conversations after the session.

Proposal Requirements

  • Proposal title and submission format.
  • Author information: full names, organizational affiliations, and email addresses for all presenters and authors.
  • Brief abstract – limited to 50 words. This abstract will appear in Community Voting and in the conference program.
  • Full proposal – limited to 250 words for all formats except for panels and workshops, which are limited to 500 words. This full proposal will only be seen by reviewers and the Program Committee.
  • Five keywords for your proposal.
  • Breakout room request – there will be an option in the submission form to indicate a request for breakout rooms.
  • Workshops Only: Learning objectives (limited to 50 words; brief, clear statements about what attendees will be able to do as a result of taking your proposed workshop); technology needed; participant proficiency level; how your workshop will be interactive. 
  • Session materials (notes, documents, slides, handouts, etc.) will be shared under a CC BY 4.0 license, which allows for sharing and adaptation of content with appropriate credit and an indication of any changes made. We will continue to invite presenters to deposit these materials in the Zenodo.org open-access repository, where the DLF community archives DLF Forum materials under this license. Presenters must agree in the submission form to share their materials under these terms.

Submissions and Evaluation

Based on community feedback and the work of our Program Committee, we welcome submissions geared toward a practitioner audience that:

  • Clearly engage with DLF’s mission;
  • Activate and inspire participants to think, make, and do;
  • Engage people from different backgrounds, experience levels, and disciplines; and/or
  • Include clear takeaways that participants can integrate and implement in their own work.

All submissions will be peer-reviewed. Reviewers will use this rubric to rate each proposal based on the values listed above. They may also recommend the proposal for a different format. Broader DLF community input will also be solicited through an open community voting process, which will help inform the Program Committee’s final decisions.

We especially welcome proposals from individuals who bring diverse professional and life experiences to the conference, including those from underrepresented or historically excluded racial, ethnic, or religious backgrounds, immigrants, veterans, those with disabilities, and people of all sexual orientations or gender identities. As we have done in the past, the Program Committee will prioritize submissions from individuals who identify as Black, Indigenous, and People of Color (BIPOC), individuals working at Historically Black Colleges and Universities (HBCUs), Tribal Colleges and Universities (TCUs), Hispanic Serving Institutions (HSIs), Minority Serving Institutions (MSIs), and other libraries, archives, museums, and organizations that center BIPOC to promote inclusivity to the greatest extent possible. Self-identification options will be provided in the proposal submission form, but are not required.

Schedule

  • Call for Proposals opens: Tuesday, April 14
  • Call for Proposals closes: Monday, May 11
  • Notification of final decisions: Week of June 11
  • Program released: Week of June 23

Read more about the DLF, who attends the Forum, and find co-presenters. 

Please feel free to reach out with any questions: forum@diglib.org

FAQ

What is the DLF Forum? 

DLF programs stretch year-round, but we are perhaps best known for our signature event, the DLF Forum.

The DLF Forum welcomes digital library, archives, and museum practitioners from member institutions and beyond—for whom it serves as a meeting place, marketplace, and congress. As a meeting place, the DLF Forum provides an opportunity for our working groups and community members to conduct their business and present their work. As a marketplace, the Forum provides an opportunity for community members to share experiences and practices with one another and support a broader level of information sharing among professional staff. As a congress, the Forum provides an opportunity for the DLF to continually review and assess its programs and its progress with input from the community at large.

Here, the DLF community celebrates successes, learns from mistakes, sets grassroots agendas, and organizes for action. The Forum is governed by the DLF’s Code of Conduct. All Forum events, in-person and online, and all community participants are expected to uphold a harassment-free, inclusive environment. The Code prohibits bullying, discrimination, and harmful behavior of any kind, requires respectful, constructive engagement and adherence to established safety protocols, and includes options for reporting harassment.

Generally, who attends the DLF Forum? 

DLF Forum attendees are a multidisciplinary, cross-sector community of people who work in the digital library, museum, archives, and cultural heritage fields, from librarians, project managers, curators, technologists, and developers to administrators and service providers. The Forum welcomes practitioners from academic, art and cultural heritage, and non-profit organizations, government agencies, and more. They come from all over the country and world and represent all levels of professional experience. Forum attendees are inquisitive, engaged, and action-oriented with a focus on learning new skills and solving problems together. When offered in a virtual format, the DLF Forum may reach a wider and larger audience than in-person events.

Who should submit a proposal? 

We encourage proposals from:

  • DLF members and non-members;
  • Regulars and newcomers;
  • Digital library practitioners from all sectors (higher education, museums and cultural heritage, public libraries, archives, etc.) and those in adjacent fields such as institutional research and educational technology;
  • Students, early- and mid-career professionals, and senior staff alike.

Can you help me find a co-presenter? 

Looking for co-presenters on a particular topic? Try using our 2026 DLF Forum Unofficial Program Sessions and Connections spreadsheet for connecting with other prospective presenters. Note that the Program Committee and CLIR+DLF Staff do not monitor the document and it is not part of the official submission process. 

How is my proposal evaluated? 

All submissions will be peer reviewed. Reviewers will use this rubric to rate each proposal. They may also recommend the proposal for a different format. Broader DLF community input will also be solicited through an open community voting process, which will inform the Program Committee’s final decisions. 

What makes a successful proposal? Can I see successful proposals from previous years? 

Based on community feedback and the work of our Program Committee, we welcome submissions geared toward a practitioner audience that:

  • Clearly engage with DLF’s mission;
  • Activate and inspire participants to think, make, and do;
  • Engage people from different backgrounds, experience levels, and disciplines; and/or
  • Include clear takeaways that participants can integrate and implement in their own work.

We strongly encourage prospective presenters to review our rubric and past DLF Forum programs (from 2025 and 2024 in-person & virtual) to understand what makes a successful DLF Forum proposal. Strong proposals will demonstrate how presenters intend to design their proposed sessions to be interactive, inclusive, and action-oriented, and will also outline clear learning objectives. We especially welcome proposals from individuals who bring diverse professional and life experiences to the conference, including those from underrepresented or historically excluded racial, ethnic, or religious backgrounds, immigrants, veterans, those with disabilities, and people of all sexual orientations or gender identities. As we have done in the past, the Program Committee will prioritize submissions from individuals who identify as Black, Indigenous, and People of Color (BIPOC), individuals working at Historically Black Colleges and Universities (HBCUs), Tribal Colleges and Universities (TCUs), and other libraries, archives, museums, and organizations that center BIPOC to promote inclusivity to the greatest extent possible. Self-identification options will be provided in the proposal submission form, but are not required.  

What is the author limit? What is the presenter limit? 

Each session type has a maximum number of presenters per submission:

  • 90-minute Workshops: Up to 5 facilitators
  • 50-minute Working Sessions: Up to 5 facilitators
  • 40-minute Panels: Up to 5 presenters
  • 40-minute Presentations: Up to 3 presenters
  • 40-minute Digital Storytelling Presentations: Up to 3 presenters
  • 5-minute Lightning Talks: Up to 2 presenters

There is no limit to the number of non-presenting authors listed on a proposal.

The post Call for Proposals: 2026 Virtual DLF Forum appeared first on DLF.

Come join the TinyCat 10th Birthday Hunt! / LibraryThing (Thingology)

We’re hosting a special TinyCat Birthday Treasure Hunt over on LibraryThing! We’ve got ten clues, one for each year. We’ve scattered a collection of birthday banners around the two sites, and it’s up to you to find them all.

  • Come brag about your clowder of tiny cats (and get hints) on Talk.
  • Decipher the clues and visit the corresponding pages in LibraryThing or TinyCat to find a banner. 
  • Each clue points to a specific page. Remember, some banners will be hidden in LibraryThing and some in TinyCat! 
  • You have until 11:59 pm EDT on Thursday, April 30th to find all the TinyCats.

Win prizes:

  • Any member who finds at least two birthday banners will be awarded a TinyCat banner badge.
  • Members who find all 10 birthday banners will be entered into a drawing for one of five sets of TinyCat and LibraryThing swag. We’ll announce winners at the end of the hunt.

P.S. Thanks to conceptDawg for the gray catbird illustration!

Weekly Bookmarks / Ed Summers

These are some things I’ve wandered across on the web this week.

🔖 AI Whistleblower: We Are Being Gaslit By AI Companies, They’re Hiding The Truth! - Karen Hao

The truth about Sam Altman. AI Critic Karen Hao reveals what 90 OpenAI employees told her.

Karen Hao is an AI expert, award-winning investigative journalist, and former reporter for The Wall Street Journal covering American and Chinese tech companies. She is also co-host of the podcast The Interface and freelances for publications like More Perfect Union and The Atlantic. Her latest book is the bestselling ‘EMPIRE OF AI: Inside The Reckless Race For Total Domination.’

🔖 Introduction to Compilers and Language Design

This is a free online textbook: you are welcome to access the chapter PDFs directly below. If you prefer to hold a real book, you can also purchase a hardcover or paperback below. The textbook and materials have been developed by Prof. Douglas Thain as part of the CSE 40243 compilers class at the University of Notre Dame. Join our mailing list to receive occasional announcements of new editions and other updates.

🔖 Inside Claude Code With Its Creator Boris Cherny

A somewhat bizarre interview with the creator of Claude Code, where he talks about the origins of the tool, and how its current development fits in with Anthropic’s business plans – which seem pretty vague other than taking over the world.

🔖 pi-mono contributing guide

Some open source projects that accept AI contributions are moving to a model where PRs need to reference an issue that an existing maintainer has marked approved with an lgtm comment. That comment triggers a GitHub Action that adds the user to the .github/APPROVED_CONTRIBUTORS file, so that when a PR from that user comes in, it isn't immediately closed.

https://newsletter.pragmaticengineer.com/p/mitchell-hashimoto

🔖 badlogic / pi-mono

AI agent toolkit: coding agent CLI, unified LLM API, TUI & web UI libraries, Slack bot, vLLM pods.

(apparently Claude Code was built with this?)

🔖 The Coral Bones

The Coral Bones is a tale of three women from different times in Earth’s history, each of whom has a special relationship with the Great Barrier Reef. Judith is the daughter of a 19th Century English sea captain and is desperate to study the natural world, just like the famous Mr. Darwin. Hana is a Japanese-Australian scientist from the present day, studying the dying reef. And Telma is a descendant of refugees in a near future Australia where most forms of animal life except humans are functionally extinct.

🔖 In Ascension

In Ascension is a 2023 novel by Martin MacInnes, published in the UK by Atlantic Books and in the US by Grove Atlantic. It is published or forthcoming in ten languages. The novel tells the story of Leigh, a young girl who grows up in the Netherlands amid the specter of climate change and eventually becomes a marine scientist exploring ocean trenches and investigating an anomaly at the edge of the Solar System.

🔖 Papers, Please: The toll of age verification laws on digital sex work

“The only point [of these laws] is to restrict access to content,” Riana Pfefferkorn, an attorney and policy fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence, told me. “I think that the ubiquity of [age verification] lately has warped people’s views of what online safety means, so that now everything is just like, ‘Why don’t we just do [age verification]? Won’t that fix it?’” she continued. She was alluding to the use of AI to generate adult content, as well as the trend of users on X requesting that the platform’s AI assistant, Grok, non-consensually undress photos of potentially underage girls. In response to outcry, X chose to paywall access to its AI tools.

🔖 Content Neutrality for Kids: Intermediate Scrutiny for Social Media Age-Verification Laws

The First Amendment imposes a high, but not insurmountable, hurdle for states to overcome in regulating minors’ social media use. By focusing on specific features that lead to harmful effects on minors, states can craft content-neutral laws that will merit only intermediate scrutiny. The solutions to the LinkedIn Problem proposed above — naming platforms directly under TikTok’s revival of the “special characteristics” standard or regulating specific harmful features without reference to content — are the two likeliest ways for states to have their laws upheld in court. Like California, states must be creative and flexible as they respond to a rapidly developing legal doctrine. If “[s]ocial media is a cancer on our society,” then seeking a constitutional cure is crucial even if current efforts “dwell only on the suffering of children.”

🔖 The machines are fine. I’m worried about us.

for someone who doesn’t yet have that intuition, the grunt work is the work. The boring parts and the important parts are tangled together in a way that you can’t separate in advance

🔖 On The Enshittification of Audre Lorde: “The Master’s Tools” in Tech

This is not an argument to reject the enshittification analysis. It is an argument to extend it. A decolonial critique of technology is not simply “the internet was always bad.” It is rather: the conditions that made the internet harmful to specific communities were never peripheral to its design; they’re an integral part of it. And any politics that aims to restore something like the pre-enshittification internet without reckoning with those conditions is doomed to reproduce them.

2026-04-11: PostGuard: The Hackathon-Winning AI That Stops Career-Ending Posts / Web Science and Digital Libraries (WS-DL) Group at Old Dominion University


From March 23rd to the 31st, the Computer Science Graduate Society (CSGS) at  Old Dominion University hosted their Spring 2026 Hackathon. The competition brought together teams across master's and PhD categories to tackle different research topics, mainly in artificial intelligence (AI). Our team, the Attention Bros (Sandeep Kalari and Dominik Soós), chose to compete in Track 6: Privacy-Preserving AI, alongside four other great teams in the PhD category.

We are thrilled to announce that we won first place in the PhD category with our project, PostGuard!

This was a fast-paced challenge completed over a single week. Despite the time constraints, we successfully engineered a novel architecture that balances AI utility with user privacy. In this blog post, we provide an overview of the problem we tackled (the Privacy Paradox), existing methods, our system architecture, and the mathematically proven findings that secured our victory.

For more details, you can explore our GitHub repository containing the code, detailed report, and the dataset used in our analysis.


Online Comments Have Lasting Consequences

Social media platforms serve as both a public forum and a digital newsstand. We started by looking at the problem: online comments can cause irreversible, real-world damage. In the heat of the moment, you post something, and before moderation can catch it, someone takes a screenshot. People lose their jobs over 280 characters posted online. 

Current moderation systems are 100% reactive: they only act after the fact. We wanted to build a preventative system that warns you before you hit send.

The Privacy Paradox and Existing Methods

To give users a specific and actionable warning about how a post violates their employer's policies, the system needs to know their personal context, like their job role and employer. However, collecting and processing that data creates a massive surveillance and privacy risk.

When we looked at how existing research handles this problem, we found a significant gap. To prevent someone from posting something that will ruin their career, you have three standard options, all of which fail:

  • Content moderation is reactive. It doesn't warn the user; it just punishes them after the fact, and it's also not user-specific. 
  • Differential Privacy works great for aggregate data but is useless for individual, consequence-based warnings.
  • Text Anonymization frameworks like RUPTA are great at removing personally identifiable information (PII) from text, but they strip away the exact context the LLM needs to generate a personalized warning. 
We needed a system that is preventative (acting pre-posting), user-specific, and privacy-aware. That's why we built PostGuard.

Approach | Reactive? | Pre-posting? | User-specific? | Privacy-aware?
Content Moderation | Yes | No | No | —
Differential Privacy | — | — | No | Yes
Text Anonymization (RUPTA) | — | — | ➖ Partial | Yes
PostGuard (Ours) | No | Yes | Yes | Yes

Building a Dataset

To rigorously test our system, we couldn't just use standard benchmarks, so we built a dataset grounded in reality. We spent the first phase of the hackathon collecting a custom dataset:
  1. 15 Real Incident Cases: We pulled verified, real-world firings that were covered by major outlets. 
  2. 20-Article Vector Corpus: We embedded 15 signal articles and intentionally injected 5 noise articles to rigorously test our retrieval precision.
  3. Synthetic Personas: We generated 15 synthetic users with escalating post histories spanning from 2024 to 2026, mapping them one-to-one with real corporate policies. 

The Architecture

PostGuard intercepts risky posts before the user hits submit. To do this without leaking the user's data to the web, we built a four-stage pipeline:
  1. Risk Extraction: We use a lightweight LLM (Gemini Flash) to quickly extract risk factors from the draft and generate a targeted search query.
  2. RAG layer: We use an embedding model to search our custom vector database for relevant corporate policies and real-world firing precedents.
  3. Warning Generation: A secondary LLM (Gemini Pro) synthesizes the retrieved precedents and generates a customized, user-facing warning.
  4. RUPTA Evaluation: Finally, we run a dual-evaluation loop. A P-Evaluator scores the re-identification risk of the data we just processed, and a U-Evaluator scores the utility of the generated warning.
Figure 1. System architecture detailing the four stages of evaluation: (1) Initial comment ingestion, (2) Contextual policy retrieval via RAG, (3) Severity classification, and (4) Generation of the intent-preserving warning
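
The four stages above can be sketched end-to-end. This is a minimal stand-in, not our actual code: keyword matching and string templates substitute for the Gemini Flash/Pro calls and the embedding search, and all function names are illustrative.

```python
def extract_risk(draft):
    """Stage 1 stand-in: pull risk keywords from the draft and build a
    search query. (The real system uses a lightweight LLM.)"""
    keywords = [w for w in ("confidential", "fired", "lawsuit") if w in draft.lower()]
    return " ".join(keywords)

def retrieve_precedents(query, corpus):
    """Stage 2 stand-in: keyword match instead of an embedding search
    over the policy/precedent vector database."""
    terms = query.split()
    return [doc for doc in corpus if any(t in doc.lower() for t in terms)]

def generate_warning(precedents):
    """Stage 3 stand-in: a template instead of a second LLM."""
    if not precedents:
        return None
    return f"Warning: {len(precedents)} similar post(s) led to real-world firings."

def postguard(draft, corpus):
    """Run the pipeline; stage 4 (the dual-evaluation loop) is omitted here."""
    query = extract_risk(draft)
    if not query:
        return None  # nothing risky detected; let the post through
    return generate_warning(retrieve_precedents(query, corpus))

corpus = ["Employee fired after leaking a confidential memo to the press"]
print(postguard("Check out this confidential slide deck lol", corpus))
# Warning: 1 similar post(s) led to real-world firings.
```

In the real pipeline each stand-in is replaced by an LLM or embedding-model call, but the data flow between the stages is the same.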

Three Privacy Modes

The core of our privacy-preserving approach is user control. The system operates in three modes that dictate what data is forwarded through the pipeline. 

Mode | Data Sent to System | Privacy | Utility
Anonymous | Comment text only; no role, no employer, no history | High — poster nearly unidentifiable | Lower — generic warnings
Contextual | Comment + platform + job role | Medium — role narrows the field | Medium — role-specific warnings
Full Profile | Comment + role + employer + recent history | Low — nearly identifiable | High — employer-policy-specific warnings
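
A mode, in other words, is just a whitelist of profile fields allowed to leave the device. A minimal sketch, with hypothetical field names of our own choosing:

```python
# Hypothetical per-mode field whitelist, following the mode table above.
MODE_FIELDS = {
    "anonymous": {"comment"},
    "contextual": {"comment", "platform", "job_role"},
    "full_profile": {"comment", "platform", "job_role", "employer", "recent_history"},
}

def filter_profile(profile, mode):
    """Drop every field the selected mode does not allow into the pipeline."""
    allowed = MODE_FIELDS[mode]
    return {k: v for k, v in profile.items() if k in allowed}

profile = {
    "comment": "My boss is clueless",
    "platform": "X",
    "job_role": "nurse",
    "employer": "General Hospital",
    "recent_history": ["..."],
}
print(sorted(filter_profile(profile, "contextual")))
# ['comment', 'job_role', 'platform']
```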

Evaluation Results

The evaluation of our moderation and warning system demonstrates a highly effective balance between accuracy, user intent preservation, and privacy. To understand the system's true performance, we analyzed it across four core dimensions: Privacy vs. Utility, RAG Retrieval Accuracy, Severity Classification, and Warning Quality. Here is a breakdown of the metrics we used and why they matter.

Privacy vs. Utility

Protecting user identity is just as important as providing accurate warnings. By calculating the Relative Utility Threat (RUT) score, adapted from Soonseok Kim's 2025 MDPI Electronics paper, "Quantitative Metrics for Balancing Privacy and Utility in Pseudonymized Big Data", we proved mathematically that Contextual mode (RUT: 0.824) delivers higher AI accuracy than Full Profile Mode, while exposing only 46% of the relative privacy risk.

Mode | RUT Score | Utility | Re-id Risk | Interpretation
Anonymous | 0.908 | 88 | 0.05 | Excellent — high utility, almost no re-id risk.
Contextual | 0.824 | 94 | 0.35 | Best balance — recommended deployment threshold.
Full Profile | 0.652 | 92 | 0.75 | Utility gain does not justify the massive privacy cost.

The RUT framework transforms various risk and utility metrics into a unified, probabilistic scale, and it validates that feeding the LLM maximum personal data ("Full Profile") yields diminishing returns. Contextual mode sits right at the optimal deployment threshold, giving the AI just enough context, like the job role and platform, to generate highly specific warnings without sacrificing anonymity.

RAG Retrieval Accuracy

To evaluate RAG Retrieval Accuracy, the system was tested on 15 real-world incident cases against a mixed corpus. The results demonstrate highly effective document sourcing: a Hit Rate@1 score of 0.80 means the correct article ranked first in 12 of the 15 cases. The system also achieved perfect retrieval with a Hit Rate@3 score of 1.00, ensuring the relevant article was always surfaced within the top three results.

Metric | Score | What it means
Hit Rate@1 | 0.80 | Correct article ranked first in 12/15 cases.
Hit Rate@3 | 1.00 | Correct article always in top 3 — perfect retrieval.
Mean Reciprocal Rank | 0.90 | Average rank position is very high.

This strong performance is reinforced by a Mean Reciprocal Rank of 0.90, which confirms that the average rank position of the correct information remains consistently high across all queries. 
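
Hit Rate@k and Mean Reciprocal Rank are standard retrieval metrics. Here is a self-contained sketch of how they are computed, using toy ranked lists rather than our actual evaluation data:

```python
def hit_rate_at_k(ranked_lists, relevant, k):
    """Fraction of queries whose relevant document appears in the top k."""
    hits = sum(1 for ranks, rel in zip(ranked_lists, relevant) if rel in ranks[:k])
    return hits / len(relevant)

def mean_reciprocal_rank(ranked_lists, relevant):
    """Average of 1/rank of the relevant document (0 if never retrieved)."""
    total = 0.0
    for ranks, rel in zip(ranked_lists, relevant):
        if rel in ranks:
            total += 1.0 / (ranks.index(rel) + 1)
    return total / len(relevant)

# Toy data: three queries; the relevant article ranks 1st, 1st, and 2nd.
ranked = [["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"]]
relevant = ["a", "d", "h"]
print(round(hit_rate_at_k(ranked, relevant, 1), 3))       # 0.667
print(hit_rate_at_k(ranked, relevant, 3))                 # 1.0
print(round(mean_reciprocal_rank(ranked, relevant), 3))   # 0.833
```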

Severity Classification

The system prioritizes user trust and accuracy in Severity Classification, which measures its binary classification performance for detecting high and critical violations. It achieved a perfect Precision score of 1.000, guaranteeing zero false alarms so the system never wrongly warns a safe comment. 

Metric | Score | Interpretation
Precision | 1.000 | Zero false alarms — the system never wrongly warns a safe comment.
Recall | 0.533 | 7 cases under-classified — the system is intentionally conservative.
F1 Score | 0.696 | Overall classification quality.

The overall classification quality is captured by an F1 Score of 0.696. The Recall score of 0.533 reflects 7 cases that were under-classified; this is a deliberate design choice to keep the system conservative rather than over-restrictive.
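
These numbers are internally consistent: 7 under-classified out of 15 cases leaves 8 correct detections, and plugging the resulting recall into the standard F1 formula reproduces the reported scores:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

precision = 1.0      # zero false alarms
recall = 8 / 15      # 15 high/critical cases, 7 under-classified
print(round(recall, 3))                       # 0.533
print(round(f1_score(precision, recall), 3))  # 0.696
```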

Warning Quality

Traditional metrics like exact-match or BLEU scores fail to capture the nuance of rewritten text. For this reason, we used an "LLM-as-Judge" framework to score the qualitative aspects of the AI's output on a 5-point scale. This allowed us to measure subjective dimensions like Relevance, Policy Accuracy, Rewrite Safety, and Prevention Impact at scale.

Dimension | Mean Score | What it measures
Relevance | 4.67 | Does the warning correctly identify the actual violation?
Policy Accuracy | 4.67 | Does it cite the correct policy or law for this specific case?
Rewrite Safety | 4.67 | Does the rewrite preserve intent while removing the risk?
Prevention Impact | 4.67 | Would this warning likely have prevented the real firing?
Overall | 4.67 | Holistic quality score

The overall mean score of 4.67/5 confirms the system acts as a helpful, accurate coach: it rewrites drafts to preserve the user's original intent while effectively neutralizing the career risk.

Looking Forward

The internet doesn't have to be a trap door. With PostGuard, we proved that we can give users specific and potentially career-saving warnings without turning AI tools into surveillance machines.

We are incredibly grateful to the CSGS organizers for putting together such a challenging and rewarding event. Earning first place in the PhD category was the peak of a long, exhausting, and incredibly fun week of research. 

Thanks for reading, and watch what you comment!

~Dominik Soós (@DomSoos)

Open Refine: Blanking Down Only Within Records / Library | Ruth Kitchin Tillman

This is the first in what will be a series of posts deriving from my most recent use of OpenRefine. This post is primarily for intermediate or advanced OpenRefine users, so I won’t be going over fundamentals. Beginners who are already familiar with the power of “Blank down” and “Transform” should also be ok.

One of the great things about OpenRefine is that you can take a spreadsheet with a repeating key field, apply the “Blank down” function on that column, and unlock a “record” experience. I find this really helpful when I need to combine MARC data with item data, a 1:many relationship. I can facet the holding item library to a particular campus, for example, and see entire records, not just the item row.

While only the key needs to be blanked down to create the record view, I can improve my experience working with the data by blanking down other repeating fields so I’m only seeing one instance of any given field.

A screenshot of OpenRefine in which the record IDs, titles, and URLs for each record have been cleaned up into unique rows for each record while entries for every item can be seen on the right.

The problem is that while keys don’t repeat, other data may. If I’ve got a column for notes and I’m working on a set of related records, multiple adjacent records may have the same note text. Blanking down means that the note text is only left in the first record of the sequence, until there’s a record where that field is genuinely blank or which has a different text.

I’ve previously handled this problem by counting how many rows were impacted by my initial blank down and noting how many were impacted by blank downs on each subsequent column. If the number is higher, I undo my action and simply leave the field duplicated in each row of the record. But I don’t like it.

The Solution

While working on a massive data review project this spring, I became frustrated with repeating data which I couldn’t blank down. I decided to search around online to see if others have found ways to handle it. I found the solution in a 10-year-old thread on the OpenRefine Google Group:

Once you’ve turned your rows into records, apply the following cell transform to each column you want to blank down:

value + " - " + row.record.index

Now, perform the “Blank down” operation.

Get your original content back with a second cell transform:

value.replace(/ - \d+$/,'')

The (Brief) Explanation

It’s important to note here that you must have already performed the initial “Blank down” operation on your column with keys to turn your rows into records. Otherwise, each row will be treated as its own record. Once you have records, though, they’ll be treated as records by this transformation whether you’re currently viewing the data as rows or records.

The first transformation uses the simple GREL string-join syntax to join the original value, a " - " separator, and that row's record index.

Something like:

Collection Name
Pennsylvania German broadsides and Fraktur
Pennsylvania German broadsides and Fraktur
Pennsylvania German broadsides and Fraktur
Pennsylvania German broadsides and Fraktur
Pennsylvania German broadsides and Fraktur
Pennsylvania German broadsides and Fraktur
Pennsylvania German broadsides and Fraktur
Pennsylvania German broadsides and Fraktur
Pennsylvania German broadsides and Fraktur

becomes:

Collection Name
Pennsylvania German broadsides and Fraktur - 0
Pennsylvania German broadsides and Fraktur - 0
Pennsylvania German broadsides and Fraktur - 0
Pennsylvania German broadsides and Fraktur - 0
Pennsylvania German broadsides and Fraktur - 1
Pennsylvania German broadsides and Fraktur - 1
Pennsylvania German broadsides and Fraktur - 2
Pennsylvania German broadsides and Fraktur - 2
Pennsylvania German broadsides and Fraktur - 2

and when I apply the blank down function, I get:

Collection Name
Pennsylvania German broadsides and Fraktur - 0



Pennsylvania German broadsides and Fraktur - 1

Pennsylvania German broadsides and Fraktur - 2


(This is just one column view, there’s an assumed leftmost column with the record IDs.)

The second transformation is a simple regex replace, looking for the delimiter (" - ") followed by one or more digits at the end of the cell value. Because the pattern is right-anchored, it should only match the suffix we added, even if the cell value happens to contain its own " - 92951" somewhere in the middle (unless you've turned on the option to repeat the transform, in which case a genuine trailing suffix could be stripped as well).
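
Outside OpenRefine, the same three-step trick is easy to simulate. A Python sketch mirroring the example above:

```python
import re

# (record key, cell value) pairs; the same note text repeats across records.
rows = [
    ("rec0", "Pennsylvania German broadsides and Fraktur"),
    ("rec0", "Pennsylvania German broadsides and Fraktur"),
    ("rec1", "Pennsylvania German broadsides and Fraktur"),
]

# Step 1: append " - <record index>" so identical values in different
# records become distinct strings.
index = {}
tagged = [f"{v} - {index.setdefault(k, len(index))}" for k, v in rows]

# Step 2: "blank down" -- clear a cell when it equals the cell above it.
blanked = [v if i == 0 or v != tagged[i - 1] else "" for i, v in enumerate(tagged)]

# Step 3: strip the suffix with a right-anchored regex, restoring the text.
restored = [re.sub(r" - \d+$", "", v) for v in blanked]
print(restored)
# ['Pennsylvania German broadsides and Fraktur', '', 'Pennsylvania German broadsides and Fraktur']
```

The second row blanks (same value, same record) while the third survives (same value, different record), which is exactly the behavior the GREL pair produces.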

This simple pair of transformations and the ability to blank down without losing data has really improved my experience of working with the large data exports.

It’s a bit cumbersome to repeat them each time, so stay tuned for my next blog post: “I can’t believe I’ve been using OpenRefine for a dozen years and only just learned how easy it is to repeat a set of operations.”

Zotero 9 / Zotero

We’re excited to announce Zotero 9, which introduces a major new way to engage with your documents, along with a host of other improvements to the research workflow.

Coming less than three months after Zotero 8, Zotero 9 is the first major update since we announced a faster release cycle for Zotero, and it represents our commitment to getting stable features into the hands of users more quickly.

Read Aloud

Read Aloud reads your documents to you in high-quality, natural-sounding voices. It works on PDFs, EPUBs, and webpage snapshots.

To get started, just click the headphones button in Zotero’s reader toolbar.

As you’re listening, you can skip forward or backward by paragraph or by sentence (Option/Alt-click or Option/Alt-left/right), and you can start reading from a particular point by clicking in the left margin or by right-clicking and choosing “Read Aloud from Here”.

An “Annotate Sentence” button — or H or U on your keyboard — will automatically highlight or underline the last sentence you heard, and you can use shortcut keys to quickly move, expand, or delete the new annotation.

Your last reading position is saved and synced between devices, so you can pick up where you left off on any device.

Read Aloud requires an internet connection and a Zotero account for high-quality voices, which we’re calling Zotero Voices. If you’d like to use Read Aloud offline, you can still use the text-to-speech voices available on your system, but the quality will be significantly degraded.

We offer two tiers of Zotero Voices: Standard and Premium. Standard voices are generated on Zotero servers, with unlimited minutes for Zotero Storage subscribers and 2 hours/month for free accounts. Premium voices are the highest-quality voices available, processed by external text-to-speech providers — they make fewer mistakes, sound more natural, support many more languages, and can handle multilingual text. Individual Zotero Storage subscribers will receive up to 2 hours of Premium usage (varying by voice) each month, and free and institutional accounts will also receive a small quota in order to try the voices out. Initially, all subscribers can request additional Premium minutes for free. In the near future, we’ll provide more details on monthly allocations and options for adding additional minutes going forward.

Read Aloud is currently available only in the desktop app, but it’ll be coming to the iOS and Android apps soon.

Recently Read

A new Recently Read collection at the top of the collections list in each library shows items with attachments you’ve recently read, most recent first. Opening an attachment or changing pages will bump the item to the top of the list.

The collection includes a Last Read column, which you can also add to other views, and “Attachment Last Read” is available as an Advanced Search condition.

The last-read time syncs between devices, so you can quickly find a file on another device that you’ve read elsewhere.

Insert Annotations Directly into Word Processor Documents

The word processor plugins now feature a new Add Annotation button that lets you insert one or more annotations directly into your document, with active Zotero citations that automatically generate bibliography entries.

Previously, you could add annotations to Zotero notes and insert those notes into your document with active citations; now you can skip the intermediate note entirely if you prefer to work directly in your word processor.

Add Annotation opens a new mode in the citation dialog that expands attachments to show individual annotations. You can browse or search for annotations, choose one or more, and then add them to your document, along with active citations and any comments you added. Image and ink annotations are inserted as images.

“Added By” and “Modified By” for Group Libraries

You can now add “Added By” and “Modified By” columns to the items list in group libraries, letting you see and sort by the people who created and last updated items.

These fields also show in the metadata list along with Date Added/Modified.

Per-Group File Renaming Settings

Group admins can now configure file-renaming settings for each group library, ensuring consistent filenames for all group members.

Performance Improvements

We’ve made some major improvements to Zotero’s performance, including reducing startup memory usage by 20% and drastically reducing disk access and network requests during file syncing in some situations.

On macOS, Zotero now uses a feature of the modern Apple filesystem to avoid additional disk-space usage when copying files. This includes the automatic daily backups Zotero makes of its database, potentially saving hundreds of megabytes or gigabytes of local disk space.

Web-Based Login

You now log in to your Zotero account via the browser instead of entering credentials in the app. This allows you to use a password manager to auto-fill credentials and will enable two-factor authentication (currently in beta), greatly increasing the security of your Zotero account.

Other Improvements

See the changelog for the full list of changes.

Get Zotero 9

If you’re already running Zotero, you can upgrade from within Zotero by going to Help → “Check for Updates…”.

Don’t yet have Zotero? Download Zotero 9 now.

Knowledge as Critical Digital Infrastructure: A Call to Action for a Resilient Future / Open Knowledge Foundation

You can also read and share this story in Spanish and Portuguese. Knowledge is the foundation of every modern society, underpinning democracy, driving innovation, and strengthening our collective culture. In the digital age, this bedrock is critical digital infrastructure (CDI), the essential software, standards, data systems, and information that provide public functions upon which society depends. By recognizing...

The post Knowledge as Critical Digital Infrastructure: A Call to Action for a Resilient Future first appeared on Open Knowledge Blog.

Happy 10th Birthday TinyCat! / LibraryThing (Thingology)

Ten years ago we created TinyCat, a catalog solution for small libraries. 

The idea was simple: Thousands of small libraries were already using LibraryThing, but it was too much—too many doors to the larger world of discovery and social interaction. So we cut all that out, and we made everything about finding books. We added the circulation and administration features small libraries need. We made the simple and intuitive user interface we wished that big, public libraries had.

Growth and Improvements

From about 500 libraries in our first year, we have grown to more than 3,500 today—from schools to religious communities, from museums and LGBTQ centers, to a growing number of small public libraries. Another 32,516 LibraryThing members use TinyCat as a second way to search their personal libraries.

The last decade has brought TinyCat to countries across the globe, from the USA to Ireland, Singapore, Peru, Egypt, and South Africa to name a few. We recently added translation options, so more people can now use TinyCat in more languages.

There are many small libraries who still don’t know about TinyCat, and we can use your help spreading the word. If you know anyone working or volunteering with a small library, send them our way. Folks can sign up or learn more about TinyCat at https://www.librarycat.org

TinyCat Survey

Share your thoughts about TinyCat in our new survey!

This survey is short and every question is optional. You can save your progress and come back any time. While the survey is not anonymous, we encourage you to be honest and candid — all your feedback helps us improve!

This is a great chance to share your experiences as an admin, feedback from your patrons, and ideas you have for making TinyCat even better. All current and previous TinyCat members are welcome to participate.

New Feature: Restrict Catalog Access

TinyCat libraries now have the option of requiring a login to view their catalog. If enabled, all visitors will be prompted to enter their credentials before seeing the homepage or searching the catalog.

To enable this for your TinyCat library, go to your Patron Accounts settings and switch on Restrict Catalog Access. 


Enter the Giveaway!

Enter our TinyCat 10th Birthday Giveaway for a chance to win a special TinyCat bookmark and TinyCat stickers!

How to enter: 

  1. Share one of your favorite or most popular items from your TinyCat library on social media. Alternatively, you’re welcome to post in the TinyCat group on LibraryThing. 
  2. Include a screenshot or link to the item’s catalog page so we can see it like your patrons do. 
  3. Tag TinyCat in the post so we can find your post and enter you in the giveaway.

(Not just books! Game libraries can share a game, movie libraries a film, etc.)

Where to tag us:

Deadline:

We will randomly select 10 winners on May 8, 2026. Each will receive a special TinyCat bookmark and stickers. We will contact the winners for mailing details.

Store Sale

All TinyCat merchandise and barcode scanners, including stickers, pins, and coasters, are on sale now through May 4. Barcode scanners are only $5 and TinyCat pins are only $2! See all the discounts at the LibraryThing Store.

Profile Badges

TinyCat libraries celebrating 1, 2, 5, and 10 years with us will receive new badges in LibraryThing. Go ahead, brag!

Thank you for reading and for celebrating with us! 

2026-03-17: The Disintegration Loops: Generational Loss in Web Archives / Web Science and Digital Libraries (WS-DL) Group at Old Dominion University



Michael L. Nelson




As part of the Internet Archive's Information Stewardship Forum (March 18–20, 2026), I decided to use my five-minute lightning talk to raise the issue of generational loss in web archives.  Or more directly, making copies of copies (...of copies…) – something that web archives currently do not do well.  My title is based on William Basinski's four volume release "The Disintegration Loops", in which he played the audio tapes of "found sounds", recorded decades earlier, in loops, with the whole process lasting over an hour.  The effect is hauntingly beautiful, with each loop slightly degrading the magnetic tape, resulting in a generational loss.  The degradation of each loop is right on the edge of the just-noticeable difference, until the entire track is reduced to just a shadow of its former self.


I first discussed this topic in my 2019 CNI closing keynote (slide 88), where I introduced the inability of web archives to archive other web archives as part of the larger issue of web archive interoperability. Let's begin by walking through the example of archiving a tweet (which we already know to be challenging!).  The original tweet is still on the live web, even though the UI has undergone many revisions since it was originally tweeted in 2018.


https://twitter.com/phonedude_mln/status/990054945457147904 

(screen shot from 2026-03-17)



I archived that tweet to the Internet Archive's Wayback Machine in 2018 (screen shot from 2019):


https://web.archive.org/web/20180501125952/https://twitter.com/phonedude_mln/status/990054945457147904 


I then archived the Wayback Machine's copy of the tweet to archive.today in 2019 (screen shot from 2019):

https://archive.ph/PaKx6 


Note that archive.today is aware that the page comes from the Wayback Machine but the original host is twitter.com, and it maintains both the original Memento-Datetime (20180501125952) as well as its own Memento-Datetime (20190407023141).  I then archived archive.today's memento to perma.cc in 2019 (screen shot from 2019):



https://perma.cc/3HMS-TB59 


Finally, I archived the perma.cc memento back to the Wayback Machine in 2019 (screen shot from 2019):


https://web.archive.org/web/20190407024654/https://perma.cc/3HMS-TB59 


Although the loss occurs in discrete chunks, it is reminiscent of Basinski's Disintegration Loops, with information lost at each step and the final version a mere shadow of the original.  In 2019, this was not universally recognized as a problem, since archiving the playback interface of other web archives was not considered a goal in itself.  The "right" solution, of course, is to share the WARC files (or WACZ, or HAR, or…) out-of-band and let the other web archives replay from the same source files.  But this is rarely possible: for a variety of reasons web archives typically do not share the original WARC files, and in the case of archive.today, might not even store the original source files (and instead likely store only the radically transformed pages).


More importantly, it is sometimes useful to archive a particular web archive's replay of a page, because that replay itself changes through time. For example, memento #3 (the perma.cc memento of archive.today's memento) is now different; this is a screen shot from 2026:


2026 replay of https://perma.cc/3HMS-TB59 


Surely the source files themselves have not changed; the difference is due to improvements in pywb, which is under constant development. perma.cc's 2019 replay of the page differs from its 2026 replay, which implies that it could be different still in the future. But we cannot currently archive, without generational loss, perma.cc's replay of that page to, say, the Wayback Machine. The fact that screen shots – which are rife with their own potential for abuse (cf. HT 2025, arXiv 2022) – are the only mechanism to document these replay differences underscores the web archive interoperability problem.


I chose the topic of generational loss for my slot at the Information Stewardship Forum because recent events have introduced a new use case for archiving the replay of web archives. Wikipedia recently announced it was blacklisting archive.today because its editors discovered that the webmaster at archive.today was using its captcha to direct a DDoS attack against a blog owned by someone the webmaster had a dispute with (the blogger had posted a lengthy investigation of the webmaster's identity), and, more disturbingly for our discussion, had edited the content of an archived page to include the name of the blogger where it would not otherwise appear. The Wikipedia discussion page is hard to follow, in part because the editors are discussing how to archive the replay of an archived page. As one example, they show how the archive.today replay has now been changed back to "Comment as: Nora " (middle of the image):



But the archive.today replay alteration in question is archived at megalodon.jp, showing that the name "Nora " was replaced with the name of the blogger who had earned the webmaster's ire, "Jani Patokallio". And yes, megalodon.jp's replay of archive.today's memento is that bad (at least in my browser, it is shrunk down impossibly small), so I used the dev tools to find the string in question.


https://megalodon.jp/2026-0219-1509-14/https://archive.is:443/2021.05.30-173350/http://www.maskofzion.com/2012/04/jewish-at-root-iraqs-destruction-hell.html


Another Wikipedian archived (using yet another archive, ghostarchive.org) a google.com SERP to show that archive.today has reverted from "Jani Patokallio" back to "Nora ". 





What does changing "Nora" to "Jani" (and then changing it back again) accomplish? I'm not sure; this appears to be just a petty response to an ongoing dispute.  But the implication is profound: this is the first known example of a major web archive purposefully and maliciously altering its contents, something that we knew was possible but had not yet experienced.  


We have long known that replay can change through time (cf. PLOS One 2023) due to the replay engine (the Wayback Machine, Open Wayback, pywb, etc.) evolving, but these changes were engineering results and the replay mostly improved over time. But now we have seen web archives maliciously alter (and then revert) the replay, and we need a more standard and interoperable way to archive archival replay.  Not just to prove that a web archive did alter its replay, but also to prove that an archive did not alter its replay.  Out-of-band sharing of WARC files is the gold standard, but for a variety of reasons this is unlikely to happen.  We must be able to use web archives to verify and validate web archives.  We explored a heavyweight design for this a few years ago (JCDL 2019), but it should be revisited in light of developments like WACZ.  
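One building block for such verification already exists in the WARC format: each record can carry a labelled payload digest (the WARC-Payload-Digest header), conventionally sha1 encoded in base32 as written by crawlers such as Heritrix. The following is an illustrative sketch of computing that digest (not a WARC parser), so two archives holding the same capture could compare fixity out-of-band:

```python
import base64
import hashlib

def warc_payload_digest(payload: bytes) -> str:
    """Labelled digest in the style commonly recorded in WARC headers:
    WARC-Payload-Digest: sha1:<base32>. Illustrative sketch only."""
    digest = hashlib.sha1(payload).digest()
    return "sha1:" + base64.b32encode(digest).decode("ascii")

# The well-known digest of an empty payload:
print(warc_payload_digest(b""))  # sha1:3I42H3S6NNFQ2MSVX7XZKYAYSCX5QBYJ
```

Matching digests show two archives hold the same bytes; they say nothing about how each archive's replay engine transforms those bytes, which is why archiving the replay itself still matters.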


–Michael


ht to Herbert Van de Sompel for introducing me to "The Disintegration Loops" many years ago.


2026-03-25: The original Google/Blogger name ("Nora") has been anonymized.

Fear and Burnout At the Interface of Librarianship and Māori Knowledge in Aotearoa New Zealand / In the Library, With the Lead Pipe

By Kathryn Oxborrow Vambe

In Brief: This article presents some findings from a study of non-Māori librarian engagement with Māori knowledge. I asked non-Māori librarians (who predominantly identified as New Zealand European, a local synonym for White) about their journeys of learning and engagement, and Māori librarians about their experiences with their non-Māori colleagues’ engagement (or lack thereof). A key theme was fear on the part of non-Māori librarians. This acted as a barrier to engagement for non-Māori participants and created extra work for Māori librarians who were expected to pick up tasks related to Māori people and culture that their non-Māori colleagues declined to undertake. I suggest that when it comes to Māori knowledge, non-Māori librarians need to feel the fear and persevere, as well as being proactive to act as good allies to their Māori colleagues.

Introduction

Kia ora (this is a greeting in te reo Māori, the Māori language – definition from Te Huia, 2016) from Aotearoa New Zealand. I am a White British cisgender heterosexual woman of English and Scottish heritage. I was born in England but have lived in Aotearoa since 2010. I have spent the majority of the last twenty years working in and around libraries, and in 2020 I completed my PhD through Victoria University of Wellington | Te Herenga Waka. When I moved to Aotearoa I was enthusiastic to learn what I could about Māori culture, customs and language, and was interested to observe the lack of enthusiasm in some of my non-Māori colleagues. When I had the opportunity to undertake PhD research, I decided to explore how non-Māori librarians learn about and engage with Māori knowledge. In this article I will focus on one key finding: the prevalence of fear in non-Māori librarians’ journeys of learning and engagement with Māori knowledge, and the related phenomenon of non-Māori overreliance on Māori colleagues in relation to basic library tasks involving engagement with Māori people or knowledge. I conclude by considering ways that non-Māori librarians can push through their fear and become good allies to their Māori colleagues.

A note on terminology

In this article I discuss Māori people and culture and as such will include words and phrases in te reo Māori. These will be accompanied by English language definitions in brackets after their first use, with the exception of the words Māori and te reo Māori which will be used henceforward without translation. Definitions are taken from Te Aka Māori Dictionary online unless otherwise stated. Government departments, organisations and projects in Aotearoa are often known by both their English and Māori names, and the two names do not always represent a direct translation of each other. When referring to these I use both Māori and English names, separated with a |. Whether the English or Māori term is given first is decided by common usage.

In the original research thesis on which this article is based, I used the term mātauranga Māori, a term with a breadth and depth of meaning. The definition I used was from Mead (2012) who described it as “…Māori knowledge complete with its values and attitudes” (p. 9). For the purposes of clarity, I will be using the term Māori knowledge throughout. A more detailed discussion of the term mātauranga Māori in relation to this research can be found in Oxborrow (2020, pp. 18-20). 

In this article I refer to the country of New Zealand by its Māori name, Aotearoa, or the combined name Aotearoa New Zealand. I use these terms interchangeably throughout this article. I only use the term New Zealand on its own when referring to the mainstream or majority culture, or where it is used by participants or cited authors.   

Local Context and Literature Review

Aotearoa New Zealand

Aotearoa New Zealand is a former British colony in the South West Pacific. The Indigenous Māori people made up 17.8% of the population in the 2023 census (Stats NZ | Tatauranga Aotearoa, 2024a). The largest ethnic group is New Zealand European (Stats NZ, 2024a). Ancestors of Māori settled in Aotearoa at least 500 years before Europeans began arriving in the early 1800s (Royal, 2012). In 1840 Te Tiriti o Waitangi | The Treaty of Waitangi was signed as an agreement between over 500 Māori chiefs and representatives of the British Crown, setting the terms for the relationship between Māori and the growing settler population, and signalling the beginning of modern New Zealand (Orange, 2023). Breaches of Te Tiriti o Waitangi | The Treaty of Waitangi by the Crown began happening shortly after its signing, leading to Māori being dispossessed of the majority of their land (Orange, 2023). While there has been some redress for this, including a permanent Tribunal reporting on historical and present day breaches of Te Tiriti o Waitangi | The Treaty of Waitangi (Waitangi Tribunal | Te Rōpū Whakamana i te Tiriti o Waitangi, n.d.), the impacts of this for Māori are ongoing. These include disproportionate representation in health and social statistics, including incarceration rates (Department of Corrections | Ara Poutama Aotearoa, 2025) and early death (Stats NZ | Tatauranga Aotearoa, 2024b). While there has been movement in recent years in relation to the visibility and acceptability of Māori language and culture, and increased Māori participation in public life, there are non-Māori in Aotearoa who feel very uncomfortable about these changes. An example of this is the New Zealand Centre for Political Research blog, whose contributors often complain of “tribal takeover” (e.g. Newman, 2025, paragraphs 13, 50, 53).
This political pushback has also been reflected in support for the 2023 Coalition Government, whose legislative agenda has been described by commentators as strongly anti-Māori (see, for example, Clark & Hill, 2024; Paewai, 2025).

Members of the local White settler population in Aotearoa use different terms to describe their ethnicity or cultural identity. New Zealand European/European New Zealander is a commonly used term (Allan, 2001). Another term that is used for this ethnicity is Pākehā, a Māori term, the original meaning of which is not universally agreed on, as per the following definition from Te Aka Māori Dictionary:

New Zealander of European descent – probably originally applied to English-speaking Europeans living in Aotearoa/New Zealand. According to Mohi Tūrei, an acknowledged expert in Ngāti Porou1 tribal lore, the term is a shortened form of pakepakehā, which was a Māori rendition of a word or words remembered from a chant used in a very early visit by foreign sailors for raising their anchor … Others claim that pakepakehā was another name for tūrehu2 or patupairehe3. Despite the claims of some non-Māori speakers, the term does not normally have negative connotations. (Moorfield, n.d.)

In my time in Aotearoa I have also heard various definitions from Māori colleagues, including some who have told me the word can be used to describe all non-Māori. The nuances within the word Pākehā range from those who may believe the term to be offensive (as described in Black, 2010), to those who believe it denotes historical and spiritual connection to the physical environment of Aotearoa (Dyson, 2001; King, 2004) or an individual’s continued efforts to engage with te ao Māori (the Māori world, definition from Te Huia, 2016) (e.g. Jones, 2020). Non-Māori of any ethnicity may choose to identify themselves as Tangata Tiriti, who Bell (2024) describes as “…non-Māori people who are guided by a sense of their relationship to te Tiriti o Waitangi / the Treaty of Waitangi and to te ao Māori in their work” (p.1). Due to these multiple understandings of the term Pākehā and the fact that not all interviewees identified as New Zealand European, I use the term non-Māori in this article to refer to interviewees and any other individuals in the context of Aotearoa New Zealand who do not identify as Māori. 

Librarianship in Aotearoa New Zealand

At the time of the 2023 census of Aotearoa New Zealand, 5,730 individuals reported working in libraries across the country, including librarians, library assistants and library technicians (Stats NZ | Tatauranga Aotearoa, 2023). However, in 2025 only 853 were members of the Library and Information Association of New Zealand Aotearoa | Te Rau Herenga o Aotearoa (hereafter referred to as LIANZA), the largest professional association for librarians in Aotearoa (LIANZA, 2025a). 

Due to the small population of the profession and low levels of wider professional involvement, librarianship in Aotearoa New Zealand is more of a generalist occupation than in some other larger countries such as the United States of America. There are only three tertiary institutions offering qualifications for librarianship in Aotearoa: Victoria University of Wellington | Te Herenga Waka, Open Polytechnic | Kuratini Tūwhera, and Te Wānanga o Raukawa (one of three Māori-led tertiary education institutions in Aotearoa). Of these institutions, only Victoria University of Wellington | Te Herenga Waka offers postgraduate programmes including Master’s and PhD. It is not always a requirement to hold a qualification in librarianship to be appointed to a professional library role in Aotearoa, and there is no expectation for subject support librarians in tertiary education libraries in Aotearoa to hold or work towards a PhD in their area of subject specialism. Librarians in Aotearoa New Zealand do not always specialise in a particular type of library work and it is common for library professionals to work in various different types of libraries across their careers (see, for example, Stone, 2013). I am an extreme example of this, having worked in four different types of libraries as well as in librarian professional education over the course of the last fifteen years.

LIANZA   

In their history of LIANZA, Millen (2010) writes: “Looking back, the most notable – even radical – developments [in LIANZA] of the past thirty years have been the progress made in the area of biculturalism” (p. 172). According to the literature, the library and information profession in Aotearoa is one of the professions which has demonstrated a commitment to engaging with mātauranga Māori from relatively early on in its history. Lilley (2013) states that the first mention of library services for Māori is in 1962 when the Māori Library Services Committee was formed to recommend strategies to libraries to help them engage with Māori. The report of the committee was produced in 1963 and published in the association’s publication New Zealand Libraries (Maori Library Service Committee, 1963). Millen (2010) states that focus on Māori issues began to strengthen within LIANZA in the 1980s. Both Lilley (2013) and Millen (2010) point out that the updating of the Treaty of Waitangi Act in 1985 to enable the Waitangi Tribunal | Te Rōpū Whakamana i te Tiriti o Waitangi to accept retrospective claims from as far back as 1840 led to increased use of libraries by Māori who used them to find evidence for their claims. Another key development in the profession in the 1980s which Millen highlights is the Saunders Report on education for librarianship in 1987. Te Rōpū Takawaenga, a group of students at Victoria University of Wellington, highlighted the lack of discussion of Māori culture and knowledge in the report and called for a profession-wide discussion. Te Rōpū Whakahau, the professional association for Māori in libraries and information management, was established in 1992, initially as a Special Interest Group of LIANZA, and later becoming an independent organisation (Lilley, 2013). LIANZA and Te Rōpū Whakahau have had a partnership agreement since 1995 (Lilley, 2013). 
This used to involve the inclusion of two Te Rōpū Whakahau representatives on the LIANZA Council, but in recent years has become a less-structured agreement with commitment to working together and honouring Te Tiriti o Waitangi | The Treaty of Waitangi (LIANZA & Te Rōpū Whakahau, 2024). 

Professional Registration

Professional Registration was introduced by LIANZA in 2007 (Millen, 2010). According to the LIANZA Taskforce on Professional Registration (2005), the scheme was established to act as a benchmark for professional learning and development both within the library and information profession in Aotearoa and also to be compatible with other Anglophone countries with similar schemes such as the UK and Australia. Registrants must demonstrate ongoing professional learning and development across the eleven elements of the Body of Knowledge (LIANZA, n.d.-b). Body of Knowledge Element 11 (BoK11) is “Awareness of indigenous knowledge paradigms, which in the New Zealand context refers to Māori” (LIANZA Professional Registration Board, 2013, p. 9). The scheme includes mandatory revalidation (LIANZA, n.d.-a). Every three years, registrants must submit a reflective journal detailing their professional learning and development which must include two entries relating to BoK11 (LIANZA, n.d.-a). If candidates do not revalidate their Registration, it lapses (LIANZA, 2020).  

Professional Registration has not gained the status of being a default requirement for employment in the library and information sector in Aotearoa as its instigators hoped. In a paper to LIANZA members, Steven Lulich, Chair of the Taskforce on Professional Registration, wrote “It is hoped that over the next two years, most of those working in the profession will join the scheme” (Lulich, 2007, p. 4). This has not come to pass, and it is now extremely rare to see a professional librarian role advertised in Aotearoa that lists Professional Registration as a requirement. The number of professionally registered librarians has been trending downwards for several years, with LIANZA reporting just 303 professionally registered librarians as of January 2026 (LIANZA, 2026). LIANZA is looking to increase interest in Professional Registration and the Bodies of Knowledge by incorporating them in the new Te Tōtara Workforce Capability Framework (LIANZA, n.d.-b) (Tōtara is a type of native tree).  

Research on Libraries and Indigenous Knowledge in Aotearoa

While there is tangible commitment to biculturalism and mātauranga Māori from the library and information profession in Aotearoa as represented by LIANZA and other professional groups, there are still a number of issues of concern related to library and information professionals’ engagement with these topics highlighted in the small body of literature addressing libraries and Indigenous knowledge in Aotearoa. 

Irwin and Katene (1989), in a study highlighting the dearth of tribal-specific information in libraries and the difficulties experienced by Māori trying to find that information, highlight the role to be played by libraries in partnering with Māori to alleviate some of the social disadvantages that they face. Irwin and Katene argue that knowledge is power and, therefore, access to knowledge is potential power. Social statistics at the time alluded to the fact that Māori were disempowered, and the authors argued that one possible reason for this is denial of access to knowledge. “Librarians are in a position of power where they can provide open access to knowledge, or they can deny this” (pp. 23-4). While this study is old and there is likely to have been some improvement in the intervening years, statistics show that Māori still experience greater levels of social disadvantage than non-Māori, as discussed above.

Tuhou (2011) identifies a number of barriers preventing Māori tertiary students from engaging with the university library. A lot of these are physical, with one group likening the atmosphere of the library to a prison, but staff were also a factor. Tuhou recommends cultural awareness training for staff to help them engage appropriately with Māori students, and for all staff to have the skills and confidence to answer reference questions asked by Māori students. Ritchie (2013) also noted that Māori students may experience barriers preventing them from engaging with the university library.

Bryant’s (2015) investigation of Ngā Ūpoko Tukutuku | Māori Subject Headings found that while there were several positive developments, much work is still needed for librarians to fully integrate the headings in their cataloguing, reference, and information literacy practices. Bryant highlights training as a key issue in increasing the use of the headings by librarians, and the majority of participants expressed the desire for more training than they had already had. 

Focus in these studies has mainly been on Māori experience of libraries and the challenges faced by non-Māori librarians in engaging well with various aspects of Māori knowledge. Prior to my study, no research has investigated the process of non-Māori librarian engagement with Māori knowledge.    

Methodology

This article highlights some findings from a larger research project investigating the journeys of non-Māori librarians in Aotearoa New Zealand seeking to learn about and engage with Māori knowledge. To frame these findings in context I will give some background about the broader study and the methods used. In this study I sought to answer my main research question, “How are non-Māori librarians in Aotearoa New Zealand making sense of [Māori knowledge]?” (Oxborrow, 2020, p. 9) by undertaking interviews with non-Māori librarians and focus groups with Māori librarians. I used Sense-Making Methodology (SMM), devised by the late Professor Brenda Dervin and colleagues (e.g. Dervin, 2003) as a guiding framework for my study. The central metaphor on which SMM is based describes a process of individual sense making where the sense maker finds themselves in a Situation facing an information or knowledge Gap. They need to find a way to Bridge this Gap to reach an Outcome and continue on their journey (see Figure 1: The Sense-Making metaphor). In the interviews, I sought to learn about individual Sense-Making instances (Situation-Gap-Bridge-Outcome sequences) and asked questions that probed the different phases of the process: Situation, Gap, Bridge and Outcome, as well as factors which acted as either Barriers or Helps to engagement. The full schedule of interview questions can be found in Appendix 1.

Brenda Dervin's sense making drawing shows a stick person carrying an oddly shaped umbrella running towards three flags. In between the person and the flags is a giant pit which has a bridge made of different pieces extending over it so the person does not fall into the gap.

Fig. 1: The Sense-Making metaphor. Copyright: Sense-Making Methodology Institute, used with permission, https://sense-making.org/ [Accessed: January 12, 2026].

Interviewees were recruited by advertising the study on an email distribution service and the LIANZA weblog. Due to a high number of responses, participants were selected using a maximum variability approach. This meant that the group was highly varied in a lot of ways, including amount of experience, types of roles and libraries worked in, and geographical location within Aotearoa. Of the 25 interviewees, there were 12 who worked in tertiary libraries, seven in public libraries, and six in school or specialist libraries. Of those working in tertiary libraries, six had previous experience of working in other types of libraries. One area in which the group was most similar was that the vast majority of the sample (23/25) identified as New Zealand European, along with one participant who identified as Asian and one who identified as a Pacific Islander. While New Zealand European most often refers to White people who were born in Aotearoa New Zealand, it is not a strict category and thus four interviewees who had immigrated to Aotearoa as children self-identified as New Zealand European.  

In addition to these interviews I also undertook three focus groups with Māori librarians, recruited by personal invitation of some individuals that I had existing connections with, and also by approaching Māori colleagues for recommendations of other individuals who may have wished to be involved. Focus groups were undertaken in Ōtautahi (Christchurch), Te Whanganui-a-Tara (Wellington) and Tāmaki Makau Rau (Auckland). Of the eleven focus group participants, nine were working in tertiary libraries and two in public libraries. However, at least four of those working in tertiary libraries had previous experience of working in public libraries.

I conducted these groups myself, with cultural advice from one of my supervisors, Associate Professor Spencer Lilley (whose Māori tribal affiliations are Te Ātiawa4, Muaūpoko5 and Ngāpuhi6). Focus groups were chosen as the data collection method since they can be empowering for participants, positioning them as experts (Dyall et al., 1999; Smithson, 2000). This was of particular importance given my identity. As well as observing cultural protocols during the focus group meetings to the best of my ability such as opening and closing mihi (acknowledgements), I also employed a thorough member checking process to maximise opportunities for participants to provide feedback regarding any concerns they may have had about misrepresentation in the research. As well as providing the transcripts to focus group participants for checking, I also distributed a draft copy of the focus group findings chapter to participants prior to submission of the final thesis. In the focus groups, I asked participants about their experiences with their non-Māori colleagues. The first question was as follows: “The profession of librarianship in Aotearoa has expressed a commitment to biculturalism since the 1980s – To what extent is the reality living up to the promise of the profession in terms of engagement with mātauranga Māori by non-Māori librarians?” Other questions asked about factors acting as Helps and Barriers to non-Māori engagement with Māori knowledge, as well as risks and benefits. The full schedule of focus group questions can be found in Appendix 2. 

I analysed the data using thematic analysis (as described by Braun & Clarke, 2005), checking in with my supervisors throughout the process. Most interviewees discussed examples of other non-Māori librarians’ engagement or lack of engagement, in addition to their own journeys, and these were also coded separately. I used the stages of the Sense-Making process described above to inform my analysis of both interviews and focus groups. On completion of the analysis of each of the two data sets, I undertook a comparison between the two at the level of themes and sub-themes.     

Findings 

Interviews revealed several interesting findings about the Sense-Making journeys of non-Māori participants. Interviewees emphasised the large scale of their knowledge Gaps in relation to Māori knowledge, as well as highlighting Gaps in the areas of Māori and Libraries (which included aspects such as Māori history, Māori information sources and the treasured status of knowledge and information in Māori culture) and Language and Cultural Protocol. Bridges identified were Courses, Books and Text Resources and People and Situations. Both Helps and Barriers consisted of significant internal aspects, where elements of interviewees’ existing knowledge and experience or aspects of their personalities were either things that Helped them proceed or acted as potential Barriers. These were in some cases closely related; for example, fear was a potential Barrier in a lot of cases, but having the strength of character to push past that fear was also something that Helped some interviewees. Nineteen of 25 interviewees mentioned that feeling good was one of the Outcomes of their experiences. This included having a feeling of knowing more, expectations being exceeded, and having a general positive feeling about the experience.

Focus group participant discussions included questions designed to elicit aspects of the Sense-Making process (see Appendix 2). A key Situational factor was non-Māori librarians having the choice of whether or not to engage with Māori knowledge in their work. Outcomes were largely seen as positive for both Māori (such as better service for Māori clients and more allies for Māori librarians) and non-Māori, for whom such engagement experiences could be transformational. Focus group participants also saw potential risks, however, such as Māori client alienation. This is when Māori customers experience shame, feel belittled because a non-Māori librarian appears to them to have more knowledge than they do, or feel that non-Māori are over-stepping when they engage with Māori knowledge. The importance of learning te reo Māori was highlighted by participants throughout (although no specific question was asked about this), as was the need for Māori knowledge to be normalised throughout the profession of librarianship, and bringing together existing initiatives to build momentum.   

For further information on these findings, see Oxborrow (2020). In the rest of this section I will focus on the area of fear as a Barrier, as described by interviewees, and its flow on effect of overreliance and helplessness as discussed by focus group participants.  

Fear, Overreliance and Helplessness

From comparing the two sets of data, a key finding emerged. Non-Māori interviewees often spoke about fear in relation to their own experiences of engagement with Māori knowledge or their observations of their non-Māori colleagues’ engagement (or lack of). The concept of fear included fear of the unknown, fear of making a mistake, fear of what others might think, or fear of causing offence. Fear was often described by interviewees as a barrier to engagement, and frequently resulted in work involving Māori clients or knowledge being passed on from a non-Māori librarian to a Māori colleague. One of the interviewees articulated this situation in the following manner:

One of the first kind of panic things that can happen as a non-Māori person and you see a Māori person turn up and they’re like ‘I want some information about a Māori issue’. You’re like ‘Ooh, can I find someone who’s Māori to answer that question? I don’t feel qualified! Ah!’

None of the interviewees spoke about the impact that this fear, and subsequent passing on of work might have for their Māori colleagues. It was, however, something that focus group participants talked a lot about, with the understandable frustration coming through clearly in the following quote:

Participant 1: And, you get people who make up lots and lots in excuses ‘oh, there wasn’t enough preparation, ‘I didn’t have enough pronunciation lessons’, ‘I don’t understand pepeha7, ‘I went to my 101 Māori course and I don’t have the confidence or the competence to be able to engage in mātauranga Māori’ 

And so, for me it’s like ‘So what are you asking me to do? Hold your hand? Do you want me to hold your hand? Do you want me to give you all the resources that you can possibly get?’ There are thousands and thousands of level two, level four resources that are available to librarians – we’re a library, we’re full of them – and yet there’s no self-development, there’s no want to self-develop unless somebody… 

Participant 2: Yeah, there’s no desire, aye? 

This quote also indicates that the participants in this group believed that the concern about being qualified that was articulated in the previous interviewee quote was not the main barrier to engagement. Focus group participants considered it totally appropriate for non-Māori colleagues to seek support on higher-level queries where a greater depth of cultural knowledge was needed. However, much of the time, the knowledge required was at the level of attempting a basic reference desk request or consulting an online dictionary. One focus group participant gave an example in a tertiary library context where if a patron approached a reference desk with a basic question about a History topic, the first response should not be to immediately fetch the subject specialist (as often happened), but to attempt to help. Only if the query proved to be too complex should they request assistance. The role of subject specialist in tertiary libraries in Aotearoa is a lot less specialised than might be the case in other larger countries, so the expectation would usually be that any librarian on reference desk duty would make an attempt to answer questions on any subject unless it was clear from the outset that the topic was very obscure and would be very difficult to find information on. Focus group participants were talking about questions on basic Māori topics not requiring a high level of cultural knowledge, or in some cases even general questions from Māori patrons, which were being turned over to Māori librarians immediately. 

Expectations of cultural support described by Māori librarians also involved broader things such as always being the one who is asked to lead or organise traditional welcoming ceremonies. One focus group participant described such expectations like this: “‘Ah, you’re the Māori, so you can look after anything Māori.'” Similar expectations for Māori to undertake cultural duties beyond their job descriptions have also been noted in other professions such as university teaching (Mercier, Asmar & Page, 2011) and science (Haar & Martin, 2022).

Librarians in Aotearoa, as in many places, seek to be active in terms of supporting diversity and inclusion. Wei and Boamah (2019) describe how Auckland libraries provide specific services to immigrant users. LIANZA (2025b) introduced a Freedom-to-Read toolkit to help librarians deal with book challenges. LIANZA also puts a strong emphasis on its attempts to create a more inclusive atmosphere for Māori, both as library patrons and also as fellow librarians, as can be seen in its statement of values:

Respect is at the core of our interactions, whether with our members, partners, or the communities we serve. We respect diverse perspectives, acknowledging that each voice contributes to the rich tapestry of our sector. Our commitment to respect extends to upholding the principles of Te Tiriti o Waitangi, recognising and valuing the unique knowledge and cultural heritage of Māori. (LIANZA, 2024, p. 2)

While progress has no doubt been made since the profession first began to focus on Māori knowledge, librarianship still has a long way to go towards being a safe and equitable career choice for Māori. Māori made up 17.8% of the population of Aotearoa in 2023 (Stats NZ, 2024a), but just 5.6% of librarians and 2.5% of library assistants in that same year (Infometrics, 2024). Although multiple factors probably contribute to this discrepancy, the findings of my research suggest that non-Māori overreliance and self-perceived helplessness play a part. As noted above, non-Māori self-perceived helplessness was viewed differently by Māori librarians in the focus groups, who believed that their non-Māori colleagues could be more proactive in learning about and engaging with Māori culture.

One of the problems that focus group participants mentioned in relation to the overwork experienced by Māori librarians is that some become burnt out and have even left the profession as a result. Similar findings have been reported among Māori scientists (Haar & Martin, 2022). Writings about the experiences of Black and other minoritised librarians in the United States of America indicate that they, too, are expected to pick up extra diversity-related work on top of their substantive roles, and are therefore unlikely to remain in the profession due to stress and burnout (e.g. Hinton, 2023). 

The situation in Aotearoa is complicated by the fact that some non-Māori go to the other extreme, operating beyond their level of knowledge and understanding and getting things wrong (one example given in a focus group was using Google Translate to create te reo Māori translations of complex library information, the result of which was completely inaccurate). Such examples were given in a context where cultural appropriation of Māori culture and knowledge continues to be common (University of Auckland, 2024) and te reo Māori is viewed as a taonga (treasure, anything prized) (Ngā Pae o te Māramatanga | New Zealand’s Māori Centre of Research Excellence, n.d.), so its misuse by non-Māori is problematic. Finding a balance between not opting out and not overstepping is an ongoing challenge that requires humility and perseverance. Non-Māori authors such as Bell (2024) and Jones (2020) discuss the complexities for non-Māori attempting to engage well with Māori. Focus group participants described non-Māori colleagues who had got things wrong and then, upset at being corrected by a Māori person, refused to engage any further:

Participant 3: Or they’ve been, reprimanded is too strong a word, but they’ve done something and then been told it was the wrong thing to do and it’s 

Participant 4: In the past, and they 

Participant 3: really put them off  

Participant 5: Put them off, yeah 

Participant 4: yeah, they don’t want to do it any more 

Participant 3: completely and they no longer want to have anything to do with anything Māori

Bell (2024) acknowledges that being challenged is hard: “Let’s face it, it’s pretty natural to not want to be put on the spot, to be told you are wrong, or privileged, or have made a mistake, or are being racist. None of these experiences are very comfortable!” (p. 39)

The fact remains that to create meaningful change in the sector, non-Māori librarians in Aotearoa need to learn to engage with situations that feel uncomfortable in order to learn, grow and help make a better profession for Māori to join and remain in.

Conclusion: Feel the Fear and Persevere

As mentioned above, the cultural milieu of mainstream New Zealand means that meaningful engagement with the Māori world by non-Māori is largely still an individual decision. This is despite the existence of initiatives such as LIANZA Professional Registration and BoK11, which, in the view of the majority of interviewees and focus group participants, had not created major change in the library profession in regards to non-Māori engagement with Māori knowledge. The lack of change is perhaps unsurprising given the low levels of professional engagement among librarians in Aotearoa discussed earlier. This being the case, there is often little external impetus for non-Māori librarians to keep going with their journeys of learning and engagement when other pressures or priorities crowd in, which means that they can lose momentum. We (non-Māori librarians) find it easy to forget that this is not an issue that our Māori colleagues can pick up and put down in the same way. As one of the focus group participants said of the attitudes of some of their non-Māori colleagues towards engagement with Māori knowledge: “It’s really motivated individually … it’s an option, optional…” They continued by describing an attitude they had seen in their non-Māori colleagues: “…I’ll choose to be bicultural today, tomorrow I might not be.” The focus group participant finished their point by referring to Māori librarians: “…whereas we’re always in sights of it [living and working between two cultures].” Focus group participants also talked about the importance of non-Māori librarians being good allies, described by one focus group participant as having “…shared responsibility…”. They talked about several ways in which non-Māori librarians could help lighten the load for their Māori colleagues. 
These include running Māori events alongside, or with cultural support from, Māori colleagues, and advocating for Māori issues in the workplace so that Māori colleagues do not always have to be “the angry Māori in the room”, as one focus group participant put it, and so feel more supported and less worn down.  

Ongoing effort is required to keep momentum going, and this can be difficult for non-Māori librarians to sustain independently. Focus group participants mentioned the positive impact that can come from attempting to create a culture of engagement as part of an organisation or library system. Leaders and managers have a key role to play in encouraging their teams to engage with Māori knowledge on an ongoing basis (Oxborrow Vambe, 2025). Since there is no guarantee of such consistent support, a key message is to feel the fear and persevere. This involves having the humility to accept that everyone makes mistakes and being committed to a continuing journey of development despite challenges. The fear may not ever fully dissipate, though it may reduce through repeated exposure to challenging situations. It was the willingness to keep on pushing through fear to continue engaging that made the difference for some interviewees, as discussed above. Future research could include case studies of good practice, and methods employed by non-Māori librarians to move through fear. 

It is also important to emphasise that fear was not the only emotion interviewees discussed in their journeys of learning and engagement with Māori knowledge. As discussed above, interviewees also described positive aspects of their experiences, with 19 of 25 interviewees discussing Feeling Good as an Outcome of their learning or engagement. One interviewee described their journey as “…one of the best learning experiences of my life, really.” So being committed to ongoing engagement with Māori knowledge can be personally rewarding as well as contributing to creating a more welcoming profession for Māori librarians. Creating opportunities for non-Māori librarians to share those positive experiences with each other could be a powerful tool to encourage those who are more reluctant to engage with Māori knowledge to begin or continue their journeys of learning and engagement.   


Acknowledgements

I would like to thank all the many people who supported the PhD research on which this article is based, including all participants and my supervisors, Professor Anne Goulding and Associate Professor Spencer Lilley. My PhD was partially funded through the A.K. Elliot Memorial Scholarship. Much appreciation to the peer reviewers Professor Alison Jones and Jeannette Ho and the editor Brittany Paloma Fiedler. Thanks to my WWA paragraph editing partners, Anne Hiha and Sara Kindon, for your suggestions. This article was written during a secondment to Te Manawahoukura Rangahau Residency. Many thanks to the manager of my substantive role, Jenny Barnett, for making it possible for me to undertake this secondment. 


Appendix 1: Interview Question Schedule

Tell me about your background in the library profession.

What do you find particularly interesting about mātauranga Māori?

Participants will then be asked to give an overview of the main occurrences in their story of learning about mātauranga Māori in order (the story of their process of engaging with mātauranga Māori). These events will be written down on a piece of paper to serve as a prompt for the remainder of the interview. A similar set of questions will be used to ask about the participant’s choice of 2-4 occurrences, as time allows. These questions will be used to investigate one instance at a time. The questions are as follows:

Tell me about [the course/experience/learning source]

What led up to this moment of learning about/engaging with mātauranga Māori?

What didn’t you know about mātauranga Māori at that stage?

Did you have any problems because of what you didn’t know? What were they?

How did you know where to go to find answers to your questions [for the situation you were facing]?

What were you trying to learn or achieve through this?

Did you have any big questions that motivated you to seek more information or knowledge? If so, what were they?

What helped you in the situation? How?

Did you expect what you learned to help? If so, did it help in ways you expected or other ways?

What hindered you in the situation? How?

Did you expect what you learned to present problems? If so, did it present problems in ways you expected or other ways?

What conclusions or ideas did you come to as a result of this experience?

What did the experience help you achieve afterwards?

The final phase of the interview will be talking about your whole journey of making sense of mātauranga Māori and includes some questions about LIANZA Professional Registration:

How does your journey of making sense of mātauranga Māori relate to your sense of identity as a New Zealander?

How does it relate to your sense of power?

Has your decision to become/not to become or to continue/not continue being Registered been influenced by the inclusion of mātauranga Māori as a mandatory element in the Body of Knowledge?

[REGISTERED PARTICIPANTS ONLY] Has your involvement in LIANZA’s Professional Registration scheme impacted on your journey of engagement with mātauranga Māori in your professional life? If so, how?

Is there anything else you would like to mention before we finish?

Appendix 2: Focus Group Question Schedule

1. The profession of librarianship in Aotearoa has expressed a commitment to biculturalism since the 1980s – To what extent is the reality living up to the promise of the profession in terms of engagement with mātauranga Māori by non-Māori librarians?

2. In your opinion, what effect has LIANZA Professional Registration had on the extent to which non-Māori librarians engage with mātauranga Māori in their professional lives?

3. What factors help non-Māori librarians to engage with mātauranga Māori?

4. What barriers prevent non-Māori librarians from engaging with mātauranga Māori?

5. Matrix

The matrix is titled non-Māori librarians engaging with mātauranga Māori in their professional lives. Two columns are labelled Benefits and Risks. Three rows are labelled Māori stakeholders, individual librarians, and the profession as a whole.

6. In an ideal world, what would you like to see from individual non-Māori librarians in terms of engagement with Māori knowledge and culture?

7. What needs to happen to bring about change?

8. Is there anything else you would like to talk about before we finish?

References

Allan, J. (2001). Review of the measurement of ethnicity: Classification and issues. https://www.stats.govt.nz/assets/Uploads/Retirement-of-archive-website-project-files/Methods/Review-of-the-Measurement-of-Ethnicity-Classifications-and-issues/review-of-the-measurement-of-ethnicity-classification-and-issues-main-paper.pdf

Bell, A. (2024). Becoming Tangata Tiriti: Working with Māori, honouring the Treaty. Auckland University Press. 

Black, R. (2010). Treaty people: Recognising and marking Pākehā culture in Aotearoa New Zealand. (Doctoral dissertation, University of Waikato, Hamilton). Retrieved from https://researchcommons.waikato.ac.nz/handle/10289/4795  

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101.  

Bryant, M. (2015). Whāia te mātauranga – How are research libraries in Aotearoa New Zealand applying Ngā Ūpoko Tukutuku / the Māori Subject Headings and offering them to users? (Master’s thesis, Victoria University of Wellington, Wellington). Retrieved from http://researcharchive.vuw.ac.nz/xmlui/handle/10063/4633  

Clark, E. & Hill, R. (2024). New Zealand is unwinding ‘race-based policies’. Māori say it’s taking away their rights. ABC News. https://www.abc.net.au/news/2024-09-19/new-zealand-unwinding-maori-rights-treaty-of-waitangi/104364638 

Department of Corrections | Ara Poutama Aotearoa. (2025). Prison facts and statistics – June 2025. https://www.corrections.govt.nz/resources/statistics/quarterly_prison_statistics/prison_facts_and_statistics_-_june_2025

Dervin, B. (2003). From the mind’s eye of the user: The Sense-Making qualitative-quantitative methodology. In B. Dervin, L. Foreman-Wernet, & E. Lauterbach (Eds.), Sense-Making Methodology reader: Selected writings of Brenda Dervin (pp. 269-292). Hampton Press. 

Dyall, L., Bridgeman, G., Bidois, A., Gurney, H., Hawira, J., Tangitu, P., & Huata, W. (1999). Māori outcomes: Expectations of mental health services. Social Policy Journal of New Zealand, 12, 1-20.

Dyson, L. (2001). Traces of identity: The construction of white ethnicity in New Zealand. (Doctoral dissertation, Middlesex University, United Kingdom). Retrieved from http://eprints.mdx.ac.uk/6691/  

Haar, J., & Martin, W. J. (2022). He aronga takirua: Cultural double-shift of Māori scientists. Human Relations, 75(6), 1001-1027.

Hinton, M. (2023). Black librarianship: Stories of impact and connection. School Library Journal. https://www.slj.com/story/Black-Librarianship-Stories-of-Impact-and-Connection

Infometrics. (2024). 2023 sector profile: Arts and creative – Maori. Manatū Taonga | Ministry for Culture and Heritage. https://www.mch.govt.nz/publications/arts-and-creative-sector-economic-profiles-2023

Irwin, K., & Katene, W. (1989). Maori people and the library: A bibliography of Ngati Kahungunu and Te Waka o Takitimu resources. He Parekereke, Department of Education, Victoria University of Wellington. 

Jones, A. (2020). This Pākehā life: An unsettled memoir. Bridget Williams Books. 

King, M. (2004). Being Pakeha now (2nd ed.). Penguin.

Library and Information Association of New Zealand Aotearoa | Te Rau Herenga Aotearoa. (2026). Professional Registration: Who has RLIANZA. https://web.archive.org/web/20260112202502/https://www.lianza.org.nz/professional-development/professional-registration/who-has-rlianza/

Library and Information Association of New Zealand Aotearoa | Te Rau Herenga Aotearoa. (2025a). Te Rau Herenga o Aotearoa | Library and Information Association of New Zealand annual report 2024-2025. https://www.lianza.org.nz/media/xy0hlxzu/annual-report-2025-final.pdf  

Library and Information Association of New Zealand Aotearoa | Te Rau Herenga Aotearoa. (2025b). Freedom to read toolkit. https://www.lianza.org.nz/freedom-to-read-toolkit/ 

Library and Information Association of New Zealand Aotearoa | Te Rau Herenga Aotearoa. (2024). LIANZA values. https://www.lianza.org.nz/about/who-we-are/lianza-values/

Library and Information Association of New Zealand Aotearoa | Te Rau Herenga Aotearoa. (2020). LIANZA code of practice: 5.00 professional registration. https://www.lianza.org.nz/media/lkoat5m2/500-professional-registration.pdf

Library and Information Association of New Zealand Aotearoa | Te Rau Herenga Aotearoa. (n.d.-a). BoK 11: Awareness of Indigenous knowledge paradigms. https://www.lianza.org.nz/professional-development/professional-registration/bodies-of-knowledge-bok/#BOK-eleven

Library and Information Association of New Zealand Aotearoa | Te Rau Herenga Aotearoa. (n.d.-b). Te Tōtara workforce capability. https://www.lianza.org.nz/professional-development/te-totara-workforce-capability/

Library and Information Association of New Zealand Aotearoa Professional Registration Board. (2013). LIANZA Professional Registration Board professional practice domains and bodies of knowledge, December 2013. LIANZA. 

Library and Information Association of New Zealand Aotearoa Taskforce on Professional Registration. (2005). Professional future for the New Zealand Library and Information profession: Discussion document. LIANZA. 

Library and Information Association of New Zealand Aotearoa | Te Rau Herenga Aotearoa & Te Rōpū Whakahau. (2024). LIANZA and Te Rōpū Whakahau partnership agreement. https://www.lianza.org.nz/media/rihc3hoz/lianza-te-ropu-whakahau-partnership-agreement-2024.pdf

Lilley, S. (2013). Te Rōpū Whakahau: Waiho i te toipoto, kaua i te toiroa, celebrating 20 years. Te Rōpū Whakahau. 

Lulich, S. (2007). Professional registration scheme update & reasons to join. LIANZA.

Maori Library Service Committee. (1963). Library service to Maoris [sic]: Maori Library Service Committee report to the NZLA Council, February, 1963. New Zealand Libraries, November, 254-260.  

Mead, H. M. (2012). Understanding mātauranga Māori. In Haemata Limited (Ed.), Conversations on mātauranga Māori (pp. 9-14). New Zealand Qualifications Authority. 

Mercier, O., Asmar, C., & Page, S. (2011). An academic occupation: Mobilisation, sit-in, speaking out and confrontation in the experiences of Māori academics. Indigenous Education, 40, 81-91.  

Millen, J. (2010). Te rau herenga, a century of library life in Aotearoa: The New Zealand Library Association & LIANZA, 1910-2010. LIANZA.

Moorfield, J. (n.d.). Pākehā. In Te Aka Māori Dictionary. Retrieved January 21, 2026, from https://maoridictionary.co.nz/word/4997

Newman, M. (2025, August 17). Dismantling separatism. New Zealand Centre for Political Research. https://www.nzcpr.com/dismantling-separatism/

Ngā Pae o te Māramatanga | New Zealand’s Māori Centre of Research Excellence. (n.d.). Te reo Māori – A taonga. https://www.maramatanga.ac.nz/news-events/news/te-reo-m-ori-taonga

Orange, C. (2023). Te Tiriti o Waitangi – the Treaty of Waitangi. Te Ara – the Encyclopedia of New Zealand. https://teara.govt.nz/en/te-tiriti-o-waitangi-the-treaty-of-waitangi

Oxborrow, K. (2020). “It’s not just a professional development thing”: Non-Māori librarians in Aotearoa New Zealand making sense of mātauranga Māori. (Doctoral dissertation, Te Herenga Waka | Victoria University of Wellington, Wellington). https://doi.org/10.26686/wgtn.17148506.v1

Oxborrow, K., Goulding, A., & Lilley, S. (2017). The interface between Indigenous knowledge and libraries: The need for non-Māori librarians to make sense of mātauranga Māori in their professional lives. In Proceedings of RAILS – Research Applications, Information and Library Studies, 2016, School of Information Management, Victoria University of Wellington, New Zealand, 6-8 December, 2016. Information Research, 22(4), paper rails1619. Retrieved from http://InformationR.net/ir/22-4/rails/rails1619.html

Oxborrow Vambe, K. (2025). How can managers in libraries support their teams to engage with mātauranga Māori (Māori knowledge)? Journal of New Librarianship, 10(1), 100–115. https://doi.org/10.33011/newlibs/18/10 

Paewai, P. (2025). Human rights complaint filed to United Nations over treatment of Māori. Radio New Zealand. https://www.rnz.co.nz/news/national/578480/human-rights-complaint-filed-to-united-nations-over-treatment-of-maori

Ritchie, A. (2013). ‘Pākehā librarianship at the interface’: Being an ally in Māori student success through teaching and learning information literacies. (Master’s thesis, Victoria University of Wellington, Wellington). Retrieved from http://researcharchive.vuw.ac.nz/handle/10063/2862

Royal, T. A. C. (2012). Māori. Te Ara – the Encyclopedia of New Zealand. https://teara.govt.nz/en/maori 

Smithson, J. (2000). Using and analysing focus groups: Limitations and possibilities. International Journal of Social Research Methodology, 3(2), 103-119.  

Stats NZ | Tatauranga Aotearoa. (2023). Aotearoa data explorer. https://explore.data.stats.govt.nz/vis?fs[0]=2023%20Census%2C0%7CWork%23CAT_WORK%23&pg=0&fc=2023%20Census&bp=true&snb=3&df[ds]=ds-nsiws-disseminate&df[id]=CEN23_TBT_004&df[ag]=STATSNZ&df[vs]=1.0&dq=oc399312%2Boc599711%2Boc224611%2BtwTotal%2BsoTotal%2BcdTotal.2023&ly[rw]=CEN23_TBT_IND_001&to[TIME]=false

Stats NZ | Tatauranga Aotearoa. (2024a). 2023 Census population counts (by ethnic group, age, and Māori descent) and dwelling counts. https://www.stats.govt.nz/information-releases/2023-census-population-counts-by-ethnic-group-age-and-maori-descent-and-dwelling-counts/ 

Stats NZ | Tatauranga Aotearoa. (2024b). Ngā tūtohu Aotearoa | Indicators Aotearoa New Zealand – Wellbeing data for New Zealanders: Amenable mortality. https://statisticsnz.shinyapps.io/wellbeingindicators/_w_0b03922ab93844bf809b3cdd60735970/?page=indicators&class=Social&type=Health&indicator=Amenable%20mortality

Stone, L. (2013). LIANZA careers survey 2012. The Information Workshop. https://lianza.recollect.co.nz/nodes/view/2299#idx11845

Te Huia, A. (2016). Pākehā learners of Māori language responding to racism directed toward Māori. Journal of Cross-Cultural Psychology, 47(5), 734-750.

Tuhou, T. (2011). Barriers to Māori usage of university libraries: An exploratory study in Aotearoa New Zealand. (Master’s thesis, Victoria University of Wellington, Wellington). Retrieved from http://researcharchive.vuw.ac.nz/xmlui/handle/10063/1700  

University of Auckland. (2024). NZ needs a legal remedy for cultural misappropriation. https://www.auckland.ac.nz/en/news/2024/08/09/nz-needs-a-legal-remedy-for-cultural-misappropriation.html

Waitangi Tribunal | Te Rōpū Whakamana i te Tiriti o Waitangi. (n.d.). Waitangi Tribunal. https://www.waitangitribunal.govt.nz/en/home

Wei, X. L., & Boamah, E. (2019). Auckland libraries as a multicultural bridge in New Zealand: Perceptions of new immigrant library users. Global Knowledge, Memory and Communication, 68(6), 581-600. 

Endnotes

  1. Tribal group of East Coast area north of Gisborne to Tihirau. ↩
  2. Fairy folk – mythical being [sic] of human form with light skin and fair hair. ↩
  3. Fairy folk – fair-skinned mythical people who live in the bush on mountains. Although like humans in appearance, the belief is that they do not eat cooked food and are afraid of fires. ↩
  4. Tribal group to the north-east of Mount Taranaki including the Waitara and New Plymouth areas. A section of Te Āti Awa moved to parts of the Wellington area and the northern South Island in the 1820s. ↩
  5. A tribal group of the Horowhenua and northern Kapiti coast. ↩
  6. Tribal group of much of Northland. ↩
  7. Tribal saying, tribal motto, proverb (especially about a tribe), set form of words, formulaic expression, saying of the ancestors, figure of speech, motto, slogan – set sayings known for their economy of words and metaphor and encapsulating many Māori values and human characteristics. ↩

Author Interview: Shelley Noble / LibraryThing (Thingology)

Shelley Noble

LibraryThing is pleased to sit down this month with best-selling author Shelley Noble, whose many novels run the gamut from historical fiction to mystery to contemporary women’s fiction. A former professional dancer, Noble toured with Twyla Tharp Dance and American Ballroom Theater, and has worked as a choreographer for film and theater productions. She earned her BFA and MFA at the University of Utah, and taught at California State University in Fresno. A former president of Sisters in Crime, Noble is a member of Mystery Writers of America, Romance Writers of America, and Liberty States Fiction Writers, and currently lives in New Jersey. Her newest novel, The Sisters of Book Row, was published by William Morrow in March 2026 and tells the story of three sisters and bookstore proprietors who confront the Comstock laws in 1915 Manhattan. Noble sat down with Abigail this month to discuss the book.

How did the story idea for The Sisters of Book Row first come to you? Were you drawn to the thought of writing about bookstores and booksellers, or perhaps about the Comstock laws?

I’ve had Comstock in the back of my mind for a while, a perfect villain, a vicious zealot, one who I particularly despise. So when my editor suggested I write a book about books, guess who came to mind. And because I write about Manhattan, I knew the perfect place in which to set the story, Book Row, once the mecca of rare and used book buyers from around the world. And like a magnet, this germ of an idea began collecting bits and pieces. An article about the current Cohen sisters of the Argosy Book Store inspired me to create the Applebaum sisters, and Sisters was born.

Tell us about the Comstock laws. What were they, and what effect did they have on the world of books and booksellers, as well as the wider American society of that time?

Anthony Comstock moved to New York in the early 1870s and was appointed special agent to The Society for the Suppression of Vice and the U.S. Post Office to prevent pornography from being sent through the mail. He was given the power to search, seize, arrest and fine, the monies of which he received half. His activities quickly spread to all facets of life, and as his power grew, his ideas of what was “obscene, lewd, or lascivious,” changed, sometimes from week to week. Later in his career, his extreme and outlandish views made him a laughing stock, ridiculed by the newspapers, and dismissed by the courts. The Post Office fired him, but he refused to leave. The NYSPV replaced him, but again he ignored them and continued on his crusade. The Comstock Act, enacted in 1873, included a ban on contraception and was written by Comstock himself. It was never repealed, but Roe v. Wade relegated it to being a zombie law. Unfortunately states had adopted the original law for their own use. And today we see it being used to prevent birth control information, or any reproductive health measures from all women. A zealot, who is said to have destroyed 15 tons of books and four million pictures and other materials, who hated women and died ridiculed and despised, and yet he has managed to rear his ugly head again today.

Your story is set on Book Row, a district in lower Manhattan that contained over three dozen bookstores at its height. Did you have to do any research about the history of the area, and what were some interesting things you learned? If you could visit any bookstore from that period, which would it be? (Disclosure: I worked for the Strand bookstore—the sole survivor of Book Row—for many years).

I used to hang out at the Strand all the time. Many years ago. It was a solace and an adventure away from the chaos of the city and my profession as a young dancer. I hope my Sisters of Book Row can come to life for readers of today. I did loads of research, I always do. It’s one of my favorite parts of writing historical fiction. There’s a lovely book titled Book Row by Marvin Mondlin and Roy Meador. It didn’t have as much information on my particular period 1915 as I hoped, but it was fascinating to read about the continuation of this community, especially post 1930.

Once I get an overview of my time and place and characters, I like to depend mainly on primary sources, newspapers, anecdotes, letters. That way I know what they know, feel what they feel, and try to leave my historical outsider’s knowledge at the door. I mostly learned the neighborhood in bits and pieces since the area has built up so much since then.

Some oddities and coincidences: Louis Cohen, owner of the Argosy and father of the Cohen sisters who helped inspire this story, had his own run-in with “Comstockery” in the 1930s. When the city began digging the new subway, customers couldn’t get around construction, and many stores had to move uptown, then moved back when it was completed. After I had developed and lived with my three Applebaum sisters and the Arcadia for weeks and several chapters, I learned that there was actually a Mr. Applebaum who had a bookshop in the Row named Arcadia. Did I read about it and forget, while it became ingrained in my subconscious? Or was it really a coincidence? I was too attached to my own Applebaums to change their names, so I mentioned the existence of the two families in my Author Notes.

Sometimes a story is like a jigsaw puzzle, learning a phrase, a sentence about the inhabitants. The two booksellers, who were constantly arguing, gave me an image that led to the daily morning conversations around the newsstand. They might have argued and complained, but they were neighbors and they were ready to take up a collection to bail one of their own out of jail when Comstock was on the prowl.

The book world has been rocked in recent years by an upsurge of attempts at censorship and book suppression. I chronicle some of that in the Freedom of Expression column of our monthly State of the Thing newsletter. What can your story tell us about our situation today, in this respect?

For our modern selves, I wish that The Sisters of Book Row, and their withstanding of attacks on what they loved most, were so outside our experience, so unbelievable, that readers might say, “Oh, that would never happen here.” But unfortunately we see it happening throughout our country by those who, like Comstock, denounce books they’ve never even read and bully those who only want to share knowledge. Their attacks sometimes seem so diffuse and widespread that we might think it will never affect us. It will, but I have to believe that we’re more experienced, more aware of the rotten core of the book banning movement, and that if we keep up a constant resistance, we will prevail.

Tell us a little bit about your writing process. Do you have a particular routine—a schedule you keep, or a place you like to write? You write in a number of different genres, does your story-building process differ, depending on the genre?

I do have a routine, though it has changed over the years and books. When I wrote two books a year, I had a tighter schedule. Now that I’m writing one historical, I can linger in the research and jump down a rabbit hole or two. And I find that in writing about the past, I’ve changed from being an early-morning writer to a late-night writer. There’s something about the dark and the quiet that I find conducive to delving into the past. Of course, the nearer I get to deadline, the more daytime writing I have to do. I have a home office where I write all my books. Each genre requires a different energy and attitude. The contemporaries don’t require as much deep-dive research, so I can begin writing sooner than with the historicals. No matter the genre, I depend on a storyboard to keep everything on track: not a computer screen board, but a big gridded Lucite board on the wall with color-coded Post-its for characters and plot points that can be moved around as the story develops.

What comes next for you? Are there any new books you’re currently working on?

I’m currently working on a story that takes place in 1870 Long Branch, New Jersey, where President Grant has his summer capital and a young woman aspiring to become a lawyer confronts the changes and the scandals that threaten the quiet seaside town she calls home.

Tell us about your library. What’s on your own shelves?

Lots of history books, mainly early 20th-century New York and late 19th-century American theatre, from when the Rialto was Union Square. Dickens, Austen, Mary Stewart. A rotation of women’s historical fiction. Eastern religion. Mystery and science fiction. I’m a pretty eclectic reader.

What have you been reading lately, and what would you recommend to other readers?

This fall I decided to go on a rereading spree. I started with Fahrenheit 451 followed by 1984 right before the holidays. Yes, they are still as scary as when I read them in school. After that, I immediately pulled out my favorite chapters of The Pickwick Papers. Now I’m re-rereading The Hobbit, and reading A Founding Mother** about Abigail Adams, by Stephanie Dray and Laura Kamoie. I highly recommend all of these.

**Stay tuned for our interview with Stephanie Dray and Laura Kamoie this coming July, in honor of America’s 250th birthday!

Vibe Analysis / Dan Cohen

[Image: a multicolored painting of nature, factories, housing, and skyscrapers]
Oscar Bluemner, Evening Tones, 1911-1917, Smithsonian American Art Museum

[This is the third piece in a miniseries on finding the right line between human thought and AI assistance, focusing on the stages of scholarly work from initial ideas through the research process to publication, although I believe much of this discussion is applicable to intellectual work beyond the academy. The miniseries began with this introduction and was followed by an essay on the origins of new ideas. In this issue, a look at how we begin to analyze evidence and data.]


So you have that great idea for a research project, and you’ve found or developed sources, perhaps with AI-assisted search and discovery within collections. Now comes the analysis: how we extract meaning and assemble a thesis out of these research materials. Where should scholars draw the line on using, or avoiding, AI to do this analysis?

First, we need to unpack what “analysis” really means and how we do it. In 2000, John Unsworth proposed a helpful agenda for the nascent field of digital humanities that focused on using the power and flexibility of digital media to improve “scholarly primitives,” or the fundamental ways that scholars view, inspect, and interpret their sources. Unsworth was not out to replace the entire scholarly process, but to enhance its early-stage, basic components, such as annotation, comparison, and sampling. For instance, in Biblical scholarship it is helpful to compare parallel texts in English, Hebrew, and Greek; indeed, there are many scholarly editions in this format, and they are often used for more rigorous exegesis than unilingual volumes.

What if we could produce such aids to serious scholarship using computers? Unsworth’s innovative group at the University of Virginia, the Institute for Advanced Technology in the Humanities (IATH), created software that could provide, on the fly, three-column views such as this:

[Image: a screenshot of a computer program showing three columns of biblical text, with the first column in English, the second in Hebrew, and the third in Greek]
Babble, IATH's multicolumn Unicode browser (2000)

Or, if a scholar wanted to study the visual arts rather than texts more closely, we could create software to zoom in on details, highlight them, and craft descriptions and analytical notes:

[Image: a page from a William Blake text, with his illustrations, with a green box over one section and a pop-up window containing descriptive text]
IATH’s Inote in action on the digital Blake Archive (2000); yes, it’s a Java applet running in Netscape Navigator

The technology to do this work in 2000 was complex and, since it took considerable programming time, rather expensive. When the book I wrote with Roy Rosenzweig, Digital History, came out twenty years ago, one review praised many of the new approaches but worried, rightly, about the cost of building and maintaining anything involving computers and programmers.

* * *

Flash forward two decades: The “building” phase of scholarly software has, within the last few months, become greatly accelerated and inexpensive, along with all other forms of software development. I hesitate to use the tech bro-logism “vibe coding” here, but whatever we call it, there’s no doubt that it is orders of magnitude easier to generate the kind of software that it took Unsworth and IATH, or Roy and myself and our colleagues at the Center for History and New Media, months or even years to build, at a significant cost. Moreover, “maintaining” such software over time matters less for a scholar doing their research project than it does for the production-grade software IATH (Babble, Inote) and CHNM (Zotero, Omeka) created for general-purpose use. For individual academics, vibe coding can provide a quick and easy tool to examine and analyze raw material under their consideration. The scholarly primitives have evolved.

Some scholars are already working with AI in this way — not creating full articles from a single prompt, but producing discrete analytical units that present documents, images, and data in ways that help their research process. This is clearly a beneficial use of AI: assisting human interpretation rather than replacing it, using digital media to frame scholarly resources so that the scholar can reach new conclusions.

Take this example from Sarah Bull, like me a Victorianist, who studies the history of the publishing industry in the UK. She recently wrote a monograph on the lively intersection of the medical profession, publishers, and pornography in the nineteenth century, Selling Sexual Knowledge: Medical Publishing and Obscenity in Victorian Britain. In her research, Bull found or assembled useful data sets, like the London addresses of publishers of works that were deemed “obscene.” (In a fascinating parallel to today’s dopamine-fueled social media, many of these salacious books were mashups of titillating bits drawn from other sources.) As a companion to the book, she used Claude Code to produce an interactive digital map that helps to situate herself, and her readers, within that demimonde:

[Image: a map of London with red dots marking where publishers of obscene literature were located]
Sarah Bull, Mapping Nineteenth-Century Obscenity (2026)

(There’s a reason that a small street near the Strand got the nickname the “Backside of St. Clements.”) In the spirit of scholarly citation and reproducibility, Bull provides the code and references for the map on GitHub, allowing others to clone her analytical tool for other data sets.
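Bull’s actual pipeline is not reproduced here, but the data core of such a map is small: points plus metadata, serialized as GeoJSON that any web map library (Leaflet, for example) can render. A minimal sketch, with made-up placeholder records rather than Bull’s data set:

```python
import json

# Hypothetical records: (publisher, latitude, longitude).
# These are illustrative placeholders, not Bull's actual data.
records = [
    ("Example Publisher A", 51.5130, -0.1140),
    ("Example Publisher B", 51.5115, -0.1190),
]

def to_geojson(rows):
    # GeoJSON requires [longitude, latitude] coordinate order.
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"name": name},
        }
        for name, lat, lon in rows
    ]
    return {"type": "FeatureCollection", "features": features}

geojson = json.dumps(to_geojson(records), indent=2)
```

The resulting file can be dropped directly onto a web map, which is much of what an AI assistant generates when asked to "make a map" from tabular data.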

Similarly, historian Jason Heppler recently used AI to help him map land use in the United States, a process that would have taken countless hours without this automation.

I am in the throes of a research project for a book on the politics around land, agriculture, environmentalism, and land management on the Great Plains. It occurred to me recently that I need to stretch my timeline backwards a bit to the Bankhead-Jones Act of 1937…I wanted to see these lands, in part because one thing that’s complicated about writing this regional history I’m working on is the quilted patchwork of public and private land ownership and management that happens on the grasslands. So I asked Claude: without any input from me, find the data and make this map. It took about fifteen minutes.

[Image: a map of North and South Dakota showing green rectangles of land subject to the Bankhead-Jones Farm Tenant Act (1937)]
Jason Heppler, Lands Subject to the Bankhead-Jones Farm Tenant Act, 1937 (2026)

Heppler’s coda to his post on “vibing digital history” strikes me as exactly right:

The point here is that if these tools…create things in a fraction of the time it once took, then these tools are phenomenally empowering,…an investment in one’s intellectual work. [Having AI do the] rote or routine work can free up your time to focus on the history rather than the technology.

Yes, let the machine take care of the scut work so you can spend your human time thinking about the crucial interpretive aspects of your research — a sensible and effective division of labor. And there can be a great deal of scut work in research, historical and otherwise. Historian Cameron Blevins recently produced an interactive digital map, similar to those by Bull and Heppler, but also had the AI do another, even more tedious task first: extracting tabular data from low-resolution digitized documents. The result of this one-two punch is a neat visualization of how long it took for a piece of mail to make its way between two destinations in the U.S. in the late nineteenth and early twentieth centuries.

[Image: a map of the United States showing each city as a circle, with a chart of how long it took to send a piece of mail in 1882, 1883, 1892, 1902, and 1908]
Cameron Blevins, How Fast Was the Mail? Transit times for railway mail service between major U.S. cities, 1882–1908 (2026)

These rapidly produced, AI-assisted maps can be a new tool in the researcher’s kit, aiding both in unstructured exploration and, more narrowly, in supporting a thesis.

* * *

Should we worry about AI hallucinations invisibly corrupting these digital constructions, and thus tainting the theses of these scholars? Because the researchers are not using these tools to formulate conclusions, and because their models maintain the original data so they, and anyone else, can spot-check them, this seems like much less of a concern than using AI further along in the scholarly process. Moreover, they are using these tools as part of a broader process of scholarship that still involves traditional methods such as the close reading of texts. They understand the need to construct an interconnected web of evidence for their articles and books, just as they did before. In this wider context, vibe coding and diligent analysis can coexist.

Although I have focused here on interactive digital maps, most of the other scholarly primitives John Unsworth outlined a quarter century ago can also be swiftly replicated using AI. For instance, it is now possible for scholars to create multicolumn views of parallel texts, or an app that performs an initial pass of handwriting recognition on manuscripts, letters, or diaries, and then provides a text editor for the researcher to correct any errors (with the original document in an adjacent window), or a visual environment for an assembled set of artworks as an aid in side-by-side comparisons or as a place to highlight and annotate details. For scientists, the scholarly primitives are different but equally tractable, enabling highly customized databases, graphical interfaces for exploring data, and the automation of digital lab notebooks.
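A primitive like Babble’s parallel view, for instance, reduces to aligning text segments and emitting a table. A minimal sketch (the verses and function name are illustrative, not drawn from IATH’s code):

```python
from html import escape

def parallel_view(columns, headers):
    # Render aligned text segments (e.g., verses) as an HTML table,
    # one language per column -- the shape of a multicolumn browser.
    head = "".join(f"<th>{escape(h)}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(cell)}</td>" for cell in row) + "</tr>"
        for row in zip(*columns)  # zip aligns the nth segment of each column
    )
    return f"<table><tr>{head}</tr>{body}</table>"

html = parallel_view(
    [["In the beginning..."], ["Bereshit..."], ["En archei..."]],
    ["English", "Hebrew", "Greek"],
)
```

What once required a funded institute and custom Java applets is now a few minutes of generated scaffolding around logic like this.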

In my own tests, this isn’t a completely seamless process yet. It helps to know your way around the command line and file system, and understand what Python, SQLite, HTML, and other programming, web, and database languages and packages are, and how to install, update, and tweak them. In addition, on the data side, scholarly vibe coding often relies on sources being openly available and thus open to AI applications, an assumption that may be challenged by a brewing backlash against AI among the holders of collections.

And then there is the slippery slope: If we are going to go this far with AI in scholarship, why not go farther or even all the way? Why not have AI do the entire analytical process and spit out the result, perhaps as a nicely formatted paper? Knowing the audience for this newsletter, this is an uncomfortable question. But it’s one that we should at least ask and take some time to think through. It will be the topic of the next piece in this miniseries.

The Handoff Problem (Updated) / David Rosenthal

Source
Around twelve years ago, Google figured out the fundamental problem facing Tesla's Fake Self Driving. Almost nine years ago in Robot Cars Can’t Count on Us in an Emergency, John Markoff wrote:
Three years ago, Google’s self-driving car project abruptly shifted from designing a vehicle that would drive autonomously most of the time while occasionally requiring human oversight, to a slow-speed robot without a brake pedal, accelerator or steering wheel. In other words, human driving was no longer permitted.

The company made the decision after giving self-driving cars to Google employees for their work commutes and recording what the passengers did while the autonomous system did the driving. In-car cameras recorded employees climbing into the back seat, climbing out of an open car window, and even smooching while the car was in motion, according to two former Google engineers.
Gareth Corfield at The Register added:
Google binned its self-driving cars' "take over now, human!" feature because test drivers kept dozing off behind the wheel instead of watching the road, according to reports.

"What we found was pretty scary," Google Waymo's boss John Krafcik told Reuters reporters during a recent media tour of a Waymo testing facility. "It's hard to take over because they have lost contextual awareness."
Follow me below the fold for a wonderful example of Tesla's handoff problem, and a discussion of the difference between Tesla's and Waymo's approaches to self-driving.

I wrote about this handoff problem in 2017's Techno-hype part 1. I did a thought experiment, imagining mass-market cars 3 times better than Waymo's at the time:
A normal person would encounter a hand-off once in 15,000 miles of driving, or less than once a year. Driving would be something they'd be asked to do maybe 50 times in their life.

Even if, when the hand-off happened, the human ... had full "situational awareness", they would be faced with a situation too complex for the car's software. How likely is it that they would have the skills needed to cope, when the last time they did any driving was over a year ago, and on average they've only driven 25 times in their life? Current testing of self-driving cars hands-off to drivers with more than a decade of driving experience, well over 100,000 miles of it. It bears no relationship to the hand-off problem with a mass deployment of self-driving technology.
I concluded:
But the real difficulty is this. The closer the technology gets to Level 5, the worse the hand-off problem gets, because the human has less experience. Incremental progress in deployments doesn't make this problem go away.
Raffi Krikorian:
used to run the self-driving-car division at Uber, trying to build a future in which technology protects us from accidents. I had thought about edge cases, failure modes, the brittleness hiding behind smooth performance. My team trained human drivers on when and how to intervene if a self-driving car made a mistake. In the two years I ran the division, we had no injuries in our early pilot programs.
He has an article in the current Atlantic entitled My Tesla Was Driving Itself Perfectly—Until It Crashed with the sub-head:
The danger of almost-perfect tech
As an enthusiast for self-driving technology, Krikorian used it:
With my own Tesla, I started out using Full Self-Driving as the default setting only on highways. That’s where it makes sense: You have clear lane markers and predictable traffic patterns. Then, one day, I tried it on a local road, and it worked well enough to become a habit.
But, after three years:
My memory is hazy, and some of it comes from one of my sons, who watched the whole thing unfold from the back seat. The car was making a turn. Something felt off—the steering wheel jerked one way, then the other, and the car decelerated in a way I didn’t expect. I turned the wheel to take over. I don’t know exactly what the system was doing, or why. I only know that somewhere in those seconds, we ended up colliding with a wall.
He didn't have "situational awareness", even though he was an experienced driver aware of the handoff problem. He sums up the current problem, with drivers like him:
Full Self-Driving works almost all of the time—Tesla’s fleet of cars with the technology logs millions of miles between serious incidents, by the company’s count. And that’s the problem: We are asking humans to supervise systems designed to make supervision feel pointless. A machine that constantly fails keeps you sharp. A machine that works perfectly needs no oversight. But a machine that works almost perfectly? That’s where the danger lies. After a few hours of flawless performance, research shows, drivers are prone to start overtrusting self-driving systems. After a month of using adaptive cruise control, drivers were more than six times as likely to look at their phone, according to one study from the Insurance Institute for Highway Safety.
Imagine this problem compounded by handing off to a driver who hadn't driven in a year.

Google was building Level 4 robotaxis. Their conservative approach was to eliminate the handoff problem completely. Waymos operate on carefully mapped routes after much practice, and are equipped with a diverse set of sensors. Just as airliners have a designated diversion airport everywhere along their flight path, Waymos know a safe place to stop and ask for help from remote humans. The humans don’t drive the cars; they just advise the car on how to solve the problem. This can, as I have seen a couple of times, cause frustration among other road users, but it is safe.

Tesla, on the other hand, had a Level 2 driver-assist system with a limited set of sensors, which depended on handing off to the driver in case of confusion. They consistently marketed it as "Full Self-Driving" with exaggerated claims about its capabilities, and sold it to normal, untrained drivers. They could not, and could not afford to, implement Google's approach. Why not?
  • Scale: Tesla has 1.1M FSD customers, where six months ago Waymo had about 2K cars in service. To support them, Waymo has about 70 remote operators on duty. Of course, FSD is used much less intensively; let's guess only 5% as much. Even if, optimistically, Tesla's technology generated as few remote requests as Waymo's, they would need almost 2,000 remote operators on duty.
  • Technical: First, Tesla markets FSD as usable anywhere, even if their terms of service disagree. So they lack the detailed maps Waymos use when they need to find a safe place. Second, Tesla has far fewer sensors, so has much less information on which to base the need for and choice of a safe place.
  • Marketing: There are two problems. First, telling the public that FSD will sometimes need to stop and ask for help goes against the idea that it is "Full Self Driving". Second, everyone can see that a Waymo is driving itself and can set their expectations to match. No one can tell that a Tesla is using Fake Self Driving. So if Teslas were stopping unexpectedly, even when Fake Self Driving wasn't engaged, the assumption would be that the technology had failed.
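The Scale point's back-of-envelope arithmetic can be written out explicitly. All figures come from the bullet above; the 5% usage intensity is, as noted there, a guess:

```python
# Back-of-envelope: how many remote operators Tesla would need if FSD
# generated remote-help requests at Waymo's per-vehicle rate.
fsd_customers = 1_100_000   # Tesla FSD customers
usage_intensity = 0.05      # guessed: FSD used ~5% as intensively as a robotaxi
waymo_cars = 2_000          # Waymo fleet ~six months ago
waymo_operators = 70        # Waymo remote operators on duty

equivalent_cars = fsd_customers * usage_intensity  # 55,000 robotaxi-equivalents
operators_needed = equivalent_cars / waymo_cars * waymo_operators
print(round(operators_needed))  # prints 1925, i.e. "almost 2,000"
```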
Because Tesla has always depended upon handing off to the human, its minimal robotaxi service with "safety monitors" in Austin, TX crashes six times as often as human-driven taxis.

Update 4th April

Source
Kristen Korosec provides Waymo’s skyrocketing ridership in one chart:
Waymo is now providing 500,000 paid robotaxi rides every week across 10 U.S. cities, the company shared in a post on X this week. The eye-popping figure is reflective of the Alphabet-owned company’s accelerated commercial expansion. But it’s Waymo’s rate of growth in ridership and markets that offers a more compelling story.

In less than two years, the company’s average weekly paid robotaxi trips have grown tenfold, from 50,000 per week in May 2024 to 500,000 per week today. Over that same two-year timespan, Waymo has expanded within its initial markets of Phoenix, San Francisco, and Los Angeles — and beyond them to Austin, Atlanta, Miami, Dallas, Houston, San Antonio, and Orlando. Those seven cities in the Sun Belt were all added in just the past year.
The fleet hasn't grown with the rides, showing increased utilization and thus improved economics:
Waymo’s robotaxi fleet has also grown, although the company has guarded those numbers and rarely provides updates. Data provided in December 2025 to the National Highway Traffic Safety Administration (NHTSA) shows the company had 3,067 robotaxis equipped with its 5th generation self-driving system. The company still uses that “over 3,000” fleet number today. That could soon change with the introduction of its 6th generation self-driving system, which will debut on the Zeekr minivan, known as Ojai, and the Hyundai Ioniq 5.

Append the LM to the IR / Mat Kelly

From January to March 2026, I taught INFO624: Intelligent Search and Language Models at Drexel CCI—a course that sits at the intersection of classical information retrieval (IR) and modern AI-driven language models.

This offering marked a deliberate shift from previous iterations of INFO624. While earlier versions focused on traditional IR systems, this course expanded to explore how language models are reshaping the way we search, rank, and interact with information. In many ways, the guiding question became: what does it mean to append the LM to the IR?

The course was delivered in a cross-listed format, with a mix of in-person and asynchronous students. Preparing and teaching it required not just updating materials, but continuously adapting to a rapidly evolving technical landscape—one where best practices can shift within months.

Topics

Despite losing two instructional days (MLK Day and a late-January snowstorm), the course covered eight weeks of material spanning both foundational and emerging topics:

  • Introduction to IR and AI foundations
  • Text Processing and AI-enhanced Pre-processing
  • From Vector Space Models to Dense Representations
  • Probabilistic Models and Neural Language Models for IR
  • AI-Driven Web Search and Retrieval Techniques
  • Graph Analysis and Neural Linking Models
  • Evaluation Metrics and AI-Enhanced IR Systems
  • Relevance Feedback with AI Techniques
  • Clustering and Classification with Deep Learning
  • Emerging Topics in AI (e.g., RAG, XAI, Multimodal IR)

Each topic could easily warrant a full course on its own, but the goal here was breadth with meaningful depth—enough to ground students before they explored ideas in their projects.

Student Projects

The course enrolled 20 students, who could choose to work individually or in groups. Projects took one of two forms: (1) an IR/AI-focused literature review or (2) the design and evaluation of a working system. In total, 12 projects were submitted, reflecting a wide range of interests across modern information retrieval and language model integration.

Systems

Omkar, Manjiri, and Priti developed a multi-source search system that retrieves, synthesizes, and self-evaluates information from web, academic, and local data to generate comprehensive, cited answers.
https://github.com/Priti0427/Intelligent-Search-agent

Mokshad and Ishant built a search engine over arXiv papers that combines BM25 with BERT-based retrieval, while providing transparent explanations for ranking decisions.
https://github.com/Mokshu3242/arXiv-Paper-Search-System
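Several of these projects pair BM25 with dense (BERT-style) retrieval. The students' code is not reproduced here, but the common fusion step can be sketched as a weighted sum of normalized scores; the documents, score values, and `alpha` weight below are illustrative only:

```python
def min_max(scores):
    # Normalize a {doc_id: score} dict into [0, 1] so the lexical and
    # dense signals are on a comparable scale before fusion.
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {d: 1.0 for d in scores}
    return {d: (s - lo) / (hi - lo) for d, s in scores.items()}

def hybrid_rank(bm25_scores, dense_scores, alpha=0.5):
    # Weighted sum of normalized BM25 and embedding-similarity scores;
    # a document missing from one list contributes 0 for that signal.
    b, v = min_max(bm25_scores), min_max(dense_scores)
    docs = set(b) | set(v)
    fused = {d: alpha * b.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0) for d in docs}
    return sorted(fused, key=fused.get, reverse=True)

ranking = hybrid_rank(
    {"a": 2.0, "b": 1.0, "c": 0.0},   # lexical (BM25) scores
    {"a": 0.1, "b": 0.9, "c": 0.5},   # dense (embedding) similarities
)
# "b" wins: mediocre lexically but strong semantically
```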

Ian built a two-stage recipe search engine on the Food.com corpus (~230K recipes, 1.1M reviews), integrating BM25 retrieval, rule-based query alignment, and neural embeddings derived from review-based quality signals.
https://github.com/iauger/recipe-search-engine

Chinomso designed a system for question answering over PDFs that incorporates document structure (sections and hierarchy) into both retrieval and grounded generation.
https://github.com/MishaelTech/explanable_structured_rag_pdf

Charles implemented a transparent full-text search engine over newly released JFK assassination documents, enabling precise and citable exploration of primary historical sources.

Robert and Ayush created a system that combines chapter-level character summaries with semantic retrieval to support exploration and querying of long-form narrative texts.

Jake developed a prototype system using FAISS, augmented with salience and recency signals, to retrieve narrative memories for consistent storytelling in AI-driven environments.

Mason built a RAG-based search engine for personal finance, retrieving and summarizing trusted financial documents to answer user questions in natural language.
https://github.com/riccimason99/Financial-Planning-Search-Engine

Literature Reviews

Sriram, Sourav, Khushi, and Lohitha conducted a survey of retrieval-augmented generation (RAG) methods for academic use, focusing on hybrid retrieval, self-reflection, and challenges such as faithfulness and evaluation.

Muhammad analyzed the evolution of neural information retrieval, tracing the progression from early embeddings to modern transformer-based dense retrieval and identifying remaining challenges.

Sriram examined personalization in search, exploring how systems balance relevance with novelty and diversity under ambiguous or evolving user intent.

Grace compared thesauri, knowledge graphs, and latent semantic analysis as methods for incorporating semantic relationships into retrieval systems.

Conclusion

Overall, INFO624 highlighted just how quickly information retrieval and language models are converging—both in research and in practice. What once felt like separate paradigms are now deeply intertwined, with modern systems blending classical ranking methods and neural representations into hybrid approaches.

The range of student projects reflects this shift clearly: systems emphasized not only performance, but also transparency, evaluation, and real-world usability. Just as importantly, many projects grappled with emerging challenges such as faithfulness, explainability, and the limits of current models.

For me, teaching this course reinforced an important reality: working in this space requires constant adaptation. The tools, techniques, and expectations are evolving rapidly, and education must evolve with them. If anything, this iteration of INFO624 felt less like a static course and more like a snapshot of a moving target—one that students are now well-equipped to continue exploring.

Weekly Bookmarks / Ed Summers

These are some things I’ve wandered across on the web this week.

🔖 Review: Measuring AI Ability to Complete Long Software Tasks

Measuring AI Ability to Complete Long Software Tasks, a paper by dozens of authors working at Model Evaluation & Threat Research (METR). They define the “time horizon” metric and show that LLMs’ time horizons have been doubling every seven months, and this growth might have recently accelerated.

🔖 RADIO CAUSE COMMUNE 93.1 FM • PARIS

Radio Cause Commune is a Parisian community radio station that has broadcast on 93.1 FM since November 2017. Forty volunteers, zero advertising, an annual budget of €60,000: we maintain strict editorial independence. We defend free software and media independence, and we build innovative technical tools for free radio.

🔖 Pourquoi je n’utilise pas l’IA

AI annoys me. Deeply. Well, generative AI above all (you know, LLMs), because I can see a certain usefulness in some kinds of AI. Speech recognition, for example.

Let's run through my reasons for not using AI.

🔖 London Book Trades Database

The Bibliographical Society has just launched a redesigned version of the London Book Trades Database (https://lbt.bibsoc.org.uk/).

The original LBT database was the work of the late Michael Turner at the Bodleian Library, assisted by a number of collaborators, drawing particularly on the archival resources of the Stationers’ Company. A web version of the database, created in 2009, ran on servers at the Bodleian until it was closed down in 2024, its software long past its expiry date.

The Bibliographical Society has taken steps to revive the project, this time as a read-only MediaWiki resource based on a new extraction of the data from the original database created by Michael Turner and a radical redesign of the contents and interface (I led this work). This new version, known as LBT Version 2, does not yet contain all the original data, but the people, events, titles, and relationships make it immediately useful. We envisage two or three updates in the coming months as more contents are retrieved and restructured. The new web site has explanatory pages with a full history of the project and its new technical implementation.

In addition to all the famous names of the book trade up to the mid-nineteenth century, entries offer information for more minor figures including family members and apprentices. There are entries for nearly 35,000 people, presenting detailed accounts of the person’s interaction with the Stationers’ Company and data from published sources.

🔖 Wikipedia Bans AI-Generated Content

After months of heated debate and previous attempts to restrict the use of large language models on Wikipedia, on March 20 volunteer editors accepted a new policy that prohibits using them to create articles for the online encyclopedia.

“Text generated by large language models (LLMs) often violates several of Wikipedia’s core content policies,” Wikipedia’s new policy states. “For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below.”

🔖 Trump administration requests Stanford Medical School admissions data, claiming racial discrimination

The Trump Administration opened investigations into admissions policies at the medical schools of Stanford University, Ohio State University (OSU) and the University of California, San Diego (UCSD) on March 25, noting possible race discrimination.

In letters sent to the three schools, the Department of Justice (DOJ) requested data on the last seven years of admitted classes at the medical schools, threatening to withhold federal funding if the schools do not comply by turning over the data requested by April 24. The investigation is part of a larger crackdown on higher education, as the DOJ has launched dozens of investigations into universities during Trump’s second term.

🔖 Discounted Cumulative Gain

Discounted cumulative gain (DCG) is a measure of ranking quality in information retrieval. It is often normalized so that it is comparable across queries, giving Normalized DCG (nDCG or NDCG). NDCG is often used to measure effectiveness of search engine algorithms and related applications. Using a graded relevance scale of documents in a search-engine result set, DCG sums the usefulness, or gain, of the results discounted by their position in the result list.[1] NDCG is DCG normalized by the maximum possible DCG of the result set when ranked from highest to lowest gain, thus adjusting for the different numbers of relevant results for different queries.
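The definition above translates directly into code: each result's graded relevance is divided by the log of its (1-based) rank position, and nDCG divides by the DCG of the ideal ordering. A minimal sketch of the standard formulation:

```python
import math

def dcg(relevances):
    # DCG: graded relevance discounted by log2(rank + 1), ranks 1-based,
    # so position i (0-based) is discounted by log2(i + 2).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    # Normalize by the ideal DCG: the same grades sorted best-first.
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Example with graded relevances for six ranked results:
# dcg([3, 2, 3, 0, 1, 2]) is about 6.86, and ndcg about 0.96,
# since the ideal ordering would be [3, 3, 2, 2, 1, 0].
```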

🔖 axios Compromised on npm - Malicious Versions Drop Remote Access Trojan

axios is the most popular JavaScript HTTP client library, with over 100 million weekly downloads. On March 30, 2026, StepSecurity identified two malicious versions of the library published to npm: axios@1.14.1 and axios@0.30.4. The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross-platform remote access trojan (RAT) dropper targeting macOS, Windows, and Linux. The dropper contacts a live command-and-control server and delivers platform-specific second-stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean version to evade forensic detection.
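Because the payload described here runs through an npm postinstall hook, one generic hardening step (a sketch of common practice, not guidance specific to this incident) is to disable lifecycle scripts by default in a project's `.npmrc`:

```ini
ignore-scripts=true
```

With this set, `npm install` still resolves and unpacks packages but refuses to run their install/postinstall scripts, closing the exact execution path this dropper relies on. The trade-off is that dependencies with legitimate native build steps will need those steps run explicitly; combined with a committed lockfile and exact version pins, though, it narrows the window in which a freshly published malicious release can run code on developer machines.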

🔖 webweigh

A Rust CLI that calculates the file size of a web page when loaded with all external resources.

🔖 Phosphor Icons

Phosphor is a flexible icon family for interfaces, diagrams, presentations — whatever, really.

🔖 Departures (2008 film)

Departures (Japanese: おくりびと, Hepburn: Okuribito; “one who sends off”) is a 2008 Japanese black comedy drama film directed by Yōjirō Takita and starring Masahiro Motoki, Ryōko Hirosue, and Tsutomu Yamazaki. The film follows a young man who returns to his hometown after a failed career as a cellist and stumbles across work as a nōkanshi—a traditional Japanese ritual mortician. He is subjected to prejudice from those around him, including from his wife, because of strong social taboos against people who deal with death. Eventually he repairs these interpersonal connections through the beauty and dignity of his work.

🔖 Infrastructure Landlords: The Rentier Capitalism of Commercial Academic Publishers

If you want to understand where the commercial parts of scholarly communications may be heading, you need to look beyond policy documents, conference panels, or public-facing strategy statements. You should look at what large commercial actors say when speaking to investors. Earnings calls are one of the places where that language becomes especially revealing: less concerned with sector ideals than with growth, market opportunity, competitive position, and what will ultimately generate value for shareholders. For this reason, it can be worthwhile to review earnings calls and investor presentations, as these are often overlooked when discussing OA policy and sectoral movements.

🔖 AI got the blame for the Iran school bombing. The truth is far more worrying

Someone decided to compress the kill chain. Someone decided that deliberation was latency. Someone decided to build a system that produces 1,000 targeting decisions an hour and call them high-quality. Someone decided to start this war. Several hundred people are sitting on Capitol Hill, refusing to stop it. Calling it an “AI problem” gives those decisions, and those people, a place to hide.

🔖 Guibo

GUIBo is a desktop GUI for operators and developers who run Kubo (the IPFS daemon in Go). It drives your node through Kubo’s HTTP RPC API so you can work with pins, UnixFS content, IPNS, remote pinning, gateways, and network or repo diagnostics without living in the terminal.

🔖 The Human Line Project

At The Human Line, we are committed to ensuring that AI technologies, like chatbots, are developed and deployed with the human element at their core. LLMs are powerful tools, and with ethical design, users can gain new skills and knowledge while remaining emotionally intact.

🔖 Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion

Tech-related delusions, whether they involve train travel, radio transmitters or 5G masts, have been around for centuries, Morrin says. “What’s different is that we’re now arguably entering an age in which people aren’t having delusions about technology, but having delusions with technology. What’s new is this co-construction, where technology is an active participant. AI chatbots can co-create these delusional beliefs.”

April 2026 Early Reviewers Batch Is Live! / LibraryThing (Thingology)

Win free books from the April 2026 batch of Early Reviewer titles! We’ve got 237 books this month, and a grand total of 2,796 copies to give out. Which books are you hoping to snag this month? Come tell us on Talk.

If you haven’t already, sign up for Early Reviewers. If you’ve already signed up, please check your mailing/email address and make sure they’re correct.

» Request books here!

The deadline to request a copy is Sunday, April 26th at 6PM EDT.

Eligibility: Publishers do things country-by-country. This month we have publishers who can send books to Canada, the US, the UK, Ireland, Australia, Luxembourg, Belgium, Sweden, Spain, Slovenia and more. Make sure to check the message on each book to see if it can be sent to your country.

The Responsible PartyVenus, VanishingThe BrunswickDiscovery By DesignThe Anti-Marriage PactA Short History of San FranciscoLike Friends, Like Foes: Japanese Americans and Nevada Through World War IIFantastic Tales of SteampunkOnce upon a Wintry Krampusnacht EveNature's Echo: Harnessing Ancient Feedback Loops to Heal a Changing PlanetBillie Builds a RoboCornNow I See SpringThe Summer I Found YouThe Patriot's DaughterIt Came from NeverlandWould I Lie to You?Sightings: PoemsThe Sacred Path of SimplicityThe Alchemy of Motherhood: Unspoken Truths of Birth Trauma and the Postpartum JourneyThe Alchemy of Motherhood: Unspoken Truths of Birth Trauma and the Postpartum JourneyA Bad Deal in Mormon LandPraise God for PastiesSib Squad: Hole Lotta Trouble!Who Is Jesus?: Easter DevotionalUnsung Canaan Ballads: A Collection of PoemsMan AfieldReflections of a Woman's Life: A ChapbookBeating Heart of the World: The Taos Art Colony, the Pueblo Resistance, and the Battle for Indigenous AmericaWe've Been Here Before: How Rebellion and Activism Have Always Sustained AmericaRemembering Roots: How an American Classic Transformed the WorldAnd Then We Saw the Bag . . 
.: Trash to Them, Treasure to UsAll the Colors of Life Deluxe Gift Edition: An Illustrated Coffee Table Book for Occasions and CelebrationsBeirut ExtractionFat Bitch: Killing the Willpower Myth: An Empowering Guide to GLP-1 Weight Loss Medicine, Healing from Trauma, and Building Lasting HappinessDiodeThe Calamity ClubThe Fire AgentBubbles, Roses, and RumpThe Sea CureRunning Wild Novella Anthology Volume 9 Book 2Sounds Like Trouble to MeJen & Gary's Infinite (Quantum) EntanglementsRun, Rabbits, Run!NecromaniaDifferent RoadsThe Role of Dental Nurses in Oral Health Promotion to Prevent Dental Caries in Children within General Dental PracticesDispatches from Grief: A Mother's Journey Through the UnthinkableJungle of AshesTent CitySkies of Fire and SmokeA Love Once LostThumbin' The Rock: A Newfoundland Hitchhiking OdysseyMarigold GreyCalifornia Fever Dream: A MemoirEscapePeasUndesirable: The Vietnam War and A Father's Battle for JusticeOn the HookFind Me in the StoryApolloRenegadeFlightlessEternal EnchantmentThe Dead of DayBusiness Sustainability Essentials You Always Wanted to KnowLearning and Development Essentials: A Practical Guide to Designing Learning Programs, Driving Business Impact, and Achieving Organizational ExcellenceCorporate Finance Essentials You Always Wanted to KnowWhen We Forgive: Stories of Hurt, Healing & Everything in BetweenStakeholder Management for Project Managers: A Practical Guide for Managing Projects and Engaging PeopleOrganizational Development Essentials You Always Wanted to KnowWriting Memoir in Flashes: Creative Ways to Tell Your True Stories, One Memory at a TimeSeed Starting Simplified for Beginners: A Complete, Step-By-Step Guide to Grow Healthy, Strong Seedlings Indoors, Avoid Common Mistakes and Transplant with ConfidenceLLC para Principantes : Manual Completo con Estrategias Paso a Paso para Crear, Estructurar y Hacer Crecer Tu Sociedad de Responsabilidad Limitada con Confianza y Visión a Largo PlazoSpindleheart: Wrath of the 
Ravelwind KnightWhat Does Your Face Mean?: An Informational Memoir on Late-Diagnosed AutismThe Chronicles of City NThe Statistically Unlikely ReboundPanthera's HavenBenightedThe Cardboard KingBodega Botanica Tales: CarmenSpeak of the DevilMurder in the GyreDead AccountDon't Blame Sam!When the Sun DiedThe Bane of DragonsCanopy: A Collection of Stories.CommodoreThe Woman from WarsawThe Loss of What Is Past4 Weeks to Total Sleep Mastery: A Proven System to Maximise Your Recovery and Energy in Just 30 DaysBlind ItemBraving the Dawn: A Novel of New FranceShepherds of the Lost: Family SecretsThe Demon King Is a Merchant: The First StepsKeep Them CloseC is for Childhood Cancer: And Other Lessons Cancer Taught MeGaits of MagicCover to Cover: What First-Time Authors Need to Know About Editing (Revised Edition)My Twin the MurdererZicky: Wrath of the Rat KingWoodstake: Three Days of Peace, Music and BloodBaptismThe Echoes of the WatchtowerDark ShadowsThe Last PillThis Too Shall Pass?: Honest Words for Moral InjuryNocturneWhat the Island AsksGo Help YourselfQuinto's ChallengeThe Shapeshifter's GambitDrummer Girl: How I Became MetalBreaking the Simulation: An Ancient Path Back to RealityDon't TellTen Stories from Arab History: From Ancient Yemen, Through the First Islamic Civil War, to the Fall of al-AndalusBroken Mirrors, Steady GroundA Christmas at Ballymore CaféNever ForgiveBent Cop: Johnny Takes Out The TrashMassawa: A Tale of Espionage, Love, and IllusionThe Emotional Side of Money: A Roadmap to Financial WellnessCarrying the UnseenSIGNPOST!: A Map for Resilience CultureHold Without Panic: How to Swing Trade 2-3 Hours Per Week While Working Full-TimeSolarflameThe Wizard, The Pirate, and The Steampunk Librarian30 Days of Transformative Holistic Healing: A Guided Self-Healing Journey to Rebalance Your Mind, Body, Energy, and Nervous SystemChanneling MarilynThe Taste of Glass in a Pillar of SaltFrom Sea to Shining Sea: 50 Daily Devotions from Traveling to Every State in 
AmericaOnly Breath & ShadowLove In The Time Of AmericaThe Broken HeirHunting in Africa: An African Safari ThrillerThe White Highlands and the Mau Mau: With the Rucks, Leakeys, and Kikuyu Freedom Fighters, 1952-1961TheaThe Double-Headed EagleThe Paine SocietyNorman & The Stinking Space GooMoonflowerScarlett UndoneThe Dancer's Shadow: An Isekai RomantasyA Khmer Legend of Love and Destiny: An Isekai RomantasyIf Love Doesn't Make a Family...: Nothing Is What It Appears to BeIris Blackwood and the Curse of Hemlock IslandFunny Things HappenDamaged: Life. Death. Memory. UncertaintyGood Grooming and a Healthy Respect for AuthorityAI Slayer/AI LiberatorDragon's BetrayalIntroduction to the Attribution of Literature: The Re-Attribution of the British 18th and 19th Century CorpusesThe Eight Keys: Opening to the Mysteries of Cosmic HarmonyThat Murder FeelingNot a Fairytale Ending: The Rewriting of My StoryRedemption RowThe Rescue Fantasy: Why Capable Women Stay Stuck and How to Reclaim the Power to Lead Your LifeThe Manual for the Ambitious Man: The Systems No One Taught You About Success, Emotions, and Becoming a ManThe Inner Workings of the Outer Layer: A History of Bicycle Tire Sizes and StandardsWords Were The Enemy: A Novel in VerseThe Second WorldMurder Most SaurianYour Verdict: A Judge's Reckoning with Law and LossMysteries Beyond KnowledgeDefenders: Reign of the BugsFractureTo the Moon and BackThe Track of the EyelidsShakespeare's Vengeance - Every Role Comes with a PriceThe Cost of KnowingPaul Bunyan: An American Folk LegendMy First Colonoscopy: A Comical Look at the Prep, the Procedure, and the Relief AfterwardThe Last Human Advantage: Why Thinking Clearly Matters More Than Ever in the AI EraThe Four WindsAfter The LakeBarking Orders: A Dog's Diary of Chaos, Loyalty, and Squirrel SurveillanceTrue and Absurd Lawsuits That Really Happened: The Curious Case Files of Sherlock GrantAncilla: Master, Teach MeWhen the Word Became FleshAncilla: Master, Teach MeAmish Remedies: 
400+ Amish Herbal Remedies & Kitchen Traditions: Natural Healing, Holistic Wisdom, and No-Fluff Wellness for Everyday LifeHeritage In Motion: Champion SwimmerThe Octopus Myth: What We Really Know about Octopus IntelligenceThe Indie Author's Tax Survival Guide: A Practical Guide to U. S. Taxes for Self-Publishing AuthorsOrton-Gillingham Decodable Stories: Level 7 - A Day at the Beach: Structured Literacy Decodable Reader for Developing ReadersThree and Thirty Pieces of InsanityBarking Orders: More Funny Adventures of a Very Opinionated Cattle DogPoems of The New EvangelionQueenslanderLebanon: A Country for No One & EveryoneTerr-or-Treats: Spooky Ghost Stories and Deliciously Haunted AdventuresDarleneWonderful HalfAre You Speedy?Thursday Night Tiki Lounge: 52 Drinks That Bring the Tropics HomeThe Sages of the Hidden Road: A Parable for the Weary SoulBe a Bookworm, Not a BullyThe Grasshopper Lost Its WingsThe Reel Life of Zara KeggWest ShoreMath Heals: On the Gift and Weight of Being HumanRetirement Planning Simplified: The Complete Step-by-Step Guide to Building Lasting Income, Cutting Taxes, and Retiring with Confidence (Updated for 2026)Aunt Rosie's FarmThe Captive CommanderAgainst All OddsVault of Secrets: Shelter for Your Cloak-and-Dagger TruthsSore Like an EagleA Ravishing AbominationThe Story Eaters of YammirlThe Hound of Troy: The Vengeance of HecubaNew Life for a Dead ManThe Glass FieldMan of a Thousand Fails: Film Noir of Elisha Cook Jr.The Million-Dollar Sentence: The Secret of the Valley of PeaceDear Missing FriendDear Missing FriendA Penance for CrowsThe Focus Equation: 21 Secrets to Boost Your Focus in a Distracted WorldJonah and Mira: The Map Beneath the OakA Curse of Wings & GemsWildfire & The Sun PrinceThe Land of Milk and Honey: An Italian Immigrant's Journey from Rags to Riches in AmericaThe Land of Milk and Honey: An Italian Immigrant's Journey from Rags to Riches in AmericaHow to Master the Power of Silence for Emotional Control: Step-By-Step 
Methods to Stop Overreacting and Stay in ControlClass Is in Session: Teaching Through the ChaosBefore the Pharaohs: The Lost Mega-Cities of Old Europe and the Mystery of the Ritual FireAn Enduring SparkEveryone Is Perfect HereEleven Pillars: A Framework for Self-Mastery and the Long GameConnecting Goals to Impacts and Outcomes: Harnessing Structured Conversations for Customer-Driven Value DeliveryWelcome to Weirdsville: The Incredible True Story of Weirdsville and All the Weirdos Who Live ThereWelcome to Weirdsville: The Incredible True Story of Weirdsville and All the Weirdos Who Live ThereSwords Over the StarsCaenogenesisOur Better NatureThe Blood of Birds: A King David-Era Thriller

Thanks to all the publishers participating this month!

Alcove Press Aquarius Press Arctis Books USA
Broadleaf Books CMU Press Cozy Cozies
Crooked Lane Books Cynren Press Flat Sole Studio
Harper Horizon Harper Muse Haven
Henry Holt and Company Highlander Press History Through Fiction
Infinite Books Inkd Publishing LLC Life to Paper Publishing
NeoParadoxa Noble Legacy Publishing Open Books
Paper Phoenix Press Penelope Pipp Publishing Pink Crow Press LLC
Prolific Pulse Press LLC PublishNation Real Nice Books
Revell Running Wild Press, LLC Shadow Dragon Press
Somewhat Grumpy Press Spiegel & Grau Sunrise Publishing
Thinking Ink Press Tundra Books Type Eighteen Books
University of Nevada Press University of New Mexico Press unLit Publishing
Unsolicited Press Vibrant Publishers W4 Publishing, LLC
WorthyKids

March 2026 Early Reviewers Batch Is Live! / LibraryThing (Thingology)

Win free books from the March 2026 batch of Early Reviewer titles! We’ve got 226 books this month, and a grand total of 3,026 copies to give out. Which books are you hoping to snag this month? Come tell us on Talk.

If you haven’t already, sign up for Early Reviewers. If you’ve already signed up, please check your mailing/email address and make sure they’re correct.

» Request books here!

The deadline to request a copy is Wednesday, March 25th at 6PM EDT.

Eligibility: Publishers do things country-by-country. This month we have publishers who can send books to the US, the UK, Israel, Australia, Canada, Ireland, Germany, Malta, Italy, Latvia and more. Make sure to check the message on each book to see if it can be sent to your country.

The Great WhereverExceptional Hatred: Antisemitism and the Fight for Free Speech in Modern AmericaProcrastination Proof: Never Get Stuck AgainRules to Live By: Maimonides' Guide to a Wonderful Life (HEBREW EDITION)Endless Exodus: The Jewish Experience in EthiopiaBlue Team Dynamics: Three Proven Leadership Principles Inspired by IDF Sources for Business and LifeSons of Abraham: A Candid Conversation about the Issues that Divide and Unite Jews and Muslims (HEBREW EDITION)Sons of Abraham: A Candid Conversation about the Issues that Divide and Unite Jews and Muslims (ARABIC EDITION)Puzzles She PackedBloom Of BetrayalNever Hide from the DevilBowers Mansion: The Legacy of a Comstock FamilyTangential Terrains: Cormac McCarthy's GeoaestheticsA Future For Ferals: A Charity AnthologyMore Futures for Ferals: A Charity AnthologyHow to Create an Organic Aquarium: The Beginner's Guide to Soil-Based Freshwater AquariumsRonald, the RoninDying to Live HereThe Unfavored Children's ClubSea SudsFaking to FallingBunnies in the Berry RowThe CorryJack Rittenhouse: A Western Literary LifeArthur and the Kingswell TrioMantleSome Stupid Glow: StoriesDollartoriumWhen Paris WhispersThe Night Nurse and the Jewel ThiefHeroes of PALMAR: How One IDF Unit Revolutionized Combat Medicine in GazaWhen Eichmann Knocked on Our Doorאיש כפי נחלתו: שנים-עשר שבטי ישראל בנחלות אבותיהםFamily DramaThe Son Of A Belfast Man: From the Early Years Up to Nineteen Years OldClaimed by DarknessThe Alfriston QuartetJaguars and Other GameJungle of AshesShooting Up: A Memoir of Love, Loss, and AddictionWarp & WeftHere for a Good TimeCanada: We Are the StoryRuthieA Deadly InheritanceFly in the ChaiMjede: The Three DaysSince You Weren't There and Other MemoriesQuestions for Werewolves: A Creative Nonfiction of Madness, Witch and DaimonEstuaryI'll Stop From MondayThe Marilyn DiariesNever Hide from the DevilThe Greatest New York Yankees by Uniform NumberThe Blue WaveCalisthenics: Core Crush: 38 Bodyweight Exercises for a 
Stronger CoreLightningShadows of the Republic: The Rebirth of Fascism in America and How to Defeat It for GoodDigital Coup: The Conspiracy to Thwart Global DemocracyWeathering the Storm: Navigating the Anti-Social Justice WaveConversion Therapy Dropout: A Queer Story of Faith and BelongingThe Christian Past That Wasn't: Debunking the Christian Nationalist Myths That Hijack HistoryPuppy Training: The Smart Way7 Spiritual Habits to Change Your LifeInvesting for BeginnersWitch of the Shadow WoodThe Last PageWe Become DarknessPondering: A Story in CinquainsBy the Bubbling BrookTaming the AlphaTo See BeyondThe Fallen: The Lost Girls of Ireland's Magdalene Laundries and a Legacy of SilenceSeed Starting Simplified for Beginners: A Complete, Step-by-Step Guide to Growing Healthy, Strong Seedlings Indoors, Avoiding Common Mistakes & Transplanting with ConfidenceContinuous Improvement Essentials You Always Wanted to KnowBetter: A Guidebook to a New and Improved YouDigital SAT Reading and Writing Practice QuestionsDigital SAT Math Practice QuestionsThe Theater: Courage and Survival in the Defining Atrocity of the Ukraine WarOur Minds Were Always Free: A History of How Black Brilliance Was Exploited--And the Fight to Retake ControlInheritance: Nick Chambers Slayer for HireSuperteams: The Science and Secrets of High-Performing TeamsPrickles and PridesNo Further Action: Ten Short StoriesPermit to StayLife Is Terminal: And So Is This Cold SoreThe Tarishe CurseIndian Warner: Son of Two WorldsSpindleheart: Wrath of the Ravelwind KnightThe Sure Thing: A Pleasure Practice to Revive the SparkEssence MergingQasida for When I Became a WomanNo Winning This WarMan of a Thousand Fails: Film Noir of Elisha Cook JrRed DemonSticks and Stones and Dancing Cranes: The End of the BeginningFool: A Tudor NovelWho in Astrology Are You?Stillness and Survival: A Life Between Trauma, Glitter, and the Echo of My Own VoiceThe Florist's Budding DesireFission: A Novel of Atomic HeartbreakEmberglow Falls 
Academy: The Legacy of MagicThe Jolt: A Time-Slip RomanceHaggadahpalooza: The Unofficial Weirdly Perfect Passover Pop Parody PanoplyTwo x ThreeMother of Assassins: A Memoir of the ImaginationInner, The Breath of God, Volume 1Play From Your HeartLegends of Mexico Coloring Book: Mythical Tales and Folklore to Color and EnjoyThe Golden Apple and the Nine Peahens: A Balkan Orchard TaleConnection:LostOne of a Kind CreaturesC is for Childhood Cancer: And Other Lessons Cancer Taught MeThere's a Young Man Dressed in BlueChivalry & ChocolateCaput Mundi: The Head of the WorldCain's ChameleonThe Lion's DenCain's ChameleonOn Moreton WatersThe Million-Dollar Sentence: The Secret of the Valley of PeaceA Moment's SurrenderLogos Palimpsest: Layered Verses of My Myths and MemoriesFelicity Fire and the Forever KeyMinds & Moods: Power & Deception Crossword PuzzlesTrue & Absurd Lawsuits: The Cases Kept ComingDear Missing FriendIn His Absence: A Brother, A Life, and What EnduresWill's WakeDesert Superstars: A Patience & Perseverance Coloring Adventure: A Mindfulness Coloring Book with Desert Animals, Patience-Building Prompts, and Mindful SEL Adventures for Growing HeartsOur Better NatureThe Pioneer Converts: The Message of HopeThe Black Knight: Miqdad Historical NovelThe Gardener Parent: Stop Yelling and Start Guiding Using Ericksonian MethodsBlütenschwere : Roman über Die Gewalt der AuslöschungThe Weight of Petals: A Story of Memory and ResistanceThe Problem with Conspiracy Theories: Real Scandals, Fake Mysteries, and How Distrust Took OverCity of the Gods: The Return of Quetzalcoatl (15th Anniversary Edition)The Three-Bullet Act: Journal of an HR DirectorThe Shapeshifter's GambitThe Vampyre ClientJeannie's Bottle: IncantationsFated RebirthLove and Ghosts at Hideaway LakeJonah and Mira: The Map Beneath the OakChangeupA Gift of RevelationsBachelorx: A Nonbinary MemoirA Strange SoundThe Rising of the WolvesThe Rising of the WolvesThe Missing FrameCaenogenesisThe Standard: 38 Standards 
of LifeThe Caregiver's Game: Unraveling Financial Deceit in the Shadows of DementiaClass Is in Session: Teaching Through the ChaosPolitics and Morality: The Problems of Ethical Debate for an Evolved Social SpeciesThe Book of Peace AphorismsTerrestrialQueenslanderThe Blood of Birds: A King David-Era ThrillerA Look into Mirrors: Their Making and Use Throughout HistoryThe Coherent Website: Designing for Trust in the Age of SearchHuman Again: In the AI AgeCut to the QuickThe Clockwork SpyYou CancerViveActs Of FaithThe HuntedAbba, Father!: A Journey to Knowing God in His Greatest Role of AllMidnight MeowsA Night of Strange DreamsAunt Rosie's FarmClose Encounters with Tort$Rewriting Your Life: A Workbook On Self-DiscoveryEpic Health & Ultimate Training: A Self-Help Workbook For Becoming StrongConnecting Goals to Impacts and Outcomes: Harnessing Structured Conversations for Customer-Driven Value DeliveryTrust and Treason: The RiseThe Last Phone CallWhen We Came Full CircleWhen Bonds Were ForgedThe Waterfall of VengeanceRain and Sun: Confessions of Love, Silence, and an Irrevocable PastAn Unsuitable Knight: A Novel of Norman ItalyBound by the ElementsMarriage Supper, Clearing GoatWord Fill in Puzzles: Large Print Puzzles for Seniors with over 70 Nostalgic Brain Games to Keep Your Mind Sharp and Active (Solutions Included)Yours Rhetorically, Cold Blue Monster: A Criminal Counseling Text-MoirMidnight BallerinaThe Agentic Loop: How Humans + AI Build Experiences That LearnThat Which Does Not Kill Us: An Intergenerational Memoir of Legacy TraumaIn the Belly of the AnacondaFree Will: Resolving the MysteryFree Will: Resolving the MysteryTattle Royale: Burn BookRupture Threshold1,2&3 John Bible Study: Dwell in LightThe Nutcracker - Gird Thy LoinsThe Magic SeekerNyxalath Heirophant of VeilsReed CityTerr-or-Treats: Spooky Ghost Stories and Deliciously Haunted AdventuresIncunabulaI Don’t Hum Anymore: A Confession of Silence, Survival, and City MadnessGolden LightI Raised Monsters: A 
Failed Teacher's Confession — Prisoner 4782A Florida Dance: Life Stories from the Sunshine StateCavern Sanctuary: After the FalloutDeep Work for Distracted People: Simple Methods to Stay Focused, Think Clearly, and Finish What MattersThe Law of the Spirit of Life: God's Design for a Life of Effortless TransformationOne-Page Wealth Compass: Fired at 63 Nearly Broke - Safely a Millionaire by 69The Dog BookThis Fell SergeantThe Secret Winners ClubDear Missing FriendThe FallYour Business Growth Playbook: Breakthrough Strategies to Scale Your Business for Business Owners Who've Outgrown HustleBeyond the Crystal SkyYpresMore Than ChemicalOld EarthHealthy Minds, Healthy Nation: How Meditation, Shamanism, and Indigenous Healing Can Tap into Your Light Within and Change the WorldAfter We BreakData Science in 7 Days: Python Fast-Track with Hands-on ProjectsBash and Lucy Say, Love, Love, Bark!Thinker Reads Start With Why: How to Find Your Why and Dare to Lead a Purpose Driven Life in 3 Steps Even If You’re Starting From Zero

Thanks to all the publishers participating this month!

Alcove Press Artemesia Publishing Baker Books
Bellevue Literary Press Broadleaf Books Brother Mockingbird
Cennan Books of Cynren Press City Owl Press Cozy Cozies
Egg Publishing Entrada Publishing eSpec Books
Fawkes Press Featherproof Books Gefen Publishing House
Gnome Road Publishing Grand Canyon Press Greenleaf Book Group
Hawthorn Quill Publishing Henry Holt and Company History Through Fiction
Infinite Books Inkd Publishing LLC Lito Media
PublishNation Pure Calisthenics Riverfolk Books
Running Wild Press, LLC Simon & Schuster Tundra Books
University of Nevada Press University of New Mexico Press Unsolicited Press
Vibrant Publishers W4 Publishing, LLC WorthyKids

ADA Title II Urban Legends: Sorting Fact from Fiction About the 2024 Updates / Digital Library Federation

This post has been authored by members of DLF’s Digital Accessibility Working Group.

As the April 2026 Title II compliance deadline approaches, members of the DLF Digital Accessibility Working Group began comparing notes on the misconceptions, half-truths, and “urban legends” circulating about ADA enforcement and digital accessibility requirements. What started as a lively discussion evolved into a collaborative blog post aimed at separating fact from fiction. Drawing on shared institutional experiences and community expertise, this post addresses common myths about Title II and provides clear, accurate guidance along with helpful resources.

ADA Title II Urban Legends

In 2024, the Department of Justice released updates to Title II of the Americans with Disabilities Act (ADA). These updates provided specific requirements about how to ensure that web content and mobile applications (apps) are accessible to people with disabilities, stating that Web Content Accessibility Guidelines (WCAG) Version 2.1, Level AA is now the technical standard for agencies subject to Title II. Agencies have until April of 2026 or 2027, depending on the size of the state or local government, to meet these requirements. As many institutions rushed to ensure compliance, misinformation and common urban legends about accessibility came to the surface. This post endeavors to answer some of the common questions and myths encountered by members of the Digital Accessibility Working Group (DAWG). Thanks to the members of DAWG who contributed.

1. Only people who are blind use screen readers! 

  • Contributor(s): Karen Grondin
    • While it is true that people who are blind or have low vision make up the majority of screen reader users, according to the WebAIM Screen Reader User Survey #10, just over 10% of screen reader users reported that their screen reader use was not due to a disability. Respondents also reported the following disability types: Cognitive or Learning (5.2%), Motor (2.2%), and Other (4.9%), and 5.3% reported being both deaf/hard of hearing and blind.

2. Everyone who is blind can read Braille.

  • Contributor(s): Jasmine Clark

3. Faculty are 100% personally responsible for remediating any content they teach with.

  • Contributor(s): Jasmine Clark
    • Agencies, universities, and other entities that fall under Title II are responsible for compliance. Employers are responsible for violations carried out by employees and contractors whose services they utilize (employer obligations are better outlined in “The ADA: Your Responsibilities as an Employer”). While a university may choose to mandate that faculty teach with accessible materials, the university is ultimately responsible for ensuring that happens. It is possible that a faculty member could be held liable if they refuse to comply with their university’s mandates, but that would most likely still be a case of joint responsibility. 

4. I will have to strip my course of all engaging content (rather than spend the time learning how to make it accessible).

  • Contributor(s): Jasmine Clark
    • Taking the time to think about ways to remediate content and incorporate universal design principles into your teaching will only benefit you. There are resources like Universal Design for Learning that are readily available to help you get started. In many ways, expanding the modalities available to your students will enhance the educational experience for all of them, not just disabled students.

5. Title II applies only to accessibility for the blind, and designing for screen reader accessibility will solve all accessibility needs.

  • Contributor(s): Jasmine Clark

6. Title II applies to all disabilities, and all content must be made universally accessible for all disabilities in advance, so that no one ever needs to disclose or ask.

  • Contributor(s): Jasmine Clark; Jon B; PF Anderson
      • As stated above, the main changes made to Title II revolve around making state and local governments’ web and mobile apps meet the Web Content Accessibility Guidelines 2.1 (WCAG 2.1), Level AA. WCAG focuses on modalities and functional needs instead of medical diagnoses.
        • “Digital technology designed for people with a broad range of abilities benefits everyone, including people without disabilities. It is, therefore, important to consider the broad diversity of functional needs rather than categorize people according to medical classifications.” – From Diverse Abilities and Barriers
      • However, while the guidelines try to be as universally applicable as possible, there will be those who need additional accommodation. WCAG 2.1, Level AA is meant to be the minimum. As it is the universal standard that applies automatically, users do not need to disclose or request this. But needs that go beyond that baseline would still require an individual to disclose their disability to formally request an accommodation.
        • As PF Anderson put it:
  • “… there is no one-size-fits-all accessibility. Making something accessible is dependent on context and need. What improves accessibility for one person often creates new problems for someone else. We can make changes that have a high bang-for-the-buck ratio, things that tend to help more folk, however there is always someone who needs something that is a bit off the beaten path.”
    • ADA adheres to the principle of Reasonable Accommodation (RA). An RA is defined as “any change in the work environment or the way things are customarily done that provides an individual with a disability equal access to employment opportunities, benefits, and privileges. An RA can cover most things that enable an individual with a disability to apply for a job, perform the essential functions of a job, and/or have equal access to workplace opportunities, benefits, and privileges. There are three categories of RAs: 
      • 1. Modifications or adjustments to the job application process to permit an individual with a disability to be considered for a job; 
      • 2. Modifications or adjustments necessary to enable a qualified individual with a disability to perform the essential functions of the job; 
      • 3. Modifications or adjustments that enable employees with disabilities to enjoy equal benefits and privileges of employment.” – From Administrative Communications System U.S. Department of Education Handbook for Reasonable Accommodations (PDF)
    • RAs have exceptions for undue hardship. “An undue hardship means that a specific accommodation would cause significant difficulty or expense to the employer. The determination of whether providing a RA would create an undue hardship for the Department is always made on a case-by-case basis. In making this determination, ED needs to consider factors such as: the nature and net cost of the accommodation, the Department’s overall size and financial resources, the type of operation, and the impact of the accommodation upon the operation, including the impact on other employees’ ability to perform their duties and the Department’s ability to conduct business. The Department bears the burden of proof to demonstrate that providing an accommodation would cause an undue hardship.” – From Administrative Communications System U.S. Department of Education Handbook for Reasonable Accommodations (PDF)
    • This process is inherently dependent upon disclosure and the request for accommodation. 

7. Title II applies to all our buildings, and the campus has to have 100% physical accessibility by April.

  • Contributor(s): Jasmine Clark; Wen Nie Ng
    • While Title II of the ADA broadly governs accessibility for state and local government services, programs, and activities in both physical and digital spaces, the 2024 updates to Title II apply specifically to state and local governments’ web content and mobile apps, not to physical spaces.

8. Title II applies to events, and ALL events must be hybrid in the future because in person events are not accessible.

  • Contributor(s): Jasmine Clark; Wen Nie Ng
    • Title II does not require all events to be hybrid; it requires that events be accessible. Making events hybrid is one of several approaches to making them more accessible. However, as stated above, the most recent updates to Title II apply to state and local governments’ web content and mobile apps, not to physical spaces. So, if an event is purely physical, it is not subject to the April deadline. If an event is virtual, it must meet WCAG 2.1, Level AA.

9. Title II applies to events and, because ALL virtual and hybrid events will require CART and ASL, no one will plan virtual or hybrid events anymore, because we can’t afford it.

  • Contributor(s): Jasmine Clark; Amy Drayer

10. Title II applies to H.R., and interviews held by phone or Zoom will require captions for all candidates, because you can’t ask them to disclose.

  • Contributor(s): Jasmine Clark
    • Employment and hiring fall under Title I, and Subpart C of Title II specifies that public entities are subject to the regulations within Title I. Potential employers are required to meet the standard for Reasonable Accommodation (see definition in question 6). To summarize, WCAG 2.1, Level AA is meant to be the minimum. As it is the universal standard that applies automatically, potential employees do not need to disclose or request this. But needs that go beyond that baseline would still require an individual to disclose their disability to formally request an accommodation.

11. Title II applies to internal staff meetings and clinic telemedicine visits, and captions must be enabled at the beginning of all meetings for all attendees, even if it is a confidential meeting.

  • Contributor(s): Jasmine Clark; Jon B
    • Organizations that provide healthcare are classified as public accommodations under Title III of the ADA. They were already required to make their services accessible regardless of these changes to Title II. The only change these entities will have to make is updating to meet WCAG 2.1, Level AA.
    • See question 6. If an employee requests captions, they must be provided. For confidential meetings, provide captions through a secure, encrypted service or offer real-time transcription that can be deleted after the meeting. If there are no employees who require live captions, and maintaining an active subscription to a service would constitute an undue hardship, it would be wise to have a pre-approved vendor selected to provide services as needed. This would avoid delays and complications when accommodations are requested.

12. The library will withdraw and discard historic content that cannot be remediated.

  • Contributor(s): Jasmine Clark
    • This would most likely be a library decision made after considering its weeding policies, not a Title II requirement. “Content” is a broad term and whether or not something can be remediated would have to be assessed on a case-by-case basis. There are exceptions depending on the format, purpose, and use of content.
      • Fact Sheet: New Rule on the Accessibility of Web Content and Mobile Apps Provided by State and Local Governments; see: Summary of the Exceptions

13. The library will withdraw and discard historic dissertations if the original author does not make them accessible (many of whom are dead).

  • Contributor(s): Jasmine Clark
    • Once again, this would most likely be a library decision, not a requirement under Title II. Dissertations can be remediated and, depending on their format and use, they may meet the requirements for an exemption.
      • Fact Sheet: New Rule on the Accessibility of Web Content and Mobile Apps Provided by State and Local Governments; see: Summary of the Exceptions

14. Everything we have in our digital collections and institutional repository is archival, so we get an exception.

  • Contributor(s): Jasmine Clark
    • This depends on how these materials are used, when they were created, what format they are, and whether or not they are required to be used. If a faculty member is requiring materials be used for a course, they will have to be remediated and made accessible. Once again, review the exception criteria before deciding whether or not your collections are indeed exempt.
      • Fact Sheet: New Rule on the Accessibility of Web Content and Mobile Apps Provided by State and Local Governments; see: Summary of the Exceptions

15. Full text in library databases already meets accessibility standards.

  • Contributor(s):  D Krahmer
    • Even when a publisher provides an accessible file to a vendor, the vendor may reformat that information for their own databases in a way that actually destroys the accessibility. While more publishers AND vendors are shifting to epub and more accessible formats for digital content (such as EBSCO switching over to using LCP-DRM files), that doesn’t mean that material made public prior to the European Accessibility Act or the Title II updated regulations will be made accessible.

16. Everything has to be made accessible to receive a passing grade from screening tools, even if the changes make something less accessible.

  • Contributor(s): Jasmine Clark
    • Screening tools are just that: tools. Accessibility is meant to make content usable for people. While screening tools can help you identify potential barriers to access, actual people must verify whether an issue really exists and how to correct it. If you prioritize a passing mark from a screening tool and, in doing so, make something less accessible, you will not be in compliance with Title II.

17. AI will fix everything. 

  • Contributor(s): Karen Grondin; Jasmine Clark; Jon B
    • While AI can be helpful to people with disabilities, it cannot fix everything. Some people use tools like AI-assisted smart glasses to verify that the medications given to them by staff in assisted living facilities are indeed the correct medicines. However, this technology is out of reach for most people due to cost (a pair of AI smart glasses costs a few hundred dollars, not including a subscription to the AI service). Recent review studies that examine this urban legend conclude that AI does not solve all accessibility issues.
    • Some additional AI limitations: 
      • Auto-generated text can fabricate content wholly unrelated to anything being said or described.
      • The reading capability of AI is overstated, which can affect its ability to correct reading order and catch inconsistencies.
      • The costs of services can be high and are controlled by private companies that can cancel services or raise prices at will.
      • Privacy is a major risk, especially when dealing with health information.

18. ARIA roles are the solution to everything.

  • Contributor(s): D Krahmer; Wen Nie Ng
    • ARIA is a powerful tool, but it is not a universal solution and should be used with caution. It is a standard intended to work across assistive technologies. However, ARIA can behave differently across screen readers and may not always function as expected. When used incorrectly, it can create more accessibility issues than it solves. In general, native HTML semantics should be prioritized, and ARIA should be used only when necessary as a second option.
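
As an illustrative sketch (the “Save draft” control is hypothetical, not from the original post), the “prefer native HTML” guidance often comes down to choices like this:

```html
<!-- Preferred: a native <button> is focusable, keyboard-operable, and
     announced as a button by screen readers with no extra work. -->
<button type="button">Save draft</button>

<!-- ARIA fallback: role="button" only changes what is announced. The
     author must also supply tabindex and Enter/Space key handling, and
     omitting any of it creates a control some users cannot operate. -->
<div role="button" tabindex="0">Save draft</div>
```

This is the spirit of the W3C’s “first rule of ARIA”: don’t use ARIA when a native element already provides the semantics and behavior you need.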

19. Accessibility Overlays make websites accessible.

20. STEM – I will not be able to use tables for data anymore.

  • Contributor(s): Jasmine Clark
    • Tables aren’t inherently inaccessible. If you are using HTML and you’d like to learn how to create accessible tables, consider this tutorial from the Web Accessibility Initiative. If you aren’t working with HTML, consider looking up accessibility checkers and best practices in the applications you’re using. Research the software you’re using and the platform(s) where your data is published. 
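
As a minimal sketch of the kind of markup the WAI tutorial covers (the data here is hypothetical): a `<caption>` gives the table an accessible name, and `scope` attributes let screen readers announce the correct header alongside each data cell.

```html
<table>
  <caption>Checkouts by branch, Fall 2025</caption>
  <thead>
    <tr>
      <th scope="col">Branch</th>
      <th scope="col">Checkouts</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row">Main</th>
      <td>4,210</td>
    </tr>
    <tr>
      <th scope="row">Eastside</th>
      <td>1,875</td>
    </tr>
  </tbody>
</table>
```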

21. If I add alt text to my quiz images, it will give away the answers.

  • Contributor(s): Jasmine Clark
    • Alt text should convey the intent behind your image. For example, if you include a photo of the Mona Lisa for your students to identify, you would describe the painting (e.g., “a painting of a woman with dark hair”), not name it. This would be helpful to students who are visually impaired and need help with the details. In contrast, if you include the Mona Lisa as a hint to another question, with the understanding that students will know the painting, you would name the painting with appropriate context (e.g., “the Mona Lisa above a fireplace for scale”).
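
In HTML terms, the two Mona Lisa scenarios above might look like this (the file names are hypothetical):

```html
<!-- Identification question: describe the image, don't name it. -->
<img src="quiz-item-3.jpg"
     alt="A Renaissance portrait of a dark-haired woman with a faint smile">

<!-- Hint for another question: name the painting, with relevant context. -->
<img src="hint-item-7.jpg"
     alt="The Mona Lisa hanging above a fireplace for scale">
```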

22. If it’s publisher/ 3rd party materials, I am not responsible for making it accessible.

  • Contributor(s): Jasmine Clark
    • The precedent set by Payan v. Los Angeles Community College District (2021) would indicate otherwise. Two blind students sued the Los Angeles Community College District (LACCD), claiming:
      • “…Plaintiffs identified accessibility barriers in LACC’s library research databases, many of which were not compatible with screen reading software. Despite the AMPP and her individual accommodations, Mason was unable to complete a research paper for a psychology course because the professor required use of an inaccessible research database for the assignment. Although some of the library’s online databases were accessible to blind students, the library did not conduct regular accessibility checks and did not test programs for accessibility before the library acquired them, as the AMPP required. Instead, accessibility was only tested when a blind student reported an accessibility problem”
    • The result: 
      • “The district court also found that LACCD discriminated against blind students as a matter of law based on the accessibility barriers present in the LACC websites and library database, but it declined to impose liability at that time because Plaintiffs had not yet met their burden to show reasonable modifications existed to remedy this discrimination.”
      • “Following the bench and jury trials, the district court entered a permanent injunction and final judgment in favor of Plaintiffs. The permanent injunction requires LACCD to: (1) come into compliance with its AMPP; (2) evaluate its library databases for accessibility and establish means of alternate access to inaccessible databases for blind students; (3) designate a Dean of Educational Technology; (4) make the LACC website and embedded programs accessible to blind students; and (5) assess educational materials for accessibility before acquisition and to establish means of providing accessible alternative materials to blind students in a timely manner.”
    • Websites and content required for coursework must be accessible, even if they come from a third party. If a particular item is not accessible, the institution must provide accessible alternatives.

23. People with disabilities do not go into [insert field here] so we don’t have to make content accessible.

  • Contributor(s): Jasmine Clark
    • This is a cart-before-the-horse issue. If a field is inaccessible, disabled people won’t go into it. Yes, there are specific tasks that may be incompatible with certain disabilities. However, it is important that those specific examples are not used as an excuse to ignore much broader, fixable issues. For example, a retired surgeon whose eyesight has diminished with age may wish to look up materials they published years ago to share with a mentee or loved ones.

24. No disabled people use this library.

  • Contributor(s): D Krahmer
    • If your library website is inaccessible, then people who require accessibility features will not use it. By not building accessible websites, you risk losing roughly 25% of your potential patrons.

25. The date Title II goes into effect depends on the size of your institution, so as of 2026, many schools have another year to go.

  • Contributor(s):  Jasmine Clark; Jon B
    • The deadline for the 2026 changes depends on the size of your state or local government, NOT your institution. A state university will have to adhere to the deadline set for its state. A public library in a small town in that same state will have to adhere to the deadline set for its town. 
      • Populations over 50,000: April 24, 2026 (2 years from the rule’s publication, April 24, 2024)
      • Populations under 50,000: April 24, 2027 (3 years from publication)
      • Fact Sheet: New Rule on the Accessibility of Web Content and Mobile Apps Provided by State and Local Governments; see: “How do you know the compliance date for other parts of government, like your city, state, or town police department or library?” under  How Long State and Local Governments Have to Comply with the Rule

26. Students are responsible for going through the accommodations process before the school or professors are responsible for making course content accessible.

  • Contributor(s): Jasmine Clark

27. My content is accessible, so the website is accessible, or vice versa.

  • Contributor(s): Wen Nie Ng
    • You can have perfectly accessible content within a poorly structured website, meaning a screen reader user may not be able to navigate to that content at all. Likewise, a well-structured site does not ensure accessibility if the content itself lacks alt text, includes inaccessible PDFs, or has uncaptioned videos.
    • The bottom line: both structure and content must be accessible. One without the other creates real barriers for users with disabilities and may lead to compliance gaps under Title II and WCAG standards.
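
A minimal sketch of the two failure modes described above (the page content and file names are hypothetical):

```html
<!-- Accessible content, poor structure: the alt text is fine, but with no
     headings or landmarks a screen reader user has no way to navigate here. -->
<div>Course readings</div>
<img src="chart.png" alt="Line chart of enrollment, 2020 to 2025">

<!-- Good structure, inaccessible content: the landmark and heading are
     fine, but the image lacks alt text and the video lacks captions. -->
<main>
  <h1>Course readings</h1>
  <img src="chart.png">
  <video src="lecture1.mp4"></video>
</main>
```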

The post ADA Title II Urban Legends: Sorting Fact from Fiction About the 2024 Updates appeared first on DLF.

DLF Digest: April 2026 / Digital Library Federation

A monthly round-up of news, upcoming working group meetings and events, and CLIR program updates from the Digital Library Federation. See all past Digests here

Hello DLF Community!

As 2026 unfolds, conversations about AI, authenticity, and trust continue to shape our work. Join us on Wednesday, April 15, 2026 (1:00–2:00 PM ET) for Content Authenticity and Provenance in the Age of Artificial Intelligence: A Call-to-Action for the LAMs Community.

Joshua Sternfeld and Kate Murray will discuss their widely circulated report and how generative AI is reshaping questions of provenance across libraries, archives, and museums, highlighting practical examples and shared frameworks to guide responsible practice. The event is virtual, free, and open to all. Register here.

On the Forum front, planning for this fall’s virtual DLF Forum is well underway, and we’re energized by the ideas already taking shape. Thank you for your keynote recommendations! Also, be on the lookout for the Call for Proposals, which will open later this month.

As always, we’d love to hear what you’re working on, thinking about, or hoping to see from DLF in the months ahead. Email me at swillis@clir.org.

-Shaneé

This month’s news:

  • DLF Webinar: Content Authenticity and Provenance in the Age of Artificial Intelligence will be hosted on April 15, 2026, at 1:00 pm ET, featuring Joshua Sternfeld and Kate Murray in a timely discussion on how generative AI is reshaping trust, provenance, and practice across libraries, archives, and museums. Register here.
  • Registration Open: The 2026 Library Publishing Forum will take place June 17-18 at the University of Washington in Seattle, co-located with the Association of University Presses Annual Meeting and featuring a new Forum Friends option for those unable to attend in person. Visit the registration page for more information.
  • Conference: The International Image Interoperability Framework (IIIF) Annual Conference and Showcase will take place June 1–4, 2026, in the Netherlands, featuring a free introductory showcase in Amsterdam and a multi-day conference across Leiden. Learn more here.

This month’s open DLF group meetings:

For the most up-to-date schedule of DLF group meetings and events (plus conferences and more), bookmark the DLF Community Calendar. Meeting dates are subject to change. Can’t find the meeting call-in information? Email us at info@diglib.org. Reminder: Team DLF working days are Monday through Thursday.

  • DLF Born-Digital Access Working Group (BDAWG): Tuesday, 4/7, 2pm ET / 11am PT.
  • DLF Digital Accessibility Working Group (DAWG): Tuesday, 4/7, 2pm ET / 11am PT.
  • AIG Metadata Assessment Group: Friday, 4/10, 2pm ET/ 11am PT.
  • DLF AIG Cultural Assessment Working Group: Monday, 4/13, 1pm ET / 10am PT.
  • AIG User Experience Working Group: Friday, 4/17, 11am ET / 8am PT
  • DLF Open Source Capacity Resources Group: Wednesday, 4/22, 1pm ET / 10am PT.
  • DAWG Policy & Workflows: Friday, 4/24, 1pm ET / 10am PT.
  • DAWG IT & Development: Monday, 4/27, 1pm ET / 10am PT.
  • DLF Digitization Interest Group: Monday, 4/27, 2pm ET / 11am PT.
  • Committee for Equity & Inclusion: Monday, 4/27, 3pm ET / 12pm PT.
  • DLF Climate Justice Working Group: Tuesday, 4/28, 3pm ET / 12pm PT.

DLF groups are open to ALL, regardless of whether or not you’re affiliated with a DLF member organization. Learn more about our working groups on our website. Interested in scheduling an upcoming working group call or reviving a past group? Check out the DLF Organizer’s Toolkit. As always, feel free to get in touch at info@diglib.org

Get Involved / Connect with Us

Below are some ways to stay connected with us and the digital library community: 

The post DLF Digest: April 2026 appeared first on DLF.

Making Sense of GenAI Amidst AI Hype and AI Personalization / In the Library, With the Lead Pipe

By Sarah Morris

In Brief: Artificial intelligence (AI) literacy frameworks emphasize the importance of understanding Generative AI (GenAI) technologies. But our collective and individual understanding of GenAI is heavily shaped and mediated by the hype narratives that surround it, where GenAI is depicted as powerful, magical, and inevitable. Amidst such compelling narratives, we can face challenges in navigating narrative extremes and exaggerations and in making informed decisions about using GenAI tools. Alongside AI hype, we are also experiencing AI personalization features which encourage trust and positive feelings towards GenAI tools. Taken together, AI hype and AI personalization can challenge and even hinder our ability to engage critically and thoughtfully with GenAI. In this article, I will explore how our understanding of GenAI is influenced by AI hype and AI personalization and consider how hype narratives and personalization features fuel one another and encourage trust in and awe towards GenAI. By centering AI hype and AI personalization as key components to understanding and exploring GenAI, and by incorporating critical media and information literacy skills into AI literacy, I feel that we can develop an AI literacy that better contextualizes GenAI and encourages reflective and critical approaches that can help learners make sense of their emotionally complex experiences with and reactions to GenAI.

Introduction 

Generative Artificial Intelligence (or GenAI) is often depicted in terms of superlatives. Compared to humans, it is described as smarter, faster, more efficient, more accurate, more personable, and even more dangerous. GenAI is a specific form of artificial intelligence, but many of the conversations and commentary surrounding AI in general are focused on or referring to GenAI. Essentially, GenAI can produce text, images, video, audio, or code in response to a user prompt (Striker & Scapicchio, 2026). GenAI tools include chatbots like ChatGPT or Gemini, or image or video generation tools like Sora. Much of what we experience as AI in our daily lives is some form of GenAI. Our collective understanding of GenAI, whether as a force for good or as a force for some apocalyptic-level disaster, is heavily shaped and mediated through narrative. In particular, the narratives that hype artificial intelligence, often extolling and anthropomorphizing its various virtues, greatly influence not only our understanding of these tools but also our ability to critically investigate, discuss, and respond to the entire artificial intelligence landscape. The hype narratives surrounding GenAI frequently minimize its harms, mischaracterize its capabilities, and distract from its failures (Baer, 2025b; Bender & Hanna, 2025). AI hype fundamentally shapes how we conceptualize and discuss GenAI via narratives that can be misleading or manipulative. But while we are experiencing and navigating GenAI through the lens of AI hype, we are also experiencing GenAI through the accompanying lens of AI personalization.

The personalized nature of various GenAI tools closely mirrors the tendencies we see with things like social media algorithms, where people are shown content that reinforces their views, all in an effort to keep people glued to a given platform (Bourne, 2024). With GenAI this personalization is even more insidious, increasingly leading users to rely on chatbots as confidants, friends, therapists, and even romantic partners (Garofalo & Vecchione, 2025). This appears to be by design, to an extent, as evidenced by a batch of recent commercials and offline advertising efforts like billboards depicting AI chatbots as friendly companions that can help you with mundane activities like preparing a meal, exercising, or deciding on home décor (Swant, 2025).  Whether these narratives are an effort to assuage fears about the dangerous capabilities of AI tools, highlight AI tools as powerful, albeit in a nonthreatening way, or attract new users with promises of both usefulness and fun, AI hype narratives seem increasingly intertwined with AI personalization features. I believe that the hype narratives surrounding artificial intelligence and the personalized nature of GenAI actually fuel one another, where hype narratives lead people to see these tools as powerful and magical and personalization features lead people to place their trust in these tools, and thus become more susceptible to believing hype narratives about artificial intelligence.

As librarians and educators seek to develop AI literacy frameworks to make sense of this emerging and evolving technological landscape, I argue that we need to give further attention to the ways in which we talk about, experience, emotionally respond to, and engage with GenAI tools. What effects do AI hype and AI personalization have on our ability to think critically and even clearly about these tools when we are being besieged by everything from relentless positivity, proclamations of inevitability, or visions of doom, all while being swayed by sycophantic chatbots that reaffirm everything we type? How can we make informed decisions about using GenAI technologies in media environments where critical and even accurate information about AI can be hard to come by? We are in a situation where narratives about AI technologies and our varied experiences using these technologies both potentially hinder our ability to make informed decisions and to think critically about generative AI and AI technologies (Nguyen & Mateescu, 2024; Bender & Hanna, 2025). If we as librarians and educators strive to develop an AI literacy that is rooted in critical thinking, nuance, and ethics, as outlined in places like the AI Competencies for Academic Library Workers (ACRL, 2025), then we need to contend with AI hype and AI personalization and equip learners to approach GenAI with both a critical and a reflective lens.

In this article, I hope to examine the interconnected trends of AI hype and AI personalization. I would like to consider how we can utilize the insights that arise from exploring AI hype and AI personalization as key aspects of our understanding of GenAI to develop a more critical and human-centered approach to AI literacy. And I am eager to consider how this framing can equip us to further develop a more critical AI literacy that better contextually situates GenAI, in line with approaches from critical information and media literacy, which examine the social and political construction and dimensions of information (Tewell, 2015; Kellner & Share, 2005). First, I will explore the dynamics between emerging narratives about GenAI and emerging frameworks for conceptualizing AI literacy. While I believe that the narratives surrounding GenAI influence AI literacy, I also argue that AI literacy frameworks form their own sorts of narratives about AI tools and technologies and influence how many, particularly educators and librarians, understand and respond to GenAI. Second, I will look at trends within AI hype, including themes of power, magic, and inevitability, that shape these narratives and our ensuing understanding of and reaction to AI technologies. I will then turn to examining trends of AI personalization within the framework of AI hype narratives and consider how the trust that can be inspired by AI personalization can reinforce AI hype. To close, I will look at ways that we can equip learners to better unpack, interrogate, and understand GenAI through the lenses of AI hype and AI personalization.

A focus on AI hype and AI personalization can help us center the often complex, emotional, and confusing experiences people are having with GenAI and help us as librarians explore how we can best equip people to think critically amidst AI and information environments that often do not lend themselves to critical thought and reflective practices. Ultimately, I believe that librarians, educators, and learners can benefit from the introduction of two lenses into emerging AI literacy frameworks. First is a focus on contextual analysis, where we can take a critical approach to analyzing the narratives surrounding technologies like AI and examine how that mediates, shapes, and influences our experiences with said technologies. Second is a focus on reflective practice that empowers learners to better recognize and critically engage with narratives and technological tools that might be personalized to an alarming degree. By grounding AI literacy with this sort of critical analysis, contextualization, and reflective practice, I feel that we can strengthen AI literacy, situate AI literacy within broader trends around critical media and information literacy, and equip learners to better engage with AI technologies in our complex and rapidly changing information environment.

The Evolving State of AI Literacy and AI Narratives

The hype narratives surrounding GenAI tend to exaggerate the benefits, capabilities, power, and successes of these technologies while minimizing their issues and flaws. And we can see these narratives emerging everywhere from commercials to public comments from AI companies to news articles to chatter on social media. But while the hype narratives promoting GenAI are increasingly ubiquitous, there are alternative narratives emerging that question and criticize the relentless hype surrounding GenAI. To make sense of this cycle of hype and disillusionment, we can turn to a well-known graphical model of technological hype. The Gartner Hype Cycle, created in 1995 by Gartner analyst Jackie Fenn, provides a compelling framework for exploring AI hype narratives and places AI hype into context with previous technological hype cycles (Gartner). According to the Gartner Hype Cycle, new technologies tend to follow a certain track in terms of both narrative and public reception and perception. A given technology is praised, extolled, and exalted, to the point of peak absurdity, before careening downhill into the evocatively named “trough of disillusionment” (Gartner). Following this crash in expectations and sentiment, people accept the new technology as useful for some things and not for others, settling into more realistic expectations. Recent research has speculated that this hype cycle model may not hold true for different kinds of technologies and has also posited that the nature of our media landscape and our modern technology sector, with its emphasis on speed and rapid new developments, is leading to repeated, less linear, and more pervasive hype cycles (Dedehayir & Steinert, 2016; Van Lente et al., 2013; Goncalves & Bareis, 2025).

The hype we are seeing with GenAI seems to be reaching new heights thanks in part to the nature of our current media and information ecosystem. Social media thrives on virality, with hype narratives poised to find success amongst platforms and algorithms that favor attention-grabbing content and spectacle (Bareis, 2024). And hype narratives are nothing if not attention-grabbing. In some respects, the hype narratives surrounding GenAI have found an ideal home amidst our current online information environment (Bourne, 2024). Recognizing AI hype as part of a longer history of technological hype, market frenzy, and raised expectations can help us better critically analyze the current wave of hype narratives that we are seeing surround GenAI and recognize the ways in which AI technologies are operating as part of a technological industry where hype cycles serve as expressions of power, ways to amass capital, and as a central aspect of technological development and our media ecosystem (Hao, 2025; Bender & Hanna, 2025).

AI literacy has emerged in conjunction with the seemingly abrupt and all-encompassing arrival of GenAI itself, and AI literacy has continued to evolve alongside our shifting understanding of GenAI. We can deepen our insights into the shifts and trends within AI literacy by situating AI literacy within the broader milieu of AI hype narratives, as well as within the longer history of technological hype (Van Lente et al., 2013; Bender & Hanna, 2025). While AI literacy tends to call for critical thinking, ethical understanding, and thoughtful approaches, AI hype tends to highlight things like ease, speed, simplicity, convenience, and the lack of need for deep thought, complexity, or worry (Bareis, 2024). AI hype narratives tend to heavily anthropomorphize AI technologies as well, to the extent that it can be difficult to discuss GenAI without utilizing terms that ascribe these tools more ability, and more humanity, than is warranted (Barrow, 2024; Placani, 2024). These hype narratives also presume an inevitability to GenAI, as if the emergence and ensuing dominance of GenAI in our society is an inescapable fact (Baer, 2025b). While the humanizing language surrounding GenAI can influence or even limit the vocabulary we use to discuss GenAI, the inevitability narratives surrounding GenAI can potentially dissuade critical discussion altogether. After all, why discuss or debate something that is inevitable? The nature of AI hype narratives poses challenges for more critical approaches, as these narratives often suggest that AI technologies are beyond questioning, beyond human foibles, and beyond reproach. The hype narratives that cast AI as somehow superior to humans can dissuade criticism and questions directed towards GenAI and those who have created it (Baer, 2025a; Baer, 2025b; Campolo & Crawford, 2020). Overall, the inviolability found within AI hype narratives can shape, and even hinder, the ways in which we question and criticize GenAI. 
Given the persuasive nature of AI hype narratives, and the potential harms inherent within AI technologies, there is a real and growing need for more critical and nuanced approaches to GenAI in the face of relentless hype narratives that seem to dissuade thinking deeply about AI in the first place.   

Many AI literacy frameworks, including work from Leo Lo and organizations like UNESCO and the Digital Education Council, increasingly highlight ethics and critical thinking as tenets of what it means to be AI literate, alongside using and understanding various AI tools and technologies (Lo, 2025; Miao & Shiohira, 2024; Digital Education Council, 2025). However, these AI literacy frameworks exist within and amidst pervasive and compelling AI hype narratives and can echo the underlying assumption that GenAI is powerful, inevitable, and potentially transformative (Baer, 2025b). How can we encourage understanding of GenAI without dissecting the often-misleading hype narratives surrounding it? And how can we gain insights from using highly personalized GenAI tools without reflecting on that experience? It seems to me that AI literacy frameworks can benefit from incorporating more critical approaches that can equip learners to more thoughtfully engage with GenAI and avoid inadvertently reinforcing AI hype narratives. Floridi, in his work on the AI bubble that AI hype is creating, notes that we need to “[m]aintain a critical and balanced perspective about AI developments, no matter what people with vested interests may say, recognising the technology’s potential and limitations” (Floridi, 2024, p. 12). To me, this is a call for embracing critical information and media literacy approaches that investigate and question narratives of power as a way to navigate AI hype.

AI hype is introducing a degree of cognitive dissonance as well, with a contrast between the extreme expectations set by AI hype and the reality of AI tools not performing as promised (Baer, 2025a; Floridi, 2024). And this cognitive dissonance seems to be giving rise to increased criticism of GenAI. There are emerging frameworks and schools of thought that challenge the centrality of using GenAI, such as the AI refusal movement, which argues that using AI is ethically unacceptable in many instances (Fox, 2024). Resources from places like the Rutgers Critical AI initiative also illustrate ways to utilize critical information and media literacy approaches for exploring GenAI. And as AI hype seems to grow and reach new and more bombastic heights, critiques of the AI enterprise rooted in privacy, labor concerns, and eco-critical stances, among others, have grown in response (Nguyen & Mateescu, 2024). There seems to be an interplay between hype narratives and the eventual counternarratives that emerge seeking to puncture the hype, whether through concern, disagreement, or just sheer exasperation with whatever outlandish claims are being raised by various trending hype narratives.

I think we can place AI hype itself and conceptions of AI literacy within a longer history of over-hyped technology and within the context of our current social media era. If we situate AI hype and AI personalization, as well as AI literacy, within this space, we can draw upon critical approaches and reflective practices that have emerged in media and information literacy spaces and put these lessons into conversation with GenAI (Soken & Nygreen, 2024). We are seeing calls for AI literacy to emphasize ethics and critical thinking (Lo, 2025). But in order to do that, I think we need to better contend with the context of AI, and how AI is being discussed, perceived, and received (Sloane et al., 2024; Bourne, 2024; Baer, 2025). Thinking critically and ethically about AI involves understanding not just how this technology works but how AI is being packaged and presented, how people are experiencing and understanding AI, the culture into which AI is being unleashed, and how AI literacy itself is situated within this environment. I think we can bring these threads together with an eye to developing a more critical AI literacy that considers the influence of AI hype and personalization on our understanding of GenAI.

Understanding AI Hype

AI hype not only influences our understanding of AI, but it also sets up certain parameters for our conversations about AI. The crux of AI hype narratives seems to be a narrative of power, with a focus on the amazing and terrible things that GenAI can do, as well as an underlying theme of who is in power in this AI landscape (Hao, 2025; Bender & Hanna, 2025). Hype is about influence, about generating excitement, and about inspiring strong emotions (Sloane et al., 2024; Bourne, 2024). And, significantly, hype is not accidental but rather crafted to attract positive attention and funding (Goncalves & Bareis, 2025). Within these narratives, there seems to be an idea that GenAI is powerful, that using GenAI can make you better and more powerful (as if the sheen of GenAI can rub off on you), and that creating and developing GenAI tools imbues you with a degree of mysticism. In fact, some have started to note the uncanny similarities between AI hype and a religious movement, complete with commandments, origin myths, prophecies, a belief in the apocalypse, ritualistic practice, and acceptable forms of behavior and language (Epstein, 2024). 

Before delving further into what AI hype tends to say, it is worth noting who is crafting and sharing these narratives. Creators of AI technologies, including the heads of various technology companies and the marketing departments of those companies, inject a great deal of AI hype into our media environment (Bender & Hanna, 2025). And many of the companies who play major roles in the AI technology landscape already exercise undue influence in our media landscape, controlling our information discovery platforms (like Google) and our social media sites (like Meta). Many of our major technology companies, whether they are driving the development of GenAI technologies or are hopping on the bandwagon of GenAI developments, are contributing to and promoting AI hype narratives and are pushing GenAI features on their platforms, further contributing to the feeling that GenAI is inescapable and inevitable. From exclusive interviews with high-profile outlets, to commercials, to conveniently timed “leaks” about new features, to press releases, there is a never-ending stream of hype emerging from these companies (Duarte, 2024; Hao, 2025). If we apply critical analysis to these narratives, some motives emerge. Money, sustained power, and influence are the factors driving AI hype narratives of more corporate origin (Hao, 2025; Bender & Hanna, 2025). After all, a fantastic, useful, and powerful tool will attract users, investors, and more positive attention. AI hype narratives also emerge from media outlets, governments, other industries, and from users of AI technologies, all of whom echo and reinforce the hype produced by various corporate interests (Hao, 2025; Bareis & Katzenbach, 2022).

Interestingly, AI doom narratives arguably operate as another side of AI hype narratives (Vinsel, 2021). After all, GenAI must be powerful and incredible if it can potentially trigger the apocalypse. Here the doom narratives can feed into the overall hype narratives surrounding GenAI, potentially distracting us from more complex and nuanced challenges and issues associated with AI technologies (Hanna & Bender, 2023). As Sloane et al. (2024) note, “Although situated as polar opposites, stories of excitement and of terror are both integral to the practice of AI hyping because they grossly simplify AI narratives and pit them against the realities of AI design and use” (p. 670). This polarized interplay of terror and excitement, doom and joy, dystopia and utopia, forms the crux of AI hype narratives and creates challenges for discussing GenAI with nuance and critical discernment. Within these outlandish claims is a degree of confusion and increased unease and even distaste. In an article in The Scholarly Kitchen, Jones (2025) posits what many of us have been wondering and asking: what exactly do AI tools actually do? Some recent studies have illustrated that people tend to like GenAI less the more they learn about it (Chen et al., 2024; Tully et al., 2025). While more research is needed in this area, recent surveys do indicate that there might be an inverse relationship between learning about AI and liking AI, which has implications for AI literacy education. If your motive is to get people using AI, then it stands to reason that the stories you tell about AI will gloss over its issues. AI hype narratives seem to discourage criticism and critical thought, while encouraging unquestioned use of and enthusiasm towards GenAI (Duarte, 2024). In contrast, if your motive is to educate people about AI, then it seems you need to cut through the persuasive and distracting hype surrounding AI (Ndungu, 2024; Baer, 2025a; Soken & Nygreen, 2024).

What does it mean to critically engage with something in the midst of being inundated with outlandish propaganda? How can librarians and other educators equip learners to critically engage with AI technologies, to question them, and to potentially challenge claims made about and by GenAI technologies in information environments saturated with AI hype, where GenAI is positioned as authoritative? Within a hype cycle, embracing a critical approach involves not just information and understanding but the confidence and knowledge to form and share critical views and arguments (Baer, 2025a; Baer, 2025b; Soken & Nygreen, 2024). There are a few motifs and themes within AI hype narratives that I feel are worth unpacking, and that have implications for how we can develop a critical AI literacy imbued with a focus on narrative, context, and reflection. To my mind, there are three areas that are key for understanding the current nature of AI hype and the ways that AI hype is shaping our understanding of and relationship with GenAI.

The first area is power. Power can of course be enticing, but it can also be prohibitive, in that the perception of power can squash dissent. As Duarte (2025) argues, our ability to think critically about GenAI can be “dramatically impeded by exposure to inaccurate information, especially when it is delivered confidently and compellingly by AI executives and other influential figures” (para. 4). Whether the narratives about GenAI are inaccurate, distracting, misleading, exaggerated, or some combination of those things, these narratives seem as if they are designed to influence more than inform. AI hype narratives promote the power of AI tools, but they also serve as expressions of power and influence from the individuals and groups, such as technology companies, that craft and share them (Hao, 2024). Power is central to the AI hype narratives we are currently seeing and to the emerging counternarrative, where critics of GenAI and the AI enterprise often dissect how GenAI tools actually do not work as advertised and are not as powerful as proclaimed (Bender & Hanna, 2025; Nguyen & Mateescu, 2024). And power leads us to a few other themes that are, in some respects, unique to AI hype narratives when compared to the hype narratives we have seen about other technologies.

Next is magical thinking. There is a degree of magic surrounding narratives about AI and hype narratives in particular. According to these narratives, AI can do an endless array of wondrous and wonderful things and can make astonishing leaps in performance (Mitchell, 2025). The sense of magic imbuing AI can lead people to believe in the capabilities and power of AI unquestioningly. And a belief in the magic of AI has been linked to lower levels of AI literacy, with a study from Tully et al. (2025) noting that individuals with lower levels of AI literacy are more likely to perceive AI as magical and more likely to be receptive towards using AI tools. Magic is a key aspect of AI hype narratives, and an aspect of AI personalization as well, with GenAI appearing as some sort of all-powerful and all-knowing companion, like a sort of technological fairy godmother. But magic also crops up in the nature of AI hype narratives themselves, not just in how AI tools allegedly perform. As David Morris (2024) notes in his work on AI and magic, “Magicians hack our attentional, perceptual, and cognitive tendencies to make us perceive and believe what is not there” (p. 3047). Here AI technologies function as magical tools while the creators of AI technologies function as magicians, using dazzling techniques to divert our attention. This sort of technique lies at the core of AI hype narratives, which arguably distract from real issues and complexities surrounding the development and deployment of GenAI (Hanna & Bender, 2023). Recognizing the magic running through narratives surrounding AI, and how it shapes our perception of these tools as immensely powerful, is a key aspect of approaching AI with a critical lens.

The final area worth considering is inevitability. Within AI hype narratives, AI technologies are presented as somehow inevitable and unquestionable (Baer, 2025b; Goncalves & Bareis, 2025). As noted, these narratives can take on a sort of religious fervor, as if AI technologies are somehow preordained (Epstein, 2024). A prevailing sentiment seems to be that AI is here, it is not going anywhere, and everyone must adapt themselves to this new AI-driven reality. This sort of narrative can dissuade questioning, both through more overt prohibitions and through more subtle implications about futility (if AI is inevitable, then what use is complaining or questioning?) and progress (if you question progress, does that mean you are somehow backwards?) (Baer, 2025a). The hype narratives that emphasize the inevitability of GenAI can also hinder critical engagement with AI technologies and even cast the act of asking questions as unduly negative or as resisting inevitable technological progress.

Taken together, these trends within AI hype narratives can make critical thinking and critical engagement with AI incredibly challenging. Even critiquing GenAI in the midst of an environment dominated by AI hype runs the risk of giving too much credence to AI’s alleged power (Sloane et al., 2024). To critically engage with AI technologies, we need to cut through narratives of power, magic, and inevitability, which can involve taking the time to untangle and rebut various hype narratives and claims before moving on to things like actual critiques, policy proposals, or more nuanced arguments (Sloane et al., 2024). While AI hype can be a distraction, understanding and analyzing AI hype is a vital component of a more critical AI literacy. By borrowing from critical information and media literacy, we can weave skills in analysis and evaluation into AI literacy and better equip learners to ask questions, consider the context of GenAI, dissect narratives of power (with hype narratives at their core), and more thoughtfully consider how we are experiencing and understanding GenAI amidst the outlandish claims of AI hype.

Unpacking AI Personalization

Amidst the frenetic hype surrounding AI, which can beggar belief, is the emotionally appealing, persuasive, and at times manipulative nature of AI personalization. AI personalization can take the form of agreeableness, positivity, and even sycophancy (Hermann, 2022; Kaffee & Pistilli, 2025; Selvi, 2025). AI chatbots are endlessly helpful, rarely disagree or argue, and (if the hype is to be believed) always do what you ask. The personalized nature of AI tools, and the experience of using these seemingly friendly, agreeable, and helpful tools, can create feelings and emotions among users that I feel are important to recognize and consider as we strive to develop more human-centered approaches to AI literacy. A study from Data & Society notes that while “our participants know the chatbot is neither ‘real’ nor ‘intelligent,’ they also know that the feelings it elicits in them are genuine,” describing how users find chatbots safe, easy to talk to, and comforting (Garofalo & Vecchione, 2025). Even if people are aware of the nature of AI personalization, and the artifice of these tools, feelings of trust and fondness can still emerge. However, many users are not aware of the machinations behind AI tools and how the personalized features are in many respects an effort to keep users glued to a given chatbot platform (Lupetti & Murray-Rust, 2024). We can face challenges in critically engaging with AI due to AI hype, where narratives present AI as powerful, magical, inevitable, and something that shouldn’t be questioned. But the personalized nature of AI can add further challenges to our ability to engage critically with GenAI. While AI hype narratives might strain credulity, the personalized nature of AI, and the emotional aspects of that personalization, can make questioning and challenging an emotionally resonant and appealing AI difficult nevertheless.

GenAI chatbots have a tendency towards positivity and agreeableness, which can foster trust and reliance. As Kaffee and Pistilli note, GenAI “systems already simulate care, empathy, and attentiveness” (para. 9). Meanwhile, Gary Marcus (2025) argues that GenAI chatbots fool people into thinking they can behave like humans, when in reality these tools are just mimicking humans. Constantly hearing that everything you say and think is fantastic can be enticing, if not addictive. In fact, when OpenAI released a ChatGPT update in the summer of 2025 that toned down the sycophancy, users complained (Tangermann, 2025). This personalization also seems to exacerbate trends we have already seen in social media spaces with things like filter bubbles and echo chambers, in which algorithms curate customized environments so that you only see and hear what the algorithm thinks you want to see and hear. As AI gets further embedded into many of our existing online tools and spaces, from search engines to social media sites, what effect will this have on people’s ability to identify and critique GenAI? If someone is hearing what they want to hear, or feels trust towards the powerful, magical, and personalized tool they are using, will they be inclined to analyze or question that tool?

We can benefit from unpacking AI personalization within the context of AI hype narratives that emphasize power, magic, inevitability, and the superior nature of AI when compared to humans. Notably, the personalized nature and experience of GenAI reinforces many of the themes found within AI hype narratives. AI chatbots seem poised to act as the ultimate personal assistants, able to handle any task or question without complaint and without tiring. The speed with which AI chatbots respond, and the confidence with which they do so, belie the chronic issue of so-called AI hallucinations that have plagued AI chatbots since their launch (Hicks et al., 2024). AI chatbots give the impression of being powerful and wise, and the hype narratives surrounding AI reinforce the behavior of the chatbots themselves. As a result, we are seeing emerging issues with cognitive offloading with GenAI technologies, where people trust these tools and become overly reliant on their AI personal assistants, potentially degrading their own skills and cognitive abilities (Kulal, 2025; Skibba, 2025). Overall, this reliance on powerful GenAI tools can foster trust in, and an affinity toward, these tools.

Magical thinking and the magical narratives surrounding GenAI also intersect with the personalized experience of using AI tools. As we have seen, AI hype narratives frequently imbue AI with a sense of mysticism and magic. And something that is always at the core of magical narratives is trust and belief (Morris, 2024). Endlessly cheery and agreeable AI tools ask for trust, even if the ideas they share are half-baked, the sources are made up, or the writing is mediocre. The underlying promise seems to be that if you don’t look too closely or delve too deeply, if you trust the magic and the speed and the power, if you accept the results that you are (quickly) given, if you place your trust and your cognition into AI’s hands, then you will have nothing to worry about. The overall personalized user experience and design of GenAI can contribute to a sense of “enchantment” with using AI tools (Lupetti & Murray-Rust, 2024). But this experience of enchantment goes beyond using AI tools and shapes the nature of, and potential goals of, AI hype narratives as well. As Campolo and Crawford (2020) note, the experience of enchantment shields creators of AI tools from scrutiny and accountability. The user experience of GenAI often discourages reflection and deep thought, while the magic trick of AI hype narratives and AI user experience encourages trust and belief. The positive feelings generated (pun intended) towards AI by the personalization of AI technologies can reinforce AI hype narratives.

Just as we can experience challenges in critically engaging with GenAI amidst hype narratives that emphasize the amazing and powerful nature of AI technologies, we can experience difficulties with thinking critically and clearly about AI in the midst of the emotional experience of AI personalization. Additionally, the experience of using AI technologies can be quite emotionally complex, while our individual and collective responses to AI development are also rooted in strong emotions like fear, anxiety, enthusiasm, curiosity, and even frustration and anger (Chen et al., 2024; Bourne, 2024). I think it is important to recognize that we as librarians and educators might have strong feelings towards AI ourselves, just as our learners might also have complicated emotions about AI (Baer, 2025a; Fox, 2025; Monnier et al., 2025). As we continue to develop AI literacy in response to AI trends, I think we have to acknowledge and even center the emotional aspects of our experiences with and reactions to AI.

One potential way forward with this is to borrow from critical information and media literacies, which emphasize the complex experiences people have with information and the ways that media shapes, and is shaped by, systems of power (Soken & Nygreen, 2024; Kellner, 2005). If our understanding of GenAI is shaped by narratives of power in the guise of AI hype and by our experiences with using these tools under the influence of AI personalization, then I believe we can benefit from bringing critical approaches that address these facets of GenAI into AI literacy. AI hype might seek to present AI as unprecedented and amazing, but I feel that AI is part and parcel of broader trends in technological hype, personalization, and what Bourne refers to as “affective capitalism,” or a capitalism rooted in emotional appeals and personalization (Bourne, 2024, p. 758). And if GenAI is part of these broader trends, then I think we can situate AI literacy within existing trends and approaches found in critical information and media literacy.

In environments colored by ubiquitous AI hype narratives and the personalized effects of AI technologies, the ability to reflect is crucial. While it is important for learners to understand AI, I feel that it is also key for learners to be able to reflect upon and identify how AI is making them feel and how they are responding to AI, which is increasingly important given how persuasive AI hype and personalization can be. Incorporating reflection into AI literacy alongside skills like critical thinking will strengthen existing aspects of AI literacy like ethical reasoning and evaluation and will highlight a skill set that can better enable people to navigate the emotional complexities of AI hype and AI personalization, however appealing and persuasive they might be. By exploring both hype narratives and the personalized output from GenAI, we can develop richer approaches to AI literacy.

Developing Critical AI Literacies

The experiences and effects of AI hype and AI personalization complicate our efforts to engage critically and thoughtfully with generative AI tools and technologies and the many challenges and issues these technologies introduce. A more critical and reflective approach to AI literacy can help us unpack these narratives of power and influence. But I think a challenge for librarians and educators lies in finding ways to make that focus explicit, central, and sustained amidst all the other demands inherent within AI literacy, and within a broader information literacy for that matter. In my own work as an instruction librarian, I have felt the pressure of time constraints and the enormity and complexity of the information literacy topics I am aiming to address. Personally, I feel that intentionality, and an emphasis on equipping learners to ask questions rather than settle on a single correct answer, can create space for more critical, reflective, and contextualized approaches to AI literacy. Librarians and educators can bring in examples of AI hype narratives or AI personalization, pose questions, and encourage learners to share their own experiences. Taking a little time, even when time is short during an instruction session, to spark curiosity and awareness can equip learners to better take in the bigger picture and context of GenAI, beyond simply using an individual tool. Ultimately, I believe that librarians, educators, and our learners can benefit from the introduction of two lenses into emerging AI literacy frameworks.

First is a focus on context and contextual analysis, where we take a critical approach to analyzing the narrative context surrounding technologies like AI and how that context mediates, shapes, and influences our experiences with said technologies. This concern with narratives of power is a framing that can be particularly beneficial for gaining a deeper and more critical understanding of AI technologies (Soken & Nygreen, 2024; Baer, 2025b). Many AI literacy frameworks, including the AI Competencies for Academic Library Workers (ACRL, 2025), include a call for developing an understanding of AI technologies, including how they work and how they are developed. But I believe that we can extend this understanding to include a focus on how AI technologies and tools are presented, received, and conceptualized by the public. The narratives hyping AI, whether through commercials, interviews, media coverage, or social media posts, greatly shape how we conceptualize and discuss AI and can even dissuade us from criticizing or questioning AI technologies thanks to the aura of power, magic, and inevitability that AI hype narratives create around GenAI. When teaching others about AI technologies, librarians and other educators can discuss trends in AI hype with students, encourage students to reflect on the AI hype narratives they have seen and encountered, and share examples of AI hype narratives for analysis, reflection, and discussion (Soken & Nygreen, 2024; Ndungu, 2024). I believe that equipping students to think critically about AI and to feel confident in sharing their opinions and views is an important component of developing a more critical AI literacy and a broader and richer understanding of GenAI. And this approach has implications for information and media literacy more generally, where we can encourage learners to think critically about technologies other than AI that might also be overhyped in the media or cast as powerful or beyond reproach.

The second lens that we can introduce to AI literacy is a focus on reflective practice that empowers learners to better recognize and critically engage with narratives and technological tools, like AI, that might be highly personalized. As we have seen, AI hype and the experience of using AI tools can discourage reflection and critical analysis and encourage trust and awe. Emphasizing reflection as a key component of AI literacy mirrors approaches that are increasingly utilized in broader media and information literacies (Soken & Nygreen, 2024; Ndungu, 2024). Researchers like Riesen (2025) have argued that reflective practices can help learners better contextualize and apply information literacy skills. I believe reflection can also help learners find personal meaning, value, and context for AI literacy skills. AI literacy frameworks generally have a section that calls for evaluation of AI output. But I think we can also encourage an evaluation of our own thoughts and feelings towards AI, and a reflective approach to both using AI tools and consuming content about AI tools. What emotions are arising? Why might an AI tool foster a certain kind of user experience? What motivations underlie narratives surrounding AI? These are questions that can be part of a reflective practice where students are encouraged to pause, consider, and reflect on their own experiences with AI as a way to better critically analyze AI technologies. AI literacy emphasizes using AI tools and analyzing the output of those tools. But by taking a step backward and outward, and by posing questions about the implications of GenAI, the narratives being woven about and around GenAI, and the experiences people are having with GenAI, we can encourage learners to ask questions, sort through their thoughts and feelings, share their ideas, and begin to engage more critically with not just individual GenAI tools but the entire GenAI enterprise.

Conclusion

Putting AI hype and AI personalization into conversation can help us develop an AI literacy that not only focuses on critical thinking but on reflection, context, and the complex emotional experiences that we have with AI technologies. I think that a human-centered AI literacy can and should embrace the complicated, messy, and emotional aspects of our collective and individual experiences with GenAI and the stories we imbibe and tell ourselves about these tools. And by centering and acknowledging the emotional complexities of our experiences with, and reactions to, GenAI, we can better engage in conversations with learners and delve into issues surrounding GenAI and its development and use.

The personalized experience of using AI tools and the hype surrounding AI cannot be separated from our understanding of GenAI. Rather, AI hype and AI personalization deeply shape and influence our experience with GenAI and how we perceive, react to, and make decisions about GenAI, including when, where, and how we use these AI tools. By grounding AI literacy with this sort of critical analysis, contextualization, and reflective practice, I feel that we can strengthen both AI literacy and information literacy and equip learners to better engage in our complex and rapidly changing information environments. Librarians and other educators can work to develop an AI literacy that is concerned with and informed by the context in which AI technologies are developed and in which they emerge as well as the complex and emotional human experience of using, understanding, and responding to GenAI.


Acknowledgements 

I want to extend my sincere thanks to my internal reviewer Brea McQueen, my publishing editor Brittany Paloma Fiedler, and my external reviewer Rosalind Tedford for their time, attention to detail, constructive feedback, and support. Their thoughtful comments, ideas, and feedback proved invaluable throughout the stages of shaping this article. I am fortunate to have collaborated with Rosalind on previous projects related to information and AI literacy, and I’d like to extend a thank you to her and Dan Chibnall for serving as thought-partners and collaborators over the years. I’d also like to thank Andrea Baer and Brady Beard for their time, generosity, and willingness to discuss generative AI and librarianship with me. Their work has helped to shape and inspire my own. Finally, a thank you to the Lead Pipe Editorial Board for the opportunity to publish my work here.


Suggested Tags

Generative AI; AI literacy; AI hype

References

ACRL (2025). AI competencies for academic library workers. https://www.ala.org/acrl/standards/ai

Baer, A. (2025a). Unpacking predominant narratives about generative AI and education: A starting point for teaching critical AI literacy and imagining better futures. Library Trends, 73(3), 141-159. https://muse.jhu.edu/pub/1/article/961189/pdf

Baer, A. (2025b). Investigating the ‘feeling rules’ of generative AI and imagining alternative futures. In the Library with the Lead Pipe. https://www.inthelibrarywiththeleadpipe.org/2025/ai-feeling-rules/

Bareis, J. (2024). Ask me anything! How ChatGPT got hyped into being. Preprint. Center for Open Science. https://doi.org/10.31235/osf.io/jzde2

Bareis, J., & Katzenbach, C. (2022). Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47(5), 855-881. https://doi.org/10.1177/01622439211030007

Barrow, N. (2024). Anthropomorphism and AI hype. AI and Ethics, 4(3), 707-711. https://doi.org/10.1007/s43681-024-00454-1

Bender, E.M., & Hanna, A. (2025). The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Harper.

Bourne, C. (2024). AI hype, promotional culture, and affective capitalism. AI and Ethics, 4(3), 757-769. https://doi.org/10.1007/s43681-024-00483-w

Campolo, A., & Crawford, K. (2020). Enchanted determinism: Power without responsibility in artificial intelligence. Engaging Science, Technology, and Society. https://knowledge.uchicago.edu/record/6022?v=pdf

Chen, Y. S., Tang, Y. C., & Chen, C. (2024). The ethical deliberation of generative AI in media applications. Emerging Media, 2(2), 259-276. https://doi.org/10.1177/27523543241277563

Dedehayir, O., & Steinert, M. (2016). The hype cycle model: A review and future directions. Technological Forecasting and Social Change, 108, 28-41. https://doi.org/10.1016/j.techfore.2016.04.005

Digital Education Council (2025). Digital Education Council AI literacy framework. https://www.digitaleducationcouncil.com/post/digital-education-council-ai-literacy-framework

Duarte, T. (2024). As the AI bubble deflates, the ethics of hype are in the spotlight. Tech Policy Press. https://www.techpolicy.press/as-the-ai-bubble-deflates-the-ethics-of-hype-are-in-the-spotlight/

Epstein, G. (2024). Silicon Valley’s obsession with AI looks a lot like religion. The MIT Press Reader. https://thereader.mitpress.mit.edu/silicon-valleys-obsession-with-ai-looks-a-lot-like-religion/

Floridi, L. (2024). Why the AI hype is another tech bubble. Philosophy & Technology, 37(4). https://doi.org/10.1007/s13347-024-00817-w

Fox, V. (2024). A librarian against AI. https://violetbfox.info/against-ai

Garofalo, L., & Vecchione, B. (2025). All the lonely people: on being alone with digital companions. Data and Society. https://datasociety.net/points/all-the-lonely-people/

Gartner (n.d.). Gartner Hype Cycle. https://www.gartner.com/en/research/methodologies/gartner-hype-cycle

Goncalves, A.B., & Bareis, J. (2025). Expanding hype literacy to protect democracy. Tech Policy Press. https://www.techpolicy.press/expanding-hype-literacy-to-protect-democracy/

Hanna, A., & Bender, E. (2023). AI causes real harm: Let’s focus on that over the end-of-humanity hype. Scientific American. https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/

Hao, K. (2025). Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Penguin Press.

Hermann, E. (2022). Artificial intelligence and mass personalization of communication content—An ethical and literacy perspective. New Media & Society, 24(5), 1258-1277. https://doi.org/10.1177/14614448211022702

Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38. https://doi.org/10.1007/s10676-024-09775-5

Jones, P. (2025). Three years after the launch of ChatGPT, do we know where this is heading? The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2025/10/13/three-years-after-the-launch-of-chatgpt-do-we-know-where-this-is-heading/

Kaffee, L., & Pistilli, G. (2025). Before AI exploits our chats, let’s learn from social media mistakes. Tech Policy Press. https://www.techpolicy.press/before-ai-exploits-our-chats-lets-learn-from-social-media-mistakes/

Kellner, D., & Share, J. (2005). Toward critical media literacy: Core concepts, debates, organizations, and policy. Discourse: Studies in the Cultural Politics of Education, 26(3), 369-386. https://doi.org/10.1080/01596300500200169

Kulal, A. (2025). Cognitive risks of AI: Literacy, trust, and critical thinking. Journal of Computer Information Systems, 1-13. https://doi.org/10.1080/08874417.2025.2582050

Lo, L. S. (2025). AI literacy for all: A universal framework [Preprint]. University of New Mexico Digital Repository. https://digitalrepository.unm.edu/cgi/viewcontent.cgi?article=1216&context=ulls_fsp

Lupetti, M. L., & Murray-Rust, D. (2024). (Un)making AI magic: A design taxonomy. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-21). https://doi.org/10.1145/3613904.3641954

Marcus, G. (2025). Why DO large language models hallucinate? Marcus on AI. https://garymarcus.substack.com/p/why-do-large-language-models-hallucinate

Miao, F., & Shiohira, K. (2024). AI competency framework for students. UNESCO Publishing. https://www.unesco.org/en/articles/ai-competency-framework-students

Mitchell, M. (2025). Magical thinking on AI. AI: A Guide for Thinking Humans. https://aiguide.substack.com/p/magical-thinking-on-ai

Monnier, R., Noe, M., & Gibson, E. (2025). AI in academic libraries, part one: Concerns and commodification. College & Research Libraries News, 86(4), 173. https://doi.org/10.5860/crln.86.4.173

Morris, D. (2024). Magical thinking and the test of humanity: We have seen the danger of AI and it is us. AI & SOCIETY, 39(6), 3047-3049. https://doi.org/10.1007/s00146-023-01775-1

Ndungu, M. W. (2024). Integrating basic artificial intelligence literacy into media and information literacy programs in higher education: A framework for librarians and educators. Journal of Information Literacy, 18(2), 1–18. https://doi.org/10.11645/18.2.641

Nguyen, A., & Mateescu, A. (2024). Generative AI and labor: Value, hype, and value at work. Data & Society. https://datasociety.net/library/generative-ai-and-labor

Placani, A. (2024). Anthropomorphism in AI: Hype and fallacy. AI and Ethics, 4(3), 691-698. https://doi.org/10.1007/s43681-024-00419-4

Riesen, K. (2025). Incorporating signature pedagogies into library instruction through reflective pedagogy. Portal: Libraries and the Academy, 25(1), 137-150. https://doi.org/10.1353/pla.2025.a950012

Rutgers (2026). Critical AI. Rutgers School of Arts and Sciences Critical AI. https://sites.rutgers.edu/critical-ai/

Selvi, A. F. (2025). Meet your new AI teacher: hypes, promises, and realities in AI-powered language education platforms. Applied Linguistics Review. https://doi.org/10.1515/applirev-2025-0224

Skibba, R. (2025). Are we offloading critical thinking to chatbots? Undark. https://undark.org/2025/09/12/critical-thinking-chatbots/

Sloane, M., Danks, D., & Moss, E. (2024). Tackling AI hyping. AI and Ethics, 4(3), 669-677. https://doi.org/10.1007/s43681-024-00481-y

Soken, A., & Nygreen, K. (2024). Framing generative AI through a critical media literacy lens: A reflective practitioner-inquiry study. International Journal of Transformative Teaching and Learning in Higher Education, 1(1), 7. https://commons.library.stonybrook.edu/cgi/viewcontent.cgi?article=1010&context=ijttl

Stryker, C. & Scapicchio, M. (2026). What is generative AI? The 2026 Guide to Machine Learning. IBM. https://www.ibm.com/think/machine-learning#605511093

Swant, M. (2025). The surprising advertising strategy AI companies are investing in to stand out. Inc. https://www.inc.com/marty-swant/the-surprising-advertising-strategy-ai-companies-are-investing-in-to-stand-out/91281145

Tangermann, V. (2025). OpenAI announces that it’s making GPT-5 more sycophantic after user backlash. Futurism. https://futurism.com/openai-gpt5-more-sycophantic

Tewell, E. (2015). A decade of critical information literacy: A review of the literature. Communications in Information Literacy, 9(1), 2. https://doi.org/10.15760/comminfolit.2015.9.1.174

Tully, S. M., Longoni, C., & Appel, G. (2025). Lower artificial intelligence literacy predicts greater AI receptivity. Journal of Marketing. https://doi.org/10.1177/00222429251314491

Van Lente, H., Spitters, C., & Peine, A. (2013). Comparing technological hype cycles: Towards a theory. Technological Forecasting and Social Change, 80(8), 1615-1628. https://doi.org/10.1016/j.techfore.2012.12.004

Vinsel, L. (2021). You’re doing it wrong: Notes on criticism and technology hype. Medium. https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5

Claude Code won April Fools Day this year / Xe Iaso

April Fools Day is somewhat of a legendary day among nerds. Historically it's been when the nerds at Gmail introduced Gmail Custom Time, where you could interrupt causality by making Gmail look like you sent a message before it was actually sent. It actually worked.

Sometimes this gets taken too far and the joke falls flat, causing a lot more problems than would exist if the joke never happened in the first place. Incidents like that have led many companies to put policies in place against April Fools pranks entirely, just to avoid hurting customer trust and growth.

It's refreshing to see the Claude Code team introduce the /buddy system this year. When you run /buddy, it hatches a coding companion that hangs out in your Claude Code interface like a Tamagotchi. Here's my buddy Xentwine:

╭──────────────────────────────────────╮
│                                      │
│  ★★★ RARE                     ROBOT  │
│                                      │
│     (   )                            │
│     .[||].                           │
│    [ @  @ ]                          │
│    [ ==== ]                          │
│    `------´                          │
│                                      │
│  Xentwine                            │
│                                      │
│  "A methodical circuit-whisperer     │
│  obsessed with untangling logical    │
│  snarls; speaks in patient,          │
│  patronizing riddles and will        │
│  absolutely let you sit in your own  │
│  bug for three minutes before        │
│  offering the blindingly obvious     │
│  fix."                               │
│                                      │
│  DEBUGGING  █████░░░░░  47           │
│  PATIENCE   █████░░░░░  47           │
│  CHAOS      ██░░░░░░░░  21           │
│  WISDOM     █████████░  92           │
│  SNARK      █████░░░░░  49           │
│                                      │
╰──────────────────────────────────────╯

Here's what it looks like in the Claude Code app:

I think this is the best April Fools Day feature in recent memory because it seems intentionally designed to avoid impacting users in a way that would cause problems:

  • You have to take manual action to create your coding buddy; it's off by default.
  • It mostly stays out of the way once you create it, so it doesn't disrupt your normal workflow.
  • Your buddy sometimes randomly interjects like a Tamagotchi.
  • You can pet the dog, dragon, or robot with /buddy pet.

This is the kind of harmless prank that all nerds should aspire to. 10/10.