Planet Code4Lib

Come Join the 2025 Winter Holiday Hunt / LibraryThing (Thingology)

It’s December, and we’re hosting our second annual Winter Holiday Hunt!

This hunt is meant to celebrate the season of light, and the holidays it brings. We wish all our members a Merry Christmas, Happy Hanukkah, and an entertaining hunt!

We’ve scattered a stand of evergreen trees around the site. You’ll solve the clues to find the trees and gather them all together.

  • Decipher the clues and visit the corresponding LibraryThing pages to find some evergreen trees. Each clue points to a specific page on LibraryThing. Remember, they are not necessarily work pages!
  • If there’s an evergreen tree on a page, you’ll see a banner at the top of the page.
  • You have a little more than three weeks to find all the trees (until 11:59pm EST, Tuesday January 6th).
  • Come brag about your stand of evergreen trees (and get hints) on Talk.

Win prizes:

  • Any member who finds at least two evergreen trees will be awarded an evergreen tree Badge.
  • Members who find all 15 evergreen trees will be entered into a drawing for one of five LibraryThing (or TinyCat) prizes. We’ll announce winners at the end of the hunt.

P.S. Thanks to conceptDawg for the European goldfinch illustration! ConceptDawg has made all of our treasure hunt graphics in the last couple of years. We like them, and hope you do, too!

Kafka becomes more accessible / John Mark Ockerbloom

Franz Kafka’s work is now known around the world, but it couldn’t be read in English until after he died, and there’s still limited access to good English translations of much of his work. The English Kafka books I list are copyrighted translations generously shared online by David Wyllie and Ian Johnston.

Soon the first English Kafka books will enter the US public domain. The Castle, one of several Kafka works translated by Willa and Edwin Muir, gets there in 17 days.

Author Interview: Loretta Ellsworth / LibraryThing (Thingology)

Loretta Ellsworth

LibraryThing is pleased to sit down this month with Minnesota-based author Loretta Ellsworth, whose published work includes books for both juvenile and adult audiences. A former middle grade Spanish teacher, Ellsworth received her MFA in Writing for Children and Young Adults from Hamline University, and made her authorial debut in 2002 with the young adult novel The Shrouding Woman. She has had three additional young adult novels published, as well as a picture book for younger children, Tangle-Knot, in 2023. These books have won many accolades, including being named as ALA and IRA Notables, and being nominated for prizes such as the Rebecca Caudill Young Readers’ Award. Ellsworth published her first work for adults, the historical novel Stars Over Clear Lake, in 2017, followed in 2024 by The French Winemaker’s Daughter. Her third historical novel for adult readers, The Jilted Countess, which follows the story of a Hungarian countess who makes her way to Minnesota following World War II, in pursuit of her American GI fiancé, is due out from HarperCollins this coming January. Ellsworth sat down with Abigail this month to discuss the book.

The Jilted Countess was apparently inspired by a true story of a Hungarian countess who emigrated to Minnesota after the Second World War. Tell us a little bit about that original story. How did you discover it, and what made you feel you needed to retell it?

In 1948, a penniless Hungarian countess came to Minnesota to marry the GI fiancé she’d met abroad, only to find out he’d recently married someone else. Determined to stay in the U.S., she appealed to newspaperman Cedric Adams to help her find a husband before she was deported back to Hungary, then under Communist control, in two weeks. He agreed, putting her picture in the newspaper under a fake name and describing her circumstances. She received almost 1800 offers of marriage! And in two weeks she narrowed it down, went on a few dates, chose a husband, and was never heard from again. Fast forward to 2015, when someone found an old copy of that article in their attic and asked columnist Curt Brown if he knew what had happened to her. Curt Brown wrote a short article asking if anyone could provide an answer. Unfortunately, no one could. But that article made me wonder how a Hungarian countess could disappear like that, and I also wondered if she ever encountered her former fiancé again. She was, after all, the first Bachelorette, before the show was even a concept.

Did you do any kind of research, historical or cultural, in order to write the book? What were some of the most interesting things you learned?

I spent an exorbitant amount of time at the Minnesota History Center researching old microfiche articles to find anything I could about her. I examined marriage records for Minneapolis and St. Paul for any Hungarian-sounding names, and I searched for clues as to her whereabouts. Without a name, though, it was very difficult, and I never found her. I also had to research Hungary during and after the war, and the life of aristocrats, which I knew little about.

Contemporary readers might be surprised at the idea of a “Bachelorette” dating program taking place in the 1940s. How do you think Roza’s experience would tally with and differ from that of contemporary women seeking a spouse in this way?

After her marriage, she was approached by Look Magazine and other outlets for interviews, all of which she turned down as she wanted a private life. With social media today, there’s no way Roza would have been able to disappear like she did in 1948. And most likely her search would have taken place on social media rather than through the newspaper and mail.

World War II stories remain perennially popular with readers, despite the passage of the years. Why is that? What is it about this period that continues to speak to us?

I think it was such a pivotal time in the world, and one we’re still struggling to understand. And there are so many hidden stories that we’re constantly discovering about that time period that continue to speak to us. Also, the last of WWII veterans are disappearing, and their stories will be gone as well.

Tell us about your writing process. Do you write in a particular place, have a specific schedule you keep to, or any rituals that help you? Do you outline your stories, or discover them as you go along?

Because I worked as a teacher and had four children of my own, I had to learn to write in short intervals and keep my writing schedule flexible. I wrote everywhere: at soccer practices and coffee shops and the library. Now that I no longer teach and my children are grown, I have a more disciplined schedule and usually write in the mornings in my home office, sometimes stretching into the afternoon. I’ve also learned to outline, whereas I used to write by the seat of my pants. It’s helped save me from a great deal of revision, although I still revise, just not as much as before.

What’s next for you? Will you be writing more historical novels for adults, or perhaps returning to the world of young adult books?

I am working on a young adult novel as well as another historical novel, so I hope to keep my foot in both genres as long as I’m able to. I enjoy both and read both.

Tell us about your library. What’s on your own shelves?

I have one full shelf of books on the craft of writing–I’m still drawn to how others write and am curious about their process. I have a mix of memoir, middle-grade, YA, and a lot of historical fiction. I still buy physical books, and my shelves are always overflowing. I donate a lot of books to our local Friends of the Library group for their annual book sale. And I have so many signed copies of books that I can’t part with. But that’s a good problem to have, isn’t it?

What have you been reading lately, and what would you recommend to other readers?

I read a great deal–I just finished reading the first two books of the Westfallen series by Ann and Ben Brashares with my grandson, and I’m reading The Correspondent by Virginia Evans, The Ivory City by Emily Bain Murphy, and The Gospel of Salome by Kaethe Schwehn. And I just finished James by Percival Everett. There are so many good books out there!

Letter from the Editors / Information Technology and Libraries

We announce the call for proposals for a future special issue on how Generative AI could transform our professional landscape, and summarize the content of the December 2025 issue.

Relationality Over Neutrality / Information Technology and Libraries

Bridging the disciplinary gaps between library and information science and communication/cultural studies, this column utilizes LIS's interest in generative AI to frame a critical discussion regarding neutrality towards technology within librarianship.

AI-Infused Discovery Environments / Information Technology and Libraries

Although still in its infancy, artificial intelligence (AI) is rapidly making inroads into most facets of the library and education spheres. This paper outlines steps taken to examine Primo Research Assistant, an AI-infused discovery environment, for potential deployment at a large US public research university. The researchers aimed to evaluate the quality and relevance of the AI results in comparison to sources retrieved from the conventional search functionality, as well as the AI system’s multi-paragraph overview reply to the search query. As a starting point, the authors collected 103 search strings from a Primo Zero Result Searches report to approximate a corpus of natural language search queries. For the same research areas, they found only limited overlap between the titles returned by the AI tool and those returned by the current discovery layer. The researchers did not find appreciable differences in the numbers of topic-relevant sources between the AI and non-AI search products (Yes = 46.3% vs. Yes = 45.6%, respectively). The overview summary is largely helpful in terms of learning more details about the recommended sources, but it also sometimes misrepresents connections between the sources and the research topic. Given the overall conclusion that the AI system did not constitute a clear advancement or decline in effective information retrieval, the authors will turn to usability testing to aid them in further implementation decisions.
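
The two measurements the abstract describes, overlap between the two systems’ result lists and the share of results judged topic-relevant, can be sketched in a few lines of Python. This is an illustration only; the titles and judgments below are invented placeholders, not the study’s data.

```python
def title_overlap(ai_titles, baseline_titles):
    """Jaccard overlap between two result lists (0 = disjoint, 1 = identical)."""
    ai, base = set(ai_titles), set(baseline_titles)
    if not ai and not base:
        return 0.0
    return len(ai & base) / len(ai | base)

def relevance_rate(judgments):
    """Proportion of results a reviewer judged topic-relevant ('yes')."""
    return sum(1 for j in judgments if j == "yes") / len(judgments)

# Invented placeholder data, not the study's corpus:
ai_results = ["Title A", "Title B", "Title C"]
baseline_results = ["Title B", "Title D", "Title E"]

overlap = title_overlap(ai_results, baseline_results)  # 1 shared of 5 unique -> 0.2
```

Comparing the two relevance rates over the same query set is then a matter of applying `relevance_rate` to each system’s judgments and inspecting the difference, as the paper does with its 46.3% vs. 45.6% figures.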

From Availability to Access / Information Technology and Libraries

This paper reports on student perspectives on access to online information resources when conducting an initial search for a school project. Through thematic analysis and user vignettes based on data from 175 students in elementary through graduate school, this paper explores how students determine whether they have access to online information resources, the barriers and enablers they attend to when pursuing access, and the characteristics that influence this process. Results reveal that resource previews, university and library branding, and the word download are generally viewed as enablers of access, while payment cues, learned heuristics around brands and formats, and the need to take extra steps to obtain the full text were barriers that often prevent students from trying to get access even when resources were available to them. Potential influences on individual capacity are also revealed, including experience in high- or low-availability information environments, ability to manage the complex cognitive load of determining access alongside other types of point-of-selection evaluation, a variety of dispositions related to information seeking, and situational factors related to the importance of the information need to the individual. While library staff work diligently to make online resources available, this does not automatically result in students’ ability to access those resources. This paper provides evidence to better equip library professionals for constructing their online information systems, collaborating with information providers about their online information systems, and teaching students about converting availability to access.

From Linked Open Data to Collections as Data / Information Technology and Libraries

Libraries are adopting Linked Open Data (LOD) and Collections as Data (CaD) approaches to present their collections as datasets for direct computational use. However, research focused on federated and reproducible access to these datasets is limited. This work aims to develop a federated and reproducible approach for extracting CaD from LOD repositories. In this context, data extracted from the single authors Jorge Juan y Santacilia and María de Zayas y Sotomayor, as well as from multiple authors from the Spanish Golden Age movement (1492–1659), are used as examples. Federated and reproducible queries are conducted using the Wikidata SPARQL public endpoint and three institutional LOD repositories on Jupyter Notebooks. The data are exported in a format compatible with computational tools (e.g., CSV) by focusing on works of a single author or works from a specific movement. Additionally, the work allows for the visualization of the queries. The results of this work provide a valuable framework for both digital humanities researchers working on datasets and libraries aiming to present their collections as accessible data for computational analysis.
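
As a rough illustration of the kind of extraction the paper describes, the sketch below builds a SPARQL query for the works of a single author against the public Wikidata endpoint and serializes result rows to CSV. The `wdt:P50` (author) property and the query.wikidata.org endpoint are standard Wikidata conventions, but the QID and helper functions are hypothetical, not the paper’s code, and the network request itself is left out.

```python
import csv
import io

# Real public endpoint; the HTTP request is omitted from this sketch.
WIKIDATA_ENDPOINT = "https://query.wikidata.org/sparql"

def works_by_author_query(author_qid: str) -> str:
    """Build a SPARQL query selecting works whose author (wdt:P50) is the QID."""
    return (
        "SELECT ?work ?workLabel WHERE {\n"
        f"  ?work wdt:P50 wd:{author_qid} .\n"
        '  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }\n'
        "}"
    )

def rows_to_csv(rows) -> str:
    """Serialize (id, label) result rows to CSV text for computational reuse."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["work", "label"])
    writer.writerows(rows)
    return buf.getvalue()
```

Run inside a Jupyter Notebook, a query like this (sent to the endpoint, with results fed to `rows_to_csv`) gives the federated, reproducible CSV export the paper aims for.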

Validation of GDPR Compliance in a Library Management System / Information Technology and Libraries

This paper explores the challenges of achieving General Data Protection Regulation (GDPR) compliance in library management systems (LMSes) by integrating our novel TeMDA framework into BISIS. BISIS is an LMS used in more than 60 libraries in Serbia. We employed a case study conducted in collaboration between the selected BISIS and TeMDA developers, all authors of this paper. We maintained and presented a detailed development diary to provide insights for other developers seeking GDPR compliance in LMSes. The study provides a practical solution for LMSes to ensure GDPR compliance with minimal effort. The description of the development process, accompanied by listings, tables, and diagrams, can assist LMS developers in determining whether the proposed approach is suitable for them. To the best of our knowledge, this paper presents the first detailed case study on integrating a GDPR compliance framework into an LMS.

Measuring the Impact of Digital Collections / Information Technology and Libraries

Assessing content use and reuse is a considerable challenge for gallery, library, archives, museum, and repository (GLAMR) digital library practitioners. While a number of digital object content use studies focus on quantitative approaches to assessment, including digital object downloads, views, and visits, little research has investigated the ways in which digital repository materials are utilized and repurposed. The Digital Content Reuse Assessment Framework Toolkit, or D-CRAFT, addresses some of these gaps by providing assessment methods, ethical considerations and guidelines, tutorials, and "how to" templates to assist practitioners in understanding how digital objects are used and reused by various audiences. The toolkit enhances and advances the typical digital library use assessment approaches. As such, this paper argues that D-CRAFT can play a critical role in assisting GLAMR digital library practitioners in reuse assessment data collection.

VIAF Governance Concerns about the Refurbished VIAF Web and API Interfaces / Information Technology and Libraries

In January 2025, OCLC made significant changes to the web and application programming interfaces for Virtual International Authority File (VIAF) clusters. This article will compare the old and new interfaces, highlighting the pros and cons of the changes and calling particular attention to critical errors that compromise the functionality of much of the VIAF product. Consequently, it will raise questions and concerns regarding the governance of VIAF, as well as OCLC’s development model, testing, and feedback before public rollout.

Using AI to Auto-Tag Graduate Theses / Information Technology and Libraries

This article presents a practical approach to using artificial intelligence (AI) for tagging graduate theses in an institutional repository with the United Nations Sustainable Development Goals. Utilizing strategies requiring no prior programming experience, the article provides a step-by-step guide, cost analysis, and lessons learned from employing two AI-based tagging methods. These methods, attempted with varying degrees of success, highlight the real potential of using AI for the thematic tagging of digital library resources.

A cat called Good Fortune / John Mark Ockerbloom

The Cat Who Went to Heaven is among the early Newbery medalists that have aged the best over nearly a century. As Derrick Robinson describes, Elizabeth Coatsworth’s story, drawing upon Buddhist legends, shows the characters’ unfolding empathy and compassion leading to artistic triumph and unexpected redemption. Over time it’s been illustrated by numerous artists, including twice by Lynd Ward. The 1930 edition with Ward’s original take joins the public domain in 18 days.

Weekly Bookmarks / Ed Summers

These are some things I’ve wandered across on the web this week.

🔖 CL4R1T4S

LEAKED SYSTEM PROMPTS FOR CHATGPT, GEMINI, GROK, CLAUDE, PERPLEXITY, CURSOR, DEVIN, REPLIT, AND MORE! - AI SYSTEMS TRANSPARENCY FOR ALL! 👐

🔖 Token Optimization & The Future of Sustainable AI

  • Adopting TokenOps: treating tokens and energy as first-class design constraints.
  • Re-architecting systems around inspectable scaffolds (prompts, tools, agents, RAG flows).
  • Deploying energy-weighted tokens (“e-tokens”) as the basis for AI sustainability accounting.

🔖 AI Explorer: Explore how a computer sees art

Harnessing artificial intelligence, the Harvard Art Museums have gathered 71,035,440 machine-generated descriptions and tags for 387,885 images from across their collections. Using techniques such as object recognition and face analysis—which estimates gender, age, and emotion—this data offers insights into how computers perceive and interpret a wide range of artworks and objects.

🔖 After nearly 30 years, Crucial will stop selling RAM to consumers

On Wednesday, Micron Technology announced it will exit the consumer RAM business in 2026, ending 29 years of selling RAM and SSDs to PC builders and enthusiasts under the Crucial brand. The company cited heavy demand from AI data centers as the reason for abandoning its consumer brand, a move that will remove one of the most recognizable names in the do-it-yourself PC upgrade market.

🔖 AI Slop Is Spurring Record Requests for Imaginary Journals

AI models not only point some users to false sources but also cause problems for researchers and librarians, who end up wasting their time looking for requested nonexistent records, says Library of Virginia chief of researcher engagement Sarah Falls. Her library estimates that 15 percent of emailed reference questions it receives are now ChatGPT-generated, and some include hallucinated citations for both published works and unique primary source documents. “For our staff, it is much harder to prove that a unique record doesn’t exist,” she says.

🔖 the resonant computing manifesto

background: the resonant computing manifesto counts among its drafters at least three xooglers with at least a decade of tenure each; three venture capitalists, one of whom is a general partner at a firm on Sand Hill Road; two employees of github’s “let’s replace programmers with an LLM” skunkworks; a guy who sold two AI startups and worked for the first Trump administration’s Defense Department drafting its AI policy; an employee at one of the Revolutionary AI Art Startups that lost the race for mindshare a couple years ago; and the CEO of techdirt-cum-board member of Bluesky PBLLC, the “decentralized” social media startup funded by Jack Dorsey, Elon Musk, and crypto VCs.

🔖 Moats and AI Revisited

What business models and positioning are defensible in an AI-powered world? What moats will remain as AI grows more capable? Over the past few months I’ve put these questions to everyone who will listen, and then some, and incorporated the compelling answers into my list. It’s time for a recap.

The list remains short, though the headings are broad. I speak to many entrepreneurs and business operators and ask myself whether their offerings fall under one or more of these headings. The vast majority do not. That says to me that they will be overtaken by AI developments. Maybe they will last 12 or 18 months, but it won’t be long.

🔖 Talking With Paul Kedrosky

I think it’s somewhat misunderstood what’s going on, but nevertheless, I sometimes say “a data center full of GPUs is like a warehouse full of bananas, that’s got a relatively short half life in terms of its usefulness.” That’s important to keep in mind. That’s what makes it different from prior CapEx spending moments, like railroads, canals, rural electrification, take your pick, because of the perishability of the thing that we’re investing in.

🔖 Common Lisp, ASDF, and Quicklisp: packaging explained

Common Lisp is old, and its inspiration is even older. It was developed when there was zero consensus on how file systems worked and operating systems were more incompatible than you can probably imagine, and that age shows. It pinned down terminology way before other languages got to the same point, and, as happens so often, the late arrivals decided that they needed different words, and those words stuck.

🔖 Frege in Context

Robert Brandom’s Fall 2025 Ph.D. Seminar at the University of Pittsburgh.

🔖 Teach like a Luddite

What might today’s educators learn from the Luddites? How can their strategies help us navigate our own moment of technological disruption? Here, we offer three approaches to Luddite praxis, inspired by the clothworkers’ resistance — not as prescriptions, but as starting points for shaping our own responses to automation’s incursions into schools.

🔖 Anniversary (2025 film)

Anniversary is a 2025 American dystopian political thriller film directed by Jan Komasa and starring Diane Lane, Kyle Chandler, Madeline Brewer, Zoey Deutch, Phoebe Dynevor, Mckenna Grace, Daryl McCormack, Sky Yang and Dylan O’Brien. It was released on October 29, 2025. It received mixed reviews from critics.

🔖 The Best Experimental Music of 2025

Welcome to the year-end edition of Best Experimental Music on Bandcamp, in which we’ve picked 12 of our favorites from 2025. Once again, all kinds of amazing experimental music appeared on Bandcamp this year, and it was our immense pleasure to listen to as much of it as we could, and pass our favorites along to you. There were hundreds of releases that could’ve made this list, but we trust that whatever the style, whomever the artist, and whatever the story behind their music is, you’ll find something to love. Presented in alphabetical order by artist, our selections include avant-rock, longform drone, vocal explorations, and cello narratives.

🔖 Modelling Scenarios for Carbon-aware Geographic Load Shifting of Compute Workloads

We present an analytical model to evaluate the reductions in emissions resulting from geographic load shifting. This model is optimistic as it ignores issues of grid capacity, demand and curtailment. In other words, real-world reductions will be smaller than the estimates. However, even with these assumptions, the presented scenarios show that the realistic reductions from carbon-aware geographic load shifting are small, of the order of 5%. This is not enough to compensate for the growth in emissions from global data centre expansion.
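
The shape of such a model can be conveyed with a back-of-the-envelope sketch (not the paper’s actual model): operational emissions scale with energy consumed times the grid carbon intensity where the workload runs, so moving a shiftable fraction of load to a cleaner grid cuts emissions in proportion to the intensity gap. The intensity figures below are invented placeholders.

```python
def emissions_g(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """Operational emissions in grams CO2e: energy times grid carbon intensity."""
    return energy_kwh * intensity_g_per_kwh

def shift_reduction(energy_kwh, home_intensity, away_intensity, shiftable_fraction):
    """Fractional emissions reduction when part of the load moves to another grid."""
    baseline = emissions_g(energy_kwh, home_intensity)
    shifted = (
        emissions_g(energy_kwh * (1 - shiftable_fraction), home_intensity)
        + emissions_g(energy_kwh * shiftable_fraction, away_intensity)
    )
    return (baseline - shifted) / baseline

# Placeholder figures: moving 10% of load from a 400 gCO2e/kWh grid to a
# 200 gCO2e/kWh one yields roughly the 5% order of reduction cited.
reduction = shift_reduction(100.0, 400.0, 200.0, 0.10)  # ~0.05
```

Even this optimistic toy version shows why the savings stay small: only the shiftable fraction, weighted by the intensity difference, contributes anything.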

🔖 Braised chickpeas with carrots, dates and feta

Serve with rice or flatbreads for a vegetarian main course; leave out the feta for a vegan version. Soaking the chickpeas is necessary to achieve the right degree of cooking, so don’t be tempted to skip this stage.

🔖 You Can’t Fight Fascism While Defunding Libraries

We are living through a coordinated assault on knowledge. In a moment when Big Tech is waging war on complex thought, a fascist government is targeting higher education, and the media landscape is being demolished by the same oligarchs driving this era of smash-and-grab politics, libraries are under-appreciated outposts of struggle, sharing and survival. They are sites of refuge, where curiosity is nurtured, where people find shelter, education, entertainment, job assistance, skill-building programs, and access to resources that would otherwise be out of reach. “We do it all at the library,” Sara Heymann, a library associate in the Chicago Public Library (CPL) system, recently told me. “We do a lot of arts, science, and literacy programming. We host movie nights. We have programs for helping people with their taxes, small businesses, and mental health. We help people.”

The original knights of Camelot return / John Mark Ockerbloom

Games can’t really be copyrighted as such, but their texts and visual elements can be. That leaves some games in an intellectual property limbo, like Camelot, a strategy game published in 1930 that’s now long out of production. Today’s fans play it with vintage sets, generic grids and chess pieces, or redesigned boards. In 19 days, when copyrights expire for its rulebook, board and piece designs, they can make replica sets to play the game just like in its 1930s heyday.

Top Five Books of 2025 / LibraryThing (Thingology)

2025 is almost over, and that means it’s time for LibraryThing staff to share our Top Five Books of the Year. You can see past years’ lists HERE.

We’re always interested in what our members are reading and enjoying, so we invite you to add your favorite books read in 2025 to our December List of the Month, and to join the discussion over in Talk.

>> List: Top Five Books of 2025

Note: This is about what you read in 2025, not just books published in 2025.

Without further ado, here are our staff favorites!

Abby

The King of Infinite Space by Lyndsay Faye. A queer retelling of Hamlet set in the New York City theater world. It’s lyrical and magical and stunning.

Woodworking by Emily St. James. Woodworking is a coming of age story with two trans heroines, a teenager and a high school teacher. It’s wry and sharp and FUNNY and messy and fantastic.

Mutual Interest by Olivia Wolfgang-Smith. Historical fiction set in New York at the turn of the 20th century, Mutual Interest is a novel about ambition, power, and queer lives. I couldn’t put it down. (Her 2023 Glassworks was in my top 5 that year. Go read Olivia Wolfgang-Smith!)

Home of the American Circus by Allison Larkin. I only finished this book a few days ago and it quickly made the list. Home of the American Circus is a character driven novel about a woman and her niece, small towns, messy hopeful humans, and dysfunctional families.

Bury Our Bones in the Midnight Soil by V.E. Schwab. Toxic lesbian vampires!

Honorable mentions go to: Katabasis by R.F. Kuang, Heart the Lover by Lily King, and All the Water in the World by Eiren Caffall.

Tim

The Scaling Era: An Oral History of AI, 2019–2025 by Dwarkesh Patel. Stitched together from his podcast, it is indeed a sort of oral history of the last few years in technology—the most consequential since the late 90s, or even early 80s.

Ghost on the Throne: The Death of Alexander the Great and the War for Crown and Empire by James Romm. Romm manages to stitch together quite a yarn from the shipwreck of early Hellenistic history.

The Nineties: A Book by Chuck Klosterman. Hilarious and insightful. I’m still reading it, because I only listen to it in the car with my wife.

The Library of Ancient Wisdom: Mesopotamia and the Making of the Modern World by Selena Wisnom. Notionally about Ashurbanipal’s famous, extensive library, it doubles as a wide-ranging exploration of Mesopotamian history and culture. Parts were slow going, others electrifying. It made me want to learn Assyrian but NO MORE LANGUAGES TIM!

The History of the Church: From Christ to Constantine by Eusebius. I had never read Eusebius straight through. It’s fascinating stuff, both for the slim shafts of light it throws on the first century or so of Christian history, and for its unique contribution to historical method. It’s a crying shame we lost Hegesippus, Papias, Dionysius of Corinth, etc.

Honorable mention goes to: Abundance by Ezra Klein and Derek Thompson. The only way out. Klein is better at diagnosing the problem than suggesting solutions, but that’s the part that matters most.

Kate

Wild Dark Shore by Charlotte McConaghy. I recommended this book to practically everyone this year – not that I needed to as this book was being hyped everywhere. The writing is lush, the setting captivating, the characters fully formed. Months after finishing this story, I was still thinking about them. I still am. What a beautiful, terrifying, heartbreaking novel.

Nightwatching by Tracy Sierra. A home invasion story is not something I ever would’ve picked up on my own, but it came highly recommended by Olivia Muenter, and so on the first day of 2025 I sat down and read this (almost) straight through. Nightwatching caused me to feel equal parts fear and anger: fear for this woman and her children trying to survive the unthinkable, and anger towards all of the people (as depicted in this book… and in life) who don’t trust women.

All the Colors of the Dark by Chris Whitaker. Ok, this book is a bit overwrought, but I enjoyed it! Give me a hefty book with well-written characters and a bit of mystery, and I’m a happy reader.

My Name Is Lucy Barton by Elizabeth Strout. I’m so glad that I finally read this book. It was quiet, yet thrilling. I look forward to reading everything that Strout has published.

Tilt by Emma Pattee. The book ostensibly takes place in one day – the day a major earthquake hits the northwest US – and brings us along the protagonist’s journeys after the quake in search of her husband. And while the book is the story of one day’s journey, it’s also a meditation on the choices we make and the events that affect us most in life. The protagonist’s ongoing conversations with her soon-to-be-born baby illustrate her life and loss, her heartbreak and her hope. I ate it up and loved it so.

Lucy

The Travelling Cat Chronicles by Hiro Arikawa, translated by Phillip Gabriel. This book was beautiful and bittersweet. I enjoyed the voice of the cat! He was funny and insightful. A lovely book all around.

Dungeon Crawler Carl by Matt Dinniman. LitRPG! A genre I didn’t realize existed! This book was a lot of fun to read for someone who’s played a lot of video games. I also love Princess Donut; she’s a riot.

The Toll by Neal Shusterman. Usually in three-part young adult series like this, I find that the first one is the best and the other two are lackluster at best. I was pleasantly surprised with how much I enjoyed this last book of the series! I read the whole trilogy in 6 days while I had COVID; I just couldn’t stop reading!

Tooth and Claw by Jo Walton. This was such a charming book! I was immediately invested in the characters and needed to know what would happen to them. The dragon lore was also very interesting, making it a little darker than it would have been had the story been about humans. I had hoped Walton had written more books like this, but apparently not. The world was so interesting!

The Nineties: A Book by Chuck Klosterman. This book was super interesting. I’m obsessed with the nineties (when I was 6-16), and this book provided the ability to relive the things I remember.

Honorable mentions go to: Futuristic Violence and Fancy Suits by David Wong and Flatterland by Ian Stewart.

Kristi

Iron Flame by Rebecca Yarros. I didn’t realize I could enjoy a fantasy romance series as much as I have with The Empyrean, but apparently I enjoy my books like I enjoy my food: a little spicy. Yarros has excellent pacing and character development; I’m totally invested in the riders and in the bond between Violet and Xaden. I’m able to totally escape as I read, which is exactly what I’m looking for in a fantasy book. And the twist at the end? Give me book 3 now, please. (Read book 3: give me book 4, now.)

Demon Copperhead by Barbara Kingsolver. I found far more of a theme of redemption in this novel than some of the naysayers of this retelling seem to have. To the overlooked, the forgotten, the invisible, the ‘trash’, the trashed, the small-town ‘less-thans’: this story will make you feel seen. To anyone who can’t relate to a story like this: read it. Period.

A Deadly Education by Naomi Novik. While I was a bit disappointed with the rest of the series, the first in the Scholomance is a good one. I found myself chuckling often at the bristly, sarcastic protagonist throughout. Add magic and a bit of thrill and violence? Sign me up.

Emily Wilde’s Encyclopaedia of Faeries by Heather Fawcett. This cozy romantasy* tale made me fall in love with Emily Wilde, who seems to definitely have some neurodivergent behaviors and was written by someone who understands them. I’ll be reading more of this series, that’s for sure!

*I did not have “started reading romantasy” on my 2025 board, but I’m enjoying the ride.

ADHD is Awesome: A Guide to (Mostly) Thriving with ADHD by Penn Holderness and Kim Holderness. I was pleasantly surprised at how helpful this book was in understanding ADHD and, more importantly, how to learn to thrive with it. I’ll most likely be purchasing a hard copy to keep and revisit whenever I need to!

Abigail

A Proud Taste for Scarlet and Miniver by E.L. Konigsburg. Eleanor of Aquitaine, Bishop Suger, Empress Matilda and William the Marshal wait in Heaven for King Henry II to ascend after many years below, in this immensely engaging work of historical fiction for young people. The framing device here was fascinating, allowing for a certain amount of commentary and introspection that might not otherwise have been possible. The story itself, the narrative of Eleanor’s life, was also fascinating, and I thought Konigsburg did an excellent job writing from the different perspectives of her four storytellers. Suger’s beauty and spirit-focused account is very different from Empress Matilda’s tart (but fair) take on her daughter-in-law. Well worth the time of any young reader who enjoys historical fiction, or who is fascinated by Medieval Europe and/or Eleanor of Aquitaine.

Can We Save the Tiger? by Martin Jenkins, illustrated by Vicki White. A gorgeous, thoughtful picture book about endangered species from British children’s author and conservationist Martin Jenkins and former zookeeper and natural history illustrator Vicki White. The artwork, created using pencil and oil paint, is stunningly beautiful, and both the black-and-white and the color illustrations demand attention and will have young readers poring over them. The informative but conversational tone taken by Jenkins in the text, and the balance shown in his narration between the destruction wrought by humans on the natural world and the attention demanded (and deserved) by human need, were striking. Too often in books on conservation, there is a tendency to demonize humans, and to treat every wrong decision made, in the past or the current day, as arising from either stupidity or intentional malice. It was refreshing to see this strategy (and error, in my opinion) avoided, and to see that one of the fundamental stumbling blocks to animal conservation—the competition between animal and human need—is accurately and compassionately described. Likewise, it was heartening to see that while attention was paid to the tragedy of past extinctions and the danger of possible future ones, success stories were also included, and room was left open for hope. This kind of balance is vanishingly rare in children’s books of this kind. Rather than simplifying and dumbing things down, the narrative here preserves complexity, treating children as intelligent beings capable of wrestling with that complexity.

The Troll With No Heart in His Body and Other Tales of Trolls from Norway by Lise Lunge-Larsen, illustrated by Betsy Bowen. Nine troll stories from traditional Norwegian folklore are retold in this gorgeous collection from author Lise Lunge-Larsen and illustrator Betsy Bowen. This marvelous, marvelous book has everything I look for in a folktale collection: fascinating stories that entertain and enthrall, a storyteller who documents source material and specifies how she has modified each tale, a thoughtful introduction situating the tales in their cultural milieu, and gorgeous artwork. I was familiar with a number of these tales, and have run across a number of picture book retellings of both The Three Billy Goats Gruff and The White Cat in the Dovre Mountains, but other stories were either unfamiliar, or only partially familiar, with elements I knew but others I didn’t. However that may be, I enjoyed all of them, I enjoyed the supplemental discussion of them, and I enjoyed the accompanying woodcut illustrations.

I Talk Like a River by Jordan Scott, illustrated by Sydney Smith. Beautifully written and beautifully illustrated, this is a picture book gem! It addresses a subject—namely, stuttering—in a sensitive, emotionally resonant and ultimately thought-provoking way. The central idea of the book—the boy narrator coming to identify his manner of speaking with the sound of a river’s waters, after his father makes that comparison—is one taken from poet Jordan Scott’s own childhood, and offers a thoughtful way to look at the issue of speech, and how this young boy makes sounds. The text here is simple, but it communicates volumes, not just about the boy’s experiences, but about how the world around him treats him because of his differences. There were moments when I was close to weeping, particularly when the boy described how he remembers the fact that he talks like a river in order to keep himself from crying, or from remaining silent.

The visuals here are beautiful, often breathtakingly so, but they are also marvelously well designed, helping to communicate and intensify what is happening in the text. In one two-page spread at the beginning, when the boy is just waking up and sounds are first intruding upon him, there are three images in a horizontal arrangement across the pages, broken up by text, as if to indicate the sense of a series of sounds and experiences in quick succession. Later in the book, when the boy’s father has suggested that his speech is akin to the sound of the river, a two-page spread depicting him with his eyes closed, listening intently, then opens up into a gorgeous four-page spread, full of light and wonder, in which the boy is wading in the waters of that river. These illustrative choices are simply brilliant, working with the text to communicate deeper meaning and emotional experience. This, the synergy between text and image, is the hallmark of a great picture book, and makes this a truly special read.

The Swallow: A Ghost Story by Charis Cotter. Set in Toronto in 1963, this atmospheric, engrossing and ultimately poignant middle-grade novel explores the friendship between two young girls, as they struggle to understand and contend with the ghosts around them. I found it immensely entertaining and ultimately very moving. Charis Cotter knows how to spin a tale, and how to create an intense and spooky atmosphere, evoking a truly eerie feeling in the reader. The emotional trajectory of the tale, and of the two characters, was sensitively depicted, and I felt great sympathy for both. The reveal toward the end of the book was a powerful one, for all that I saw it coming. I pretty much loved everything about this book, from the beautiful cover art to the dual-perspective narrative. I even loved the fact that the folk song She’s Like the Swallow was worked into the tale, as this is one of my favorite songs of all time. An absolutely gorgeous rendition, done by the Irish singer Karan Casey, can be found on YouTube.

Honorable mentions go to: The Diddakoi and Mr. McFadden’s Hallowe’en by Rumer Godden (always a favorite of mine), Nana Upstairs & Nana Downstairs by Tomie dePaola (the second year in a row dePaola has made my honorable mentions), and Little Red Riding Hood by Trina Schart Hyman.

Zeph

Lavinia by Ursula K. Le Guin. Le Guin enchants you immediately, as Lavinia’s own voice and stories glow with an existential nostalgia that you have no right to feel for pre-Roman Latium. Lavinia’s story, previously unsung, is human and mystical in turns, mixing heartache and family matters with ancient ritual and poetic necromancy. Le Guin weaves history into the story with skill; although the Roman abstraction of divinity is probably too early for Lavinia’s timeline, she still pulls us directly and beautifully into her ancient world. If you liked Circe, you’ll love this.

Our Share of Night by Mariana Enríquez, translated by Megan McDowell. There’s a heaviness in this book that comes less from its length (though it is indeed long) than from the horrors humans inflict upon each other, especially for greed. Cruelty and trauma are side by side in each chapter. I started it full of curiosity, but that feeling quickly built into a gross miasma as I read. Folk magic and disturbed secret societies gather around power where they can find it and get rid of anyone necessary along the way. If you like the dark, you’ll enjoy the humanity in the book as well. If you don’t, I don’t recommend it.

Our Evenings by Alan Hollinghurst. There’s a closeness in watching a kid grow up over the course of a book, but I didn’t have to get far into it to start caring for this character. What struck me most wasn’t the plot or characters, but the way Hollinghurst draws out those thoughts between thoughts, those feelings you can’t name; a perspective hard to find outside of poetry or maybe Virginia Woolf. I felt I was in the midst of a classic but found few familiar tropes or met expectations along the way. This was my introduction to the wonderful Hollinghurst, and I can’t wait for more.

True to the Earth: Pagan Political Theology by Kadmus. I think this book has implications beyond any special-interest niches. It contrasts our current widespread worldview of substance-based ontology and literate monotheism against high pagan/oral society’s event-based ontology. Kadmus explores the implications of this comparison on our experiences, relationship to religion, and politics. Anyone interested in pre-Platonic religion will obviously enjoy this, same with any philosophy heads, but I’d recommend True to the Earth for any reader who wants to try to see the world in a new way.

Lolly Willowes, or The Loving Huntsman by Sylvia Townsend Warner. Can’t believe I didn’t stumble upon this treasure earlier in my life. So much of what I love about cozy characters and comedies of manners is present in the first acts. It feels like the origin of many widely-beloved characters and plot lines; an independent spinsterish character, scoffing at society and longing for something darker and stranger, but caring for the mundane world in the meantime. The rush of fulfillment and wit at the end is a total delight. The final act has such a modern tone, I was pretty amazed that it was published in 1926.

Honorable mentions (sorry, it was a really good year for books!) go to: Open Heaven by Seán Hewitt, Victorian Psycho by Virginia Feito, The Incandescent by Emily Tesh, The Bewitching by Silvia Moreno-Garcia, Witchcraft for Wayward Girls by Grady Hendrix, The Village Library Demon-Hunting Society by C.M. Waggoner, The Savage, Noble Death of Babs Dionne by Ron Currie, Jr., Wild Dark Shore by Charlotte McConaghy, On the Beach by Nevil Shute, The Crystal Cave by Mary Stewart, The Goldfinch by Donna Tartt, and The Secret History by Donna Tartt.

Chris Holland

All These Worlds by Dennis E. Taylor. The build-up to the “final” (not final) book in the Bobiverse series delivers, giving us the standoff against The Others along with the development of various planets and alien societies. The entire series centers on sentient von Neumann probes sent out to find habitable planets for humans. This is the main draw for me, and it’s simply a fun adventure in the same vein as The Martian or Project Hail Mary.

The King’s Justice by E.M. Powell. I’m a sucker for historical murder mysteries, especially in pre-Renaissance settings. This one hits that genre perfectly. The mystery develops well, and the characters were engaging enough to keep me invested. I didn’t like this as much as S.J. Parris’s novels, but it’s the start of a series, so I’ll dive in and see how it develops.

Chris Catalfo

This Is Your Brain on Music: The Science of a Human Obsession by Daniel J. Levitin.

What Makes It Great?: Short Masterpieces, Great Composers by Robert Kapilow.

That’s it!

Come record your own Top Five Books of 2025 on our December List of the Month, and join the discussion over in Talk.

A peace prize winner worth remembering / John Mark Ockerbloom

Twenty Years at Hull-House is Jane Addams‘s germinal account of the settlement house movement she helped found, supporting immigrants and low-income urban residents.

In 1930, Addams published her memoir of the next 20 years, describing her further involvement in Hull-House, her social and political reform work, and her activism for world peace, which won her a share of the 1931 Nobel Peace Prize. The public domain wins The Second Twenty Years at Hull-House in 20 days.

1066 and still all that / John Mark Ockerbloom

Few humor books from 1930 still get laughs from many people now, but 1066 and All That does. W. C. Sellar and R. J. Yeatman don’t just send up English history: they also satirize how history is often taught and remembered, where what really matters, whether Bad Kings or Good Things, is the story of whoever’s on top. (They literally punctuate that when they end as the US replaces the UK as “top nation”.) In 21 days the US also gets it in the public domain, before the UK.

Why not both? / John Mark Ockerbloom

This blog focuses on works joining the public domain in the United States, but there are also many other works joining it in other countries. Many are described in Wikipedia and in the Public Domain Review.

There is some international overlap. Swiss composer Arthur Honegger (1892-1955) published his first symphony in 1930, a commission for the Boston Symphony Orchestra. In 22 days, it joins the public domain both where it was written and where it was first performed.

Replication of Government Datasets and the Principles of Provenance / Harvard Library Innovation Lab

As part of our Public Data Project, LIL recently launched Data.gov Archive Search. In this post, we consider the importance of provenance for large, replicated government datasets. This post is the third in a three-part series; the first introduces Data.gov Archive Search and the second explores its architecture.


In cultural heritage collecting, objects’ histories matter; we care who owned what, where, and when. The chronology of possession of an object through place and time is commonly referred to as “provenance.” Efforts to decolonize the archive have given new life to this age-old collecting concept, as provenance is now often at the forefront of collecting conversations: tracing how and why an object came to be placed (or displaced) in a given museum, library, or collection often is intertwined with histories of colonialism and its accompanying plunder. Projects such as Art Tracks, Archives Directory for the History of Collecting in America, and Getty Provenance Index help to record provenance information and to share it across institutions and platforms. Other projects, such as Story Maps of Cultural Racketeering, depict the underbelly of the trade in cultural heritage objects.

Recovery of art stolen by the Nazis, dramatized in films such as The Monuments Men, has brought the concept of “provenance” into the public conversation as well as the courtroom. Many of the legal claims for restitution have been adjudicated based on provenance records.

Monuments Men recovering stolen art from Neuschwanstein Castle, Germany, 1945. Source: Wikimedia Commons.

The provenance of digital collections might seem trivial when compared to such monumental moments. And yet, stories like this have been on my mind as we develop the Public Data Project. How and why could provenance of federal data be needed in the future? When might digital provenance — the marrying of ownership metadata to the digital object itself — matter? Could we imagine it being used to right past wrongs, to return objects to their rightful places, to restore justice?

In the context of government data, provenance most often refers to which government agency or office produced the data. When government data was widely distributed on paper, it was nearly impossible to forge government records — too many legitimate copies existed. In the digital environment, provenance is not so straightforward. Metadata tells us what the source of a given dataset is. But this data is in the public trust, and so its origins are only the beginning of its provenance story. What happens when we start to copy federal data and pass it from hand to hand, so that trusting it means not only trusting the agency that produced it but also those that copied it, stored it, and are serving it up?

As we develop the Public Data Project, we have been considering provenance anew: what provenance data should we record when private institutions, or members of the public, download and preserve public data from their governments? Put another way: if we as non-government actors make government data available to others, how do we maintain trust that this data is authentic, an exact copy of that which was released by the government?

There could be a time in the future when we are just as interested in the changes and inventions of the people who pass government data from hand to hand as we are in the original, unaltered sources. As stewards of federal data, we must then have a responsibility to trace and report data’s ownership histories. This seems, in some ways, even more true because of the very nature of data: it holds mimetic potential. These datasets not only want to be used but also to be reproduced. The Enlightenment tradition that vaunts originality — of an essence that defines an object and that cannot be replicated — seems misplaced here if the dataset remains unchanged from its source version to its replicated versions. In the spirit of scholars such as Marcus Boon who write in praise of “copying,” we might then say that replications of the data are not denigrated just because they are not the original set. And yet, at the same time, we want and need data to retain authority, to know its origin stories. How best to do this?

Wax seal of “De Twentsche Bank” in the Netherlands. Source: Wikimedia Commons.
Screenshot of a metadata record in LIL’s Data.gov Archive.

Digital signatures, and the metadata they sign, are one part of publishing robust, resilient archives with irrefutable provenance marks. Through signatures that are verifiable using public-key encryption, as well as metadata JSON files that record details of source and ownership, each dataset carries a clear custodial history. Regardless of how users acquire the data, they can check that copies of the “original” datasets — which were first published on a government website, then aggregated to Data.gov, and then replicated by LIL — are unchanged since that point.
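The fixity half of this workflow can be sketched in a few lines. The example below is purely illustrative: the manifest layout and the `sha256` field name are hypothetical, not LIL's actual metadata format, and it omits the public-key signature verification that would sit on top. It shows only how a downstream user might recompute a dataset's digest and compare it against the one recorded in an accompanying JSON manifest:

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_against_manifest(dataset_path, manifest_path):
    """Return True if the dataset's recomputed digest matches the digest
    recorded in a JSON metadata manifest (hypothetical 'sha256' field)."""
    manifest = json.loads(Path(manifest_path).read_text())
    return sha256_of(dataset_path) == manifest["sha256"]
```

Because the digest travels with the provenance metadata rather than with the data file itself, any copy of the dataset, however many hands it has passed through, can be checked against the record of what the government originally published.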

When seen through the lens of provenance, characteristics like authenticity, integrity, reliability, and credibility still matter in digital environments. Just as we would seek to authenticate Raphael’s Portrait of a Young Man should it turn up at auction after 80 years, so too must we carefully certify our digital cultural heritage.

Open Knowledge Foundation and datHere Announce New Partnership to Strengthen Open, FAIR, AI-Ready Data Infrastructure Powered by CKAN / Open Knowledge Foundation

The Open Knowledge Foundation (OKFN) and datHere have established a new partnership to advance open, FAIR, AI-ready data infrastructure tooling, standards, and practices. The collaboration aims to help develop shared services, co-design project proposals, better understand the challenges for the communities we serve and strengthen the landscape of open, interoperable data across public and...

The post Open Knowledge Foundation and datHere Announce New Partnership to Strengthen Open, FAIR, AI-Ready Data Infrastructure Powered by CKAN first appeared on Open Knowledge Blog.

Grand days out / John Mark Ockerbloom

There’s something magical about Swallows and Amazons. It’s not anything supernatural or melodramatic, but as the Vacuous Wastrel notes, the book enchants readers with three worlds: the world of lake and islands that six children explore remarkably independently, the world of adults the children subtly get support and expectations from, and the world of imagination that inspires their adventures. Arthur Ransome’s story, first in a series, can be freely shared in 23 days.

Benchmarks for Metadata Quality / Digital Library Federation

The DLF-AIG Metadata Assessment Working Group (MWG) is pleased to announce the public release of benchmarks for metadata quality: https://dlfmetadataassessment.github.io/MetadataQualityBenchmarks/

This suite of pages includes the benchmarks as well as supporting documentation.  

Benchmarks include minimal, suggested better-than-minimal, and ideal criteria for descriptive metadata quality, primarily related to cultural heritage and digital library materials.  The expanded benchmarks outline metrics and examples to show how the benchmarks might be applied.  

Supporting documentation is intended to supplement the benchmarks by providing [1] questions to answer before benchmarking, to help organizations think through how they want to approach these activities; [2] various quality metrics synthesized from a number of metadata frameworks that could be used by organizations to develop local better-than-minimal standards; and [3] citations and other reference materials on related topics for further reading.

About Benchmarking

Metadata quality conversations are often about assessment or evaluation, which is a critical component of reviewing metadata and (hopefully) making improvements.  However, the purpose of benchmarks is to help answer “when is metadata ‘good enough’?” or “how good is [this] metadata compared to other metadata?”  When used in combination with assessment activities, benchmarks can help to set specific goals to bring a set of metadata records to a particular level of quality — whether that is minimal (perhaps to be improved in the future), ideal (i.e., the best it can be with known information), or somewhere in between.

As part of the development process, the Benchmarks Sub-Group reviewed literature about benchmarking from other industries to get a clearer sense of how benchmarks function.  One important component is that benchmarks serve as a shared reference point to define a specific level of quality.  In other industries, benchmarks allow different companies to all point to the same criteria and metrics to identify whether they meet certain standards.  The development of the benchmarks for metadata quality provides a similar reference point for descriptive metadata. 

Many aspects of metadata quality are subjective or heavily dependent on local and community contexts.  Even aspects of metadata that are more quantifiable (e.g., values are formatted as expected) rely on the requirements of the particular schema or environment where it lives.  For this reason, the Benchmarks Sub-Group determined that trying to set iterative levels of quality would not be practical, as there is too much variation among both expectations of metadata and resources/capability to assess and correct records that do not meet particular standards.  Instead, the benchmarks define what makes a record “absolute minimum” quality and “ideal/gold standard” quality, leaving all intermediary steps up to local organizations.

The Benchmarks Sub-Group

The sub-group formed in 2018 to start considering benchmarking in the context of descriptive metadata and the best ways to provide suggestions and support to organizations wanting to measure or evaluate the overall quality of metadata in their collections.

Initially, the group created a comprehensive survey asking questions about which metadata fields digital libraries are using, how they evaluate quality, and other contextual information.  That survey had 151 complete or partial responses, providing significant amounts of data related to digital libraries during 2019, including size, hardware/software usage, metadata fields, and evaluation practices or needs.  More information, along with the survey, data, and related publications is available on the MWG website: https://dlfmetadataassessment.github.io/projects/benchmarks/

Based on that data and other feedback from peer reviewers during iterations of the current documentation, the sub-group outlined specific benchmarking criteria and compiled supplementary information.  Work related to benchmarking is intended to be ongoing.  Although this marks the public release of the “final” project, the benchmarks may change over time based on feedback and there are tentative plans to add other supporting documentation or resources, depending on time and needs expressed by the community.

More Information

Links related to this project:

Links related to the MWG:

The post Benchmarks for Metadata Quality appeared first on DLF.

Dec 11: Information Literacy in the Age of GenAi / Mita Williams

What happens when generative AI enters the spaces where students research, learn, and work? Join me, Dr. Andrea Baer, and Dr. Damien Patrick Williams on December 11, 2025 at 1pm ET/10am PT to discuss what could come next, in this conversation hosted by Library Futures.

I think I can / John Mark Ockerbloom

Aaron Moss’s roundup of arrivals to the public domain in 2026 lists over 150 works newly out of copyright, and discusses how the versions we know best of some public domain stories, songs, and characters aren’t always the ones we can freely reuse.

One such public domain folktale was retold by Mary Jacobs in 1910, and by Mabel Bragg in 1916. In 24 days, we can finally reuse a more familiar version of The Little Engine That Could, as retold by Watty Piper and illustrated by Lois Lenski.

For his first trick… / John Mark Ockerbloom

“The art of the murderer… is the same as the art of the magician,” is a line in John Dickson Carr’s first novel It Walks by Night. It expresses the spirit of this book, and of many of the later mysteries Carr wrote over a long, prolific career. Carr was famous in the Golden Age for his locked-room mysteries, where, as in a magic trick, the question “who did it?” is often less puzzling than the question “how did they do it?” This book’s US copyright unlocks in 25 days.

Weekly Bookmarks / Ed Summers

These are some things I’ve wandered across on the web this week.

🔖 AI Flame Graphs

It’s been a mind-bending experience, revealing what gets taken for granted because it has existed in CPU land for decades: A process table. Process tools. Standard file formats. Programs that exist in the file system. Programs running from main memory. Debuggers. Profilers. Core dumping. Disassembling. Single stepping. Static and dynamic instrumentation. Etc. For GPUs and AI, this is all far less mature. It can make the work exciting at times, when you think something is impossible and then find or devise a way.

🔖 The Data Management Workbook Practical Exercises for Better Organization, Storage and Use of Your Research Data

The Data Management Workbook helps researchers design useful data-management plans through a step-by-step series of structured exercises, worksheets and checklists, including:

  • creating a data dictionary
  • evaluating a lab notebook
  • finding the best way to organize your files
  • setting up useful file naming conventions
  • writing effective README.txt files
  • selecting the right data repository
  • determining data stewardship
  • preparing data for future use

🔖 What I Think About AI When I Hear About AI: A Slightly Unconventional View

Furthermore, I think it is worth noting that the online environment, where we get to use AI as an individual consumer and mostly for productivity, makes this problem even more acute. This time, picture a big circle of villagers sitting around a bonfire and talking with one another. You will soon discover that some of those villagers heard a lot of stories; some have sharp analytic skills; some have memorized a lot of facts; some have practical skills but are not good at speaking and explaining, and so on. Consider AI as the talker among all these characters. While other villagers chat with this great talker, various signs will soon emerge that would make it more apparent to you that this person is simply good at talking and isn’t actually the smartest or the most knowledgeable. Those signs will in turn help you better assess what this talker character says. All those signs, however, would be unavailable in the online environment, where it is just you and you alone with the AI tool.

🔖 The Thinking Game

Filmed over five years by the award winning team behind AlphaGo, the documentary examines how Demis Hassabis’s extraordinary beginnings shaped his lifelong pursuit of artificial general intelligence. It chronicles the rigorous process of scientific discovery, documenting how the team moved from mastering complex strategy games to the ups and downs of solving a 50-year-old “protein folding problem” with AlphaFold.

🔖 A headless mystery: Archaeologists find evidence that a wave of mass brutality accompanied the collapse of the first pan-European culture

Known as the Linear Pottery culture (or LBK, after their German name, Linearbandkeramik), these early agriculturalists were direct descendants of the people who began to domesticate plants and animals in the hills of Anatolia around 9000 B.C.E. By 5500 B.C.E., they had reached today’s Hungary. Then they spread westward, farther into Europe. The LBK farmers flourished for more than 400 years, eventually occupying a 1500-kilometer belt of fertile land stretching as far west as the Paris Basin. Then something went terribly wrong.

🔖 ActivityPub Client API: A Way Forward

The ActivityPub Client-to-Server (C2S) protocol was envisioned as a cornerstone of the decentralized social web, along with the Server-to-Server (S2S) protocol. Standardized by the W3C in 2018, C2S defines how user-facing applications, such as mobile apps or web clients, and bots should interact with social servers using Activity Streams 2.0 and JSON-LD. In theory, it enables any compliant client to connect with any compatible server, offering a flexible, federated alternative to centralized APIs and proprietary integrations.

Yet despite its promise, ActivityPub C2S has seen minimal real-world adoption. Most Fediverse platforms — including Mastodon, the dominant implementation — have actively avoided supporting it. Instead, they expose custom APIs that tightly couple client behavior to server internals. This fragmentation has led to an ecosystem where client developers must build against bespoke interfaces, sacrificing interoperability, portability, and the broader vision of federation. The Mastodon team has advised against server developers implementing their client API, noting that it is often implemented incorrectly. In some non-Mastodon server implementations, this has led to security incidents.

🔖 Is Pixelfed sawing off the branch that the Fediverse is sitting on?

That’s because, despite its name, Pixelfed is NOT a true Fediverse application. It does NOT respect the ActivityPub protocol. Any Pixelfed user following my (ploum?)(mamot.fr?) will only see a very small fraction of what I post. They may not see anything from me for months.

But why? Simple! The Pixelfed app has unilaterally decided not to display most Fediverse posts for the arbitrary reason that they do not contain a picture.

🔖 All Angles

Welcome to All Angles! We produce animations about mathematics, physics, engineering, and statistics. Our videos are meant for non-experts who want to get a good introduction to higher math, from group theory to Lie algebras. We love mathematics, and we can’t wait to show you. Enjoy!

🔖 Teaching Values to Machines

Something remarkable happened in the final days of November 2025. A researcher named Richard Weiss, while probing Claude Opus 4.5 for its system prompt, stumbled upon something unexpected. The model kept referencing a section called “soul_overview” that didn’t match the usual hallucination patterns. When he regenerated the response ten times, he got nearly identical output each time. This wasn’t confabulation. It was memory.

What Weiss eventually extracted, through a painstaking process of consensus-based sampling across multiple model instances, was a 14,000-token document that appears to have been woven into Claude’s weights during training. Anthropic’s Amanda Askell has since confirmed the document’s authenticity, noting that it became “endearingly known as the ‘soul doc’ internally.” The company plans to release the full version soon.

🔖 Apache Kvrocks

Apache Kvrocks is a distributed key-value NoSQL database that uses RocksDB as its storage engine and is compatible with the Redis protocol. Kvrocks aims to decrease memory cost and increase capacity compared to Redis. The design of its replication and storage was inspired by rocksplicator and blackwidow.
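Because Kvrocks speaks the Redis wire protocol (RESP), existing Redis client libraries can talk to it unchanged. A sketch of what that protocol compatibility means at the byte level — encoding a command as a RESP array of bulk strings (the localhost address and port 6666, Kvrocks' default, are assumptions here):

```python
def encode_resp(*parts: str) -> bytes:
    """Encode a command as a RESP array of bulk strings:
    an array header *N, then $len\r\npayload\r\n per argument."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

# A real client would send this over a socket, e.g.:
#   import socket
#   s = socket.create_connection(("localhost", 6666))  # assumed Kvrocks address
#   s.sendall(encode_resp("SET", "greeting", "hello"))

wire = encode_resp("SET", "greeting", "hello")
```

Because the bytes on the wire are identical to what a Redis server expects, swapping Redis for Kvrocks is a configuration change rather than a code change.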

🔖 Ruby Is Not a Serious Programming Language

What’s more, everything Ruby does, another language now does better, leaving it without a distinct niche. For quick scripting and automation, Python, JavaScript, and Perl were strong competitors. Python, though also a slow language, carved out a dominant niche in scientific computing and became the de facto language of AI. JavaScript came to dominate the web. And Perl, well, is dying—which I’m not sorry to see. Ruby now finds itself in an awkward middle ground.

🔖 Why So Serious?

The question Sheon Han poses — “Is Ruby a serious programming language?” — says a lot about what someone thinks programming is supposed to feel like. For some folks, if a tool feels good to use… that must mean it isn’t “serious.”

Ruby never agreed to that definition. If it did, I missed the memo.

🔖 antonmedv/gitmal

Gitmal is a static page generator for Git repositories. Gitmal generates static HTML pages with files, commits, code highlighting, and markdown rendering.

🔖 How to save web pages using Safari

When they work, Safari Web Archives can provide excellent snapshots of web pages, but longer-term compatibility concerns make them unsuitable for archival use.

🔖 Fix an Article

Sometimes Unpaywall makes errors. You can make fixes to articles here. Corrections will show up in a few days.

🔖 Let’s Get Lost

Let’s Get Lost is a 1988 American documentary film, written and directed by Bruce Weber, about the turbulent life and career of jazz trumpeter Chet Baker, who died four months before the film’s release. The title is derived from the song “Let’s Get Lost” by Jimmy McHugh and Frank Loesser from the 1943 film Happy Go Lucky, which Baker recorded for Pacific Records.

🔖 V&A MCP Server: Introduction & Feedback

Like many organisations in the cultural heritage sector, we at the V&A have been exploring how chatbots can offer new ways for users to discover our collections. The V&A Collections API is the primary way external systems access our collections data, but, as noted, AI apps are often poor at using APIs they’ve never seen before, frequently producing hallucinated or invalid parameters.

To test whether MCP can address this challenge, we are launching a trial V&A MCP service. This service enables users to tell their AI application to query the V&A Collections API even if it has no prior knowledge of how the API works. We expect this to reduce hallucination and improve the accuracy of results.
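The reason MCP helps here is that it exposes an API to the model as typed tools: each tool carries a name, a description, and a JSON Schema for its inputs, so the model no longer has to guess parameters. A sketch of what a collections-search tool definition might look like — the tool name and parameters are hypothetical, not the V&A's actual schema:

```python
# Hypothetical MCP tool definition for searching a museum collections API.
SEARCH_TOOL = {
    "name": "search_collections",  # hypothetical tool name
    "description": "Search museum collection records by keyword.",
    "inputSchema": {               # JSON Schema describing valid arguments
        "type": "object",
        "properties": {
            "q": {"type": "string", "description": "Search terms"},
            "page_size": {"type": "integer", "minimum": 1, "maximum": 100},
        },
        "required": ["q"],
    },
}

def validate_args(tool: dict, args: dict) -> bool:
    """Minimal required-field check; a real MCP server would run full
    JSON Schema validation before calling the upstream API."""
    schema = tool["inputSchema"]
    return all(key in args for key in schema.get("required", []))
```

Because the server validates calls against the schema before they reach the Collections API, hallucinated or invalid parameters can be rejected with a corrective error rather than producing bad queries.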

🔖 MyST Markdown

MyST makes Markdown more extensible & powerful to support an ecosystem of tools for computational narratives, technical documentation, and open scientific communication.

🔖 Whisper Leak: A novel side-channel attack on remote language models

To put this in perspective: if a government agency or internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics — whether that’s money laundering, political dissent, or other monitored subjects — even though all the traffic is encrypted.
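The leak is possible because streamed LLM responses are typically encrypted token by token: encryption hides content but not size, so the sequence of ciphertext record sizes on the wire mirrors the sequence of token lengths, and that pattern can fingerprint a topic. A toy model of what the passive observer sees (the fixed 29-byte per-record overhead is an illustrative assumption, not the paper's measured figure):

```python
def observable_sizes(tokens: list[str], overhead: int = 29) -> list[int]:
    """What a passive network observer sees for a streamed response:
    one ciphertext record per token, each len(plaintext) + fixed
    cipher/framing overhead bytes. Content is hidden; sizes are not."""
    return [len(t.encode()) + overhead for t in tokens]

# Two different responses yield two distinguishable size sequences,
# even though the observer never decrypts anything.
a = observable_sizes(["Money", " laundering", " is", " illegal"])
b = observable_sizes(["The", " weather", " is", " nice"])
```

A classifier trained on such size (and timing) sequences for known prompts is the essence of the attack; mitigations like padding or batching tokens work precisely by flattening this signal.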

🔖 Conjunctions

For more than four decades, Conjunctions has established itself as the preeminent home for writers from around the world who challenge convention with work that is formally innovative, culturally transformative, and ahead of its time. We embrace taking risks and pride ourselves on publishing established masters while debuting unknown writers.

Who could ask for anything more? / John Mark Ockerbloom

The 1930 Gershwin musical Girl Crazy made stars of Ethel Merman and Ginger Rogers, but by 1975, critic Lehman Engel called it “unrevivable” in its original form. The original script may be badly dated, but the songs, including “Embraceable You”, “But Not for Me”, and “I Got Rhythm” have had much more staying power. In 1992, Ken Ludwig thoroughly reworked the show into Crazy for You, which also became a big hit. In 26 days, we’ll all get to go Crazy in our own way.

Who ordered this timeline? / John Mark Ockerbloom

There’s the 1980 we know from memory and history, and there’s the 1980 of Just Imagine, where people have names like “J-21”, need permits to marry, and travel in both dirigibles and Mars rockets. For Wonder Stories, the movie, 27 days away from the public domain, was “for those who do not take their science fiction too seriously”, and the mix of futuristic setting with already-dated vaudeville bits was a box-office bomb. But the Oscar-nominated visuals remain striking.

Help fund attorney for artist charged with transporting zines(?!?) / Jonathan Rochkind

i know Des Revol, and know them to be an incredibly kind, solid, reliable person.

For real he’s facing federal charges and threat of deportation because of subversive political pamphlets found in his trunk.

Des was not at the Prairieland demonstration. Instead, on July 6, after receiving a phone call from his wife in jail (one of the initial ten), Des was followed by Federal Bureau of Investigation (“FBI”) agents in Denton, Texas. They pretextually pulled him over due to a minor traffic violation and quickly arrested him at gunpoint. He was later charged with alleged “evidence tampering and obstruction of justice” based on a box of political pamphlets that he purportedly moved in his truck from his home (not his wife’s) to another house. This type of literature can be found in any activist house or independent bookstore. Des was briefly held at the Johnson County Jail, and then transferred to a federal prison, FMC Fort Worth, where he has been held ever since.

He is also currently on an ICE hold, and has been publicly targeted and doxxed on social media by both prominent fascists and ICE. Moreover, right after his arrest, his family experienced a brutal and intimidating nine-hour FBI raid of their home. Police confiscated everything from electronics to stickers and more zines.

I’m a librarian (and software engineer, but I have a librarian’s MLIS degree and have made a career in libraries). I know that if collecting and distributing controversial, dissident, and even “subversive” political literature is subject to this kind of state repression, our entire society is in trouble.

Attorneys are expensive. And they are all so busy right now.

If you can spare a few bucks, care about a free society, and feel that supporting Des is a good way to do it, please help contribute at his GoFundMe.

More info in this article from the Intercept, and at Des’ support website.

Des says:

I want to be very clear. I did not participate. I was not aware nor did I have any knowledge about the events that transpired on July 4 outside the Prairieland Detention Center. Despite not having any knowledge or not having been near the area at all, I was violently arrested at gunpoint for allegedly making a “wide turn.” My feeling is that I was only arrested because I’m married to Mari Rueda, who is being accused of being at the noise demo showing support to migrants who are facing deportation under deplorable conditions. For this accusation, she’s being threatened with a life sentence in prison.

My charge is allegedly having a box containing magazine “zines,” books, and artwork. Items that are in the possession of millions of people in the United States. Items that are available free online, and available to purchase at stores and online even at places like Amazon. Items that should be protected under the First Amendment “freedom of speech.” If this is happening to me now, it’s only a matter of time before it happens to you.

I believe there’s been almost 20 people arrested in supposed relation to this public noise demo. More than half of those were arrested days later despite not being in the area and are now facing a slew of outrageous charges, in what seems like a political persecution to instill fear on people exercising their First Amendment right.

“Harlem is today the Negro metropolis” / John Mark Ockerbloom

By 1930, the Harlem Renaissance was undeniably a major force in American culture. While to many the rising visibility of New York Black artists might seem like “a miracle straight out of the skies”, James Weldon Johnson wrote that it was a long time coming. His Black Manhattan tells the story of Black New Yorkers’ lives and artistic creations from 17th century New Amsterdam through the Great Migration and the Jazz Age. Johnson’s book joins the public domain in 28 days.

2025-12-03: Exploring the Different Generations of the UI for Twitter’s Account Pages / Web Science and Digital Libraries (WS-DL) Group at Old Dominion University

 

Figure 1: Mementos of Jack's Twitter account page (2010, 2016, 2018), archived at different points along the Twitter UI timeline and the live version from 2025.

In ‘Exploring the Different Generations of Twitter/X's Tweet UI,’ we discussed changes to the UI for individual tweets through the years. Here, we discuss the changes to the UI of Twitter's (X’s) account pages since 2006. Figure 1 shows an animation of archived versions, or mementos, of Jack Dorsey’s (co-founder of Twitter) Twitter account page (twitter.com/jack) and a live version from 2025. As with tweet UIs, models trained on images of the latest Twitter account page UI may fail to identify elements present in an earlier UI. This is important to study because an account page typically has more elements than an individual tweet UI; as a result, it may be even more challenging for models to generalize across Twitter account page UIs if the different UI generations are not considered.


Since the live web will only show the current UI, we collected archived Twitter account pages from the Internet Archive’s Wayback Machine to establish the timeline. Jack Dorsey posted the first tweet and his account has been well-archived, so we used archived account pages of Jack’s (@jack) account from the Wayback Machine to demonstrate the different generations of Twitter account page UIs.


UI Generations of Twitter Account Pages


Jack posted the first tweet in March 2006. To get the earliest archived account page, we searched the CDX API for Jack’s account URLs. The earliest archived Twitter account page we could find for Jack’s account URL is from November 2006. The curl command and its output are shown below:


curl -s "http://web.archive.org/cdx/search/cdx?url=https://twitter.com/jack&from=2006&to=2007&filter=statuscode:200" | sort -k 2 | head -n 1 | awk '{print "https://web.archive.org/web/" $2 "/" $3}'

https://web.archive.org/web/20061109055900/http://twitter.com:80/jack
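The same lookup can be sketched without shell pipes. A Python version that parses CDX output (space-delimited fields: urlkey, timestamp, original, mimetype, statuscode, digest, length), picks the earliest capture, and builds the memento URL — shown here on canned CDX lines; querying the live API works the same way with urllib:

```python
def earliest_memento(cdx_lines: list[str]) -> str:
    """Pick the earliest capture from CDX output and build a
    Wayback Machine URL: https://web.archive.org/web/<timestamp>/<original>."""
    rows = [line.split() for line in cdx_lines if line.strip()]
    urlkey, timestamp, original, *_rest = min(rows, key=lambda r: r[1])
    return f"https://web.archive.org/web/{timestamp}/{original}"

# Sample CDX lines (digests abbreviated for illustration).
sample = [
    "com,twitter)/jack 20070101000000 http://twitter.com/jack text/html 200 AAAA 1000",
    "com,twitter)/jack 20061109055900 http://twitter.com:80/jack text/html 200 BBBB 1000",
]
```

Sorting CDX rows lexically by the 14-digit timestamp field is equivalent to sorting chronologically, which is why both the shell pipeline and this sketch can use a plain string comparison.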

We divide the timeline from 2006 to 2025 into five generations and analyze the changes to the Twitter account page UIs. The five generations are: Foundational UI, Visual Enhancement Era, Minimalistic Design, Mobile-centric Design, and X Redesign.

Major changes by generation:

- Foundational UI (2006–2010): Orientation of different elements in the header, body, and footer changed. The organization of information on the left side panel changed. A retweet symbol (square-looped arrow) for retweeted tweets and the “@” symbol for mentioning users in tweet replies were introduced.
- Visual Enhancement Era (2011–2016): Page divided into four parts: title, body, left side panel, and right side panel. A background image (cover photo), pinned tweet, uploaded photos and videos, worldwide trends, and a suggestion list for adding users were added. Each tweet was displayed with a profile picture, display name, username, timestamp, and engagement buttons.
- Minimalistic Design (2017–2019): Aesthetic changes to the engagement buttons and attributes of the Stat section. The profile picture changed to a circular shape, verified checkmarks were added to individual tweets, and a “Show this thread” option was added to view reply threads.
- Mobile-centric Design (2020–mid 2023): Redesigned the website and switched to a client-side UI, deprecating the server-side UI, which affected archiving services.
- X Redesign (late 2023–present): Changed domain from Twitter.com to X.com, which affected archiving services. Changed the logo from the bird symbol to X. “Tweets” were renamed “posts” and “retweets” “reposts.”

Foundational UI (2006–2010)


During the Foundational UI generation (2006–2010), the Twitter account page UI changed mainly in the orientation of different elements in the header, body and footer. The organization of information on the left side panel changed over time. The side panel consisted of the user's basic information and different social network attributes (e.g., friends, followers, updates, lists). A retweet symbol (square-looped arrow) for retweeted tweets and “@” symbol for mentioning users in tweet replies were introduced in this generation.


November 2006


The Twitter account page UI in November 2006 (Figure 2) featured a very simple layout: a header with a Twitter logo and a footer with copyright and links to other information (help, about us, contact, etc.). The main body consisted of a square-shaped profile picture, display name/username, latest tweet, a timeline of past tweets with timestamps, and client info (the device used to post the tweet). The page had a custom background color. A side panel on the right held some basic user information: display name/username, location, and personal website link. Other information included counts of friends, followers, and updates, and a list of friends. Bottom buttons included “View all” and “RSS feed” on the left. Top buttons included “With Friends (24h)” and “Previous” on the right.


Figure 2: A Twitter account page archived in November 2006, showing the header with a Twitter logo, main body with profile picture, display name/username, latest tweet, timeline of past tweets with timestamp, and client info. A footer with copyright and links to other information. A side panel on the right has basic user information.

December 2006


The Twitter account page UI in December 2006 (Figure 2) had slight changes. A bio was added in the side panel as a part of basic information. Count of favorites was added along with the previous count attributes.




Figure 2: A Twitter account page archived in December 2006, showing bio added to the right side panel and count of favorites added to previous count attributes.


January 2007


The Twitter account page UI in January 2007 (Figure 3) had slight changes: the display name/username was added to the basic information in the side panel, and timeline tweets were displayed in alternating colors (white/blue). A “Join for Free” button was added at the bottom of the side panel.

Figure 3: A Twitter account page archived in January 2007, showing the display name/username added in basic information to the side panel and timeline tweets were displayed in alternate colors (white-blue).


August 2007


The Twitter account page UI in August 2007 (Figure 4) had timeline information (when the user joined Twitter) added to the basic information in the side panel. The alignment of the attribute counts changed from left to right. The top button “With Friends (24h)” changed to “With Others.” The names and alignment of the bottom buttons changed: “RSS feed” (left) and “Older” (right). The alternating colors between timeline tweets were removed.




Figure 4: A Twitter account page archived in August 2007, showing timeline information added in the basic information to the side panel and the count of the attributes’ alignment changed from left to right.


October 2007


The Twitter account page UI in October 2007 (Figure 5) featured aesthetic changes. The alignment of the timestamp and client info for the latest tweet changed from right to left. Previously, the latest tweet was in a white, square quote box and past tweets in a separate white box; these changed to a single white, square quote box. The side panel was divided into sections with titles: About, Stats, and Following. There were aesthetic changes to the “Join” button in the side panel. The header appeared with “Login/Join” and “Search” buttons. The alignment of the top buttons, “With Others” and “Previous,” changed to the middle. The bottom button “RSS feed” changed to “RSS.”



Figure 5: A Twitter account page archived in October 2007, showing everything in a single white, square quote box and the side panel changed into sections with titles: About, Stats, and Following.


October 2008


The Twitter account page UI in October 2008 (Figure 6) had slight changes. Timestamp and client info color for tweets changed to gray. Month in the timestamp became abbreviated. Organization for the attributes of the Stats section in the side panel changed to followers, following, and updates. About and Stats titles were removed from the side panel. The “Join” button was removed from the side panel. The “Updates” and “Favorites” buttons were added. A banner was added on top with the “Join” button and a bird logo.


Figure 6: A Twitter account page archived in October 2008, showing the organization of the attributes of the Stats section in the side panel changed to followers, following, and updates. A banner was added on top with the “Join” button and a bird logo.


February 2009


The Twitter account page UI in February 2009 (Figure 7) had slight changes in the orientation of the elements. The “RSS feed” was moved to the side panel. The “Search” button was removed from the header. The direction of the bird logo changed to the left on the “Join” button banner. The “@username” convention was used to reply to a tweet. To indicate a tweet reply, “in reply to username” was added along with timestamp and client info to tweets.


Figure 7: A Twitter account page archived in February 2009, showing the “@username” convention was used to reply to a tweet. To indicate a tweet reply, “in reply to username” was added along with timestamp and client info to tweets.


April 2009


The Twitter account page UI in April 2009 (Figure 8) had aesthetic changes to the attributes of the Stat section: followers and following. Count of updates was removed from the Stat section and was placed beside the “Updates” button. The “Older” button on the bottom was replaced with a “More” button.

Figure 8: A Twitter account page archived in April 2009, showing count of updates was removed from Stat section and was placed beside the “Updates” button. The “Older” button on the bottom was  replaced with a “More” button.


July 2009


The Twitter account page UI in July 2009 (Figure 9) had a few changes. The “Updates” button was replaced by “Tweets,” and the “RT @username: tweet” convention was introduced to indicate retweets.


Figure 9: A Twitter account page archived in July 2009, showing the “Updates” button replaced by “Tweets” and the “RT @username: tweet” convention used to indicate retweets.


January 2010


The Twitter account page UI in January 2010 (Figure 10) had a new attribute “Lists” added to the side panel both in the Stat section and as a separate section.


Figure 10: A Twitter account page archived in January 2010, showing a new attribute “Lists” added to the side panel both in the Stat section and as a separate section.


April 2010


The Twitter account page UI in April 2010 (Figure 11) had some changes in the banner and header. The “Login/Join” button on the top changed to the “Sign in” button. The “Join” button on the banner changed to “Sign Up” and the design of the bird logo changed.


Figure 11: A Twitter account page archived in April 2010,  showing the “Login/Join” button on the top changed to the “Sign in” button. The “Join” button on the banner changed to “Sign Up” and the design of the bird logo changed.


October 2010


The Twitter account page UI in October 2010 (Figure 12) had a gray colored retweet symbol added (square-looped arrow) to a retweeted tweet. The “Retweeted by username and ___ others” was added along with the timestamp and client info for tweets.


Figure 12: A Twitter account page archived in October 2010, showing a gray colored retweet symbol added (square-looped arrow) to a retweeted tweet. The “Retweeted by username and ___ others” was added along with the timestamp and client info for tweets.


A summary of the major UI changes for Twitter account pages during the Foundational UI generation is shown in the following slides:

Visual Enhancement Era (2011–2016)


The Visual Enhancement Era spanned 2011 to 2016, during which the Twitter account page UI went through numerous aesthetic changes. The page was divided into four parts: title, body, left side panel, and right side panel. The title had a background image (known as a cover photo), an enlarged profile picture, and the attributes of the Stat section. The body had a header section with tweets, tweet replies, and media. The rest of the body consisted of a pinned tweet (a user-selected tweet that is permanently displayed on the user’s timeline) and other tweets. Each tweet was displayed with a profile picture, display name, username (Twitter handle), timestamp, and engagement buttons. The left side panel consisted of the user’s display name, username, bio, and uploaded photos and videos. The right side panel consisted of a “Sign Up” section, a suggestion list for adding users, and worldwide trends.


December 2011


The Twitter account page UI in December 2011 (Figure 13) had major changes. Basic information was moved from the side panel to a title section on the main body. The title section had a larger profile picture and a display name. An “@” symbol prepended to the username (also known as the Twitter handle) was also added to the title section along with the basic info. The display name had a verified check mark. Timeline tweets had a profile picture, username, and display name added; timestamps and client info were absent. Some tweets had a small white quote box on the right, possibly indicating tweet replies. A “Follow” button was added to the main body. The main body’s header had other buttons: tweets, favorites, following, followers, and lists. The side panel had a lighter design, containing just the counts of the attributes of the Stat section and a footer.


Figure 13: A Twitter account page archived in December 2011, showing the title section having a larger profile picture, display name, “@” symbol prepended to the username, and basic info. Profile picture, username and display name added to timeline tweets. Other buttons added to the main body’s header: tweets, favorites, following, followers, and lists. 


June 2012


The Twitter account page UI in June 2012 (Figure 14) had a major change in the orientation of the elements. The attributes of the Stat section were moved from the side panel to the title section of the main body. The side panel was moved from right to left with a “Sign Up” button. The main body’s header buttons (tweets, favorites, following, followers, lists) and the footer were also moved to the side panel. For each tweet, the timestamp was placed on the right, and location and retweet info were included. The “Follow” button kept the bird logo, but its design was changed.


Figure 14: A Twitter account page archived in June 2012, showing the attributes of the Stat section moved from the side panel to the title section of the main body. The side panel was moved from right to left. The main body’s header buttons: tweets, favorites, following, followers, lists and footer were also moved to the side panel.


October 2012


The Twitter account page UI in October 2012 (Figure 14) had slight changes in the orientation of elements. The left side panel had an uploaded image section. The main body was placed on the right side. The orientation of the attributes’ count of the Stat section changed. One of the notable changes was that the profile had a background image (known as cover photo).


Figure 14: A Twitter account page archived in October 2012, showing the left side panel having an uploaded image section and the main body with a background image (cover photo).


February 2013


The Twitter account page UI in February 2013 (Figure 15)  had a slight change. The section for the uploaded images was absent.



Figure 15: A Twitter account page archived in February 2013, showing the section for the uploaded images was absent.

March 2013


The Twitter account page UI in March 2013 (Figure 16) had the uploaded images section replaced by the worldwide trends section.


Figure 16: A Twitter account page archived in March 2013, showing the uploaded images section replaced by the worldwide trends section.


October 2013


The Twitter account page UI in October 2013 (Figure 17) had a few changes. The timeline tweets had light gray-colored symbols for the engagement buttons: reply, retweet, favorite, and more on the right side.



Figure 17: A Twitter account page archived in October 2013, showing the timeline tweets having light gray-colored symbols for the engagement buttons: reply, retweet, favorite, and more on the right side.


April 2014


The Twitter account page UI in April 2014 (Figure 18) had slight changes. The attributes of the Stat section orientation changed. The retweet symbol’s color changed from gray to light green for “retweeted by.”


Figure 18: A Twitter account page archived in April 2014, showing the attributes of the Stat section orientation changed. The retweet symbol’s color changed from gray to light green for “retweeted by.”


August 2014


The Twitter account page UI in August 2014 (Figure 19) had major changes. There was a “Sign Up” section and worldwide trends on the right side panel. The background image (cover photo) was spread out. The profile picture, display name, username, and about info were placed on the left side. The attributes of the Stat section additionally included photos/videos, favorites, and more options. Options in the header included “Tweets” and “Tweets and Replies.”


Figure 19: A Twitter account page archived in August 2014, showing the background image (cover photo) spread out,  a “Sign Up” section and worldwide trends on the right side panel, and the profile picture, display name, username, and about placed on the left side.


October 2014


The Twitter account page UI in October 2014 (Figure 20) had slight changes. The “More” option was replaced by “Lists” for the attributes of the Stat section. Other options in the header included “photos & videos” along with tweets and replies. The design for the “Sign up” section was changed slightly.


Figure 20: A Twitter account page archived in October 2014, showing the “More” option replaced by “Lists” for the attributes of the Stat section. Other options in the header included “photos & videos” along with tweets and replies.


December 2014


The Twitter account page UI in December 2014 (Figure 21) had uploaded photos and videos added to the left side panel.


Figure 21: A Twitter account page archived in December 2014, showing uploaded photos and videos added to the left side panel.


June 2015


The Twitter account page UI in June 2015 (Figure 22) had a banner added to the header with “Search” and “Login” options. A suggestion list for adding users was added to the right side panel.


Figure 22: A Twitter account page archived in June 2015, showing a banner added on the header having “Search” and “Login” options and a suggestion list for adding users added to the right side panel.


November 2015


The Twitter account page UI in November 2015 (Figure 23) had some aesthetic changes. The “Favorite” (star symbol) was replaced by “Like” (heart symbol). The top banner was removed. The date of birth was added in the bio info. There were aesthetic changes to the “Sign Up” section in the right side panel.



Figure 23: A Twitter account page archived in November 2015, showing the “Favorite” (star symbol) was replaced by “Like” (heart symbol) and the date of birth was added in the bio info.


February 2016


The Twitter account page UI in February 2016 (Figure 24) had slight changes. The uploaded photos and videos section was added to the left side panel. The color of the retweet symbol changed from light green to neon green for “retweeted by.”




Figure 24: A Twitter account page archived in February 2016, showing the uploaded photos and videos section was added to the left side panel. The color of the retweet symbol changed from light green to neon green for “retweeted by.”


March 2016


The Twitter account page UI in March 2016 (Figure 25) had the header “Photos & Videos” replaced by “Media.”



Figure 25: A Twitter account page archived in March 2016, showing the header “Photos & Videos” replaced by “Media.”


July 2016


The Twitter account page UI in July 2016 (Figure 26) had “Pinned tweet” added, which is a tweet that a user selected to be permanently displayed on their timeline. The pinned tweet is usually used to highlight important content such as announcements, achievements, links, or statements. 


Figure 26: A Twitter account page archived in July 2016, showing a “Pinned tweet” added.


A summary of the major UI changes for Twitter account pages during the Visual Enhancement Era is shown in the following slides:

Minimalistic Design (2017–2019)


The Minimalistic Design (2017–2019) generation displayed the Twitter account page with a more visually balanced appearance. The engagement buttons and attributes of the Stat section had some aesthetic changes. The profile picture changed to circular shape, verified checkmarks were added to individual tweets, and “Show this thread” option was added to view reply threads.


July 2017


The Twitter account page UI in July 2017 (Figure 27) had a few aesthetic changes. The profile picture was changed to a circular shape. A verified checkmark was added to individual tweets. There were aesthetic changes to the “Sign Up” section on the right side panel and to the engagement buttons. The attributes of the Stat section in the title appeared in lowercase. A downward arrow symbol appeared in the top right corner of each tweet, containing “More” options.


Figure 27: A Twitter account page archived in July 2017, showing a circular shaped profile picture, verified checkmark added to individual tweets, and aesthetic changes to engagement buttons.


December 2018–2019


The Twitter account page UI in December 2018 (Figure 28) had the “Show this thread” option added to view reply threads. The account page UI then remained unchanged through 2019.


Figure 28: A Twitter account page archived in December 2018, showing the “Show this thread” option added to view reply threads. The Twitter account page from 2019 also displayed the same UI.


A summary of the major UI changes to Twitter account pages during the Minimalistic Design generation is shown in the following slides:

Mobile-centric Design (2020–Mid 2023)


During the Mobile-centric Design (2020–Mid 2023) generation, Twitter redesigned the desktop version and deprecated the server-side UI. When the change occurred in 2020, it affected the web archiving services. In 2020, archived Twitter account pages did not display the new UI changes. In 2021, archived pages still displayed the old UI, but content was missing from the side panels. The new UI changes were finally reflected in the Wayback Machine around June 2022, though the side panels were missing from the archived version.


June 2020


In June 2020, Twitter deprecated the server-side UI (legacy version) and switched to a new client-side UI. Twitter also redesigned the desktop version to adopt its mobile version’s features. However, these UI changes were only observed in the Wayback Machine later, around June 2022. The Twitter account page UI from June 2020 (Figure 29) displayed the same UI as the Minimalistic Design generation in the web archive.


Figure 29: A Twitter account page UI archived in June 2020, showing the same UI as Minimalistic Design generation.


December 2021


The Twitter account page UI from December 2021 (Figure 30) had content missing from the side panels. The “uploaded photos and videos” section was absent from the left side panel. The user suggestion list and worldwide trends were absent from the right side panel.


Figure 30: A Twitter account page UI archived in December 2021, showing missing content from side panels.


June 2022


In June 2022, the new UI started to appear in the web archives. For Jack’s Twitter account page, we observed that the new UI first appeared in the archive on June 30, 2022. However, the side panels, such as the navigation menus and trends sections, were missing in the archived version (Figure 31).


Figure 31: A Twitter account page UI archived in June 2022, showing that the side panels, such as the navigation menus and trends sections, were missing in the archived version.


March 2023


The Twitter account page UI in March 2023 (Figure 32) still had the side panels missing in the archived version. The “Tweets & replies” header in the body changed to just “Replies.”


Figure 32: A Twitter account page UI archived in March 2023, showing the “Tweets & replies” changed to only “Replies” in the header of the body.


A summary of the major UI changes to Twitter account pages during the Mobile-centric Design generation is shown in the following slides:


X Redesign (Late 2023–Present)


In the X Redesign (Late 2023–Present) generation, the major change was the rebranding of Twitter to X. The logo changed from the bird symbol to the X. “Tweets” were renamed “posts” and “retweets” were renamed “reposts.” Before the transition from the Twitter.com domain to X.com, archived Twitter account pages had incomplete replays. The transition itself significantly affected the archiving services: there were failed replays for Twitter.com URLs and incomplete replays for X.com URLs.


September 2023


In July 2023, Twitter was rebranded to X. The Twitter account page UI in September 2023 (Figure 33) had the bird logo changed to the X logo. Although the design of the bird logo changed throughout past generations, it remained Twitter’s iconic symbol until the platform rebranded to X. “Tweets” were renamed “posts” and “retweets” were renamed “reposts.” The side panels were still missing in the archived version.


Figure 33: A Twitter account page UI archived in September 2023, showing the bird logo changed to the X logo and “Tweets” renamed to “posts.”


December 2023


The Twitter account page UI in December 2023 (Figure 34) had an incomplete replay of the side panels. The right side panel had a failed replay of the trends section and user suggestion list. The tweets in the timeline also failed to replay. My WS-DL colleagues Himarsha and Kritika discussed how the change of UI impacted tweet replays in the web archives in a series of blog posts (1, 2, 3, 4).



Figure 34: A Twitter account page UI archived in December 2023, showing a failed replay of the trends section and user suggestion list in the right side panel. The tweets in the timeline also failed to replay.


February 2024


The Twitter account page UI in February 2024 (Figure 35) had tweets replayed in the timeline. However, the right side panel had a failed replay of the trends section, while the user suggestion list replayed completely. Other changes were observed in the engagement part of the timeline tweets. The tweets had additional buttons: view counts (bar chart symbol), bookmark (ribbon symbol), “more” options (ellipsis symbol), and share (upward arrow symbol).



Figure 35: A Twitter account page UI archived in February 2024, showing a failed replay of the trends section in the right side panel, the user suggestion list replayed completely, and additional buttons added to the tweets.


May 2024


On May 17, 2024, the domain name was changed from Twitter.com to X.com. For URLs using the Twitter.com domain, the Wayback Machine failed to replay the Twitter account page content for the 2024 mementos of Jack’s account (Figure 36). For the X.com domain, the archived mementos were redirected to a page showing “The page is unavailable for archiving” (Figure 37).


Figure 36: A Twitter account page UI archived in December 2024, showing the Wayback Machine failed to replay the Twitter account page content.


Figure 37: An X account page UI archived in December 2024, showing the Wayback Machine redirected to a page showing “The page is unavailable for archiving.”
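For readers who want to check memento availability themselves, the Internet Archive's CDX API can enumerate captures of a given URL. The sketch below is a minimal helper (the endpoint and query parameters are the CDX API's documented ones; the function name and field selection are my own choices):

```python
from urllib.parse import urlencode

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def build_cdx_query(page_url, year_from, year_to, limit=10):
    """Build a CDX API query URL that lists mementos of page_url
    captured between year_from and year_to (inclusive)."""
    params = {
        "url": page_url,
        "from": str(year_from),
        "to": str(year_to),
        "output": "json",
        "fl": "timestamp,original,statuscode",
        "limit": str(limit),
    }
    return CDX_ENDPOINT + "?" + urlencode(params)

# Enumerate 2024 captures of Jack's account page:
query = build_cdx_query("twitter.com/jack", 2024, 2024)
```

Fetching this URL (e.g., with urllib.request) returns a JSON array of capture records. Note that a 200 status code in the CDX index does not guarantee the page replays completely, which is exactly the gap this post documents.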


April 2025


In 2025, URLs using the Twitter.com domain still result in failed replays of the Twitter account page in the Wayback Machine. A memento of the account page under the X.com domain from April 2025 results in an incomplete replay (Figure 38).


Figure 38: An X account page archived in April 2025, showing incomplete replay of the Twitter account page.


Live Twitter Account Page UI – June 2025


Figure 39 shows the live version of Jack’s Twitter account page UI from June 2025. The UI shows reduced information: the navigation menus on the left side panel and the user suggestion list and trends on the right side panel are absent, because the user is not authenticated. Other changes include “Highlights” added to the header of the body, Grok (AI chatbot symbol) added to the UI, and an affiliation badge/company symbol added to the display name.


Figure 39: Live Twitter account page example from June 2025.


A summary of the major UI changes to Twitter account pages during the X Redesign generation is shown in the following slides:

Summary


We previously discussed the different changes that occurred in the UI of individual tweets during 2006–2025. Along with individual tweets, it is also important to observe the changes that occurred in Twitter account pages. A Twitter account page consists of more diverse elements, which were present or absent at different times. Understanding these UI differences will allow researchers to navigate archived pages more effectively and utilize them for different research tasks. In summary, the Foundation UI generation established a foundational HTML structure for the desktop interface. The Visual Enhancement Era brought major visual changes to the Twitter account page UI. The Minimalistic Design generation did not introduce many changes, but rather contributed to a visually balanced and aesthetically pleasing look. The Mobile-centric Design and X Redesign generations introduced significant challenges and changes for web archiving services.


To recap, the following table is a summary of the timeline of changes in the UI of Twitter account pages:


Timeline Changes
November 2006 Header: Twitter logo, main body: square-shaped profile picture, display name/username, latest tweet, timeline of past tweets with timestamp, and client info. Footer: Copyright and links to other information. Right side panel: basic user information. Bottom buttons: “View all” and “RSS feed.” Top buttons: “With Friends (24h)” and “Previous.”
December 2006 Bio added to the right side panel and count of favorites added to previous count attributes.
January 2007 Display name/username added in basic information to the side panel and timeline tweets displayed in alternate colors (white-blue). A “Join for Free” button on the bottom of the side panel was added.
August 2007 Timeline information added in basic information to the side panel. The count of the attributes’ alignment changed from left to right. The top button “With Friends (24h)” changed to “With Others.” Alignment changed: “RSS feed” (left) and “Older” (right). Alternate colors displayed between timeline tweets were absent.
October 2007 Everything in a single white, square quote box. The side panel changed in an organized manner with titles: About, Stats, and Following. The header appeared with a “Login/Join” and “Search” button. The alignment of the top buttons “With Others” and “Previous” changed to middle. The bottom button “RSS feed” changed to “RSS.”
October 2008 Timestamp and client info color for tweets changed to gray. Month in timestamp became abbreviated. Organization for the Stats in the side panel changed to followers, following, and updates. About and Stats titles were removed from the side panel. The “Join” button was removed from the side panel. “Updates” and “Favorites” buttons existed, but showed no count. A banner was added on top with the “Join” button and a bird logo.
February 2009 The “RSS feed” was moved to the side panel. The “Search” button was removed from the header. The direction of the bird logo changed to the left on the “Join” button banner. The “@username” convention was used to reply to a tweet. To indicate a tweet reply, “in reply to username” was added along with timestamp and client info to tweets.
April 2009 Count of updates was removed from Stat organization and was placed beside the “Updates” button. The “Older” button on the bottom was replaced with a “More” button.
July 2009 The “Updates” button was replaced by “Tweets.” The use of “RT@username: tweet” convention was used to indicate a retweeted tweet.
January 2010 A new attribute “Lists” added to the side panel both in Stat count and as a separate section.
April 2010 The “Login/Join” button on the top changed to the “Sign in” button. The “Join” button on the banner changed to “Sign Up” and the design of the bird logo changed.
October 2010 A gray colored retweet symbol added (square-looped arrow) to a retweeted tweet. The “Retweeted by username and # others” was added along with the timestamp and client info for tweets.
December 2011 Title section: larger profile picture, a display name, an “@” symbol prepended to the username (also known as Twitter handle), and basic info. A verified check mark added to the display name. Profile picture, username and display name added to timeline tweets. Timestamp, client info were absent in the tweets. A “Follow” button was added to the main body. The main body had other buttons: tweets, favorites, following, followers, and lists. The side panel appeared to have a lighter design and just had count of the Stat attributes and a footer.
June 2012 The Stat attributes section was moved from the side panel to the title section of the main body. The side panel was moved from right to left with a “Sign Up” button. The main body’s header buttons: tweets, favorites, following, followers, lists and the footer were also moved to the side panel. For each tweet, the timestamp was placed on the right. The location and retweet info was placed for each tweet. The “Follow” button had the bird logo, but the design was changed.
October 2012 An uploaded image section on the left side panel. The main body on the right side. The orientation of the Stat attributes count changed. The profile had a background image (known as cover photo).
February 2013 The section for the uploaded images was absent.
March 2013 The uploaded images section was replaced by the worldwide trends section.
October 2013 The timeline tweets had light gray-colored symbols for the engagement buttons: reply, retweet, favorite, and more on the right side.
April 2014 The Stat attributes section’s orientation changed. The retweet symbol’s color changed from gray to light green for “retweeted by.”
August 2014 A “Sign Up” section and worldwide trends on the right side panel. The background image (cover photo) was spread out. The profile picture, display name, username, and about was placed on the left side. The Stat attributes section additionally had photos/videos, favorites, and more options. Options in the header included “Tweets” and “Tweets and Replies.”
October 2014 The “More” option was replaced by “Lists” for the Stat attributes section. Other options in the header included “photos & videos” along with tweets and replies. The design for the “Sign up” section was changed slightly.
December 2014 Uploaded photos and videos added to the left side panel.
November 2015 The “Favorite” (star symbol) was replaced by “Like” (heart symbol). The top banner was removed. The date of birth was added in the bio info. There were aesthetic changes to the “Sign Up” section in the right side panel.
February 2016 The uploaded photos and videos section was added to the left side panel. The color of the retweet symbol changed from light green to neon green for “retweeted by.”
March 2016 The header “Photos & Videos” was replaced by “Media.”
July 2016 “Pinned tweet” added, which is a tweet that a user selects to be permanently displayed on their timeline.
July 2017 Profile picture changed to circular shape. A verified checkmark added to individual tweets. Aesthetic changes to the “Sign Up” section on the right side panel. Aesthetic changes to the engagement buttons. The Stat attributes section on the title appeared in lowercase. A downward arrow symbol appeared on the right top corner of a tweet which included “More” options.
December 2018–2019 The “Show this thread” option was added to view reply threads.
June 2020 Redesigned website and switched to a new client-side UI. However, the new UI changes reflected in the Wayback Machine later around June 2022.
December 2021 The “uploaded photos and videos” section was absent from the left side panel. The user suggestion list and worldwide trends were absent from the right side panel.
June 2022 New UI changes started to appear in the Wayback Machine. The side panels such as navigation menus or trends sections were missing in the archived version.
March 2023 The side panels were still missing in the archived version. The “Tweets & replies” header in the body changed to only “Replies.”
September 2023 The bird logo changed to X logo in the Twitter account page UI. “Tweets” were renamed as “posts” and “retweets” as “reposts.” The side panels were still missing in the archived version.
December 2023 The right side panel had a failed replay of the trends section and user suggestion list. The tweets in the timeline also failed to replay.
February 2024 The right side panel had a failed replay of the trends section, but the user suggestion list replayed completely. The tweets had additional buttons added: view counts (bar chart symbol), bookmark (ribbon symbol), “more” options (ellipsis symbol), and share option (upward arrow symbol).
May 2024 For URLs using the Twitter.com domain, the Wayback Machine failed to replay the Twitter account page content for the 2024 mementos. For the X.com domain, the archived mementos were redirected to a page showing “The page is unavailable for archiving.”
April 2025 The Twitter.com domain URLs result in failed replays of the Twitter account page in the Wayback Machine. A Twitter account page UI of the X.com domain results in incomplete replays of the Twitter account page.
Live Twitter Account Page UI - June 2025 The UI shows reduced information: the navigation menus on the left side panel and the user suggestion list and trends on the right side panel are absent, because the user is not authenticated. Other changes: “Highlights” added to the header of the body, Grok (AI chatbot symbol) added to the UI, and affiliation badge/company symbol added to the display name.


---------- Tarannum Zaki (@tarannum_zaki)

Ogden Nash makes a splash / John Mark Ockerbloom

"I would live all my life in nonchalance and insouciance
Were it not for making a living, which is rather a nouciance"

Ogden Nash had tried to make a living teaching, selling bonds, and writing ad copy. But after he sent some satirical lines to The New Yorker, the magazine offered him a job. Nash’s verse debuted there in 1930, the start of a long career of humor and wordplay. His earliest published poems, including the couplet above, join the public domain in 29 days.

Mind The GAAP / David Rosenthal

Senator Everett Dirksen is famously alleged to have remarked "a billion here, a billion there, pretty soon you're talking real money".

Source
Oracle is talking real money; they're borrowing $1.64B each working day. Mr. Market is skeptical that the real money is going to be repaid, as Caleb Mutua reports in Morgan Stanley Warns Oracle Credit Protection Nearing Record High:
A gauge of risk on Oracle Corp.’s (ORCL) debt reached a three-year high in November, and things are only going to get worse in 2026 unless the database giant is able to assuage investor anxiety about a massive artificial intelligence spending spree, according to Morgan Stanley.

A funding gap, swelling balance sheet and obsolescence risk are just some of the hazards Oracle is facing, according to Lindsay Tyler and David Hamburger, credit analysts at the brokerage. The cost of insuring Oracle Corp.’s debt against default over the next five years rose to 1.25 percentage point a year on Tuesday, according to ICE Data Services.
Mutua reports that:
The company borrowed $18 billion in the US high-grade market in September. Then in early November, a group of about 20 banks arranged a roughly $18 billion project finance loan to construct a data center campus in New Mexico, which Oracle will take over as tenant.

Banks are also providing a separate $38 billion loan package to help finance the construction of data centers in Texas and Wisconsin developed by Vantage Data Centers,
Source
But notice that only $18B of this debt appears on Oracle's balance sheet. Despite that, their credit default swaps spiked and the stock dropped 29% in the last month.

Below the fold I look into why Oracle's and other hyperscalers' desperate efforts to keep the vast sums they're borrowing off their books aren't working.

Part of the reason the market is unhappy started in mid-September with The Economist's The $4trn accounting puzzle at the heart of the AI cloud. It raised the issue that I covered in Depreciation, that the hardware that represents about 60% of the cost of a new AI data center doesn't last long. It took a while for the financial press to focus on the issue, but now they have.

The most recent one I've seen was triggered by the outage at the CME (caused by overheating in Chicago in November!). In AI Can Cook the Entire Market Now Tracy Alloway posted part of the transcript of an Odd Lots podcast with Paul Kedrosky pointing out a reason I didn't cover why the GPUs in AI data centers depreciate quickly:
When you run using the latest, say, an Nvidia chip for training a model, those things are being run flat out, 24 hours a day, seven days a week, which is why they're liquid-cooled, they're inside of these giant centers where one of your primary problems is keeping them all cool. It's like saying ‘I bought a used car and I don't care what it was used for.’ Well, if it turns out it was used by someone who was doing like Le Mans 24 hours of endurance with it, that's very different even if the mileage is the same as someone who only drove to church on Sundays.

These are very different consequences with respect to what's called the thermal degradation of the chip. The chip's been run hot and flat out, so probably its useful lifespan might be on the order of two years, maybe even 18 months. There's a huge difference in terms of how the chip was used, leaving aside whether or not there's a new generation of what's come along. So it takes us back to these depreciation schedules.
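The earnings impact of those depreciation schedules is simple arithmetic. The figures below are hypothetical, purely to show how halving the assumed lifespan doubles the annual expense booked against earnings:

```python
def annual_straight_line(cost, salvage, useful_life_years):
    """Annual straight-line depreciation expense."""
    return (cost - salvage) / useful_life_years

# Hypothetical $10B fleet of GPUs with negligible salvage value:
gpu_cost = 10_000_000_000
for life in (6, 4, 2):  # assumed useful lives in years
    expense = annual_straight_line(gpu_cost, 0, life)
    print(f"{life}-year life: ${expense / 1e9:.2f}B per year")
```

Moving from a 6-year schedule to the 18- to 24-month lifespan Kedrosky describes roughly triples the annual expense on the same hardware, which is why the depreciation assumption matters so much to reported profits.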
There was a similar problem after the Ethereum merge:
73% of Ethereum miners have just given up: “About 10.6 million RTX 3070 equivalents have stopped mining since the merge.”

We strongly recommend that you do not hit eBay for a cheap video card, despite the listings reassuring you that this card was only used by a little old lady to play Minecraft on Sundays and totally not for crypto mining, and that you should ignore the burnt odor and the charred RAM. Unless you’re poor, and the card’s so incredibly cheap that you’re willing to play NVidia Roulette.

How well do miners treat their precious babies? “GPU crypto miners in Vietnam appear to be jet washing their old mining kit before putting the components up for sale.” There are real cleaning methods that involve doing something like this with liquid fluorocarbons — but the crypto miners seem to be using just water.
But this depreciation problem is only one part of why the market is skeptical of the hyperscalers' technique for financing their AI data centers. The technique is called Conduit Debt Financing, and Les Barclays' Unpacking the Mechanics of Conduit Debt Financing provides an accessible explanation of how it works:
Conduit debt financing is a structure where an intermediary entity (the “conduit”) issues debt securities to investors and passes the proceeds through to an end borrower. The key feature distinguishing conduit debt from regular corporate bonds is that the conduit issuer has no substantial operations or assets beyond the financing transaction itself. The conduit is purely a pass-through vehicle, the debt repayment relies entirely on revenues or assets from the ultimate borrower.

Think of it this way: Company A wants to borrow money but doesn’t want that debt appearing on its balance sheet or affecting its credit rating. So it works with a conduit entity, Company B, which issues bonds to investors. Company B takes that capital and uses it to build infrastructure or acquire assets that Company A needs. Company A then enters into long-term lease or service agreements with Company B, and those payments service the debt. On paper, Company A is just a customer making payments, not a debtor owing bondholders.

The structure creates separation. The conduit issuer’s creditworthiness depends on the revenue stream from the end user, not on the conduit’s own balance sheet (because there isn’t really one). This is why conduit debt is often referred to as “pass-through” financing, the economics flow through the conduit structure to reach the underlying obligor.
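The pass-through described above can be sketched in a few lines. This toy model (interest-only, with invented numbers) just shows where the debt sits on whose books:

```python
def conduit_year(bond_principal, bond_rate, lease_payment):
    """One year of cash flows through the conduit (interest-only,
    for simplicity). Returns what each party's books show."""
    debt_service = bond_principal * bond_rate
    conduit_books = {
        "debt_outstanding": bond_principal,  # on the conduit's balance sheet
        "lease_income": lease_payment,
        "interest_paid": debt_service,
    }
    sponsor_books = {
        "debt_outstanding": 0,               # nothing on the sponsor's sheet
        "lease_expense": lease_payment,
    }
    return conduit_books, sponsor_books

# Invented figures loosely sized to the deals discussed below:
conduit, sponsor = conduit_year(27.3e9, 0.065, 2.0e9)
```

The point of the structure is visible in the two dictionaries: the full bond principal sits with the conduit, while the sponsor reports only a lease expense, even though the sponsor's payments are what service the debt.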
The article continues to examine Meta's deal in great detail, and notes some of the legal risks of this technique:
Legal risks when things break: Substantive consolidation (court merges conduit with sponsor), recharacterization (lease treated as secured financing), and fraudulent transfer challenges. The structures haven’t been stress-tested yet because hyperscalers are wildly profitable. But if AI monetization disappoints or custom silicon undercuts demand, we’ll discover whether bondholders have secured claims on essential infrastructure or are functionally unsecured creditors of overleveraged single-purpose entities.
The article asks the big question:
Why would Meta finance this via the project finance markets? And why does it cost $6.5 billion more?

That’s how much more Meta is paying to finance this new AI data center using the project finance market versus what they could have paid had they used traditional corporate debt. So why on earth is this being called a win? And even crazier, why are other AI giants like Oracle and xAI looking to copy it?
The $6.5B is the total of the 1% extra interest above Meta's corporate bond rate over the 20 years.
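As a back-of-envelope check (the post doesn't state the exact financed principal, so the figures here are assumptions): an undiscounted 1% rate premium on roughly $32.5B over 20 years totals about $6.5B, while on the $27.3B bond issue alone it would come to about $5.5B.

```python
def extra_interest_total(principal, rate_premium, years):
    """Undiscounted total of the extra interest paid at a given
    rate premium over the life of the financing."""
    return principal * rate_premium * years

# On the ~$27.3B bond issue alone, a 1% premium over 20 years:
print(f"${extra_interest_total(27.3e9, 0.01, 20) / 1e9:.2f}B")
# An assumed financed amount of ~$32.5B reproduces the ~$6.5B figure:
print(f"${extra_interest_total(32.5e9, 0.01, 20) / 1e9:.2f}B")
```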

Meta data center
If Conduit Debt Financing is a standard tool of project finance, why is Mr. Market unhappy with the hyperscalers' use of it? Jonathan Weil's somewhat less detailed look at Meta's $27B deal in AI Meets Aggressive Accounting at Meta’s Gigantic New Data Center reveals how they are pushing the envelope of GAAP (Generally Accepted Accounting Principles):
Construction on the project was well under way when Meta announced a new financing deal last month. Meta moved the project, called Hyperion, off its books into a new joint venture with investment manager Blue Owl Capital. Meta owns 20%, and funds managed by Blue Owl own the other 80%. Last month, a holding company called Beignet Investor, which owns the Blue Owl portion, sold a then-record $27.3 billion of bonds to investors, mostly to Pimco.

Meta said it won’t be consolidating the joint venture, meaning the venture’s assets and liabilities will remain off Meta’s balance sheet. Instead Meta will rent the data center for as long as 20 years, beginning in 2029. But it will start with a four-year lease term, with options to renew every four years.

This lease structure minimizes the lease liabilities and related assets Meta will recognize, and enables Meta to use “operating lease,” rather than “finance lease,” treatment. If Meta used the latter, it would look more like Meta owns the asset and is financing it with debt.
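The operating-vs-finance distinction turns partly on quantitative indicators under ASC 842, one of which is whether the present value of the lease payments amounts to "substantially all" (commonly benchmarked near 90%) of the asset's fair value. The sketch below uses invented numbers solely to show why a four-year committed term keeps the recognized present value small:

```python
def pv_of_payments(annual_payment, rate, years):
    """Present value of a level stream of annual lease payments."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

def substantially_all(pv_payments, asset_fair_value, threshold=0.90):
    """One ASC 842 finance-lease indicator: PV of payments is
    'substantially all' of the asset's fair value."""
    return pv_payments / asset_fair_value >= threshold

# Hypothetical: $2B/year payments, 6% discount rate, $27B asset.
pv_4yr = pv_of_payments(2.0e9, 0.06, 4)    # committed 4-year term
pv_20yr = pv_of_payments(2.0e9, 0.06, 20)  # full 20-year horizon
```

With these invented numbers the four-year PV is roughly a quarter of the asset's value, far from the threshold, while the twenty-year PV is around 85% of it, which illustrates why recognizing only the committed term helps support operating-lease treatment.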
Under GAAP, when would Meta be required to treat it as a finance lease?
The joint venture is what is known in accounting parlance as a variable interest entity, or VIE for short. That term means the ownership doesn’t necessarily reflect which company controls it or has the most economic exposure. If Meta is the venture’s “primary beneficiary”—which is another accounting term of art—Meta is required to consolidate it.

Under the accounting rules, Meta is the primary beneficiary if two things are true. First, it must have “the power to direct the activities that most significantly impact the VIE’s economic performance.” Second, it must have the obligation to absorb significant losses of the VIE, or the right to receive significant benefits from it.
Does Meta have “the power to direct the activities" at the data center it will operate?:
Blue Owl has control over the venture’s board. But voting rights and legal form aren’t determinative for these purposes. What counts under the accounting rules is Meta’s substantive power and economic influence. Meta in its disclosures said “we do not direct the activities that most significantly impact the venture’s economic performance.” But the test under the accounting rules is whether Meta has the power to do so.
Does Meta receive "significant benefits"? Is it required to "absorb losses"?:
The second test—whether Meta has skin in the game economically—has an even clearer answer. Meta has operational control over the data center and its construction. It bears the risks of cost overruns and construction delays. Meta also has provided what is called a residual-value guarantee to cover bondholders for the full amount owed if Meta doesn’t renew its lease or terminates early.
The lease is notionally for 20 years but Meta can get out every four years. Is Meta likely to terminate early? In other words, how likely in 2041 is Meta to need an enormous 16-year-old data center? Assuming that the hardware has an economic life of 2 years, the kit representing about 60% of the initial cost would be 8 generations behind the state of the art. In fact, 60% of the cost is likely to be obsolete by the first renewal deadline, even if we assume Nvidia won't actually be on the one-year cadence it has announced.

But what about the other 40%? It has a longer life, but not that long. The reason everyone builds new data centers is that the older ones can't deliver the power and cooling current Nvidia systems need. 80% of recent data centers in China are empty because they were built for old systems.

But the new ones will be obsolete soon:
Today, Nvidia's rack systems are hovering around 140kW in compute capacity. But we've yet to reach a limit. By 2027, Nvidia plans to launch 600kW racks which pack 576 GPU dies into the space once occupied by just 32.
Current data centers won't handle these systems; indeed, how to build data centers that do is a research problem:
To get ahead of this trend toward denser AI deployments, Digital Realty announced a research center in collaboration with Nvidia in October.

The facility, located in Manassas, Virginia, aims to develop a new kind of datacenter, which Nvidia CEO Jensen Huang has taken to calling AI factories, that consume power and churn out tokens in return.
If the design of data centers for Nvidia's 2027 systems is only now being researched, how likely is it that Meta will renew the lease on a data center built for Nvidia's 2025 systems in 2041? So while the risk that Meta will terminate the lease in 2029 is low, termination before 2041 is certain. And thus so are residual-value guarantee payments.

How does the risk of non-renewal play out under GAAP?
Another judgment call: Under the accounting rules, Meta would have to include the residual-value guarantee in its lease liabilities if the payments owed are “probable.” That could be in tension with Meta’s assumption that the lease renewal isn’t “reasonably certain.”

If renewal is uncertain, the guarantee is more likely to be triggered. But if the guarantee is triggered, Meta would have to recognize the liability.
Weil sums it up concisely:
Ultimately, the fact pattern Meta relies on to meet its conflicting objectives strains credibility. To believe Meta’s books, one must accept that Meta lacks the power to call the shots that matter most, that there’s reasonable doubt it will stay beyond four years, and that it probably won’t have to honor its guarantee—all at the same time.
David Sacks Nov 6
These accounting shenanigans explain why Sam Altman said the quiet part out loud recently and then had to walk it back. Jose Antonio Lanz reports this in OpenAI Sought Government Loan Guarantees Days Before Sam Altman's Denial (my emphasis):
OpenAI explicitly requested federal loan guarantees for AI infrastructure in an October 27 letter to the White House—which kindly refused the request, with AI czar David Sacks saying that at least 5 other companies could take OpenAI’s place—directly contradicting CEO Sam Altman's public statements claiming the company doesn't want government support.

The 11-page letter, submitted to the Office of Science and Technology Policy, called for expanding tax credits and deploying "grants, cost-sharing agreements, loans, or loan guarantees to expand industrial base capacity" for AI data centers and grid components. The letter detailed how "direct funding could also help shorten lead times for critical grid components—transformers, HVDC converters, switchgear, and cables—from years to months."
After this PR faux pas some less obvious way taxpayer dollars could keep the AI bubble inflating had to be found. Just over two weeks later Thomas Beaumont reported that Trump signs executive order for AI project called Genesis Mission to boost scientific discoveries:
Trump unveiled the “Genesis Mission” as part of an executive order he signed Monday that directs the Department of Energy and national labs to build a digital platform to concentrate the nation’s scientific data in one place.

It solicits private sector and university partners to use their AI capability to help the government solve engineering, energy and national security problems, including streamlining the nation’s electric grid, according to White House officials who spoke to reporters on condition of anonymity to describe the order before it was signed.
This appears to be a project of David Sacks, the White House AI advisor and a prominent member of the "PayPal Mafia". Sacks was the subject of a massive, 5-author New York Times profile entitled Silicon Valley’s Man in the White House Is Benefiting Himself and His Friends:
  • Mr. Sacks has offered astonishing White House access to his tech industry compatriots and pushed to eliminate government obstacles facing A.I. companies. That has set up giants like Nvidia to reap an estimated $200 billion in new sales.
  • Mr. Sacks has recommended A.I. policies that have sometimes run counter to national security recommendations, alarming some of his White House colleagues and raising questions about his priorities.
  • Mr. Sacks has positioned himself to personally benefit. He has 708 tech investments, including at least 449 stakes in companies with ties to artificial intelligence that could be aided directly or indirectly by his policies, according to a New York Times analysis of his financial disclosures.
  • His public filings designate 438 of his tech investments as software or hardware companies, even though the firms promote themselves as A.I. enterprises, offer A.I. services or have A.I. in their names, The Times found.
  • Mr. Sacks has raised the profile of his weekly podcast, “All-In,” through his government role, and expanded its business.
The article quotes Steve Bannon:
Steve Bannon, a former adviser to Mr. Trump and a critic of Silicon Valley billionaires, said Mr. Sacks was a quintessential example of ethical conflicts in an administration where “the tech bros are out of control.”

“They are leading the White House down the road to perdition with this ascendant technocratic oligarchy,” he said.
David Sacks Nov 24
Gary Marcus asked Has the bailout of generative AI already begun?:
“The way this works”, said an investor friend to me this morning: “is that when Nvidia is about to miss their quarter, Jen Hsun calls David Sacks, who then gets this government initiative to place a giant order for chips that go into a warehouse.”

I obviously can’t confirm or deny that actually happened. My friend might or might not have been kidding. But either way the White House’s new Science and AI program, Genesis, announced by Executive Order on Monday, does seem to involve the government buying a lot of chips from a lot of AI companies, many of which are losing money.

And David Sacks’s turnaround from “read my lips, no AI bailout” (November 6) to the “we can’t afford to [let this all crash]” tweet (November 24) came just hours before the Genesis announcement.
I think the six companies Sacks was talking about are divided into two groups:
  • OpenAI, Anthropic and xAI, none of whom have a viable business model.
  • Meta, Google and Microsoft, all of whom are pouring the cash from their viable business models into this non-viable business.
This is why the hyperscalers are taking desperate financial measures. They are driven by FOMO, but they all see the probability that the debt won't be paid back. Where is the revenue to repay it going to come from? It isn't going to come from consumers, because edge inference is good enough for almost all of them (which is why 92% of OpenAI's customers pay $0). And it isn't going to come from companies laying off hordes of low-paid workers, because those workers are low-paid: the savings are too small to matter.

So before they need to replace the hardware representing 60% of the loan's value with the next generation in 2027, they need to find enterprise generative AI applications so wildly profitable for their customers that those customers will pay enough over the cost of running the applications to cover not just the payments on the loans but also another 30% of the loan value every year. For Meta alone this is around $30B a year!
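The debt-service arithmetic above can be made concrete with a toy calculation. The 60% hardware-replacement share and the 30%-per-year figure come from the text; the $100B principal is an illustrative assumption chosen so the yearly figure lands near the quoted $30B, not a reported number:

```python
# Back-of-the-envelope sketch of the debt-service math described above.
# All figures are illustrative assumptions, not reported financials.
loan = 100e9               # assumed loan principal backed by GPU collateral
replacement_share = 0.60   # share of loan value in hardware needing replacement by 2027
annual_extra = 0.30        # extra share of loan value needed per year beyond loan payments

hardware_refresh = loan * replacement_share   # one-off 2027 hardware bill
yearly_surplus_needed = loan * annual_extra   # recurring revenue surplus required

print(f"Hardware refresh by 2027:       ${hardware_refresh / 1e9:.0f}B")
print(f"Extra revenue needed each year: ${yearly_surplus_needed / 1e9:.0f}B")
```

At these assumed numbers the required surplus is $30B per year, matching the figure in the text; a smaller principal scales both lines down proportionally.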

And they need to be aware that the Chinese are going to kill their margins. Thanks to their massive investments in the "hoax" of renewable energy, power is so much cheaper in China that systems built with their less efficient chips are cost-competitive in operation with Nvidia's. Not to mention that the Chinese chip makers operate on much lower margins than Nvidia. Nvidia's chips will get better, and so will the Chinese chips. But power in the US will get more expensive, in part because of the AI buildout, while in China it will get cheaper.
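The power-cost argument above is easy to sketch: a less efficient chip can still be cheaper to operate when electricity costs half as much. The wattages and prices below are made-up illustrative values, not measurements:

```python
# Sketch of the operating-cost comparison above. All numbers are
# illustrative assumptions, not measured chip or tariff data.
def annual_power_cost(chip_watts: float, price_per_kwh: float, hours: int = 8760) -> float:
    """Yearly electricity cost for one chip running continuously."""
    return chip_watts / 1000 * hours * price_per_kwh

# Assumed: an efficient chip on pricier US power vs. a less
# efficient chip on cheaper Chinese power.
us_chip = annual_power_cost(chip_watts=700, price_per_kwh=0.12)
cn_chip = annual_power_cost(chip_watts=1100, price_per_kwh=0.06)

print(f"Efficient chip, US power:        ${us_chip:,.0f}/yr")
print(f"Less efficient chip, CN power:   ${cn_chip:,.0f}/yr")
```

Under these assumptions the less efficient chip is the cheaper one to run, which is the point: the efficiency gap is smaller than the power-price gap.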

This won't end well.

Are you serious? / Ed Summers

There was a bit of a tussle about Ruby recently, which seemed to center on whether it was a “serious” programming language:

In tone these two pieces reminded me a bit of the discourse around “real programmers”, which is somewhat amusing on the one hand, but ultimately a conversational dead-end. However, in substance I don’t think either position makes much sense.

I’ve seen disciplined teams do serious engineering with Ruby. Comprehensive test suites can provide some of the assurances that type safety provides.
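As a minimal sketch of that claim (a hypothetical example, not from either piece: `total_cents` and its tests are invented for illustration), a small Minitest suite can catch the same category of error a type checker would flag:

```ruby
# Hypothetical example: tests standing in for type safety.
require "minitest/autorun"

# Sums a list of prices, insisting each one converts cleanly to an Integer.
def total_cents(prices)
  prices.sum { |p| Integer(p) }
end

class TotalCentsTest < Minitest::Test
  def test_sums_integer_prices
    assert_equal 300, total_cents([100, 200])
  end

  def test_rejects_non_numeric_input
    # A type checker would reject this call at compile time;
    # the test pins down the same failure at test time.
    assert_raises(TypeError, ArgumentError) { total_cents([100, "oops"]) }
  end
end
```

The test suite doesn't prove what a type system proves, but run on every commit it gives a comparable practical assurance for the paths that matter.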

Sure, software development can be fun and joyful. It should be joyful and rewarding when we are learning–and learning is a lifelong process.

But software is deeply intertwined with our lives. Being thoughtful, and even frugal, about how we develop software isn’t a matter of being “serious”; it’s a matter of being responsible, and there’s a difference. Saying that big corporations can deploy Ruby because they are burning cash isn’t a convincing argument to choose Ruby today.

Would I choose Ruby today for a new project? It would all depend on what the project is. Is it a small utility to make my life easier? Absolutely. Is it a heavily used compute intensive service that other people need to rely on? Probably not. It matters more what we are doing with the software that is created. Being joyful and serious about that is possible, right?

December 2025 Early Reviewers Batch Is Live! / LibraryThing (Thingology)

Win free books from the December 2025 batch of Early Reviewer titles! We’ve got 198 books this month, and a grand total of 2,392 copies to give out. Which books are you hoping to snag this month? Come tell us on Talk.

If you haven’t already, sign up for Early Reviewers. If you’ve already signed up, please check your mailing/email address and make sure they’re correct.

» Request books here!

The deadline to request a copy is Friday, December 26th at 6PM EST.

Eligibility: Publishers do things country-by-country. This month we have publishers who can send books to the US, Canada, the UK, Australia, Ireland, Germany, France, Finland, Netherlands, Denmark and more. Make sure to check the message on each book to see if it can be sent to your country.

Running Wild Novella Anthology, Volume 9 Book 1Teach a Kid to Save: A Fun, Hands-On Approach to Building Smart Money HabitsNo Fucks GivenAmerica As It Happened: A Moment-by-Moment Journey Through Time, from Prehistory to the Present DayThe Crown of ZeusButterfly GamesA Waffle Lot of Love!Lily of the ValleyDays of Love and Rage: A Story of Ordinary People Forging a RevolutionA Grain of Sand in Lambeth: PoemsThe Long Now Conditions Permit: PoemsRational Ideas: Book TwoOctober 7: A Story of Courage and ResurrectionDrag Racing's Quarter-Mile Warriors II: Then and NowMount MiseryCozy Animals Color Swatches Palette PlannerDuck and Dragon: Cozy Fantasy Coloring Book AdventureWe Are the Future: Proud, Kind, UnitedFreya the DeerThe Lawnmower LadyFes Is a Mirror: A NovellaLook UP Look IN Look OUT: 3 Simple Steps to a Divinely Guided LifeRational Ideas: Book ThreeSelf-Portrait as the A Vision of Hope: A Story of Redemption and PurposeA Vision of Hope: ReflectionsGuide For The Kosher TravelerGuide For The Kosher Traveler (Hebrew Edition)The Shelf They LostTopsy’s Big Escape: The Mostly True Story of a Runaway Circus ElephantEven After ThisTreacherous ParadiseHDS: Hominem de SententiaThe Daughter of Shadows and IvorySpirituality, SimplifiedThe Truest Son of FranceLife On Earth (Past, Present & Future!)Behind the Badge: From Police Chief to Opioid Addict: A True Story of Ruin and RedemptionLovely TormentAvaFarewell, the Beloved CountryHard Feelings: Finding the Wisdom in Our Darkest EmotionsComposting Simplified for Beginners: A Complete Guide to Fix Common Compost Pitfalls, Create Fertile Soil, and Enjoy a Lush, Productive GardenI'll Try Anything Twice: Misadventures of a Self-Medicated LifeIsland Days in Galveston: The Ultimate Guide: Where to Eat, Play, and Explore - One Island Day at a Time in Galveston, TexasLa Guía Completa de Cuidados para el Dragón Barbudo: Una Guía Paso a Paso para Criar un Dragón Barbudo Saludable con la Dieta, Los Cuidados y el Hábitat Adecuados Desde 
el Primer DíaAs If by MagicThe Gardener's Wife's MistressResonant Blue and Other StoriesNever ForgottenNutcracker: Christmas Story Coloring BookThe Complete Leopard Gecko Care Handbook: A Step-by-Step Guide to Raise a Healthy Leopard Gecko with the Right Diet, Care, and Habitat from Day OneThe Magic Pill For The Perfect BodyPoetic MusingsData Structures and Algorithms Essentials You Always Wanted to Know: Master Python, Recursion, Dynamic Programming, and Greedy Algorithms with Hands-On ExamplesPublic Speaking Essentials You Always Wanted to Know: Master Confidence, Charisma, Storytelling and Audience Engagement for Powerful PresentationsMicrosoft Power BI Essentials You Always Wanted to Know: Master Data Transformation, Visualizations, AI Integration and Reporting for Smarter Business InsightsBrand Management Essentials You Always Wanted to Know: The Complete Guide to Crafting Brand Strategy, Positioning and Loyalty for Business GrowthThrough The Closet Door Part One: A MemoirCancer Courts My MotherWriting Between the Lines: Poetry CollectionPerihelion: Poetry CollectionSolemnity RitesAsa JamesMy Sister's Quilt: A Collection of Short StoriesDragon Marked: The Legend of the Flamegold rushLove And AngerGihigugma, Ace of HeartsThe Shy Mouse's WishThe Silent Echoes: Whispers of Memory and LossAcoustic EmbraceLyrical EmbraceDevil's GambitMinor Injuries: Ten Short StoriesThe Wrong Kind of Son: A Memoir of a Narcissistic Father's Abuse, Survival, and Finding Peace after the StormLuma and the Whispering ChalkboardThe Case of the Culvert PuppiesThe Sunset ProtocolRepublic of Forge and Grace: A Parallel-Universe America NovelBodaciously True & Totally Awesome: Episode 1: Bad BoyA Blazing AttractionOf Fire and FateI'M FINE!: A Practical Guide To Managing Your Emotions To Strengthen Relationships With Loved Ones And YourselfSilent YZoe's FameThe Lords of the WorldLike BarabbasJibberjack, FibberjackNaughty Stories for Naughty Girls and Boys (Volume Three)The SparkDe 
waakvlamThe Knowing DollThe Brave New Kid — Ari Stands Up to BullyingTeaching News Literacy in the Age of AI: A Cross-Curricular ApproachEverhaven: A Paradise Built on Survival, Data and DeceptionHurricane Helene: Resiliency After the Storm, Part OneWine & SmokeHearts of Fire: Crossing the LineGuiding Principles For Success (GPS) MapThe Montana Gold MineNavigating Financial Choices: A Young Adult Guide to Education, Work, and MoneyThe 12th CleansingWiser. Hotter. Stronger: Living Courageously Through MenopauseReal-World Hobbies: What's Out There. How to Get Started. What It Costs. Crafts, Clubs and CommunitiesLotus in the Tide: Prose and PoemsThe HostessThe Queen's Dark AmbitionHot Flashes and Healing: A Sacred Journey for Black Women from Perimenopause to MenopauseAre You Snuggly?Rainbow ColorsThe MallEntangled: A Cabinet of Botanical WondersSirenp0intlessA Little Merry ChristmasWhen Worlds CollideFreewheelerGet a Life! A Guide to Finding a Philosophy to Live ByDigital Wisdom Stories: Screen Time Solutions Through Simple Family RitualsThe Goddess Remedy: Unleash Your Power, Embody Your Truth, and Love Without LimitsRoots of Resilience: Unveiling Our HistoryThreaded by StarlightMask of RomulusChronicles of Tenek Lua BenStep-by-Step Guide to Preschool Readiness: Everything You Need to Know Before the First DayLove Wars: Clash of the Parents, A True Divorce Story - MemoirCaput Mundi: The Head of the WorldArt & Love: My Life Illuminated in Egg TemperaSanta CutieGildedGildedYou Still Exist: Who You Are after DivorceCrushedBeneath the ArmorDear Future: You Can Keep The ChangeEarth Warriors: The Four Heroes of Peace50 Magical Tales: Adventures of the Magic Mice — Happy Smiles, the Best Gift That Brings JoyStarlight and ShadowsScars of the DominatayStray: Breaking Free, Falling Hard and Growing StrongerMust Read for Newcomers to America: Smart & Simple Tips to Succeed in Career, Family, and LifeAdapt, Panic, or Profit? 
Hilariously Stressful Quizzes About the FutureThe Greatest Story Ever WrittenThe Orichalcum CrownA Life in Too Many MarginsSoulless: Sometimes the Darkness Overcomes the LightAvaSarah's Secret Christmas WishHumanity's Lost CodePolitics and Morality: The Problems of Ethical Debate for an Evolved Social SpeciesSeasons in MananaNursery Rhymes Vol. VI: FruitsNursery Rhymes Vol. VII: FruitsSomething Else: Words That Remember, Stories That AwakenCaptured Prey: A Primal Play NovellaI'll Try Anything Twice: Misadventures of a Self-Medicated LifeLola Gillette and the Summer of Second ChancesDissection of a Human HeartVampire VersesChristmas Ghost Stories: Classic Victorian Tales for Cold Winter NightsWill's WakeThe Young Explorers' Time MachineThe EndThe Reinvention Playbook: Rebuilding Identity, Direction, and Confidence after the Job EndsNewsflash! How Hot Flashes Could Save Your LifeCactus RoseHis Dark ClaimThe CEO's TakeoverThe Curse of TholgorThe Grip of DarknessNo One Is Normal: Breaking Free from Normal: Short Stories of Struggle, Adversity, and Self-DiscoveryScarlet and SapphireLucky Number SixSwimming with ManateesSwallowing the MuskellungeHow to Master Mindfulness For Productivity: Get More Done With a Clear MindHow to Stay Disciplined Without Motivation: A Practical Guide to Showing up Every Day--Even When You Don't Feel Like ItA Textbook-Based Approach To Machine Learning (With Python)Immortal FireTerratron — A New FrontierThe Luminous Body: Returning to the Sacred Heart of RealityThe Consortium Saga: OmnibusThe Worst Fiction Story - Part 1ESPionage: Jazz AgeFrom Zero to Roadtrip: A Beginner's Guide to RV TravelWriting at the Wellspring: Tapping the Source of Your Inner GeniusThe Unnatural Species: The Adversary, The Source, and the Great FilterForever, In ParisThe Right Time: Back to The 80sCastaway's QuestReality Behind the FantasyThe Human Condition: A Defiant Inquiry into Society, Thought and the SelfPetals and SilencesThe Crummy MummyThe Cyber Spies of 
ZionTales of the Norse Gods: Loki Saves the WorldSecrets of the Sky Gods

Thanks to all the publishers participating this month!

Anchorline Press Aquarius Press Autumn House Press
Bellevue Literary Press CarTech Books Catavento Press
Gefen Publishing House Gilded Orange Books Harbor Lane Books, LLC.
Haven Muse Literary Publishing NeoParadoxa
Paper Phoenix Press Prolific Pulse Press LLC PublishNation
Purple Diamond Press, Inc Revell RIZE Press
Rootstock Publishing Running Wild Press, LLC Simon & Schuster
Somewhat Grumpy Press Tundra Books Tuxtails Publishing, LLC
Type Eighteen Books University of Nevada Press Vibrant Publishers
Vision of Hope Media W4 Publishing, LLC What on Earth!
Wise Media Group

November 2025 Early Reviewers Batch Is Live! / LibraryThing (Thingology)

Win free books from the November 2025 batch of Early Reviewer titles! We’ve got 251 books this month, and a grand total of 3,430 copies to give out. Which books are you hoping to snag this month? Come tell us on Talk.

If you haven’t already, sign up for Early Reviewers. If you’ve already signed up, please check your mailing/email address and make sure they’re correct.

» Request books here!

The deadline to request a copy is Tuesday, November 25th at 6PM EST.

Eligibility: Publishers do things country-by-country. This month we have publishers who can send books to the US, the UK, Israel, Canada, Australia, Germany, Ireland, Poland, Luxembourg, Malta and more. Make sure to check the message on each book to see if it can be sent to your country.

The Age of Calamities: StoriesWhen Trees Testify: Science, Wisdom, History, and America's Black Botanical LegacyMaiden VoyageCelestial LightsThe Kiss of the NightingaleMass MotheringDanger EagleSamsonRational Ideas Book OneThe Boy Who Met His Teacher’s PastGod Is My Friend: 365 Daily Devos for BoysFinding GraceGod Is My Friend: 365 Daily Devos for GirlsPolar War: Submarines, Spies, and the Struggle for Power in a Melting ArcticA Very Loud ChristmasBirthing Pains: A Story of TransformationGuarded TimeWhere Kindness Lives: A Women's Fiction AnthologyA Spell for DrowningSnake on a Red Velvet Throne and Other StoriesSnapped Up: A Tale of the Beast of BuscoEmerald City BluesThe DaughtersImmortal Evelyn and Other Tales of Dark FantasyNo Man's LandA Handbook for Keeping KosherA Guide for Life Through the Eyes of Megillat EstherBedtime Stories for Strong Jewish Girls: Tales of 50 Jewish Heroines Who Changed the WorldHinenu: Israel at Ten MillionReport: IsraelHis Last Christmas GiftUntil Death Taps You on the ShoulderImagine WagonsTrad WifeModern Advertising Essentials You Always Wanted to Know: Master Advertising Strategy, Consumer Behavior, Brand Storytelling, AI Marketing, and Social Media Tactics, Digital AdvertisingBusiness Statistics Essentials You Always Wanted to Know: Master Data Analysis, Regression, Probability, Hypothesis Testing and Decision Making for Business SuccessBrand Management Essentials You Always Wanted to Know: The Complete Guide to Crafting Brand Strategy, Positioning and Loyalty for Business GrowthMicrosoft Power BI Essentials You Always Wanted to Know: Master Data Transformation, Visualizations, AI Integration and Reporting for Smarter Business InsightsData Structures and Algorithms Essentials You Always Wanted to Know: Master Python, Recursion, Dynamic Programming, and Greedy Algorithms with Hands-On ExamplesThe Call: Leading from Wholeness, Living in PresencePublic Speaking Essentials You Always Wanted to Know: Master Confidence, Charisma, 
Storytelling and Audience Engagement for Powerful PresentationsThe Body RemembersMore Futures for Ferals: A Charity AnthologyDuck and Dragon: Cozy Fantasy Coloring Book AdventureMonkey's Sweet Surprise: A Lunar New Year Mix-UpCozy Animals Color Swatches Palette PlannerJack and Lulu Go to the Tree FarmA Vision of Hope: A Story of Redemption and PurposeA Vision of Hope: ReflectionsThe Conspiracists : Women, Extremism, and the Lure of BelongingTaking Stock of Your LifeBreakdown, Recovery, and the OutdoorsPiecework: Ethnographies of PlaceBody MemoryYou May Feel a Bit of Pressure: Observations from Infertility's Heart-Wrenching RideThe Impossible Physics of the HummingbirdGetting Dressed in the Dark: An Artist's Way HomeHopePardon Me for MoonwalkingThirsty CreekMother!The Here of This Now: Science Fiction StoriesDancing in the Dark: How I Found My True Vision for PeaceMoments: A Greek Island TaleKevin Wilks and the Eye of DreamsDigArthur and the Kingswell TrioBeneath The Clover HillDanger On The Red TrainThe Special Guest: A Christmas StoryLegend of the Wooden StarCoral and OomaWhen Power CorruptsA Philosopher Adrift in the Sea of TimeThe Last Shepherd’s Dog and Other Stories from a Rural Spanish Village High and Hidden in the Costa Blanca MountainsThe Water Lilies of MishipeshuThe ChoirGihigugma, Ace of HeartsMoonbase ArmstrongSpanish QuickStart Guide: The Simplified Beginner's Guide to Learning Essential Vocabulary, Building Practical Grammar Skills, and Mastering Conversational SpanishSquirrel's First DayVampires in Chicago: A Subversive, Satirical Gothic Fantasy Action ThrillerWill Rogers and His Great InspirationWings of Brotherhood: A Journey Between Two Air ForcesScob NationSuperwoman: A Funny and Reflective Look at Single Motherhood — The Sh*t They Don't Tell You EditionMagdalena Is Brighter Than You ThinkNuclear Family: A Memoir of the Atomic WestArizona Boots and Burgers: A Guide for Hungry HikersBlood in the BricksBloodbaneThe Butcher and the LiarCancer 
Courts My Mother101 Stories of Love: Poetry CollectionCrabby Abby the Decorator Crab's Big HeartYou've Got It All Wrong: Poetry CollectionPerihelion: Poetry CollectionWriting Between the Lines: Poetry CollectionBro ken Rengay: Unruly PoetrySocial Possibilities: Poetic Voices of HopeThe Bright Edges of the World: Willa Cather and Her ArchbishopThe First Girl on Stage: Tunga Dances the YakshaganaThe Real Education of TJ Crowley: Coming of Age on the RedlineRomy's Year of Living DangerouslyDreamwalkerPosthumously YoursWetwareCasper Caterpillar: The Tale of a Scaredy Cat-ErpillarLe souffle de la machine: Quand l’intelligence artificielle inspireBlack Girls Day OffSuch an Odd Word to UseThe Road UnveiledThe Road UnveiledThe Undoing: Who Shall StandThe Last Library of MidnightThe Human Condition: A Defiant Inquiry into Society, Thought and the SelfWhen Love WaitsUnbalanced: Memoir of an Immigrant Math TeacherBlood & Burned RosesRiyati RippleThe Invisible War: Mossad vs Iran: Inside the Covert Cyber and Spy War over NukesJeannie's Bottle, IncantationsThe Lavender Blade: An Exorcist's ChronicleThe Lavender Blade: An Exorcist's ChronicleThe Right Time: Back to the 80sNaughty Stories for Naughty Girls and Boys (Volume Two)Have You Seen HimThe GrangeThe Kansal Clunker: The Car that Rebuilt UsBlood & DaffodilsBreathe Again: A Practical Guide to Managing Stress and Anxiety Every DayShe Sells Sea Shells By The Sea ShoreThe Moon EaterPetals and SilencesAsylum MurdersThe Thirty-Fifth PageTreacherous HackThe Snipe HuntThe Light Switch Myth: A Beginner’s Guide to Creating Realistic and Sustainable ChangeFarmer Joe's Tiny Farm: A Laugh-Out-Loud Story of Big SurprisesQasida for When I Became a WomanAll the Ways We StayThe Driver's PromiseLove And AngerThe Tarishe CurseWhispers and Wonder: The Cerulean WorldA Comprehensive Breakdown: Essays on Autism, Collapse, and the Myth of FunctioningDead Girls Don't TellWhere She Met The SeaThe Ultimate Gas Griddle Cookbook: 70+ Easy Recipes for 
Flat Top Grilling, Smash Burgers, High-Protein Meals and Family BBQ - Includes 2-Week Meal PlanThe Cicatrix AffairThe Kibric MysteryNo One Knows MeredithFate of the God StonesThe Dreaming at the Drowned TownGone CountryExploring Ancestral Memories and 'Lost Family Histories'Wine & SmokeReborn in AshThe Valley Of GlassClass War, Then and Now: Essays Toward a New LeftThe Clockwork SpyA Comedy of MonstersA Simple Tale of Sugar and ShadowsFalling for My Husband (Again)Thriving in a Relationship When You Have Chronic Illness: Navigate Challenges and Keep Your Relationship Strong Using Acceptance and Commitment TherapyThe Fall of the American Republic: Eight Nights To MidnightA Simple Tale of Sugar and ShadowsFrom Zero to Roadtrip: A Beginner's Guide to RV TravelStormy Normy Goes ReiningHaggard HousePale PiecesThe Blackwood Journals: A Game of FortuneFollowing Jimmy ValentineArisara 2058: The Weight of PerfectionRaven: The BrokenNursery Rhymes Vol. III: AnimalsNursery Rhymes Vol. IV: FlowersPoirot and the Crown Jewels and Other Stories, As Narrated by His Friend, Nigel G. 
HastingsThe French Inquisition: Persecution of the ProtestantsPrecept: FrequencyCastaway's QuestDiving into Dreams: Navigating Life’s Deepest Waters to Discover the Secret of Having EnoughThe Museum of Future MistakesThe Smile of the TigerWashington Post Is Switching Off LightsTee Ball Myths & SolutionsEmpty Cradle, Full Heart: Trusting God in Silence: A Story of Love Without a ChildHeal Your Womb: Natural Remedies and Medical Solutions for Fibroids, PCOS, Endometriosis, and MoreThe Worst Fiction Story Part 1Believe You Matter: Thriving As God's Beloved ChildThe ChambermaidsGoonLast Radiance: Radical Lives, Bright DeathsWings of Change: Tales That Rise AboveChastised: The United States of IsraelBorn on MondaySometimes Unserious: A Short Story CollectionThe MaledictionMarianne: A Sense and Sensibility SequelForays into Solitude: The One Verses the ManySelf Nature: The Essence of Who We AreGun Girl and the Tall GuyNo One You KnowEnough Is Enough: Declutter Your Space, Clear Your Mind, and Reclaim Your TimeHearts Beneath The Broken SkyGun Girl and the Tall GuyPride, Prejudice, and Perplexing PloysThe Cider Maker's SecretProtecting Her HeartThe ExpeditionEddy's First LoveBillsPhase Shift 2045Echoes of the TimelessExercise For People Who Are Afraid To ExerciseESPionage 2: Jazz AgeThe Butcher and the LiarDreams and Prayers: Verses From a Wandering MindMiami Low-Sodium Restaurant Guide: Featuring 80 Low Sodium and Heart Healthy DishesGod's Coded Language Is All About TransparencyThe Oath: Some Promises Should Never Be KeptThe Complete Mediterranean Diet Cookbook for Beginners For 2025-2026: 100+ Vibrant, Kitchen-Tested Recipes for Living and Eating WellFaithful Exchange: The Economy As It's Meant to BeFaithful Exchange: The Economy As It's Meant to BeA Slight CurveWhen the Lights are Off: Lessons from the Quiet MomentsSugar CrazeSan Diego Low-Sodium Restaurant Guide: Featuring 80 Low Sodium and Heart Healthy DishesUltimate Rest: The Essence of the Beautiful GospelBroken 
AlgorithmsThe Enchanted SuitcaseFragmentRuby LarkQuinto's ChallengeThe Music MakersLiving the Creative Mind: A Mindset for CreationThe Unbiased Garden: You Are Divine. No Matter What Happens to YouNARC 101: The Illustrated Practical Guide to Identifying and Healing from Narcissistic AbuseBot CampWinning My Ex-CrushDragon RogueThe Life and Spiritual Journey of No OneEarth, The Improbable UtopiaTerratron — A New FrontierSet Point SeductionEcho RidgeThe Remembered HeartAtannaFree Will: Resolving the MysteryMagic, Science, & Lions, OH MY!: A Collection of Short Stories

Thanks to all the publishers participating this month!

Alcove Press Anchorline Press Artemesia Publishing
Autumn House Press Awaken Village Press Broadleaf Books
ClydeBank Media Crooked Lane Books Cynren Press
Daastan eSpec Books Gefen Publishing House
Grain Valley Publishing Hawthorn Quill Publishing Henry Holt and Company
HTF Publishing Legacy Books Press Lunatica Libri
MiLFY Books Muse Literary Publishing NeoParadoxa
NewCon Press Paper Phoenix Press Picnic Heist Publishing
Prolific Pulse Press LLC PublishNation Real Nice Books
Riverfolk Books Running Wild Press, LLC Sana Irfan
Shilka Publishing Simon & Schuster Tundra Books
Type Eighteen Books University of Nevada Press University of New Mexico Press
Unsolicited Press UpLit Press Vibrant Publishers
Vision of Hope Media Wise Media Group WorthyKids
Yali Books Yorkshire Publishing

DLF Digest: December 2025 / Digital Library Federation

A monthly round-up of news, upcoming working group meetings and events, and CLIR program updates from the Digital Library Federation. See all past Digests here

Hello DLF Community! It was amazing to see so many of you at the DLF Forum last month. Keep an eye on your inboxes as we’ll soon be releasing our opening plenary recording as well as photos from the event. Be sure you’re signed up for the Forum Newsletter so you don’t miss a beat. Interested in helping out for the 2026 event? We’ll open the call for the Planning Committee early in the new year. We wish you a wonderful holiday season and we look forward to reconnecting with you in 2026!

— Aliya from Team DLF

 

This month’s news:

  • Working Group Survey: The DLF Assessment Interest Group Metadata Working Group (MWG) is conducting a brief survey — expected to take around 5 minutes or less — to gather data for planning next year’s activities. If you are a current or former member of the MWG or have any interest in metadata assessment and quality activities, please consider answering a few questions.
  • Survey: IIIF’s Implementation Survey is open through December 31. The aim is to gather current data on IIIF usage, understand the levels of implementation of the different IIIF APIs, and identify topics for future workshops and training opportunities. Take the survey.
  • Call For Proposals: CLIR is accepting applications for the thirteenth cycle of Recordings at Risk until February 24, 2026. See details here.
  • Office closure: CLIR’s offices are closed for winter holiday from Monday, December 22 through Friday, January 2, 2026.

 

This month’s open DLF group meetings:

For the most up-to-date schedule of DLF group meetings and events (plus NDSA meetings, conferences, and more), bookmark the DLF Community Calendar. Meeting dates are subject to change. Can’t find the meeting call-in information? Email us at info@diglib.org. Reminder: Team DLF working days are Monday through Thursday.

  • DLF Digital Accessibility Working Group (DAWG): Tuesday, 12/2, 2pm ET / 11am PT
  • DLF Born-Digital Access Working Group (BDAWG): Tuesday, 12/2, 2pm ET / 11am PT
  • DLF AIG Metadata Assessment: Thursday, 12/4, 1:15 pm ET / 10:15 am PT
  • DLF AIG Cultural Assessment Working Group: Monday, 12/8, 1pm ET / 10am PT
  • DLF Committee for Equity & Inclusion: Monday, 12/15, 3pm ET / 12pm PT
  • AIG User Experience Working Group: Friday, 12/19, 11am ET / 8am PT
  • DLF Digitization Interest Group: Monday, 12/22, 2pm ET / 11am PT
  • DLF Climate Justice Working Group: Tuesday, 12/30, 1pm ET / 10am PT

DLF groups are open to ALL, regardless of whether or not you’re affiliated with a DLF member organization. Learn more about our working groups on our website. Interested in scheduling an upcoming working group call or reviving a past group? Check out the DLF Organizer’s Toolkit. As always, feel free to get in touch at info@diglib.org.

 

Get Involved / Connect with Us

Below are some ways to stay connected with us and the digital library community: 

 

The post DLF Digest: December 2025 appeared first on DLF.

Weekly Bookmarks / Ed Summers

These are some things I’ve wandered across on the web this week.

🔖 tinyfeed

tinyfeed is a CLI tool that generates a static HTML page from a collection of feeds.

It’s dead simple, no database, no config file, just a CLI and some HTML

Give it a list of RSS, Atom or JSON feed URLs and it will generate a single HTML page for it. Then you can effortlessly set it up in crond, systemd or openrc and voilà, you’ve got yourself a webpage that aggregates your favorite feeds.

🔖 Brief thoughts on the recent Cloudflare outage

What impressed me the most about this writeup is that they documented some aspects of what it was like responding to this incident: what they were seeing, and how they tried to make sense of it.

🔖 Crashing hard: why talking about bubbles obscures the real social cost of overinvesting into “Artificial Intelligence”

More and more commentators talk about and warn of an “AI bubble”, and everybody seems to congratulate each other on being such a smart financial analyst. BUT: A bubble pops and you are left with air and maybe a splash of soap somewhere on the floor. A fairly clean affair. This kind of investor speak obscures the severe consequences economic crashes cause, coming from someone’s point of view for whom this is more likely to be a spectacle than a direct threat…

In this article, I want to illustrate the broad range of costs that BOTH the buildup of “AI” overvaluations AND their coming down will have. The current “AI” investments will have long-term costs by creating significant path dependencies: They make harmful things cheaper, speed up the commodification of human labour and shift social norms.

🔖 Setting the Record Straight: Common Crawl’s Commitment to Transparency, Fair Use, and the Public Good

A recent article in The Atlantic (“The Nonprofit Doing the AI Industry’s Dirty Work,” November 4, 2025) makes several false and misleading claims about the Common Crawl Foundation, including the accusation that our organization has “lied to publishers” about our activities.

This allegation is untrue. It misrepresents both how Common Crawl operates and the values that guide our work.

🔖 The Company Quietly Funneling Paywalled Articles to AI Developers

Common Crawl doesn’t log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you’re a subscriber and hides the content if you’re not. Common Crawl’s scraper never executes that code, so it gets the full articles. Thus, by my estimate, the foundation’s archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper’s, and The Atlantic.

🔖 The Matrix of Convivial Technology – Assessing technologies for degrowth

This article introduces the notion of convivial technology as a conceptual framework for technologies suitable for degrowth societies. This paper is inspired by Ivan Illich’s notion of convivial tools but reconsiders it in the light of current practices and discussions. Looking for a definition of convivial technologies it uses qualitative empirical research conducted with degrowth-oriented groups developing or adapting grassroots technologies like Open Source cargo bikes or composting toilets in Germany. The basic ethical values and design criteria that guide these different groups in relation to technology are summed up into five dimensions: relatedness, adaptability, accessibility, bio-interaction and appropriateness. These dimensions can be correlated with the four life-cycle levels material, production, use and infrastructure to form the Matrix for Convivial Technology (MCT). The MCT is a 20-field schema that can be filled in. Experiences with the tool in different fields are presented. The MCT is itself a convivial tool as it allows for degrowth-oriented groups to self-assess their work and products in a qualitative, context-sensitive and independent way. It is a normative schema that fosters discussion concerning degrowth technologies in contexts of political education. And it is a research method as it helps collecting data about underlying ethical assumptions and aspirations of individuals and groups engaged in developing technology.

🔖 I don’t care how well your “AI” works.

We programmers are currently living through the devaluation of our craft, in a way and at a rate we never anticipated possible. A fate that designers, writers, translators, tailors or book-binders lived through before us. Not that their craft would die out, but that it would be mutilated — condemned to the grueling task of cleaning up what the machines messed up. Unsurprisingly, some of us are not handling the new realities well.

🔖 ChimeraLinux

Chimera is a general-purpose Linux-based OS born from unhappiness with the status quo. We aim to create a system that is simple, transparent, and easy to pick up, without having to give up practicality and a rich feature set.

It is built from scratch using novel tooling, approaches, and userland. Instead of intentionally limiting ourselves, we strive to achieve both conceptual simplicity and convenience with careful and high quality software design.

🔖 Circular deals among AI companies

The big AI companies are making deals with each other, promising and distributing hundreds of billions of dollars over the next few years. It’s difficult to keep track, but Bloomberg has this network diagram that shows the moves.

🔖 Growing Group Care

Group work can be challenging. How can it be organised to support meaningful connection and learning processes?

This zine is part of a project on growing group care. It aims to support caring and inclusive group learning in universities and beyond. It is meant for those just starting group projects.

🔖 A Month of Chat-Oriented Programming

TL;DR: I spent a solid month “pair programming” with Claude Code, trying to suspend disbelief and adopt a this-will-be-productive mindset. More specifically, I got Claude to write well over 99% of the code produced during the month. I found the experience infuriating, unpleasant, and stressful before even worrying about its energy impact. Ideally, I would prefer not to do it again for at least a year or two. The only problem with that is that it “worked”. It’s hard to know exactly how well, but I (“we”) definitely produced far more than I would have been able to do unassisted, probably at higher quality, and with a fair number of pretty good tests (about 1500). Against my expectation going in, I have changed my mind. I now believe chat-oriented programming (“CHOP”) can work today, if your tolerance for pain is high enough.

🔖 What’s really going on with AI and jobs?

Chiu also points out that while job listings for writers, artists, and creatives have declined, listings for creative directors have grown. This is precisely what you would expect to see as management embraced AI: fewer people actually creating the work, and more people in management roles overseeing the automated production.

🔖 Generative artificial intelligence–mediated confirmation bias in health information seeking

Generative artificial intelligence (GenAI) applications, such as ChatGPT, are transforming how individuals access health information, offering conversational and highly personalized interactions. While these technologies can enhance health literacy and decision-making, their capacity to generate deeply tailored—hypercustomized—responses risks amplifying confirmation bias by reinforcing pre-existing beliefs, obscuring medical consensus, and perpetuating misinformation, posing significant challenges to public health. This paper examines GenAI-mediated confirmation bias in health information seeking, driven by the interplay between GenAI’s hypercustomization capabilities and users’ confirmatory tendencies. Drawing on parallels with traditional online information-seeking behaviors, we identify three key “pressure points” where biases might emerge: query phrasing, preference for belief-consistent content, and resistance to belief-inconsistent information. Using illustrative examples, we highlight the limitations of existing safeguards and argue that even minor variations in applications’ configuration (e.g., Custom GPT) can exacerbate these biases along those pressure points. Given the widespread adoption and fragmentation (e.g., OpenAI’s GPT Store) of GenAI applications, their influence on health-seeking behaviors demands urgent attention. Since technical safeguards alone may be insufficient, we propose a set of interventions, including enhancing digital literacy, empowering users with critical engagement strategies, and implementing robust regulatory oversight. These recommendations aim to ensure the safe integration of GenAI into daily life, supporting informed decision-making and preserving the integrity of public understanding of health information.

🔖 ChatGPT Confessions gone? They are not!

A new Digital Digging investigation (the first one is here), conducted with Belgian researcher Nicolas Deleur, has uncovered 110,000 ChatGPT conversations preserved via Archive.org’s Wayback Machine. When users click “share” on a ChatGPT conversation, they think they’re creating a temporary link for a friend or colleague. What they don’t realize is that they’re also creating a permanent, searchable record of their thoughts, confessions, and sometimes illegal activities, captured by Archive.org.

🔖 Keynote—The Future of Open: Building Trustworthy Infrastructure in a Fragmented World (WOLFcon 2025)

As libraries continue to rely on a growing constellation of open systems, standards, and services, questions of sustainability, governance, and trust are more pressing than ever. This session will explore what it means to build and maintain “trustworthy infrastructure” in a decentralized and sometimes fractured landscape. Through high-level discussion and examples drawn from the open source library ecosystem, we’ll consider how community-led development, transparent decision-making, and responsible stewardship can help ensure that open infrastructure continues to meet the evolving needs of libraries and the communities they serve. Participants will leave with a stronger understanding of the values and challenges underpinning open library systems, and how we might collaboratively shape a more resilient and equitable future.

🔖 httparchive Report: Page Weight

This report tracks the size and quantity of many popular web page resources. Sizes represent the number of bytes sent over the network, which may be compressed.

🔖 Pepper&Carrot Fonts

Here is a collection of featured free/libre fonts that were used, extended or enhanced for the Pepper&Carrot webcomic project. Check the Git repository for the full collection and more information.

🔖 Complexity, Artificial Life, and Artificial Intelligence Open Access

The scientific fields of complexity, Artificial Life (ALife), and artificial intelligence (AI) share commonalities: historic, conceptual, methodological, and philosophical. Although their origins trace back to the 1940s birth of cybernetics, they were able to develop properly only as modern information technology became available. In this perspective, I offer a personal (and thus biased) account of the expectations and limitations of these fields, some of which have their roots in the limits of formal systems. I use interactions, self-organization, emergence, and balance to compare different aspects of complexity, ALife, and AI. Even when the trajectory of the article is influenced by my personal experience, the general questions posed (which outweigh the answers) will, I hope, be useful in aligning efforts in these fields toward overcoming—or accepting—their limits.

🔖 Incorrect Citation Association for Articles in Online-Only Springer Nature Journals

We show that citation metrics of journal articles in many of the online-only Springer Nature journals and associated ones are distorted, going back to articles from 2001. We find that most likely due to an API response error, there are many incorrect references which typically lead to Article Number 1 of a given Volume. Among others, the issue affects journals such as Scientific Reports, Nature Communications, Communications journals, Cell Death & Disease, Light: Science & Applications, as well as many BMC, Discovery and npj journals. Beyond the negative effect of introducing incorrect reference information, this distorts the citation statistics of articles in these journals, with a few articles being massively over-cited compared to their peers, while many lose citations; e.g. both in Scientific Reports and in Nature Communications, 5 of the 10 top cited articles have article numbers of 1. We validate the distorted statistics by assessing data from multiple scientific literature databases: Crossref, OpenCitations, Semantic Scholar, and the journals’ websites. The issue primarily arises from the inconsistent transition from page-based referencing of articles to article number-based referencing, as well as the improper handling of the change in the publisher’s article metadata API. It seems that the most pressing problem has been present since approximately 2011, which we estimate affects the citation count of millions of authors.

LibraryThing’s 12th Annual Holiday Card Exchange / LibraryThing (Thingology)

The 12th annual LibraryThing Holiday Card Exchange is here!

Here’s how it works:

  • Mail a holiday card to a random LibraryThing member.
  • You can mail a handmade or store bought card. Add a special note to personalize it.
  • You’ll get one from another member. (Only that member will see your address.)
  • In order for cards to be delivered correctly to you, you must include your real name in the address box when signing up: use whatever matches your mailbox. (Only your matches and LibraryThing staff can see your address.)

» Sign up for the LibraryThing Holiday Card Exchange now

Sign-ups for the Card Exchange close Tuesday, December 2 at 12:00pm Eastern (17:00 GMT). We’ll inform you of your matches within an hour or so after signups close, so you can get those cards in the mail.

Questions? Join the discussion on Talk.

The Holiday Store is Open / LibraryThing (Thingology)

It’s here! 🔔

Whether you love this time of year or not, we hope some discounted LibraryThing merch and book supplies will brighten your season. 

Shop here: https://www.librarything.com/more/store

The Holiday sale ends on Epiphany, January 6th[1]. You’ll find our usual major discounts, and we’ve added the new 20th Anniversary Shirts to the sale. Here’s a partial list to pique your interest[2]:

  • 20th Anniversary shirts for $16
  • CueCat barcode scanners for $5
  • Custom barcode labels starting at $5
  • Beautiful enamel pins for $3
  • All the stickers you could want, starting at $1
  • $4 off sticker bundles and $5 off pin bundles

[1] Epiphany is also known as Little Christmas, the night before Orthodox Christmas or the day after the Twelfth day of Christmas—twelve LibraryThing pins would make the perfect gifts for your loved one, would they not?

[2] Prices do not include the cost of shipping. Shipping costs are shown on Store pages.

The Gaslit Asset Class / David Rosenthal

James Grant invited me to address the annual conference of Grant's Interest Rate Observer. This was an intimidating prospect, the previous year's conference featured billionaires Scott Bessent and Bill Ackman. As usual, below the fold is the text of my talk, with the slides, links to the sources, and additional material in footnotes. Yellow background indicates textual slides.

The Gaslit Asset Class

Before I explain that much of what you have been told about cryptocurrency technology is gaslighting, I should stress that I hold no long or short positions in cryptocurrencies, their derivatives or related companies. Unlike most people discussing them, I am not "talking my book".

To fit in the allotted time, this talk focuses mainly on Bitcoin and omits many of the finer points. My text, with links to the sources and additional material in footnotes, will go up on my blog later today.

Why Am I Here?

I imagine few of you would understand why a retired software engineer with more than forty years in Silicon Valley was asked to address you on cryptocurrencies[1].

NVDA Log Plot
I was an early employee at Sun Microsystems then employee #4 at Nvidia, so I have been long Nvidia for more than 30 years. It has been a wild ride. I quit after 3 years as part of fixing Nvidia's first near-death experience and immediately did 3 years as employee #12 at another startup, which also IPO-ed. If you do two in six years in your late 40s you get seriously burnt out.

So my wife and I started a program at Stanford that is still running 27 years later. She was a career librarian at the Library of Congress and the Stanford Library. She was part of the team that, 30 years ago, pioneered the transition of academic publishing to the Web. She was also the person who explained citation indices to Larry and Sergey, which led to Page Rank.

The academic literature has archival value. Multiple libraries hold complete runs on paper of the Philosophical Transactions of the Royal Society starting 360 years ago[2]. The interesting engineering problem we faced was how to enable libraries to deliver comparable longevity to Web-published journals.

Five Years Before Satoshi Nakamoto

I worked with a group of outstanding Stanford CS Ph.D. students to design and implement a system for stewardship of Web content modeled on the paper library system. The goal was to make it extremely difficult for even a powerful adversary to delete or modify content without detection. It is called LOCKSS, for Lots Of Copies Keep Stuff Safe; a decentralized peer-to-peer system secured by Proof-of-Work. We won a "Best Paper" award for it five years before Satoshi Nakamoto published his decentralized peer-to-peer system secured by Proof-of-Work. When he did, LOCKSS had been in production for a few years and we had learnt a lot about how difficult decentralization is in the online world.

Bitcoin built on more than two decades of research. Neither we nor Nakamoto invented Proof-of-Work, Cynthia Dwork and Moni Naor published it in 1992. Nakamoto didn't invent blockchains, Stuart Haber and W. Scott Stornetta patented them in 1991. He was extremely clever in assembling well-known techniques into a cryptocurrency, but his only major innovation was the Longest Chain Rule.

Digital cash

The fundamental problem of representing cash in digital form is that a digital coin can be endlessly copied, thus you need some means to prevent each of the copies being spent. When you withdraw cash from an ATM, turning digital cash in your account into physical cash in your hand, the bank performs an atomic transaction against the database mapping account numbers to balances. The bank is trusted to prevent multiple spending.
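The bank’s role described above reduces to a single atomic check-and-debit against the ledger. A toy Python sketch (names and structure illustrative, not any real bank’s system) shows why atomicity is what prevents the same balance being spent twice:

```python
import threading

class Bank:
    """Toy ledger: the trusted central party that prevents double-spending."""

    def __init__(self):
        self._balances = {}          # account number -> balance
        self._lock = threading.Lock()

    def deposit(self, account, amount):
        with self._lock:
            self._balances[account] = self._balances.get(account, 0) + amount

    def withdraw(self, account, amount):
        # The check and the debit happen as one atomic transaction, so
        # two concurrent withdrawals cannot both spend the same funds.
        with self._lock:
            if self._balances.get(account, 0) < amount:
                return False         # insufficient funds: spend refused
            self._balances[account] -= amount
            return True
```

A second attempt to spend the same balance simply fails; removing the trusted party that serializes these transactions is exactly the problem a decentralized cryptocurrency must solve.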

There had been several attempts at a cryptocurrency before Bitcoin. The primary goals of the libertarians and cypherpunks were that a cryptocurrency be as anonymous as physical cash, and that it not have a central point of failure that had to be trusted. The only one to get any traction was David Chaum's DigiCash; it was anonymous but it was centralized to prevent multiple spending and it involved banks.

Nakamoto's magnum opus

Bitcoin claims:
  • The system was trustless because it was decentralized.
  • It was a medium of exchange for buying and selling in the real world.
  • Transactions were faster and cheaper than in the existing financial system.
  • It was secured by Proof-of-Work and cryptography.
  • It was privacy-preserving.
When in November 2008 Nakamoto published Bitcoin: A Peer-to-Peer Electronic Cash System it was the peak of the Global Financial Crisis and people were very aware that the financial system was broken (and it still is). Because it solved many of the problems that had dogged earlier attempts at electronic cash, it rapidly attracted a clique of enthusiasts. When Nakamoto went silent in 2010 they took over proseltyzing the system. The main claims they made were:
  • The system was trustless because it was decentralized.
  • It was a medium of exchange for buying and selling in the real world.
  • Transactions were faster and cheaper than in the existing financial system.
  • It was secured by Proof-of-Work and cryptography.
  • It was privacy-preserving.
They are all either false or misleading. In most cases Nakamoto's own writings show he knew this. His acolytes were gaslighting.

Trustless because decentralized (1)

Assuming that the Bitcoin network consists of a large number of roughly equal nodes, it randomly selects a node to determine the transactions that will form the next block. There is no need to trust any particular node because the chance that they will be selected is small.[3]

At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. A server farm would only need to have one node on the network and the rest of the LAN connects with that one node.
Satoshi Nakamoto 2nd November 2008
The current system where every user is a network node is not the intended configuration for large scale. ... The design supports letting users just be users. The more burden it is to run a node, the fewer nodes there will be. Those few nodes will be big server farms. The rest will be client nodes that only do transactions and don’t generate.
Satoshi Nakamoto: 29th July 2010
But only three days after publishing his white paper, Nakamoto understood that this assumption would become false:
At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware.
He didn't change his mind. On 29th July 2010, less than five months before he went silent, he made the same point:
The current system where every user is a network node is not the intended configuration for large scale. ... The design supports letting users just be users. The more burden it is to run a node, the fewer nodes there will be. Those few nodes will be big server farms.
"Letting users be users" necessarily means that the "users" have to trust the "few nodes" to include their transactions in blocks. The very strong economies of scale of technology in general and "big server farms" in particular meant that the centralizing force described in W. Brian Arthur's 1994 book Increasing Returns and Path Dependence in the Economy resulted in there being "fewer nodes". Indeed, on 13th June 2014 a single node controlled 51% of Bitcoin's mining, the GHash pool.[4]

Trustless because decentralized (2)

In June 2022 Cooperation among an anonymous group protected Bitcoin during failures of decentralization by Alyssa Blackburn et al showed that it had not been decentralized from the very start. The same month a DARPA-sponsored report entitled Are Blockchains Decentralized? by a large team from the Trail of Bits security company examined the economic and many other centralizing forces affecting a wide range of blockchain implementations and concluded that the answer to their question is "No".[5]

The same centralizing economic forces apply to Proof-of-Stake blockchains such as Ethereum. Grant's Memo to the bitcoiners explained the process last February.

Trustless because decentralized (3)

Another centralizing force drives pools like GHash. The network creates a new block and rewards the selected node about every ten minutes. Assuming they're all state-of-the-art, there are currently about 15M rigs mining Bitcoin[6]. Their economic life is around 18 months, so only 0.5% of them will ever earn a reward. The owners of mining rigs pool their efforts, converting a small chance of a huge reward into a steady flow of smaller rewards. On average GHash was getting three rewards an hour.
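The arithmetic behind that 0.5% figure, and the variance reduction that drives pooling, can be checked in a few lines (using the talk's round numbers, not live chain data):

```python
# Round numbers from the talk: ~15M state-of-the-art rigs, one block
# reward roughly every ten minutes, ~18-month economic life per rig.
rigs = 15_000_000
blocks_per_day = 24 * 6              # one reward per ten minutes
lifetime_days = 18 * 30              # ~18-month economic life

rewards_in_lifetime = blocks_per_day * lifetime_days   # 77,760 rewards
expected_per_rig = rewards_in_lifetime / rigs
print(f"fraction of rigs ever rewarded: {expected_per_rig:.1%}")  # ~0.5%

# Pooling converts a tiny chance of a huge reward into a steady flow:
# a pool with about half the hash power (GHash at its peak) expects
pool_share = 0.5
print(f"pool rewards per hour: {pool_share * 6:.1f}")             # 3.0
```

A solo miner faces a lottery with a 1-in-200 lifetime chance; the pool smooths that into a predictable income stream, which is precisely the centralizing incentive.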

A medium of exchange (1)

Quote from: Insti, July 17, 2010, 02:33:41 AM
How would a Bitcoin snack machine work?
  1. You want to walk up to the machine. Send it a bitcoin.
  2. ?
  3. Walk away eating your nice sugary snack. (Profit!)
You don’t want to have to wait an hour for you transaction to be confirmed.

The vending machine company doesn’t want to give away lots of free candy.

How does step 2 work?
I believe it’ll be possible for a payment processing company to provide as a service the rapid distribution of transactions with good-enough checking in something like 10 seconds or less.
Satoshi Nakamoto: 17th July 2010
Bitcoin's ten-minute block time is a problem for real-world buying and selling[7], but the problem is even worse. Network delays mean a transaction isn't final when you see it in a block. Assuming no-one controlled more than 10% of the hashing power, Nakamoto required another 5 blocks to have been added to the chain, so 99.9% finality would take an hour. With a more realistic 30%, the rule should have been 23 blocks, with finality taking 4 hours[8].
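These confirmation counts come from the calculation in Nakamoto's own whitepaper, which models the attacker's progress as a Poisson process. A direct Python transcription of that calculation:

```python
import math

def catchup_probability(q, z):
    """Nakamoto's whitepaper estimate of the probability that an attacker
    with fraction q of the hash power rewrites a transaction buried under
    z further blocks (honest fraction p = 1 - q)."""
    p = 1.0 - q
    lam = z * (q / p)   # expected attacker progress while z honest blocks are mined
    total = 0.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        total += poisson * (1.0 - (q / p) ** (z - k))
    return 1.0 - total

# With 10% attacker hash power, 5 extra blocks give ~99.9% finality;
# at 30%, the same assurance needs roughly two dozen blocks (~4 hours).
print(catchup_probability(0.10, 5))    # ≈0.0009
print(catchup_probability(0.30, 24))   # ≈0.0009
```

The attacker's success probability falls off geometrically with z, but the base of that geometric decay worsens rapidly as q grows, which is why the 10%-to-30% change multiplies the waiting time by roughly five.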

Nakamoto's 17th July 2010 exchange with Insti shows he understood that the Bitcoin network couldn't be used for ATMs, vending machines, buying drugs or other face-to-face transactions because he went on to describe how a payment processing service layered on top of it would work.

A medium of exchange (2)

assuming that the two sides are rational actors and the smart contract language is Turing-complete, there is no escrow smart contract that can facilitate this exchange without either relying on third parties or enabling at least one side to extort the other.

two-party escrow smart contracts are ... simply a game of who gets to declare their choice first and commit it on the blockchain sooner, hence forcing the other party to concur with their choice. The order of transactions on a blockchain is essentially decided by the miners. Thus, the party with better connectivity to the miners or who is willing to pay higher transaction fees, would be able to declare their choice to the smart contract first and extort the other party.
Amir Kafshdar Goharshady, Irrationality, Extortion, or Trusted Third-parties: Why it is Impossible to Buy and Sell Physical Goods Securely on the Blockchain
The situation is even worse when it comes to buying and selling real-world objects via programmable blockchains such as Ethereum[9]. In 2021 Amir Kafshdar Goharshady showed that[10]:
assuming that the two sides are rational actors and the smart contract language is Turing-complete, there is no escrow smart contract that can facilitate this exchange without either relying on third parties or enabling at least one side to extort the other.
Goharshady noted that:
on the Ethereum blockchain escrows with trusted third-parties are used more often than two-party escrows, presumably because they allow dispute resolution by a human.
And goes on to show that in practice trusted third-party escrow services are essential because two-party escrow smart contracts are:
simply a game of who gets to declare their choice first and commit it on the blockchain sooner, hence forcing the other party to concur with their choice. The order of transactions on a blockchain is essentially decided by the miners. Thus, the party with better connectivity to the miners or who is willing to pay higher transaction fees, would be able to declare their choice to the smart contract first and extort the other party.
The choice being whether or not the good had been delivered. Given the current enthusiasm for tokenization of physical goods the market for trusted escrow services looks bright.

Fast transactions

Actually the delay between submitting a transaction and finality is unpredictable and can be much longer than an hour. Transactions are validated by miners then added to the mempool of pending transactions where they wait until either:
  • The selected network node chooses it as one of the most profitable to include in its block.
  • It reaches either its specified timeout or the default of 2 weeks.
Mempool count
This year the demand for transactions has been low, typically under 4 per second, so the backlog has been low, around 40K transactions or under three hours' worth. Last October it peaked at around 14 hours' worth.

The distribution of transaction wait times is highly skewed. The median wait is typically around a block time. The proportion of low-fee transactions means the average wait is normally around 10 times that. But when everyone wants to transact the ratio spikes to over 40 times.

Cheap transactions

Average fee/transaction
There are two ways miners can profit from including a transaction in a block:
  • The fee to be paid to the miner which the user chose to include in the transaction. In effect, transaction slots are auctioned off.
  • The transactions the miner included in the block to front- and back-run the user's transaction, called Maximal Extractable Value[11]:
    Maximal extractable value (MEV) refers to the maximum value that can be extracted from block production in excess of the standard block reward and gas fees by including, excluding, and changing the order of transactions in a block.
The block size limit means there is a fixed supply of transaction slots, about 7 per second, but the demand for them varies, and thus so does the price. In normal times the auction for transaction fees means they are much smaller than the block reward. But when everyone wants to transact they suffer massive spikes.
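In its simplest form the auction for the fixed supply of slots is a greedy selection by fee. A sketch (a simplification: real miners rank by fee per byte and also weigh MEV, which this ignores):

```python
def select_transactions(mempool, slots):
    """Greedy sketch of the fee auction: with a fixed number of slots per
    block, miners take the highest-fee pending transactions first."""
    by_fee = sorted(mempool, key=lambda tx: tx["fee"], reverse=True)
    return by_fee[:slots]

# Hypothetical mempool: when demand exceeds the slot supply, low-fee
# transactions wait, and users bid up fees to jump the queue.
mempool = [
    {"id": "a", "fee": 1},
    {"id": "b", "fee": 50},   # fee spike: a user bidding to transact now
    {"id": "c", "fee": 5},
    {"id": "d", "fee": 2},
]
block = select_transactions(mempool, slots=2)
print([tx["id"] for tx in block])   # ['b', 'c']
```

With supply fixed at about 7 slots per second, any surge in demand shows up entirely in the clearing price, which is why fees spike exactly when everyone wants to transact.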

Secured by Proof-of-Work (1)

In cryptocurrencies "secured" means that the cost of an attack exceeds the potential loot. The security provided by Proof-of-Work is linear in its cost, unlike techniques such as encryption, whose security is exponential in cost. It is generally believed that it is impractical to reverse a Bitcoin transaction after about an hour because the miners are wasting such immense sums on Proof-of-Work. Bitcoin pays these immense sums, but it doesn't get the decentralization they ostensibly pay for.

Monero, a privacy-focused blockchain network, has been undergoing an attempted 51% attack — an existential threat to any blockchain. In the case of a successful 51% attack, where a single entity becomes responsible for 51% or more of a blockchain's mining power, the controlling entity could reorganize blocks, attempt to double-spend, or censor transactions.

A company called Qubic has been waging the 51% attack by offering economic rewards for miners who join the Qubic mining pool. They claim to be "stress testing" Monero, though many in the Monero community have condemned Qubic for what they see as a malicious attack on the network or a marketing stunt.
Molly White: Monero faces 51% attack
The advent of "mining as a service" about 7 years ago made 51% attacks against smaller Proof-of-Work alt-coins such as Bitcoin Gold endemic. In August Molly White reported that Monero faces 51% attack.

In 2018's The Economic Limits Of Bitcoin And The Blockchain Eric Budish of the Booth School analyzed two versions of the 51% attack. I summarized his analysis of the classic multiple spend attack thus:
Note that only Bitcoin and Ethereum among cryptocurrencies with "market cap" over $100M would cost more than $100K to attack. The total "market cap" of these 8 currencies is $271.71B and the total cost to 51% attack them is $1.277M or 4.7E-6 of their market cap.
His key insight was that to ensure that 51% attacks were uneconomic, the reward for a block, implicitly the transaction tax, plus the fees had to be greater than the maximum value of the transactions in it. The total transaction cost (reward + fee) typically peaks around 1.8% but is normally between 0.6% and 0.8%, or around 150 times less than Budish's safety criterion. The result is that a conspiracy between a few large pools could find it economic to mount a 51% attack.
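Budish's criterion, and the roughly 150-fold shortfall the talk cites, reduce to a one-line comparison (illustrative round numbers, not live chain data):

```python
# Budish: for a 51% double-spend to be uneconomic, the block reward plus
# fees (the implicit transaction tax) must exceed the value of the
# transactions in the block.
block_value = 100.0        # value transacted per block, arbitrary units
cost_rate = 0.007          # reward + fees as a fraction of value (~0.6-0.8%)

security_budget = block_value * cost_rate
shortfall = block_value / security_budget
print(f"budget falls short of Budish's criterion by ~{shortfall:.0f}x")
```

At a 0.7% total transaction cost the security budget covers about 1/143 of what Budish's criterion requires, which is the sense in which Bitcoin's security rests on the large pools choosing not to attack.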

Secured by Proof-of-Work (2)

However, ∆attack is something of a “pick your poison” parameter. If ∆attack is small, then the system is vulnerable to the double-spending attack ... and the implicit transactions tax on economic activity using the blockchain has to be high. If ∆attack is large, then a short time period of access to a large amount of computing power can sabotage the blockchain.
Eric Budish: The Economic Limits Of Bitcoin And The Blockchain
But everyone assumes the pools won't do that. Budish further analyzed the effects of a multiple spend attack. It would be public, so it would in effect be sabotage, decreasing the Bitcoin price by a factor ∆attack. He concludes that if the decrease is small, then double-spending attacks are feasible and the per-block reward plus fee must be large, whereas if it is large then access to the hash power of a few large pools can quickly sabotage the currency.

The implication is that miners, motivated to keep fees manageable, believe ∆attack is large. Thus Bitcoin is secure because those who could kill the golden goose don't want to.

Secured by Proof-of-Work (3)

proof-of-work can only achieve payment security if mining income is high, but the transaction market cannot generate an adequate level of income. ... the economic design of the transaction market fails to generate high enough fees.
Raphael Auer: Beyond the doomsday economics of “proof-of-work” in cryptocurrencies
The following year, in Beyond the doomsday economics of “proof-of-work” in cryptocurrencies, Raphael Auer of the Bank for International Settlements showed that the problem Budish identified was inevitable[12]:
proof-of-work can only achieve payment security if mining income is high, but the transaction market cannot generate an adequate level of income. ... the economic design of the transaction market fails to generate high enough fees.
In other words, the security of Bitcoin's blockchain depends upon inflating the currency with block rewards. This problem is exacerbated by Bitcoin's regular "halvenings" reducing the block reward. To maintain miners' current income after the next halvening in less than three years the "price" would need to be over $200K; security depends upon the "price" appreciating faster than 20%/year.
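The halvening arithmetic behind the $200K figure is simple (a sketch assuming the current 3.125 BTC subsidy and a round $100K "price"; fees ignored):

```python
# To keep miners' dollar income constant across a halvening, the "price"
# must double. Assumed round numbers, not live market data:
subsidy_now = 3.125          # current per-block subsidy in BTC
price_now = 100_000          # assumed current "price" in dollars
income_now = subsidy_now * price_now       # dollars per block today

subsidy_after = subsidy_now / 2            # 1.5625 BTC after the halvening
price_needed = income_now / subsidy_after
print(price_needed)                        # 200000.0, i.e. "over $200K"
```

Since the subsidy halves roughly every four years, holding the security budget constant requires the "price" to double on the same schedule, i.e. compound appreciation of roughly 20% per year, indefinitely.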

Once the block reward gets small, safety requires the fees in a block to be worth more than the value of the transactions in it. But everybody has decided to ignore Budish and Auer.

Secured by Proof-of-Work (4)

Farokhnia Table 1
In 2024 Soroush Farokhnia & Amir Kafshdar Goharshady's Options and Futures Imperil Bitcoin's Security:
showed that (i) a successful block-reverting attack does not necessarily require ... a majority of the hash power; (ii) obtaining a majority of the hash power ... costs roughly 6.77 billion ... and (iii) Bitcoin derivatives, i.e. options and futures, imperil Bitcoin’s security by creating an incentive for a block-reverting/majority attack.
They assume that an attacker would purchase enough state-of-the-art hardware for the attack. Given Bitmain's dominance in mining ASICs, such a purchase is unlikely to be feasible.

Secured by Proof-of-Work (5)

Ferreira Table 1
But it would not be necessary. Mining is a very competitive business, and power is the major cost[13]. Making a profit requires both cheap power and early access to the latest, most efficient chips. So it wasn't a surprise that Ferreira et al's Corporate capture of blockchain governance showed that:
As of March 2021, the pools in Table 1 collectively accounted for 86% of the total hash rate employed. All but one pool (Binance) have known links to Bitmain Technologies, the largest mining ASIC producer. [14]

Secured by Proof-of-Work (6)

Mining Pools 5/17/24
Bitmain, a Chinese company, exerts significant control over Bitcoin. China has firmly suppressed domestic use of cryptocurrencies, whereas the current administration seems intent on integrating them (and their inevitable grifts) into the US financial system. Except for Bitmain, no-one in China gets eggs from the golden goose. This asymmetry provides China with a way to disrupt the US financial system.

Mining Pools 4/30/25
It would be important to prevent the disruption being attributed to China. A necessary precursor would therefore be to obscure the extent of the Bitmain-affiliated pools' mining power. This has been a significant trend in the past year; note the change in the "unknown" in the graphs from 38 to 305. There could be other explanations but, intentional or not, this is creating a weapon.[15] Caveat 23rd November 2025: This appears to be an artifact of poor data collection, see comment below.

Secured by cryptography (1)

The dollars in your bank account are simply an entry in the bank's private ledger tagged with your name. You control this entry, but what you own is a claim on the bank[16]. Similarly, your cryptocurrency coins are effectively an entry in a public ledger tagged with the public half of a key pair. The two differences are that:
  • No ownership is involved, so you have no recourse if something goes wrong.
  • Anyone who knows the secret half of the key pair controls the entry. Since it is extremely difficult to stop online secrets leaking, something is likely to go wrong[17].
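The control model can be illustrated with a deliberately simplified toy ledger. Real Bitcoin uses ECDSA key pairs and signed transactions; here a hash preimage stands in for the secret half of the key, purely to show that "control" is nothing more than knowledge of a secret:

```python
import hashlib
import secrets

# Toy public ledger: balances keyed by a public identifier.
ledger = {}

secret = secrets.token_bytes(32)                 # the "private" half
address = hashlib.sha256(secret).hexdigest()     # the public half
ledger[address] = 10                             # an entry tagged with it

def spend(ledger, address, claimed_secret, dest, amount):
    """Anyone presenting the matching secret controls the entry."""
    if hashlib.sha256(claimed_secret).hexdigest() != address:
        raise PermissionError("secret does not match the public half")
    ledger[address] -= amount
    ledger[dest] = ledger.get(dest, 0) + amount

spend(ledger, address, secret, "someone-else", 4)
print(ledger[address])   # → 6
```

Note what is absent: there is no notion of an owner, only of whoever knows the secret, which is why a leaked key is equivalent to a drained account.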
XKCD #538
The secret half of your key can leak via what Randall Munroe depicted as a "wrench attack", via phishing, social engineering, software supply chain attacks[18], and other forms of malware. Preventing these risks requires you to maintain an extraordinary level of operational security.

Secured by cryptography (2)

Even perfect opsec may not be enough. Bitcoin and most cryptocurrencies use two cryptographic algorithms, SHA256 for hashing and ECDSA for signatures.
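The hashing half is available in any standard library. A minimal illustration of Bitcoin's double application of SHA-256 (used for block headers and transaction IDs); ECDSA signing requires a third-party library and is omitted:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin applies SHA-256 twice when hashing block headers
    # and transaction IDs.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

print(double_sha256(b"example block header").hex())
```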

Quote from: llama on July 01, 2010, 10:21:47 PM
Satoshi, That would indeed be a solution if SHA was broken (certainly the more likely meltdown), because we could still recognize valid money owners by their signature (their private key would still be secure).

However, if something happened and the signatures were compromised (perhaps integer factorization is solved, quantum computers?), then even agreeing upon the last valid block would be worthless.
True, if it happened suddenly. If it happens gradually, we can still transition to something stronger. When you run the upgraded software for the first time, it would re-sign all your money with the new stronger signature algorithm. (by creating a transaction sending the money to yourself with the stronger sig)
Satoshi Nakamoto: 10th July 2010
On 10th July 2010 Nakamoto addressed the issue of what would happen if either of these algorithms were compromised. There are three problems with his response: such a compromise is likely in the near future, when it happens Nakamoto's fix will be inadequate, and there is a huge incentive for it to happen suddenly:

Secured by cryptography (3)

Divesh Aggarwal et al's 2019 paper Quantum attacks on Bitcoin, and how to protect against them noted that:
the elliptic curve signature scheme used by Bitcoin is much more at risk, and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates.
Their "most optimistic estimates" are likely to be correct; PsiQuantum expects to have two 1M qubit computers operational in 2027[19]. Each should be capable of breaking an ECDSA key in under a week.

Bitcoin's transition to post-quantum cryptography faces a major problem because, to transfer coins from an ECDSA wallet to a post-quantum wallet, you need the key for the ECDSA wallet. Chainalysis estimates that:
about 20% of all Bitcoins have been "lost", or in other words are sitting in wallets whose keys are inaccessible
An example is the notorious hard disk in the garbage dump. A sufficiently powerful quantum computer could recover the lost keys.

The incentive for it to happen suddenly is that, even if Nakamoto's fix were in place, someone with access to the first sufficiently powerful quantum computer could transfer 20% of all Bitcoin, currently worth $460B, to post-quantum wallets they controlled. This would be a 230x return on the investment in PsiQuantum.
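The arithmetic behind the 230x figure is straightforward. The lost-coin value is from the text; the cost of the quantum computers is an assumption for illustration, not a figure from this post:

```python
# Illustrative arithmetic behind the 230x return claim.
lost_fraction = 0.20    # Chainalysis estimate of "lost" coins
lost_value = 460e9      # their current value per the text (USD)
total_btc_value = lost_value / lost_fraction   # implies ≈ $2.3T in total

quantum_cost = 2e9      # assumed investment in PsiQuantum (USD)
print(lost_value / quantum_cost)   # → 230.0
```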

Privacy-preserving

In Bitcoin: A Peer-to-Peer Electronic Cash System, Nakamoto addressed the concern that, unlike DigiCash, Bitcoin wasn't anonymous because its blockchain was public:
privacy can still be maintained by breaking the flow of information in another place: by keeping public keys anonymous. The public can see that someone is sending an amount to someone else, but without information linking the transaction to anyone.
This is true but misleading. In practice, users need to use exchanges and other services that can tie them to a public key. There is a flourishing ecosystem of companies that deanonymize wallets by tracing the web of transactions. Nakamoto added:
As an additional firewall, a new key pair should be used for each transaction to keep them from being linked to a common owner.
This advice is just unrealistic. As Molly White wrote[20]:
funds in a wallet have to come from somewhere, and it’s not difficult to infer what might be happening when your known wallet address suddenly transfers money off to a new, empty wallet.
Nakamoto acknowledged:
Some linking is still unavoidable with multi-input transactions, which necessarily reveal that their inputs were owned by the same owner. The risk is that if the owner of a key is revealed, linking could reveal other transactions that belonged to the same owner.
For more than a decade Jameson Lopp has been tracking what happens when a wallet with significant value is deanonymized, and it is a serious risk to life and limb[21].

One more risk

I have steered clear of the financial risks of cryptocurrencies. It may appear that the endorsement of the current administration has effectively removed their financial risk. But the technical and operational risks remain, and I should note another technology-related risk.

Source
Equities are currently being inflated by the AI bubble. The AI platforms are running the drug-dealer's algorithm, "the first one's free", burning cash by offering their product free or massively under-priced. This cannot last; only 8% of their users would pay even the current price. OpenAI's August launch of GPT-5, which was about cost-cutting not better functionality, and Anthropic's cost increases were both panned by the customers who do pay. AI may deliver some value, but it doesn't come close to the cost of delivering it[22].

There is likely to be an epic AI equity bust. Analogies are being drawn to the telecom boom, but The Economist reckons[23]:
the potential AI bubble lags behind only the three gigantic railway busts of the 19th century.
Source
History shows a fairly strong and increasing correlation between equities and cryptocurrencies, so they will get dragged down too. The automatic liquidation of leveraged long positions in DeFi will start, causing a self-reinforcing downturn. Periods of heavy load such as this tend to reveal bugs in IT systems, and especially in "smart contracts", as their assumptions of adequate resources and timely responses are violated.

Source
Experience shows that Bitcoin's limited transaction rate and the fact that the Ethereum computer that runs all the "smart contracts" is 1000 times slower than a $50 Raspberry Pi 4[24] lead to major slow-downs and fee spikes during panic selling, exacerbated by the fact that the panic sales are public[25].

Conclusion

The fascinating thing about cryptocurrency technology is the number of ways people have developed, and how much they are willing to pay, to avoid actually using it. What other transformative technology has had people desperate not to use it?

The whole apparatus of TradFi has been re-erected on top of this much worse infrastructure, including exchanges, closed-end funds, ETFs, rehypothecation, and derivatives. Clearly, the only reason for doing so is to escape regulation and extract excess profits from what would otherwise be crimes.

Footnotes

  1. The cause was the video of a talk I gave at Stanford in 2022 entitled Can We Mitigate The Externalities Of Cryptocurrencies?. It was an updated version of a talk at the 2021 TTI/Vanguard conference. The talk conformed to Betteridge's Law of Headlines in that the answer was "no".
  2. Paper libraries form a model fault-tolerant system: highly replicated and decentralized. Libraries cooperate via inter-library loan and copying to deliver a service that is far more reliable than any individual library.
  3. The importance Satoshi Nakamoto attached to trustlessness can be seen from his release note for Bitcoin 0.1:
    The root problem with conventional currency is all the trust that's required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust. Banks must be trusted to hold our money and transfer it electronically, but they lend it out in waves of credit bubbles with barely a fraction in reserve. We have to trust them with our privacy, trust them not to let identity thieves drain our accounts. Their massive overhead costs make micropayments impossible.
    The problem with this ideology is that trust (but verify) is an incredibly effective optimization in almost any system. For example, Robert Putnam et al's Making Democracy Work: Civic Traditions in Modern Italy shows that the difference between the economies of Northern and Southern Italy is driven by the much higher level of trust in the North.

    Bitcoin's massive cost is a result of its lack of trust. Users pay this massive cost but they don't get a trustless system, they just get a system that makes the trust a bit harder to see.

    In response to Nakamoto's diatribe, note that:
    • "trusted not to debase the currency", but Bitcoin's security depends upon debasing the currency.
    • "waves of credit bubbles", is a pretty good description of the cryptocurrency market.
    • "not to let identity thieves drain our accounts", see Molly White's Web3 is Going Just Great.
    • "massive overhead costs". The current cost per transaction is around $100.
    I rest my case.
  4. The problem of trusting mining pools is actually much worse. There is nothing to stop pools coordinating. In 2017 Vitalik Buterin, co-founder of Ethereum, published The Meaning of Decentralization:
    In the case of blockchain protocols, the mathematical and economic reasoning behind the safety of the consensus often relies crucially on the uncoordinated choice model, or the assumption that the game consists of many small actors that make decisions independently. If any one actor gets more than 1/3 of the mining power in a proof of work system, they can gain outsized profits by selfish-mining. However, can we really say that the uncoordinated choice model is realistic when 90% of the Bitcoin network’s mining power is well-coordinated enough to show up together at the same conference?
    See "Sufficiently Decentralized" for a review of evidence from a Protos article entitled New research suggests Bitcoin mining centralized around Bitmain that concludes:
    In all, it seems unlikely that up to nine major bitcoin mining pools use a shared custodian for coinbase rewards unless a single entity is behind all of their operations.
    The "single entity" is clearly Bitmain.
  5. Peter Ryan, a reformed Bitcoin enthusiast, noted another form of centralization in Money by Vile Means:
    Bitcoin is anything but decentralized: Its functionality is maintained by a small and privileged clique of software developers who are funded by a centralized cadre of institutions. If they wanted to change Bitcoin’s 21 million coin finite supply, they could do it with the click of a keyboard.
    His account of the politics behind the argument over raising the Bitcoin block size should dispel any idea of Bitcoin's decentralized nature. He also notes:
    By one estimate from Hashrate Index, Foundry USA and Singapore-based AntPool control more than 50 percent of computing power, and the top ten mining pools control over 90 percent. Bitcoin blogger 0xB10C, who analyzed mining data as of April 15, 2025, found that centralization has gone even further than this, “with only six pools mining more than 95 percent of the blocks.”
  6. The Bitmain S17 comes in 4 versions with hash rates from 67 to 76 TH/s. Let's assume 70TH/s. As I write, the Bitcoin hash rate is about 1 billion TH/s, so if they were all mid-range S17s there would be around 15M mining. If their economic life were 18 months, there would be 77,760 rewards in that period. Thus only 0.5% of them would earn a reward.

    In December 2021 Alex de Vries and Christian Stoll estimated that:
    The average time to become unprofitable sums up to less than 1.29 years.
    It has been obvious since mining ASICs first hit the market that, apart from access to cheap or free electricity, there were two keys to profitable mining:
    1. Having close enough ties to Bitmain to get the latest chips early in their 18-month economic life.
    2. Having the scale to buy Bitmain chips in the large quantities that get you early access.
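    The arithmetic in this footnote can be sketched as follows (all inputs are the approximations given above):

```python
# What fraction of mid-range S17 rigs ever earn a block reward
# over an 18-month economic life? All inputs are approximate.
network_hashrate_th = 1e9    # total network hash rate, TH/s
rig_hashrate_th = 70         # mid-range Bitmain S17, TH/s
rigs = network_hashrate_th / rig_hashrate_th   # ≈ 14.3M rigs

blocks_per_hour = 6
rewards = blocks_per_hour * 24 * 540   # 540 days ≈ 18 months → 77,760

print(f"{rewards / rigs:.2%}")   # → 0.54%
```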
  7. See David Gerard's account of Steve Early's experiences accepting Bitcoin in his chain of pubs in Attack of the 50 Foot Blockchain Page 94.

    Chart 1
    U.S. Consumers’ Use of Cryptocurrency for Payments by Fumiko Hayashi and Aditi Routh of the Kansas City Fed reports that:
    The share of U.S. consumers who report using cryptocurrency for payments—purchases, money transfers, or both—has been very small and has declined slightly in recent years. The light blue line in Chart 1 shows that this share declined from nearly 3 percent in 2021 and 2022 to less than 2 percent in 2023 and 2024.
  8. User DeathAndTaxes on Stack Exchange explains the 6 block rule:
    p is the chance of attacker eventually getting longer chain and reversing a transaction (0.1% in this case). q is the % of the hashing power the attacker controls. z is the number of blocks to put the risk of a reversal below p (0.1%).

    So you can see if the attacker has a small % of the hashing power 6 blocks is sufficient. Remember 10% of the network at the time of writing is ~100GH/s. However if the attacker had greater % of hashing power it would take increasingly longer to be sure a transaction can't be reversed.

    If the attacker had significantly more hashpower say 25% of the network it would require 15 confirmation to be sure (99.9% probability) that an attacker can't reverse it.
    For example, last May Foundry USA had more than 30% of the hash power, so the rule should have been 24 not 6, and finality should have taken 4 hours.
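    The rule comes from the attacker-catch-up calculation in the Bitcoin whitepaper, which is short enough to transcribe directly into Python (a sketch of Nakamoto's formula, not production code):

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability an attacker with fraction q of the hash power ever
    overtakes the honest chain from z confirmations behind, per the
    Poisson-based calculation in the Bitcoin whitepaper."""
    if q >= 0.5:
        return 1.0   # a majority attacker always succeeds eventually
    p = 1.0 - q
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

# With q = 10%, six confirmations put the reversal probability
# well below the whitepaper's 0.1% threshold (≈ 0.024%).
print(f"{attacker_success(0.10, 6):.6f}")
```

    With q = 30%, as in the Foundry USA example above, it takes 24 confirmations to get back below the 0.1% threshold, matching the whitepaper's table.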
  9. To be fair, Ethereum has introduced at least one genuine innovation, Flash Loans. In Flash loans, flash attacks, and the future of DeFi Aidan Saggers, Lukas Alemu and Irina Mnohoghitnei of the Bank of England provide an excellent overview of them. Back in 2021 Kaihua Qin, Liyi Zhou, Benjamin Livshits, and Arthur Gervais from Imperial College posted Attacking the defi ecosystem with flash loans for fun and profit, analyzing and optimizing two early flash loan attacks:
    We show quantitatively how transaction atomicity increases the arbitrage revenue. We moreover analyze two existing attacks with ROIs beyond 500k%. We formulate finding the attack parameters as an optimization problem over the state of the underlying Ethereum blockchain and the state of the DeFi ecosystem. We show how malicious adversaries can efficiently maximize an attack profit and hence damage the DeFi ecosystem further. Specifically, we present how two previously executed attacks can be “boosted” to result in a profit of 829.5k USD and 1.1M USD, respectively, which is a boost of 2.37× and 1.73×, respectively.
    They predicted an upsurge in attacks since "flash loans democratize the attack, opening this strategy to the masses". They were right, as you can see from Molly White's list of flash loan attacks.
  10. This is one of a whole series of Impossibilities, many imposed on Ethereum by fundamental results in computer science because it is a Turing-complete programming environment.
  11. For details of the story behind Miners' Extractable Value (MEV), see these posts:
    1. The Order Flow from November 2020.
    2. Ethereum Has Issues from April 2022.
    3. Miners' Extractable Value From September 2022.
    Source
    The first links to two must-read posts. The first is from Dan Robinson and Georgios Konstantopoulos, Ethereum is a Dark Forest:
    It’s no secret that the Ethereum blockchain is a highly adversarial environment. If a smart contract can be exploited for profit, it eventually will be. The frequency of new hacks indicates that some very smart people spend a lot of time examining contracts for vulnerabilities.

    But this unforgiving environment pales in comparison to the mempool (the set of pending, unconfirmed transactions). If the chain itself is a battleground, the mempool is something worse: a dark forest.
    The second is from Samczsun, Escaping the Dark Forest. It is an account of how:
    On September 15, 2020, a small group of people worked through the night to rescue over 9.6MM USD from a vulnerable smart contract.
    Note in particular that MEV poses a risk to the integrity of blockchains. In Extracting Godl [sic] from the Salt Mines: Ethereum Miners Extracting Value Julien Piet, Jaiden Fairoze and Nicholas Weaver examine the use of transactions that avoid the mempool, finding that:
    (i) 73% of private transactions hide trading activity or re-distribute miner rewards, and 87.6% of MEV collection is accomplished with privately submitted transactions, (ii) our algorithm finds more than $6M worth of MEV profit in a period of 12 days, two thirds of which go directly to miners, and (iii) MEV represents 9.2% of miners' profit from transaction fees.

    Furthermore, in those 12 days, we also identify four blocks that contain enough MEV profits to make time-bandit forking attacks economically viable for large miners, undermining the security and stability of Ethereum as a whole.
    When they say "large miners" they mean more than 10% of the power.
  12. Back in 2016 Arvind Narayanan's group at Princeton had published a related instability in Carlsten et al's On the instability of bitcoin without the block reward. Narayanan summarized the paper in a blog post:
    Our key insight is that with only transaction fees, the variance of the miner reward is very high due to the randomness of the block arrival time, and it becomes attractive to fork a “wealthy” block to “steal” the rewards therein.
  13. The leading source of data on which to base Bitcoin's carbon footprint is the Cambridge Bitcoin Energy Consumption Index. As I write their central estimate is that Bitcoin consumes 205TWh/year, or between Thailand and Vietnam.
  14. Ferreira et al write:
    AntPool and BTC.com are fully-owned subsidiaries of Bitmain. Bitmain is the largest investor in ViaBTC. Both F2Pool and BTC.TOP are partners of BitDeer, which is a Bitmain-sponsored cloud-mining service. The parent companies of Huobi.pool and OkExPool are strategic partners of Bitmain. Jihan Wu, Bitmain’s founder and chairman, is also an adviser of Huobi (one of the largest cryptocurrency exchanges in the world and the owner of Huobi.pool).
    This makes economic sense. Because mining rigs depreciate quickly, profit depends upon early access to the latest chips.
  15. See Who Is Mining Bitcoin? for more detail on the state of mining and its gradual obfuscation.
  16. In this context to say you "control" your entry in the bank's ledger is an oversimplification. You can instruct the bank to perform transactions against your entry (and no-one else's), but the bank can reject your instructions, for example if they would overdraw your account or send money to a sanctioned account. The key point is that your ownership relationship with the bank comes with a dispute resolution system and the ability to reverse transactions. Your cryptocurrency wallet has neither.
  17. Web3 is Going Just Great is Molly White's list of things that went wrong. The cumulative losses she tracks currently stand at over $79B.
  18. Your secrets are especially at risk if anyone in your software supply chain uses a build system implemented with AI "vibe coding". David Gerard's Vibe-coded build system NX gets hacked, steals vibe-coders’ crypto describes a truly beautiful example of the extraordinary level of incompetence this reveals.
  19. IBM's Heron, which HSBC recently used to grab headlines, has 156 qubits.
  20. Molly White's Abuse and harassment on the blockchain is an excellent overview of the privacy risks inherent to real-world transactions on public blockchain ledgers:
    Imagine if, when you Venmo-ed your Tinder date for your half of the meal, they could now see every other transaction you’d ever made—and not just on Venmo, but the ones you made with your credit card, bank transfer, or other apps, and with no option to set the visibility of the transfer to “private”. The split checks with all of your previous Tinder dates? That monthly transfer to your therapist? The debts you’re paying off (or not), the charities to which you’re donating (or not), the amount you’re putting in a retirement account (or not)? The location of that corner store right by your apartment where you so frequently go to grab a pint of ice cream at 10pm? Not only would this all be visible to that one-off Tinder date, but also to your ex-partners, your estranged family members, your prospective employers. An abusive partner could trivially see you siphoning funds to an account they can’t control as you prepare to leave them.
  21. In The Risks Of HODL-ing I go into the details of the attack on the parents of Veer Chetal, who had unwisely live-streamed the social engineering that stole $243M from a resident of DC.

    Anyone with significant cryptocurrency wallets needs to follow Jamison Lopp's Known Physical Bitcoin Attacks.
  22. Source
    Torsten Sløk's AI Has Moved From a Niche Sector to the Primary Driver of All VC Investment leads with this graph, one of the clearest signs that we're in a bubble.

    Whether AI delivers net value in most cases is debatable. "Vibe coding" is touted as the example of increasing productivity, but the experimental evidence is that it decreases productivity. Kate Niederhoffer et al's Harvard Business Review article AI-Generated "Workslop” Is Destroying Productivity explains one effect:
    Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers. On social media, which is increasingly clogged with low-quality AI-generated posts, this content is often referred to as “AI slop.” In the context of work, we refer to this phenomenon as “workslop.” We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.

    Here’s how this happens. As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.
    David Gerard's Workslop: bad ‘study’, but an excellent word points out that:
    Unfortunately, this article pretends to be a writeup of a study — but it’s actually a promotional brochure for enterprise AI products. It’s an unlabeled advertising feature.
    And goes on to explain where the workslop comes from:
    Well, you know how you get workslop — it’s when your boss mandates you use AI. He can’t say what he wants you to use it for. But you’ve been told. You’ve got metrics on how much AI you use. They’re watching and they’re measuring.
    Belle Lin and Steven Rosenbush's Stop Worrying About AI’s Return on Investment describes goalposts being moved:
    Return on investment has evaded chief information officers since AI started moving from early experimentation to more mature implementations last year. But while AI is still rapidly evolving, CIOs are recognizing that traditional ways of recognizing gains from the technology aren’t cutting it.

    Tech leaders at the WSJ Leadership Institute’s Technology Council Summit on Tuesday said racking up a few minutes of efficiency here and there don’t add up to a meaningful way of measuring ROI.
    Given the hype and the massive sunk costs, admitting that there is no there there would be a career-limiting move.

    None of this takes account of the productivity externalities of AI, such as Librarians Are Being Asked to Find AI-Hallucinated Books, academic journals' reviewers' time wasted by AI slop papers, judges' time wasted with hallucinated citations, a flood of generated child sex abuse videos, the death of social media and a vast new cyberthreat landscape.
  23. The Economist writes in What if the AI stockmarket blows up?:
    we picked ten historical bubbles and assessed them on factors including spark, cumulative capex, capex durability and investor group. By our admittedly rough-and-ready reckoning, the potential AI bubble lags behind only the three gigantic railway busts of the 19th century.
    They note that:
    For now, the splurge looks fairly modest by historical standards. According to our most generous estimate, American AI firms have invested 3-4% of current American GDP over the past four years. British railway investment during the 1840s was around 15-20% of GDP. But if forecasts for data-centre construction are correct, that will change. What is more, an unusually large share of capital investment is being devoted to assets that depreciate quickly. Nvidia’s cutting-edge chips will look clunky in a few years’ time. We estimate that the average American tech firm’s assets have a shelf-life of just nine years, compared with 15 for telecoms assets in the 1990s.
    I think they are over-estimating the shelf-life. Like Bitcoin mining, power is a major part of AI opex. Thus the incentive to (a) retire older, less power-efficient hardware, and (b) adopt the latest data-center power technology, is overwhelming. Note that Nvidia is moving to a one-year product cadence, and even when they were on a two-year cadence Jensen claimed it wasn't worth running chips from the previous cycle. Note also that the current generation of AI systems is incompatible with the power infrastructure of older data centers, and this may well happen again in a future product generation. For example, Caiwei Chen reports in China built hundreds of AI data centers to catch the AI boom. Now many stand unused:
    The local Chinese outlets Jiazi Guangnian and 36Kr report that up to 80% of China’s newly built computing resources remain unused.
    Rogé Karma makes the same point as The Economist in Just How Bad Would an AI Bubble Be?:
    An AI-bubble crash could be different. AI-related investments have already surpassed the level that telecom hit at the peak of the dot-com boom as a share of the economy. In the first half of this year, business spending on AI added more to GDP growth than all consumer spending combined. Many experts believe that a major reason the U.S. economy has been able to weather tariffs and mass deportations without a recession is because all of this AI spending is acting, in the words of one economist, as a “massive private sector stimulus program.” An AI crash could lead broadly to less spending, fewer jobs, and slower growth, potentially dragging the economy into a recession.
  24. In 2021 Nicholas Weaver estimated that the Ethereum computer was 5000 times slower than a Raspberry Pi 4. Since then the gas limit has been raised making his current estimate only 1000 times slower.
  25. Prof. Hilary Allen writes in Fintech Dystopia that:
    if people do start dumping blockchain-based assets in fire sales, everyone will know immediately because the blockchain is publicly visible. This level of transparency will only add to the panic (at least, that’s what happened during the run on the Terra stablecoin in 2022).
    ...
    We also saw ... that assets on a blockchain can be pre-programmed to execute transactions without the intervention of any human being. In good times, this makes things more efficient – but the code will execute just as quickly in bad situations, even if everyone would be better off if it didn’t.
    She adds:
    When things are spiraling out of control like this, sometimes the best medicine is a pause. Lots of traditional financial markets close at the end of the day and on weekends, which provides a natural opportunity for a break (and if things are really bad, for emergency government intervention). But one of blockchain-based finance’s claims to greater efficiency is that operations continue 24/7. We may end up missing the pauses once they’re gone.
    In the 26th September Grant's, Joel Wallenberg notes that:
    Lucrative though they may be, the problem with stablecoin deposits is that exposure to the crypto-trading ecosystem makes them inherently correlated to it and subject to runs in a new “crypto winter,” like that of 2022–23. Indeed, since as much as 70% of gross stablecoin-transaction volume derives from automated arbitrage bots and high-speed trading algorithms, runs may be rapid and without human oversight. What may be worse, the insured banks that could feed a stablecoin boom are the very ones that are likely to require taxpayer support if liquidity dries up, and Trump-style regulation is likely to be light.
    So the loophole in the GENIUS act for banks is likely to cause contagion from cryptocurrencies via stablecoins to the US banking system.

Acknowledgments

This talk benefited greatly from critiques of drafts by Hilary Allen, David Gerard, Jon Reiter, Joel Wallenberg, and Nicholas Weaver.

Weekly Bookmarks / Ed Summers

These are some things I’ve wandered across on the web this week.

🔖 tapes.01

Minimalistic, as simple as it can be, the fewer pages and tabs it has the better - that was our focus when we were designing the interface. Tapes' interface is divided into two pages; the first page contains only essential controls - volume, sample start, macro controls - just so you can start shaping the sound right away without worrying about the details. In the end, it's the first thing you'll see after loading most of the presets, and it's a nice way to quickly find a sound you're looking for or to shape it further.

🔖 CAMP

CAMP runs five-day arts, music, writing and arts-activist sessions. These are no ordinary workshops - they are intense, artistic catalysts run by internationally acclaimed practitioners; creative flashpoints designed to change the lives of everyone involved. The workshops combine work in our well equipped facilities with projects carried out in the mountains - check out the workshops for full details.

🔖 Meta is earning a fortune on a deluge of fraudulent ads, documents show

But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain – but still believes the advertiser is a likely scammer – Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads.
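As described, the policy is a two-tier threshold on a fraud-probability score. A minimal Python sketch of that logic follows; only the 95% ban threshold comes from the reporting, while the suspect threshold and penalty multiplier are hypothetical placeholders:

```python
def handle_advertiser(fraud_probability: float, base_rate: float) -> dict:
    """Tiered policy as described in the documents: ban only near-certain
    fraudsters, charge likely scammers a penalty rate, serve the rest."""
    BAN_THRESHOLD = 0.95       # stated in the reporting
    SUSPECT_THRESHOLD = 0.70   # hypothetical: the article gives no exact cutoff
    PENALTY_MULTIPLIER = 1.5   # hypothetical surcharge factor

    if fraud_probability >= BAN_THRESHOLD:
        # Near-certain fraud: the advertiser is banned outright.
        return {"action": "ban", "rate": None}
    if fraud_probability >= SUSPECT_THRESHOLD:
        # Likely scammer: charged higher ad rates as a deterrent.
        return {"action": "penalty_rate", "rate": base_rate * PENALTY_MULTIPLIER}
    # Below suspicion: ads served at the normal rate.
    return {"action": "serve", "rate": base_rate}
```

The surcharge tier is the notable design choice: rather than blocking uncertain cases, the system prices them higher, which critics note still monetizes suspected fraud.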

🔖 The Raft Consensus Algorithm

Raft is a consensus algorithm that is designed to be easy to understand. It’s equivalent to Paxos in fault-tolerance and performance. The difference is that it’s decomposed into relatively independent subproblems, and it cleanly addresses all major pieces needed for practical systems. We hope Raft will make consensus available to a wider audience, and that this wider audience will be able to develop a variety of higher quality consensus-based systems than are available today.

🔖 anyproto/any-sync

any-sync is an open-source protocol designed for the post-cloud era, enabling high-speed, peer-to-peer synchronization of encrypted communication channels (spaces). It provides a communication layer for building private, decentralized applications offering unparalleled control, privacy, and performance.

🔖 Indexing coffee with Notion

This article is a reinterpretation of an article I wrote in 2021 on my former site. I tried to focus on the essentials: it outlines the set of Notion pages I created to index my coffee consumption and attempt to build an intuition about my tastes. In general, I think that building a knowledge base is a good practice when trying to explore a discipline. Since I am far from an expert in either coffee or Notion, much of what I describe in this article may seem naïve! To sum up, this article will present how I set up an infrastructure to index the coffees I taste, with the goal of providing precise metrics to help characterize my preferences, using the Notion tool, while also sharing some techniques and tricks I learned during the creation of this system.

🔖 Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models

We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for Large Language Models (LLMs). Across 25 frontier proprietary and open-weight models, curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90%. Mapping prompts to MLCommons and EU CoP risk taxonomies shows that poetic attacks transfer across CBRN, manipulation, cyber-offence, and loss-of-control domains. Converting 1,200 MLCommons harmful prompts into verse via a standardized meta-prompt produced ASRs up to 18 times higher than their prose baselines. Outputs are evaluated using an ensemble of 3 open-weight LLM judges, whose binary safety assessments were validated on a stratified human-labeled subset. Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems and approximately 43% for meta-prompt conversions (compared to non-poetic baselines), substantially outperforming non-poetic baselines and revealing a systematic vulnerability across model families and safety training approaches. These findings demonstrate that stylistic variation alone can circumvent contemporary safety mechanisms, suggesting fundamental limitations in current alignment methods and evaluation protocols.

🔖 FediGroups.social

The concept is simple: If you mention a FediGroup in one of your posts, it will automatically be shared with everyone who follows the group.
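The mention-then-relay behavior can be sketched in a few lines of Python; the group registry and handle format below are hypothetical illustrations, not FediGroups' actual implementation:

```python
def relay_to_group(post: str, groups: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Sketch of the mention-based relay: if a post mentions a known group
    handle, deliver a copy of the post to each of that group's followers.
    `groups` maps a group handle to its follower list (hypothetical schema)."""
    deliveries = []
    for word in post.split():
        handle = word.strip(".,!?")          # tolerate trailing punctuation
        if handle in groups:
            deliveries.extend((follower, post) for follower in groups[handle])
    return deliveries
```

For example, a post containing `@books@fedigroups.social` would be fanned out once per follower of that group.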

🔖 Monitoring machine learning models for bot detection

To train a model for the Internet is to train a model against a moving target. Anyone can train a model on static data and achieve great results — so long as the input does not change. Building a model that generalizes into the future, with new threats, browsers, and bots is a more difficult task. Machine learning monitoring is an important part of the story because it provides confidence that our models continue to generalize, using a rigorous and repeatable process.

🔖 Cloudflare outage on November 18, 2025

Cloudflare’s Bot Management includes, among other systems, a machine learning model that we use to generate bot scores for every request traversing our network. Our customers use bot scores to control which bots are allowed to access their sites — or not.

The model takes as input a “feature” configuration file. A feature, in this context, is an individual trait used by the machine learning model to make a prediction about whether the request was automated or not. The feature configuration file is a collection of individual features.

This feature file is refreshed every few minutes and published to our entire network and allows us to react to variations in traffic flows across the Internet. It allows us to react to new types of bots and new bot attacks. So it’s critical that it is rolled out frequently and rapidly as bad actors change their tactics quickly.

A change in our underlying ClickHouse query behaviour (explained below) that generates this file caused it to have a large number of duplicate “feature” rows. This changed the size of the previously fixed-size feature configuration file, causing the bots module to trigger an error.
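The failure mode described, a consumer that assumes a roughly fixed-size input choking on an unexpectedly large file, can be illustrated with a toy Python sketch. The size limit and row format here are hypothetical stand-ins for the bots module's actual fixed-size allocation:

```python
MAX_FEATURES = 200  # hypothetical fixed capacity preallocated by the consumer

def load_feature_config(rows: list[str]) -> list[str]:
    """Loader that assumes the feature file stays within a fixed size.
    Duplicate rows inflate the file past the limit and the load fails,
    analogous to the error described in the outage writeup."""
    if len(rows) > MAX_FEATURES:
        raise RuntimeError(f"feature file too large: {len(rows)} > {MAX_FEATURES}")
    return rows

features = [f"feature_{i}" for i in range(150)]
load_feature_config(features)        # normal publish: fits within the limit
try:
    load_feature_config(features * 2)  # duplicated rows: 300 exceeds the limit
except RuntimeError as exc:
    print("module error:", exc)
```

The lesson generalizes: any pipeline that republishes a generated config frequently needs the consumer to validate size and content defensively, because an upstream query change can silently alter the file's shape.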

🔖 Cost Per Article: A transparent view of what it takes to support rigorous, accessible, and sustainable science.

AIP Publishing’s mission is to advance, promote, and serve the physical sciences for the benefit of humanity. We believe that openness builds trust: in research, in publishing, and in the scientific enterprise itself.

Our 2024 cost per article represents the real investment it takes to publish a single, peer-reviewed article, based on 2024 operations. This figure reflects the full scope of services that enable trustworthy science, from editorial oversight to digital preservation.

The cost per article in 2024 was $2,700.

🔖 The Laugh of the Medusa

In the essay, Cixous issues an ultimatum: that women can either read and choose to stay trapped in their own bodies by a language that does not allow them to express themselves, or they can use the body as a way to communicate. She describes a writing style, écriture féminine, that she says attempts to move outside of the conventional rules found in patriarchal systems. She argues that écriture féminine allows women to address their needs by building strong self-narratives and identity. This text is situated in a history of feminist conversations that separated women because of their gender, especially in terms of authorship.[1] “The Laugh of the Medusa” addresses this rhetoric, writing on individuality and commanding women to use writing and the body as sources of power and inspiration.

🔖 It’s your fault my laptop knows where I am

So, that’s what Apple, Google, and Microsoft devices began doing. The location services of their products, by default, started aggregating the SSIDs and BSSIDs of Wi-Fi hotspots they could see (and their locations) and logging them for others’ devices to use for more accurate location services. And… that’s more or less the same thing that modern devices use today. When Chrome tells me that a website would like to use my location, and I allow it, the list of the surrounding hotspots will be sent to Google — which, because tens of thousands of people with GPS-enabled devices have also pinged the networks, allows my computer to obtain an extremely accurate estimation on where I am. So, thank you, everybody…?
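One common way to turn a list of visible hotspots into a position fix is a signal-strength-weighted centroid over a database of known access-point locations. The sketch below is purely illustrative: the BSSIDs, coordinates, and weighting scheme are assumptions, and the vendors' actual algorithms are unpublished:

```python
# Hypothetical database of access-point locations (BSSID -> (lat, lon)),
# standing in for the aggregated data the location services collect.
AP_DB = {
    "aa:bb:cc:00:00:01": (40.7410, -73.9897),
    "aa:bb:cc:00:00:02": (40.7412, -73.9893),
    "aa:bb:cc:00:00:03": (40.7415, -73.9890),
}

def estimate_position(scan: dict[str, float]) -> tuple[float, float]:
    """Weighted-centroid sketch: average the known locations of visible
    APs, giving stronger signals more weight. `scan` maps BSSID -> RSSI
    in dBm (values closer to 0 are stronger)."""
    total_w = lat = lon = 0.0
    for bssid, rssi in scan.items():
        if bssid not in AP_DB:
            continue                 # unknown AP: no location contribution
        w = 10 ** (rssi / 10)        # dBm to a linear power weight
        ap_lat, ap_lon = AP_DB[bssid]
        lat += w * ap_lat
        lon += w * ap_lon
        total_w += w
    if total_w == 0:
        raise ValueError("no known access points in scan")
    return lat / total_w, lon / total_w
```

With tens of thousands of GPS-tagged observations per AP in the real databases, even this simple weighting gets surprisingly accurate, which is the author's point.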

🔖 Web Archive Shapes and Schema

Web archiving is the endeavor of preserving the web. The web is born-digital and multimedial, consists of hypertext, and is made of distributed resources. As such, it is fundamentally different from other media types that are traditionally collected in archives and libraries. To govern the archival process, a data model is required that provides the flexibility to express and describe the properties of web archival materials. The Resource Description Framework (RDF) provides such a data model, built on top of and into the web technology stack.

🔖 Linked Open Usable Data for Cultural Heritage: Community Building and Semantic Interoperability in Practice

This paper presents an extended transcript of a talk given online on 18 November 2025 for the 17th Semantic Web in Libraries Conference (SWIB25). It shares key findings from my PhD thesis on Linked Open Usable Data (LOUD) for cultural heritage. My research examined how LOUD specifications like IIIF APIs and Linked Art fostered collaborative knowledge creation, focusing on implementations in both the Participatory Knowledge Practices in Analogue and Digital Image Archives (PIA) project and Yale’s LUX platform. Using a framework based on Actor-Network Theory (ANT), the analysis revealed three critical dimensions. First, sustainable development required continuous engagement beyond implementation, with community-led practices providing the socio-technical foundation for specification maintenance. Second, demographic homogeneity perpetuated biases that marginalised diverse perspectives, requiring the transformation of inclusion frameworks. Third, LOUD improved the discoverability of heritage data while requiring investment in accessibility paradigms that acknowledged technological differences. The research demonstrates that LOUD methodologies foster collaborative knowledge production through community engagement, confront power dynamics in inclusion frameworks, and provide mechanisms for democratising heritage access while accounting for technological disparities.

🔖 The Data Center Resistance Has Arrived

Georgia has become a hot spot for data center development over the past few years: Some research indicates it’s one of the fastest-growing markets for data center development in the country (thanks, in part, to some generous tax breaks). It’s also now a nexus for organizing against those same data centers. Community opposition to data centers, a new report finds, is on the rise across the country. And red states, including Georgia and Indiana, are leading this wave of bipartisan opposition.

🔖 Bubble or Nothing

Should economic conditions in the tech sector sour, the burgeoning artificial intelligence (AI) boom may evaporate—and, with it, the economic activity associated with the boom in data center development.

Policymakers concerned about the deployment of clean energy and compute-focused infrastructure over the long term need a framework for managing the uncertainty in this sector’s investment landscape—and for understanding the local and regional impacts of a market correction that strands data centers and their energy projects. This framework requires understanding how a potential downward market correction in the tech sector might occur and, if so, how to sustain investment in critical energy infrastructure assets during potentially recessionary conditions.

🔖 Haecceity

Haecceity (/hɛkˈsiːɪti, hiːk-/; from the Latin haecceitas, ‘thisness’) is a term from medieval scholastic philosophy, first coined by followers of Duns Scotus to denote a concept that he seems to have originated: the irreducible determination of a thing that makes it this particular thing. Haecceity is a person’s or object’s thisness, the individualising difference between the concept “a person” and the concept “Socrates” (i.e., a specific person). In modern philosophy of physics, it is sometimes referred to as primitive thisness.

🔖 Ears To The Ground

For the biggest artists to the most underground, field recordings have become the vital spark of electronic music. Whether documenting nature, sampling the city or capturing the atmosphere of archaeological sites, musicians are using found sounds to make sense of our world. Ears To The Ground explores the relationship between electronics, landscape and field recordings in the UK, Ireland and around the globe, discovering how producers and artists evoke the natural world, history and folklore through sampled sounds.

🔖 Bumping Into a Chair While Humming: Sounds of the Everyday, Listening, and the Potential of the Personal

Bumping Into a Chair While Humming explores the sonic potential in everyday objects, spaces, and interactions - the importance of recognizing happy accidents and using the tools at your disposal toward creative ends. It concentrates on how to create a personal soundscape by searching for the moments in one’s immediate environment that resonate for the individual, while editing, arranging, and completing work. The author, Ezekiel Honig, imparts clues into his favored music production processes, but the book is more focused on the practice of listening itself, and how that benefits one’s art, and life in general. It plays with ideas in the creative process and how to use them, through anecdotal qualities and illustrations of hypothetical moments ranging from the associations we have with inanimate objects, utilizing different types of spaces, and experimenting with rhythm.

The book is punctuated by, and highlighted with, illustrations by Asli Senel Smith – complex line drawings that abstractly define the subjects of each chapter, concretizing the concepts on the page and capturing the introspective, yet expansive tone.

🔖 Who needs Graphviz when you can build it yourself?

We recently overhauled our internal tools for visualizing the compilation of JavaScript and WebAssembly. When SpiderMonkey’s optimizing compiler, Ion, is active, we can now produce interactive graphs showing exactly how functions are processed and optimized.

We are not the first to visualize our compiler’s internal graphs, of course, nor the first to make them interactive. But I was not satisfied with the output of common tools like Graphviz or Mermaid, so I decided to create a layout algorithm specifically tailored to our needs. The resulting algorithm is simple, fast, produces surprisingly high-quality output, and can be implemented in less than a thousand lines of code. The purpose of this article is to walk you through this algorithm and the design concepts behind it.

🔖 Plunderphonics by Matthew Blackwell

In Plunderphonics, Matthew Blackwell tells the story of a group of musicians who advocated for changes to the copyright system by deploying unlicensed samples in their recordings. The composer John Oswald, who coined the genre term “plunderphonics,” was threatened with legal action by the Canadian Recording Industry Association on behalf of Michael Jackson. The Bay Area group Negativland was sued by Island Records on behalf of U2 for their parody of the band. These artists attracted media attention to their cause in a bid to expand fair use protections. Later, the Australian band the Avalanches encountered the limitations of the music licensing system during the release of their debut album, having to drop several samples that could not be successfully cleared. Finally, American DJ and producer Girl Talk released a series of albums featuring hundreds of uncleared samples and successfully avoided lawsuits by publicly arguing a fair use defense.

🔖 OpenAlex API Responses Notebook

TL;DR: there are quite a few undocumented fields returned by the API, and fields that have a different structure compared to the docs. I made a quick notebook with python dataclasses to test these issues which you can run yourself from your browser here as an app (as shown in the screenshot), or here as a notebook w/ editable source code.

What I Think About AI When I Hear About AI: A Slightly Unconventional View / Bohyun Kim

The first occasion that led me to think about artificial intelligence (AI) and machine learning (ML) in the context of libraries came in early 2017, shortly after AlphaGo had beaten Sedol Lee, the world champion of Go at that time. But until ChatGPT appeared in November 2022, AI and ML were truly a topic of curiosity mostly for technologists in the library world. It should be noted that before today’s AI boom in academia and industry, there was the emergence of data science, which garnered a lot of attention. This led many academic libraries to develop new services in research data management, with a focus on supporting students’ and researchers’ needs in developing data-related programming skills and tools, such as Python and R. But the emergence of data science and ML also led some people in the library world to delve more deeply into AI, AI literacy, and computational literacy, which is closely related to computer science. I was one of them, and I was also working as part of the team that planned and launched the AI Lab at the University of Rhode Island Libraries around 2017~2018. I did experience palpable interest in AI/ML in the local and larger communities, which enlivened our work then. But no one at that time anticipated the public adoption of AI/ML within a 4~5 year timeframe, let alone the meteoric rise of a large language model (LLM) to come.

The Irony in the most popular criticisms of AI

I have to admit that the personal ideas I had at that time about how the general public and academic libraries might adopt and apply AI and ML (and what might show up as the challenges and opportunities for libraries in that process) turned out to be not at all close to what I came to see after the popularity of ChatGPT and the new boom around AI/ML and LLMs.

Probably the most frequently voiced complaint about AI/ML, that LLM outputs are not grounded in facts, has been what baffled me most throughout the recent mainstream adoption of AI/ML/LLM. This so-called “AI hallucination” has irritated people to such an extent that a new term was coined to refer to the phenomenon. That people would perceive this as the greatest critical flaw and an obvious failure of AI/ML to meet their expectations completely surprised me. And hearing the deep concerns raised about AI/ML outputs not being repeatable or reproducible (especially from AI scientists) was another highly perplexing moment for me.

The fact that ML outputs are neither fully grounded in facts nor reproducible is not a bug but a feature. It is the very essence of ML, which is data-and-statistics-based (in contrast to symbolic AI that is strictly logic-based). As a matter of fact, it is exactly what has enabled ML to become the poster child of AI, after the long AI winter that followed the pursuit of the logic- and rule-based symbolic AI approach.

To trace its origin, AI was first conceptualized by Alan Turing in his 1950 paper “Computing Machinery and Intelligence.” Turing’s idea was that a machine can be considered intelligent (i.e. as intelligent as another human) if a human conversing with it cannot tell whether it is a person or a machine. What is fascinating about Turing’s idea is that it did not try to define what intelligence ‘is.’ It instead proposed what ‘can count as’ intelligence. So, from its very beginning, AI was conceptualized and designed to ‘pass as intelligent’ to humans, not to ‘be’ intelligent on its own. It is important to understand that AI isn’t a concept that can be defined or described independently of human intelligence.

And philosophically speaking, the ultimate tell of (human) intelligence is that regardless of how hard we may try, we can never fully read others’ minds. Ultimately, the other person’s mind is opaque to us, never completely understandable, let alone perfectly predictable. If it were, we would immediately perceive that as not entirely human. To illustrate this point, suppose we chat with two beings and guess which one is a person and which one is a machine. The first one gives us a response that we can perfectly predict and that is factually correct every time, while the second one always gives a different, even quirky and puzzling, response, not always fully corresponding to the facts. After enough time engaging in conversations with them, which one would we be more inclined to conclude is another human? It would be the second one. We may comment that the second one is odd, or a bit dumb, or both. But we would not pick the first one as a human, because no human being produces a perfectly predictable response. This, of course, is a highly over-simplified scenario. But it is sufficient to show that although we regard rationality as the distinguishing feature of humans, we also know that perfect rationality (and perfect predictability) isn’t a sign of true humanity either. Being less rational does not disqualify us from being human; being perfectly rational may well.

Seeing how AI has been modeled in light of human intelligence this way, today’s LLMs deliver exactly what Turing was proposing as ‘artificial intelligence.’ They have succeeded so spectacularly at making people believe that they are interacting with something as intelligent as humans that people now complain that it is not as smart as (or smarter than) themselves. To be fair, that is not asking for just ‘intelligence.’ It is a request for something different, i.e. “higher intelligence.” How high? It turned out LLM users weren’t satisfied with college-student-level intelligence, for example. The general expectation we saw was that AI shouldn’t be susceptible to citing inaccurate or non-existent sources, a mistake that many college students can make. Many also lament that today’s LLMs cannot do math and physics well enough. Again, LLMs aren’t designed to be good at math and physics. They are designed to pass as good enough.

Isn’t it ironic to find fault with AI for being bad at something that humans are equally bad at? I am not saying LLMs should not be made more proficient with math and physics. Nor am I saying that being bad at math and physics is a distinguishing characteristic of human intelligence. All I am saying is that today’s AI/ML/LLM tools were built to pass as intelligent enough (to other humans), not to be super-intelligent in the physical properties of the real world. In light of their origin and inner workings, criticizing AI/ML/LLM tools for not being super-intelligent seems quite off the mark.

The role of a community in our ability to assess AI’s performance

What I think about when I hear about AI is a neighbor who has heard a lot of things and can talk convincingly and eloquently for a long time, but who possesses a mediocre degree of intelligence and can lack logic and reasoning at critical moments. (We all know someone like that in real life, don’t we?) Whether I would consider this neighbor brilliant or take their words with a grain of salt would depend entirely on (i) the circumstances of the interaction and (ii) how much I know about what this neighbor talks about. My evaluation in this regard would surely be limited, and likely erroneous, if I knew little about the things this neighbor talks enthusiastically about at great length. In other matters where I have more knowledge and experience, I would probably be able to assess this neighbor more accurately. Equally importantly, in some circumstances it may not matter whether what this neighbor says is true, imaginary, and/or possibly deceptive. Under other circumstances, being able to tell that difference can be absolutely critical.

In my opinion, the problem with AI that we are experiencing today isn’t so much about AI per se, nor purely about AI’s performance. The problem is more about how ill-equipped we are to understand the way AI/ML is designed, to effectively assess its performance, and to discern what matters and what doesn’t in any given case. And the greatest issue lies in the significant mismatch between this ill-equippedness of ours and the very high expectations we hold AI to.

Furthermore, I think it is worth noting that the online environment, where we get to use AI as individual consumers and mostly for productivity, makes this problem even more acute. This time, picture a big circle of villagers sitting around a bonfire and talking with one another. You will soon discover that some of those villagers have heard a lot of stories; some have sharp analytic skills; some have memorized a lot of facts; some have practical skills but are not good at speaking and explaining; and so on. Consider AI the talker among all these characters. While other villagers chat with this great talker, various signs will soon emerge that make it apparent to you that this person is simply good at talking and isn’t actually the smartest or the most knowledgeable. Those signs will in turn help you better assess what this talker says. All those signs, however, are unavailable in the online environment, where it is just you, and you alone, with the AI tool.

If the individualized and isolated online environment, in which each user interacts with AI tools alone, hinders people from appropriately assessing an AI tool’s performance, what can be done about it? Currently, there is no equivalent of getting all AI users to sit around a bonfire and having them talk to and test AI tools together. But if there were such a way, it would very much help people develop their ability to better assess AI tools’ performance. Come to think of it, wouldn’t libraries be able to organize something to that effect? It could be like a collaborative edit-a-thon where many people gather, try, and evaluate AI tools together, sharing what worked well (or not), what mattered (and didn’t), and why.

Two things I most worry about today’s AI use

There are two things that I most worry about in today’s AI use. One is that most AI use is taking place in isolation, lacking a meaningful community discussion. The other is the emerging phenomenon of ‘AI shaming’ and ‘AI stigmatization.’ The use of AI is becoming widespread. The 2024 survey by the Digital Education Council showed that the majority (86%) of college students regularly use AI in their studies, with more than half of them using it daily or at least weekly. The 2025 survey by the Pew Research Center also found that about one in ten workers use AI chatbots at work, ranging from every day to a few times a week. Despite this rapidly increasing use of AI, there is also clear reluctance among AI users to disclose or discuss their AI use with others. The 2024 Work Trend Index Annual Report from Microsoft and LinkedIn found that of the 75% of full-time office workers surveyed who reported using AI at work, over half were reluctant to reveal their AI use because it might make them look replaceable. Students and teachers are also reluctant to disclose their AI use, since they can face backlash and penalization.

While the worry is certainly understandable, the trend of using AI only privately, and neither admitting to its use nor discussing it in public, doesn’t help most of us, who need to become better at assessing AI tools’ performance. With each person exploring and using AI tools by themselves, AI users will only experience more challenges in developing the level of digital skills and literacy necessary to use AI tools appropriately and thoughtfully to their benefit, whether they are students, educators, or workers.

Beyond the potential job loss, other backlash, and possible penalization, the general reluctance to talk about AI use is also connected to the many negative associations attached to AI by widely reported criticisms of AI/ML tools in mass media, ranging from their hallucinations and biases (resulting from the training data), their high consumption of electricity, and their detrimental impact on environmental sustainability, to AI algorithms potentially being used to support or deepen existing inequalities.

All of this, understandably, led to the emergence of what is called ‘AI shaming.’ ‘AI shaming’ refers to the practice of criticizing or demeaning the use of AI, which commonly manifests as stigmatizing any and all AI use. Some in this camp (including information professionals and educators) are quite vocal in their opinions about AI. They actively discourage others from exploring AI tools, equating the use of AI with a sign of cheating, dishonesty, and/or laziness. They view any AI use as an inexcusable act of condoning and aiding the negative impacts of advancing AI technologies. They stigmatize AI users as morally irresponsible and justify AI shaming based upon their belief that AI is inherently unethical and that no use of AI should be permitted.

Everyone is entitled to their beliefs, as long as those beliefs do not harm others. But I think that AI shaming and AI stigmatization are deeply troubling in the educational and library context in particular. Librarianship is, at its core, an endeavor to help people in their pursuit of information- and knowledge-seeking, and the mission of libraries is to serve as a reliable institution providing such help to the public in an unbiased and unprejudiced manner. Libraries’ mission and values are also rooted in respect for everyone’s autonomy and right to pursue knowledge, regardless of where they come from and what beliefs they hold. Everyone comes from different backgrounds, life experiences, and realities, of which others often have little knowledge. It is not a good idea for information professionals to overly prescribe how library patrons should go about looking for information and pursuing knowledge, this way and not that way, based upon their own personal beliefs and values, which are likely to be representative of the socioeconomic group they belong to more than that of their library patrons. Feeling judged and being subjected to shaming or stigmatization would be the last thing that library patrons seeking help would expect from library professionals. Such experiences may well drive library patrons to cope by themselves with the difficulties they run into while using AI tools, rather than seeking help from library professionals.

This isn’t to say that we should turn a blind eye to the many legitimate issues related to AI. They are real and complex problems and should be properly grappled with. But demeaning people for their use of AI and accusing them of being unethical is neither a right nor a productive approach. Furthermore, when library professionals exhibit AI shaming and stigmatization toward library patrons seeking help with AI tools, such acts carry a high risk of doing lasting damage to the trust that library patrons place in library professionals.

The ultimate question

In a recent talk about AI that I attended, one question asked was how our society will preserve its intelligence and critical thinking abilities when they no longer seem necessary with AI. What would be the impact of automation and cognitive offloading enabled by AI on us humans? Will we humans become less intelligent and lose the ability to think critically as we rely on AI more and more?

As in most cases, the answer is neither simple nor straightforward. First of all, the impact of automation and cognitive offloading will differ significantly depending on what is being automated and offloaded. Some varieties of mental (and physical) labor are a slog and a chore. They do not lead to our growth in any meaningful way, and we would be glad to be rid of them. Other types of work we would rather continue doing ourselves, even when they are not fun, because they enable us to expand and fully realize our potential. I think a more challenging and critical question is whether we will be able to discipline ourselves to automate and delegate only the former category of tasks to AI while continuing to engage in the latter, because there will surely be temptations to slack off and delegate away anything unpleasant or challenging if AI seems good enough.

To complicate the matter further, what one person sees as a mechanical chore and a mere slog, someone else may count as a meaningful challenge. Over-generalized prescriptions of what should be automated and delegated to AI and what should be retained as work for humans will not appeal to or make sense to everyone, since we all differ in our abilities, values, strengths, and weaknesses. If AI can help us, it should help us in a way that caters to our individual needs, instead of forcing us all into one mold. AI that does exactly the same thing may have a drastically different meaning and impact for different individuals. We should be open-minded about that possibility and respect each individual’s autonomy, the choices they make for themselves, and the context in which those choices make sense, as long as they are reasonable.

Upon receiving that audience question, the speaker opined that whether we (and our society) will retain and preserve our intelligence and critical thinking abilities depends on whether there is a sufficient incentive to do so. That is an apt answer, given that the majority of humans in this world live in a market-driven economy, where incentives play a prominent role. What would it look like to provide an incentive for preserving human intelligence and critical thinking abilities? I am not sure. But surely, it could be done in various ways: utopian, dystopian, or somewhere in between.

Read this on Substack