Planet Code4Lib

Join us for the Inaugural Webinar of OPEN GOES COP, a movement advocating for openness in the UN Climate Change Conferences / Open Knowledge Foundation

OPEN GOES COP is a coalition of organisations and individuals aiming to advocate for openness in the context of the UN Climate Change Conferences (COP). We aim to overcome the lack of discussion on the role of ‘openness’ as a necessary condition for addressing the climate crisis and to build the capacity of open movement activists and stakeholders from civil society and academia to influence high-level decisions on related issues.

In this Inaugural Webinar, we call on all those interested in joining the coalition to develop common strategies and work together to make information and materials freely available.

There will be an introduction to the COP processes followed by a brief introduction to the aims of the coalition and a short statement from each participating organisation. The meeting will then be open and conversational to decide together on the next steps. 


  • Adam Yakubu – Institute for Energy Security (IES)
  • Sara Petti, Maxwell Beganim and Julieta Milan – Open Knowledge Network
  • Monica Granados – Open Climate Campaign
  • Otuo-Akyampong Boakye – Wiki Green Initiatives

The Open Goes COP Inaugural Webinar will be held in English. Future meetings in other languages are in the planning stage.

If you’re working at the intersection of openness and climate change, please come! It will only work if more people and organisations get involved.

At this early stage, the coalition is convened by Wiki Green Initiatives, the Open Knowledge Network, and the Open Climate Campaign.

Event details:

  • 🗓 5 June 2024 (World Environment Day)
  • 🕒 3 pm UTC/GMT
  • 📍 Online (Zoom)

European Tour / Jonathan Brinley

Several years ago, Modern Tribe invited me on the annual team trip. Instead of the “usual” Central American/Caribbean destination, we were meeting in Tuscany. After the trip, Stephanie would join me in Rome, where we would have a few days to explore before heading home. This adventure was scheduled for May 2020—it didn’t happen.

Jump forward four years. The kids are teenagers, finances are more flexible, the world is open again. We decided it was time to try again, this time as a big adventure for the whole family. In May 2024, we went on the Brinley Family Grand European Tour, a 17-day trip with stays in Rome, London, Paris, and Munich.

I will not herein attempt to capture the entire journey. Rather, I want to highlight a handful of experiences and create a space to share selected photographs. (Stephanie has her own selection of photos over at Laughter & Dance.)

Notable Sights

Great Food

Everything was delicious. Our first evening in Rome included a food tasting tour, and a few friends recommended restaurants. Otherwise we just searched Google Maps for nearby restaurants with 4.5+ ratings.

Magnificent Organs

We encountered a plethora of churches, cathedrals, and chapels filled with beautiful organs, although we only had the opportunity to hear two: one at St. Paul’s Cathedral during the Sunday Eucharist, and another at a short concert in the Salzburg Cathedral that featured three of its seven(!) organs.

Odds & Ends

"Sufficiently Decentralized" / David Rosenthal

Mining Pools 5/17/24
In June 2018 William Hinman, the Director of the SEC's Division of Corporate Finance, gave a speech to the Yahoo Finance All Markets Summit: Crypto entitled Digital Asset Transactions: When Howey Met Gary (Plastic) in which he said:
when I look at Bitcoin today, I do not see a central third party whose efforts are a key determining factor in the enterprise. The network on which Bitcoin functions is operational and appears to have been decentralized for some time, perhaps from inception.
Over time, there may be other sufficiently decentralized networks and systems
Below the fold, thanks to a tip from Molly White, I look at recent research suggesting that there is in fact a "central third party" coordinating the enterprise of Bitcoin mining.

I have been pointing out for more than a decade that the crypto-bros' claims of decentralization are false, most recently in Decentralized Systems Aren't. In that talk I quoted Vitalik Buterin from 2017 in The Meaning of Decentralization:
In the case of blockchain protocols, the mathematical and economic reasoning behind the safety of the consensus often relies crucially on the uncoordinated choice model, or the assumption that the game consists of many small actors that make decisions independently. If any one actor gets more than 1/3 of the mining power in a proof of work system, they can gain outsized profits by selfish-mining. However, can we really say that the uncoordinated choice model is realistic when 90% of the Bitcoin network’s mining power is well-coordinated enough to show up together at the same conference?
Blackburn et al Fig. 5c
Coordination among Bitcoin miners has a long history. In 2022, Alyssa Blackburn et al.'s Cooperation among an anonymous group protected Bitcoin during failures of decentralization showed that Bitcoin's centralization problem dated back to its earliest days. They were able to:
estimate the effective population size of the decentralized bitcoin network by counting the frequency of streaks in which all blocks are mined by one agent (bottom-left) or two agents (bottom-right). These are compared to the expected values for idealized networks comprising P agents with identical resources. The comparisons suggest an effective population size of roughly 5, a tiny fraction of the total number of participants.
Bitcoin started in 2009 with one miner (Nakamoto) and two years later it was dominated by five miners. It has been dominated by 5 or fewer mining pools ever since.
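The streak-counting idea is easy to illustrate. Below is a toy sketch, not Blackburn et al.'s actual estimator: in an idealized network of P agents with identical resources, the fraction of two-block windows mined by a single agent is about 1/P, so inverting the observed streak frequency yields an effective population size.

```python
import random

def streak_fraction(miners, k):
    """Fraction of length-k windows in which every block has the same miner."""
    windows = len(miners) - k + 1
    same = sum(len(set(miners[i:i + k])) == 1 for i in range(windows))
    return same / windows

def simulate(P, n_blocks, seed=42):
    """Idealized network: P agents with identical resources, mining independently."""
    rng = random.Random(seed)
    return [rng.randrange(P) for _ in range(n_blocks)]

# With P equal agents, a length-2 one-miner streak occurs with probability 1/P,
# so the observed streak fraction gives an effective population size estimate.
blocks = simulate(P=5, n_blocks=200_000)
P_eff = 1 / streak_fraction(blocks, k=2)
```

Running the same estimator on the real block history, where miners are anything but equal, is what produced their "roughly 5" figure.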

In 2014's Economies of Scale in Peer-to-Peer Networks I wrote:
When new, more efficient technology is introduced, thus reducing the cost per unit contribution to a P2P network, it does not become instantly available to all participants. As manufacturing ramps up, the limited supply preferentially goes to the manufacturer's best customers, who would be the largest contributors to the P2P network. By the time supply has increased so that smaller contributors can enjoy the lower cost per unit contribution, the most valuable part of the technology's useful life is over.
In December 2021 Alex de Vries and Christian Stoll estimated that:
The average time to become unprofitable sums up to less than 1.29 years.
It has been obvious since mining ASICs first hit the market that, apart from access to cheap or free electricity, there were two keys to profitable mining:
  1. Having close enough ties to Bitmain to get the latest chips early in their 18-month economic life.
  2. Having the scale to buy Bitmain chips in the large quantities that get you early access.
And it wasn't just Buterin that noticed that the big mining pools were "well-coordinated". In 2021's Blockchain Analysis of the Bitcoin Market Igor Makarov & Antoinette Schoar wrote:
Six of the largest mining pools are registered in China and have strong ties to Bitmain Technologies, which is the largest producer of Bitcoin mining hardware.
Protos provides much better evidence of just how "well-coordinated" the big pools are in New research suggests Bitcoin mining centralized around Bitmain:
A sleuth found a clue in Antpool’s block template: A manually prioritized transaction immediately after the 6.25 BTC block reward or ‘coinbase’ transaction. This new research by pseudonymous Bitcoin developer 0xb10c seemingly confirms long-rumored practices by Antpool hiding its massive operation under the names of ostensibly independent pools.

In short, it warns that despite tens of thousands of decentralized nodes, Bitcoin might actually be quite centralized from a mining perspective.
There are two sleuths involved, discovering two kinds of evidence. First:
0xb10c detected that Pool, Binance Pool, Poolin, EMCD, and Rawpool show signs of using Antpool’s method for prioritizing the post-coinbase transaction.

Antpool might also use a sixth pool, Braiins, but 0xb10c was still analyzing its merkle branches as of the research publication time. Nearly identical merkle branches might indicate that these five or six pools often use the exact same template as Antpool for selecting transactions to include in a block.

In other words, all of these pools often use Bitmain’s machines, often assemble transactions according to Bitmain’s block template, often prioritize the same manually-configured post-coinbase transaction as Bitmain, and often send coinbase and transaction fees to the same custodian as Bitmain.
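The kind of comparison behind this finding can be sketched as follows. This is an illustrative reconstruction, not 0xb10c's actual tooling: stratum-style mining jobs include the merkle branch linking the pool's coinbase transaction to the block's merkle root, so two pools that hand out identical branches are, with overwhelming probability, working from the same block template.

```python
import hashlib

def dsha(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def coinbase_merkle_branch(txids):
    """Merkle branch for the coinbase (leftmost leaf), as sent in stratum jobs.
    txids: hex txids of the non-coinbase transactions, in block order."""
    # Placeholder leaf for the coinbase; only the sibling hashes matter here.
    level = [b"\x00" * 32] + [bytes.fromhex(t)[::-1] for t in txids]
    branch = []
    while len(level) > 1:
        branch.append(level[1])        # sibling of the leftmost node
        if len(level) % 2:
            level.append(level[-1])    # Bitcoin duplicates an odd last hash
        level = [dsha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return [h.hex() for h in branch]

# Identical branches imply identical transaction selection and ordering.
pool_a = coinbase_merkle_branch(["aa" * 32, "bb" * 32])
pool_b = coinbase_merkle_branch(["aa" * 32, "bb" * 32])
same_template = pool_a == pool_b
```

A pool genuinely building its own templates would select and order transactions from its own mempool, making a byte-for-byte match across "independent" pools vanishingly unlikely.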
Second, mononaut discovered:
A single custodian now controls the coinbase addresses of at least 9 pools, representing 47% of total hashrate.
Mononaut traced coinbase rewards from mining pools AntPool, F2Pool, Binance Pool, Braiins, BTCcom, SECPOOL, Poolin, ULTIMUSPOOL, 1THash, and Luxor. He found suspicious levels of cooperation among these supposedly competitive entities in allocating coinbase rewards to a shared — possibly Antpool-controlled — custodian.

0xb10c couldn’t confirm that SECPOOL and SigmaPool entirely cloned AntPool’s template, although they seemed to share a similar template. In all, it seems unlikely that up to nine major bitcoin mining pools use a shared custodian for coinbase rewards unless a single entity is behind all of their operations.
Thus it appears that, instead of being controlled by 3 large mining pools, Bitcoin's blockchain is actually controlled by a single huge mining pool operating through a set of subsidiaries. And that this pool is controlled by Bitmain.

From Bitmain's point of view, this makes a lot of sense. They have essentially one product, mining rigs. Controlling the mechanism through which the bulk of their customer base is "well-coordinated" would be a big help in generating consistent excess profit.

The image shows a 4-day mining history. Extracting the pools mentioned in the research, we have this table:
Blocks Mined 5/13-17 by suspects:
  • Binance Pool: 3.880% (22 blocks)
  • Braiins Pool: 2.293% (13 blocks)
In 4 days there should be 576 blocks. 40.744% of 576 is 235 blocks, so close enough. There are some pools mentioned that don't appear in the history (SECPOOL, ULTIMUSPOOL, 1THash, EMCD, Luxor). Equally, there may be "well-coordinated" pools missing from the research. So Bitmain does appear to control significantly more power than the biggest single pool. Foundry USA controls 31.746%, and together with Bitmain's collaborators controls 72.49% of the hashing power. The Bitmain pools are mining almost $4M/day at today's "price".
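The arithmetic behind those figures is a one-block-every-ten-minutes back-of-the-envelope check, using the percentages quoted above:

```python
# One block every ~10 minutes means 6 blocks/hour.
blocks_per_day = 24 * 6                                  # 144
expected_blocks = 4 * blocks_per_day                     # 576 over the 4-day window

suspect_share = 40.744 / 100                             # combined share of the suspect pools
suspect_blocks = round(suspect_share * expected_blocks)  # ~235 blocks

foundry_share = 31.746 / 100                             # Foundry USA
combined_share = suspect_share + foundry_share           # ~72.49% of hashing power
```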

But we should not worry that the Bitcoin blockchain is even less decentralized than it has been all along. It is in safe hands. Bitmain isn't going to kill the goose that lays the golden eggs.

It is only fair to point out that the Ethereum community has actually improved decentralization slightly. A year ago the top 5 staking pools controlled 58.4% of the stake; now they control 44.7%. But it is still true that block production is heavily centralized, with one producer claiming 57.9% of the rewards.

No-one really cares that cryptocurrencies are actually centralized; they care that they are seen as decentralized. In Deconstructing ‘Decentralization’: Exploring the Core Claim of Crypto Systems Prof. Angela Walch explains why this appearance is important:
the common meaning of ‘decentralized’ as applied to blockchain systems functions as a veil that covers over and prevents many from seeing the actions of key actors within the system. Hence, Hinman’s (and others’) inability to see the small groups of people who wield concentrated power in operating the blockchain protocol. In essence, if it’s decentralized, well, no particular people are doing things of consequence.

Going further, if one believes that no particular people are doing things of consequence, and power is diffuse, then there is effectively no human agency within the system to hold accountable for anything.
In other words, it is a means for the system's insiders to evade responsibility for their actions.

In Decentralized Systems Aren't I pointed out that:
The fact that the coins ranked 3, 6 and 7 by "market cap" don't even claim to be decentralized shows that decentralization is irrelevant to cryptocurrency users. Numbers 3 and 7 are stablecoins with a combined "market cap" of $134B. The largest stablecoin that claims to be decentralized is DAI, ranked at 24 with a "market cap" of $5B.
I rest my case.

Imagining library futures using AI and machine learning / HangingTogether

The following post is part of an ongoing series about the OCLC-LIBER “Building for the future” program. 

User walking through library stacks, with transparent imagery of thoughts, data, and more arching across shelves. Image generated using Adobe Firefly AI

The OCLC Research Library Partnership (RLP) and LIBER (Association of European Research Libraries) hosted a facilitated discussion on the topic of AI and machine learning on 17 April 2024. This event was a component of the ongoing Building for the future series exploring how libraries are working to provide state-of-the-art services, as described in LIBER’s 2023-2027 strategy.

As with the previous sessions in the series, on the topics of research data management and data-driven decision making, members of the OCLC RLP team collaborated with LIBER working group members to develop the discussion questions and support small group discussion facilitation.

The virtual event was attended by participants from 31 institutions across twelve countries in Europe and North America, and this post synthesizes key points from the small group discussions.

Curiosity, confusion, and uncertainty

We kicked off the event by asking participants how they feel about the use of AI and machine learning in libraries, and they responded with a range of complex emotions. While curious about and interested in the uses and future of AI, librarians are also skeptical and apprehensive.

Word cloud reporting librarians’ feelings about AI, with interest, curiosity, uncertainty, and skepticism dominating

In the small group discussions, participants expressed significant concerns about:

  • Environmental impacts due to significant energy usage
  • Privacy of user data
  • Use of copyrighted materials in LLMs and uncertainty about intellectual property ownership
  • Misinformation created by the inaccuracies and hallucinations delivered by generative AI
  • Risks of nefarious manipulations, particularly of voice recordings
  • English language dominance in LLM models
  • The ability to acquire relevant and usable information amidst intense information overload

Upskilling is lonely work. Most people are acting independently to develop their own AI knowledge through experimentation with an array of tools, and virtually everyone participating in these discussions reported being in the experimentation and learning phase. More structure and support are sorely needed; a few participants described how they had benefited from a team approach, such as the establishment of an AI interest group in their library or participation in facilitated discussions like this one.

What’s in their AI tool kit? We asked participants about the tools they are using, and ChatGPT unsurprisingly dominated the list, followed by Microsoft Copilot. Mention of tools like Transkribus, eScriptorium, and DeepL reflect library interests in text and image transcription, analysis, and translation, while a long tail of products like Elicit, Gemini, ResearchRabbit, Perplexity, and Dimensions AI reflect an interest in research discovery and analysis.

Discussions about AI in libraries are strongly influenced by their institutional contexts. Many participants described a pervasive institutional focus on concerns about academic integrity. Policies and guidelines are emerging at local, consortial, and association levels, such as the principles on the use of generative AI tools in education from the Russell Group of research universities in the United Kingdom, which emphasize not only academic integrity but also the role of universities in supporting AI literacy and equitable access for their affiliates.

Research universities are beginning to provide enterprise services. A few US institutions are launching local chatbots for use by faculty, staff, and students. Participants from the University of California-Irvine shared about the institutionally-supported ZotGPT, built upon the Microsoft Azure platform, which is provided to campus users at no cost. By providing a local tool, the institution can equalize access to experimentation while also overcoming privacy concerns, as the data inputs remain local. This is almost certainly an area we will see more growth in.

AI use cases in libraries

We asked participants to consider the ways libraries can leverage AI, resulting in a rich mine of potentialities, which I have organized into six high-level use case categories:

  • Metadata management
  • Reference support
  • Discovery and content evaluation
  • Transcription and translation
  • Data analytics and user intelligence
  • Communications and outreach

Metadata management topped the list. We heard several participants mention an interest in using machine learning models to create MARC records, and indeed, we heard numerous examples of exploration in this area. For example, the National Library of Finland has experimented with automated subject indexing, resulting in the Annif microservice. In the United States, LC Labs at the Library of Congress has undertaken a project called Exploring Computational Description (ECD) to test the effectiveness of machine learning models in creating MARC record fields from ebooks. You can learn more via this recorded OCLC RLP webinar. Other participants described local efforts to use textual information to generate subject headings, as well as experiments with tools like Gemini. Participants found their early results disappointing, as these mostly produced “fictional data,” but they remained optimistic about the potential.

In addition to metadata creation, participants are interested in how AI and machine learning technologies may be used to improve metadata quality. This could include anomaly and duplicate record detection or perhaps detection of incorrect coding of languages in records. OCLC has shared about its use of machine learning to identify duplicate records in WorldCat, with input from the cataloging community.
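As a toy illustration of the match-key idea behind duplicate detection (OCLC's actual approach uses machine learning at far larger scale, and the records below are invented examples), a normalized fingerprint of title and author makes many near-duplicate records collide:

```python
import re
import unicodedata
from collections import defaultdict

def fingerprint(title, author):
    """Normalized match key: strip accents, case, punctuation, and word order."""
    text = unicodedata.normalize("NFKD", f"{title} {author}")
    text = text.encode("ascii", "ignore").decode()
    words = sorted(set(re.findall(r"[a-z0-9]+", text.lower())))
    return " ".join(words)

def find_duplicates(records):
    """Group records whose fingerprints collide: candidate duplicates for review."""
    groups = defaultdict(list)
    for rec in records:
        groups[fingerprint(rec["title"], rec["author"])].append(rec["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

records = [
    {"id": 1, "title": "Les Misérables", "author": "Hugo, Victor"},
    {"id": 2, "title": "Les Miserables.", "author": "Victor Hugo"},
    {"id": 3, "title": "Notre-Dame de Paris", "author": "Hugo, Victor"},
]
```

Here records 1 and 2 collide despite differing in accents, punctuation, and name order; a production system would treat such collisions as candidates for further scoring, not automatic merges.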

Reference support. Several participants expressed an interest in leveraging AI to create a library reference chatbot that can instantly answer questions covered by information on local web pages. A participant from the University of Calgary briefly shared how their library has implemented a multilingual reference chatbot called T-Rex, which combines an LLM with retrieval-augmented generation (RAG) and is trained on the library’s own web content, including LibGuides, operating hours, and much more. In operation for over a year, the effort has been successful and appreciated by librarians, as it has reduced the amount of human support required for simple questions.[1]

Discovery and content evaluation. Participants are also interested in how AI technologies can enhance discovery, for example, by enabling searching with natural language phrases in addition to keywords. We heard about some innovative projects at national libraries to support discovery use cases, such as a chatbot answering questions from the digitized newspaper collection at the National Library of Luxembourg.

Researchers are using a number of freestanding tools like scite, Consensus, ResearchRabbit, Perplexity, and Semantic Scholar in order to summarize relevant findings from aggregated content, receive citation recommendations, and visualize research landscapes. The Generative AI Product Tracker compiled by Ithaka S+R offers a useful guide to this ecosystem. In addition, participants described researcher uptake of new AI functionality being built into existing research indexes like Scopus and Dimensions. Like the reference chatbot example above, these tools appear to combine retrieval-augmented generation (RAG), which queries only the local index, with generative AI, which processes the returned information into an answer to the original question while minimizing hallucinations.

Transcription and translation. Librarians are keenly interested in transcribing tools, which can increase the accessibility and use of cultural heritage collections. In the discussions, we heard about speech-to-text experimentation (using automatic speech recognition (ASR) technology) taking place at the National Library of Norway and the Royal Danish Library. Several participants mentioned using the Transkribus and eScriptorium platforms to support text recognition and image analysis of digitized historical documents. There’s also interest in how these tools can support researchers working in languages where they have poor proficiency.

Data analytics and user intelligence. While not at the top of the list, more than one participant expressed an interest in using data science and AI tools to learn more about patron behaviors in order to support improved library management.

Communications and outreach. One participant described how their library is using ChatGPT to generate content for library social media feeds, with human review. This seems like a general purpose use case that I expect to hear more about.

Supporting responsible operations

Participants discussed the need for responsible AI practices, particularly the need for AI to be transparent, accountable, and inclusive. There was considerable focus on the need for transparency of LLM data sources, including an examination of the legality of data scraped for use in training sets. In addition to previous reports like OCLC Research’s Responsible Operations: Data Science, Machine Learning, and AI in Libraries, many other research projects, statements, workshops, and events are emerging to guide libraries in ethical decision making about AI. Just a few of these include:

Participants shared their thoughts on how libraries can lead, which included chairing campus discussions about AI literacy, uses, and good academic practices. The LIBER Data Science in Libraries Working Group (DSLib) has been discussing how libraries can interact with AI-generated misinformation and fake news.

Leadership roles for libraries in AI literacy

A principal way that libraries can and are leading is in supporting AI literacy education and training, which many participants described as the newest component of information literacy training. To guide students and researchers, librarians must quickly upskill in order to teach others.

What do libraries require for success?

Through these conversations, I heard participants describe many things that libraries need to successfully move forward. At the most basic level, librarians need access to tools and the time to practice and experiment. Only through these preconditions will librarians gain the content mastery necessary to both serve as campus experts to users and to lead library-based efforts. For example, one participant described how librarians must be familiar with LLM hallucinations, including the creation of fake citations, in order to have the knowledge and confidence to work with patrons using chatbots. Another local need is for more professionals with data analytics skills to be situated in the library, working as part of a cross-functional team, consistent with comments we heard in a previous session about data-driven decision making.

Skills development is independent and ad hoc at this point. Participants want more training guides, external support and sample use cases, and they also want to engage meaningfully with others in communities of practice.

Looking ahead

Word cloud about “what excites you about the future of AI and libraries?” with answers like efficiency, accessibility, and collaboration. Librarians see a hopeful future for libraries and AI

These small group discussions are valuable for connecting library professionals across many time zones. Some participants reported feeling reassured that others were grappling with the same uncertainties at early stages of discovery and experimentation. Overall, participants reported feeling excited and hopeful about the opportunities for AI to support greater efficiency and time-savings in libraries.

Join us on Thursday, 6 June for the closing plenary event of the OCLC-LIBER Building for the Future series. This session will synthesize the high level takeaways from the previous small group discussions, followed by a panel discussion by library thought leaders, who will respond with their perspectives on how research libraries can collaboratively plan in these challenging times. Registration is free and open to all. I’ll see you then.

[1] Julia Guy et al., “Reference Chatbots in Canadian Academic Libraries,” Information Technology and Libraries 42, no. 4 (December 18, 2023).

The post Imagining library futures using AI and machine learning appeared first on Hanging Together.

AI Sauna quick reflection / Open Knowledge Foundation

During the early days of May, AvoinGLAM hosted AI Sauna, a co-creation event that brought practitioners of Open Culture from Wikimedia, Creative Commons, Open Future, the Flickr Foundation, Meemoo, and others together with representatives from Finnish memory institutions and research projects.

National Archives of Finland, AI Sauna event. By Fuzheado – Own work, CC0.

The event kicked off on Monday morning, the 6th of May, with inspiring talks that led the participants from listening to idea creation in the magnificent old reading room of the National Archives of Finland. The speakers brought a range of perspectives to the discussion around the impact of AI on our shared online culture.

→ If you missed the event, you can still watch the inspire talks and the following panel discussion on the event playlist on AvoinGLAM YouTube channel or read the recap in This Month in GLAM.

Before the hacking/co-creation/brainstorming could begin, we invited all guests to enjoy a sauna and a swim in the 9°C seawater at Allas Sea Pool at the Helsinki harbor.

On the following Tuesday morning, the work started at URBAN3 at Maria01, which is the home base for Open Knowledge Finland, AvoinGLAM and Wikimedia Finland. The roughly 4 hours of work was enough to create a plethora of outstanding projects. 

The stream containing these presentations will be available on the AvoinGLAM YouTube channel at a later point.

Further ideas from the Ideas page:

  • Authorship of political artists and Embodied creative process in the context of glassblowing by Liisi Soroush
  • Hot topics in the Finnish local letters of the 1860s by TuulaP
  • GenAI for Moroccan Arabic by Ideophagous
  • History of the Basque Country in 100 objects
  • Summary of all knowledge by Susanna

The documentation is forever on AI Sauna pages on Wikimedia Meta, so you can ping the creators and continue work on interesting topics. The project ideas can be found on the Project ideas page, and contacts to most of the participants on the People page.

Check out the slides for Monday and Tuesday that are also available, or the image category on Wikimedia Commons.

Let’s bathe on!

Announcing the COLD French Law Dataset / Harvard Library Innovation Lab

COLD French Law Banner

There is a new addition to the Collaborative Open Legal Data collection: a set of over 800,000 articles extracted from the LEGI dataset, one of France’s official open law repositories, that were programmatically identified as “currently applicable French law” by our pipeline.

This dataset—formatted as a single CSV file and openly available on Hugging Face—contains the original texts from the LEGI dataset as well as machine-generated French-to-English translations, produced with the participation of the CoCounsel team at Casetext, part of Thomson Reuters.

COLD French Law was initially compiled to be used in a forthcoming experiment at the Lab. We are releasing it broadly today as part of our commitment to open knowledge. We see this dataset as a contribution to the quickly expanding field of legal AI, and hope it will help researchers, builders, and tinkerers of all kinds in their endeavors.

The Process

As part of these release notes, we would like to share details about the process used to translate the articles contained in the dataset.

In a field where the volume of data matters so much, it’s useful to understand the feasibility of working with a dataset in one language using an LLM trained in another. This process revealed techniques for not only reliably translating a large set of documents, but also doing so efficiently. We do not plan to maintain this dataset beyond the needs of our experiments, and are therefore sharing the details of the pipeline so that others may update the data in the future if needed.

Over the course of two months, the CoCounsel team ran all ~800,000 articles through a translation pipeline that took each individual entry and translated it from its original French into English using OpenAI’s GPT-4 large language model. One hurdle was that much of the important metadata for each entry was also in French, combined with a desire to retain each article in its fullest form.

Via GPT-4’s function-calling feature, the pipeline was able to translate the full entries, allowing every column of an entry to be translated in a single call (or a couple of calls in the limited cases where entries were longer than 2,500 tokens). This saved weeks of processing. Additionally, this technique outputs an individual JSON file for each law article.
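The exact prompt and schema used by the CoCounsel team are not public, but the function-calling setup they describe can be sketched as follows; the tool name, field names, and sample entry here are illustrative. Defining one string property per CSV column lets a single call return every translated field as structured JSON:

```python
import json

def translation_tool(columns):
    """Function-calling schema with one string property per CSV column, so a
    single GPT-4 call returns every translated field as structured JSON."""
    return {
        "type": "function",
        "function": {
            "name": "record_translation",
            "description": "Record the English translation of each field.",
            "parameters": {
                "type": "object",
                "properties": {c: {"type": "string"} for c in columns},
                "required": list(columns),
            },
        },
    }

def build_request(entry, model="gpt-4"):
    """Request body for one article; a nightly runner would POST this to the
    chat completions endpoint and write the returned arguments to a JSON file."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Translate each field of this record from French to English."},
            {"role": "user", "content": json.dumps(entry, ensure_ascii=False)},
        ],
        "tools": [translation_tool(entry.keys())],
        "tool_choice": {"type": "function",
                        "function": {"name": "record_translation"}},
    }

# Hypothetical entry; real LEGI rows carry many more metadata columns.
entry = {"titre": "Code civil", "texte": "Les lois sont exécutoires..."}
request_body = build_request(entry)
```

Forcing the tool call via `tool_choice` means the model must respond with the structured arguments rather than free text, which is what keeps the dataset's column structure intact across ~800,000 calls.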

With this approach, we were able to run the pipeline for just a few hours each night, and the structure of the dataset remained intact.

Over the course of this process adjustments were made to the prompt based on the expertise of the CoCounsel team and feedback provided by Timothée Charmeil, an LL.M. candidate at HLS, who quality tested samples of the initial outputs.

The final prompt that was engineered by our colleagues is shared below.

The Prompt

COLD French Law dataset on Hugging Face

COLD French Law CLI pipeline on Github

See also: COLD Cases Dataset

Empowering Digital Citizenship: Unlocking the Power of Open Knowledge with Participants of the LIFE Legacy / Open Knowledge Foundation

In today’s digital landscape, understanding open knowledge and digital citizenship is crucial for navigating the online world effectively and responsibly. A recent session delved into these vital topics, equipping participants with the knowledge and tools necessary to thrive in the digital age.

The session commenced with an introduction to open knowledge, highlighting its significance in the digital space. Open knowledge refers to the free and unrestricted access to information, ideas, and resources. This concept is essential in promoting collaboration, innovation, and progress.

Maxwell Beganim, lead of Open Knowledge Ghana and coordinator of the Open Knowledge Network Anglophone Africa Hub, facilitated an interactive discussion on digital citizenship, exploring its various elements and internet knowledge. Digital citizenship encompasses the rights, responsibilities, and skills required to navigate the digital world safely and ethically. The discussion covered critical aspects such as online privacy, security, and etiquette, empowering participants to become responsible digital citizens. This work is part of Open Knowledge Ghana’s mandate to help build a world open by design, where all knowledge is accessible to everyone, in line with the Open Knowledge Foundation’s vision.

Kiwix Tool: A Game-Changer for Accessing Knowledge

Ruby D. Brown, Project Coordinator at Open Knowledge Ghana, took participants on a journey through the Kiwix tool, demonstrating its usage and importance. Kiwix is an offline Wikipedia reader, providing access to a vast repository of knowledge even without internet connectivity. This tool is particularly valuable for individuals with limited or no internet access, bridging the knowledge gap and promoting digital inclusivity.

Ruby also gave participants an overview of Wikipedia, addressing critical information literacy needs.

The session culminated with participants installing the Kiwix tool on their laptops, ensuring they have a valuable resource at their fingertips. With Kiwix, users can access a vast library of knowledge, including Wikipedia articles, books, and educational resources, even without internet connectivity.

Read more about Kiwix implementation and environmental sustainability by Maxwell Beganim and Otuo Boakye Akyampong:

Ghanaian Wikimedian empowers students with offline educational app

Beginning in February 2020, Ghanaian Wikimedian Maxwell Beganim and a community volunteer Boakye Otuo Acheampong started using Kiwix and offline Wikipedia

The session successfully empowered participants of the LIFE legacy Project with a deeper understanding of open knowledge and digital citizenship. By embracing these concepts and leveraging tools like Kiwix, individuals can navigate the digital landscape with confidence, responsibility, and a commitment to lifelong learning. As we continue to evolve in the digital age, we must prioritize digital literacy, inclusivity, and access to knowledge, ensuring that everyone can thrive in the online world.

The LIFE Legacy project in Ghana is run by Paradigm Initiative, with the Internet Society Ghana Chapter as the country implementation partner. LIFE is an acronym for Life Skills, ICTs, Financial Readiness, and Entrepreneurship. The project aims to build the capacity of underserved youth in communities. Paradigm Initiative implements this program through its partners in countries across Africa.

Pew Research On Link Rot / David Rosenthal

When Online Content Disappears by Athena Chapekis, Samuel Bestvater, Emma Remy and Gonzalo Rivero reports results from this research:
we collected a random sample of just under 1 million webpages from the archives of Common Crawl, an internet archive service that periodically collects snapshots of the internet as it exists at different points in time. We sampled pages collected by Common Crawl each year from 2013 through 2023 (approximately 90,000 pages per year) and checked to see if those pages still exist today.

We found that 25% of all the pages we collected from 2013 through 2023 were no longer accessible as of October 2023. This figure is the sum of two different types of broken pages: 16% of pages are individually inaccessible but come from an otherwise functional root-level domain; the other 9% are inaccessible because their entire root domain is no longer functional.
Their results are not surprising, but there are a number of surprising things about their report. Below the fold, I explain.

The Web is an evanescent medium. URLs are subject to two kinds of change:
  • Content drift, when a URL resolves to different content than it did previously.
  • Link rot, when a URL no longer resolves.
The Pew team found link rot in Common Crawl's collections:
  • A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.
  • For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.
And in news sites, government sites and Wikipedia:
  • 23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. News sites with a high level of site traffic and those with less are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.
  • 54% of Wikipedia pages contain at least one link in their “References” section that points to a page that no longer exists.
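The headline figures partition cleanly: of the 25% of sampled pages that were inaccessible, 16% failed at the page level on a live domain and 9% failed because the whole root domain was gone. A toy tally of that breakdown (my own sketch, not the Pew team's code; the function and data shape are assumptions) might look like:

```python
def rot_breakdown(results: list[tuple[bool, bool]]) -> tuple[float, float]:
    """Given (page_accessible, domain_accessible) pairs, return the fraction
    of pages that are individually dead on a live domain, and the fraction
    that are dead because the whole root domain is gone."""
    n = len(results)
    page_rot = sum(1 for page_ok, dom_ok in results if not page_ok and dom_ok) / n
    domain_rot = sum(1 for _page_ok, dom_ok in results if not dom_ok) / n
    return page_rot, domain_rot

# A sample with the Pew proportions: 75 live, 16 page-dead, 9 domain-dead
sample = [(True, True)] * 75 + [(False, True)] * 16 + [(False, False)] * 9
```

Here rot_breakdown(sample) yields (0.16, 0.09), the two components summing to the overall 25% inaccessibility rate.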
There is a long history of research into both phenomena. Content drift is important to Web search engines. To keep their indexes up-to-date, they need to re-visit URLs frequently enough to capture changes. Thus studies of content drift started early in the history of the Web. Here are some examples from more than two decades ago:
Link rot is a slower process than content drift, so research into it started a bit later. Here are some examples from more than two decades ago:
I have written about this topic many times, first in 2008's Persistence of Poor Peer Reviewing:
I like to cite an example of really bad reviewing that appeared in AAAS Science in 2003. It was Dellavalle RP, Hester EJ, Heilig LF, Drake AL, Kuntzman JW, Schilling LM: Going, Going, Gone: Lost Internet References. Science 2003, 302:787, a paper about the decay of Internet links. The authors failed to acknowledge that the paper repeated, with smaller samples and somewhat worse techniques, two earlier studies that had been published in Communications of the ACM 9 months before, and in IEEE Computer 32 months before. Neither of these are obscure journals. It is particularly striking that neither the reviewers nor the editors bothered to feed the keywords from the article abstract into Google; had they done so they would have found both of these earlier papers at the top of the search results.
The first surprise is that the Pew report lacks any acknowledgement that the transience of Web content is a long-established problem; like Dellavalle et al., it treats the phenomenon as if it were a new revelation.

Even before published research had quantified it, link rot and content drift were well understood and efforts were underway to mitigate them. In 1996 Brewster Kahle founded the Internet Archive, the first of several archives of the general Web. Two years later, the LOCKSS Program was the first effort to establish a specialized archive for the academic literature. Both were intended to deliver individual pages to users. A decade later, Common Crawl was set up to deliver Web content in bulk to researchers such as the Pew team; it is not intended as a mitigation for link rot or content drift.

Although Common Crawl was a suitable resource for their research, the second surprise is that the Pew report describes and quantifies the problem of link rot, but acknowledges none of the multiple, decades-long efforts to mitigate it by archiving the Web and providing users with preserved copies of individual pages.

#ODDStories 2024 @ Goma, DRC Congo 🇨🇩 / Open Knowledge Foundation

The ongoing war between the M23 rebel group and the regular army in the war-torn regions of eastern Congo is causing massive population displacement, leaving displaced people without essential aid and basic needs.

On March 7-9, 2024 in the city of Goma (North Kivu province) in the eastern Democratic Republic of the Congo, Media Sensitive to Disasters – MSD Network held an Open Data Day event. MSD is a media network aiming to increase the media’s coverage of risk and disasters.

Entitled “Forced Displacement Open Mapping”, the event was organized amid ongoing tensions: on March 7, 2024, M23 launched two bombs at Mugunga (on the western side of the city of Goma, up to 5 km from the battlefields).

The overall goal of the event was to identify newly established displaced camps in eastern Congo war-torn regions for humanitarian assistance.

By developing a timeline feature to illustrate the evolution of forced displacements over the course of the ongoing war, and how displacement patterns change in response to recent developments in the fighting, the proposed event aimed to produce spatial data that help humanitarian actors, media, local organizations and Congolese officials design and implement evidence-based interventions for displaced people deprived of basic needs.

Up to 15 participants were involved in the displaced-camp identification and mapping activities across North Kivu province, coordinated by Rachel KIYUNGI and facilitated by Cleophas Byumba, an expert in mapping and fact-checking.

Project’s activities

Displaced camp in the battlefields (March 07-08, 2024)

The activity aimed at identifying displaced camps in the FARDC-M23 battlefields, including the surroundings of the capital city of Goma and throughout North Kivu province.

Surveyors drawn from MSD Network members carried out the process. After the activity’s completion, up to 10 cities with concentrations of displaced people had been identified, including Mweso, Kitchanga, Nyanzale, Rugari, Kanyarutshinya, Rusayo, Kanyabayonga, Sake, Mugunga and Bulengo, as well as the provincial capital Goma (hosting up to 6 displaced camps).

Displaced Camps Open Mapping (March 09, 2024)

The activity aimed at mapping forced displacements in the FARDC-M23 battlefield while strengthening the capacity of participants in open mapping with UMAP (an OpenStreetMap tool). Up to 10 participants took part in the mapping activity, with the following links:


  • Up to 30 participants were directly involved in the event activities,
  • Up to 10,000 km were covered during the event,
  • Up to 20 participants increased their knowledge of open mapping,
  • Up to 20 cities and villages were added to the map.


The event’s activities were threatened by ongoing hostilities between the Congolese army and the M23 rebels, and by numerous restrictions on the freedom to organize events near the battlefields.

About Open Data Day

Open Data Day (ODD) is an annual celebration of open data all over the world. Groups from many countries create local events on the day where they will use open data in their communities.

As a way to increase the representation of different cultures, since 2023 we offer the opportunity for organisations to host an Open Data Day event on the best date within a one-week period. In 2024, a total of 287 events happened all over the world between March 2nd-8th, in 60+ countries using 15 different languages.

All outputs are open for everyone to use and re-use.

In 2024, Open Data Day was also a part of the HOT OpenSummit ’23-24 initiative, a creative programme of global event collaborations that leverages experience, passion and connection to drive strong networks and collective action across the humanitarian open mapping movement.

For more information, you can reach out to the Open Knowledge Foundation team by email. You can also join the Open Data Day Google Group to ask for advice, share tips and get connected with others.

Code4libBC Day 2: Talk notes / Cynthia Ng

Code4libBC Day 2 talk notes. CatalogerGPT: an AI powered cataloguing assistant as agentic collaboration Glen Greenly, Capilano University Library version 1: create a usable catalogue record online version: version 2 in development design goals: later want to be able to navigate RDA toolkit rapid advancement, so if there’s a limitation, wait 6-12 months GPT-4o … Continue reading "Code4libBC Day 2: Talk notes"

The Productivity Trap / Meredith Farkas

Two men in suits record the actions of women doing sewing work in a factory

Photo source

This is the third in a series of essays I’ve written on time. You can view a list of all of them on the first essay.

In my last post, I wrote about the dominance of work temporality over every aspect of our lives and the push toward acceleration and overwork. What I find funny/sad is that in the face of untenable to-do lists, we often engage in self-blame and look for ways that we can become more efficient in getting our work done. Of course, the problem must be us. We’re simply too distracted, too disorganized, and not good at prioritizing (that’s not to say that some people don’t legitimately have challenges with these things, but we as individuals are not responsible for the systemic problems in our libraries re: overwork). Whether we’ve adopted time-blocking, Kanban boards, Pomodoro timers, dot journals, David Allen’s 43 folders, or even “eat the frog” (yes, really), the problem isn’t so much with how we work as the ever-increasing pace of our organizations and their never-ending push for more. But there are so many productivity pundits who will sell you a can’t miss method for taming your physical and mental clutter and bulldozing your to-do list. Is the best solution to a systemic problem ever really an individual one? 

Modern productivity literature was born out of the field of scientific management from the turn of the 20th century (in her book Saving Time, Jenny Odell actually ties it further back to slavery, quoting a letter from George Washington who wrote about how wasteful it would be to not work slaves enough, but how it’s also wasteful to work them so much so that they become hurt — nope!). Scientific management theorists like Frederick Taylor and Frank and Lillian Gilbreth did time and motion studies and other experiments to find the most efficient and safe ways to do a great variety of manual labor jobs, including housework. Their prescriptions included elements such as the design of the workplace, the equipment used, the movements of the individual, and how long each task should take down to a fraction of a second. Scientific management was premised upon the notion that all work could be perfected and that there truly were methods that would allow each employee to make the very most of their time. It was also premised on the idea that a single method would work for every worker doing the same job, as if we were all just interchangeable widgets.

Scientific management was designed for blue-collar routinized jobs. The assumption of white-collar jobs, as Peter Drucker often wrote, was that for knowledge workers to be effective, they would have to be able to exercise freedom around how they prioritized and got their work done. That freedom and autonomy turned scientific management into “productivity,” with methods (often variations on common themes) that promised, when properly applied, to allow workers to get more done in the same amount of time. Productivity experts like Drucker, Brian Tracy, David Allen, and Cal Newport each offered their own slightly unique systems for managing tasks, incoming information, and requests. Productivity has become its own self-help genre and books, systems, and apps abound that promise to help workers to use every single moment of their time to its greatest purpose. In her book Counterproductive, Melissa Gregg writes about how the rhetoric of productivity mirrors the rhetoric of athleticism. Productivity becomes about training ourselves, becoming more disciplined, and finding ways to do things faster and better than others; essentially “render[ing] all colleagues competitors” (15). I love what L. M. Sacasas wrote in his newsletter, The Convivial Society (highly recommended!), about how our society–

privileges efficiency and tempts us with the promise of time-saved for the sake of some nebulous higher purpose, a human being is valuable only to the degree that they become sites of automated consumption and on-demand productivity. And the order that demands this from us is never satiated. For its purposes, you will never purchase enough or produce enough. It’s a self-perpetuating engine of desire for what it alone can offer.

Productivity literature is focused on the perfection of the self and plays into our culture’s focus on optimization (to what end?). If we are determined and strong enough, we can conquer our workloads. And if you can’t keep up with your work – if you can’t prioritize and get things done – the responsibility is your own. Maybe you need a better system. Maybe you need to work harder. Peter Drucker’s vision was for all workers to manage their own productivity, and we’ve seen that the result is that each individual is expected to adopt or develop our own systems for keeping up and getting things done and we are judged as individuals for our productivity. Yet in most workplaces, we are far from isolated actors and our productivity is often bound up in the productivity of others with whom we work. Most knowledge work is, to some extent, collaborative. Gregg argues that this erasure of our collectivity is intentional:

Time-management training has the effect, if not the function, of obliterating recognition of collegial interdependence in contemporary workplaces. Fitting its neoliberal moment, productivity’s inward focus further entrenches the erasure of collective thinking that the first efficiency engineers sought to accomplish. (54)

Steel Workers, Pennsylvania by Arthur Rothstein

In 2020, computer scientist and productivity pundit Cal Newport wrote in the New Yorker about how personal productivity systems like David Allen’s Getting Things Done have failed the modern worker because they are enacted at the individual rather than the organizational-level. As someone who hates blaming individuals for systemic problems, this caught my attention. Newport suggests that we need to work to find solutions to problems of productivity at the organizational level (“The Rise and Fall of Getting Things Done”). He thinks the answer is finding ways to routinize certain knowledge worker tasks. While I agree with him that individual solutions are bound to fail in the face of a collective/systemic problem, I’m even more concerned with the idea of organizations imposing productivity systems on workers. I can imagine organizations adopting one-size-fits-all solutions that harken back to the scientific management movement (Newport himself writes about “an optimal sequence of actions”) and ignore the fact that we are not all widgets. A “sequence of actions” that works for one worker may not work for another or may not work in a particular context. Any such system will also likely increase management surveillance of workers. I’d like to believe there are options beyond the individualism of most current productivity systems and the “top-down interventions” Newport suggests. If the problem is collective, perhaps the solution also should be determined collectively rather than paternalistically imposed from above. 

I think our society’s current obsession with productivity is a symptom of the anxiety and precarity of our current moment. When we feel out of control in so many aspects of our lives, it can be a balm to be able to exert control over something, but productivity is a false idol. While productivity is ostensibly about prioritizing our work, completing tasks regardless of import can become an end in itself. It feels good to check tasks off our list or to move emails into different folders so we can achieve Inbox Zero. It can give us a sense of purpose, control, and accomplishment even when very little of import is actually being accomplished and when we have very little control over our working conditions. It can distract us from questions of whether we are doing the right things; our most meaningful work. Melissa Gregg puts it well in Counterproductive:

Personal productivity is an epistemology without an ontology, a framework for knowing what to do in the absence of a guiding principle for doing it. The meaning produced by productivity is the aesthetic pleasure of pure order, a method in which to do things so that they appear superficially manageable. Such a practice erases any need to question the overall structure determining which things are important, because if we spend enough time choreographing what should get done, there is no need to deal frankly with why. (98)

Productivity is not just about looking good to our bosses; being busy can help us convince ourselves of our worthiness. It can distract us from existential dread over our sense of precarity. I remember being in a previous job where it seemed like there was an informal competition to be the busiest. People would “complain” about how much they had on their plate or how many meetings they had that day or week, like it was a badge of honor. That’s because it is. Bellezza, Paharia, and Keinan’s (2017) research suggests that busyness in the United States is considered a status symbol. They posit that, like luxury goods, people who appear busy seem more important because their time is scarce, and they found in their study that “observers infer that the busy individual possesses desirable human capital characteristics, such as competence and ambition” (121). Comparing the status scores that participants in the United States and Europe gave to busy and non-busy people, they found that Americans considered less busy people to be of lower status, while the inverse was true of the Europeans: Europeans believed that a person with more leisure time likely had higher status, so leisure, rather than busyness, was seen as aspirational. That similar findings have been seen in Britain – a country similarly impacted by neoliberalism over the past 40+ years – suggests how deeply workers in the U.S. and Britain have internalized the values of neoliberalism compared with Western countries more focused on social welfare (Gershuny).

Given that people are at least somewhat interdependent in most organizations, it’s clear to see how this push to perform busyness can create trickle down effects for all workers, regardless of how they personally view busyness. Someone looking to take on lots of projects will likely create work for others in their organization, either in project creation or maintenance. Add in precarity – whether real or perceived – and you have a toxic recipe for overwork. And this normalization of overwork is, in most cases, not likely to be opposed by managers who benefit from the increased output. Anne Helen Petersen writes about how over-emailing became something everyone in the workplace needed to do to demonstrate their commitment to the job:

And even though everyone resents it, once in place, the framework of over-emailing demands participation: over-emailing becomes the way you show you’re a team player and evidence of your commitment. When everyone is spending their nights filing through their inbox to ready themselves for more email-generating content during the day, to opt out of the cycle is to position yourself as lazy or less committed. Objectively speaking, checking your emails — and sorting and responding to them — in every crevice of the day is a compulsive, fucked up behavior. But we’ve normalized it thoroughly. It’s not burnout behavior; it’s not something workaholics do. It’s just working an office job.

We can blame technology for our problems, but the real problem is that we’ve created absolutely ridiculous norms around availability and speed of response, such that we’re constantly interrupted by email, Slack, Teams messages, texts, etc. The good news: those norms can be changed.

In spite of the increase in busyness, the most common reward for high productivity is a higher workload rather than pats on the back and raises. In a study that looked at how work is assigned through six separate research experiments, researchers discovered that those who exhibit strong productivity and self-control tend to be assigned more work because employers and family members assume it is easier for them, though they also found that the work required just as much effort for all research participants (Koval, VanDellen, Fitzsimons & Ranby). In one experiment, “a survey of more than 400 employees, they found that high performers were not only aware that they were giving more at work—they rightly assumed that their managers and co-workers didn’t understand how hard it was for them, and thus felt unhappy about being given more tasks” (Lam). This is a recipe for burnout, yet so many of us believe that our hard work and grinding at our jobs will be rewarded.

Our phony meritocracy has spawned an achievement culture where no amount of work or achievement is ever really enough. We’re pushed to perform busyness to both feel secure in our jobs and to feel like worthwhile human beings. Productivity literature, even that focused on doing less like Cal Newport’s most recent book Slow Productivity (which I will critique in my next essay), are focused on individual solutions when these problems are systemic, and encourage us to ignore collective solutions. An individual solution may help you better get through your to-do list, but it won’t change the underlying pressure to always be doing more. If you have job security, an individual solution may help you to truly lessen your workload and improve your own work-life balance, but it does nothing for those in your organization in more precarious positions who will likely be even more burdened with work as you use your privilege to work less. My next essay will explore the slow movement and constructions of productive/unproductive work, critique Newport’s notion of slow productivity, and discuss the necessity of collective solutions to these thorny problems.

Bellezza, Silvia, Neeru Paharia, and Anat Keinan. “Conspicuous consumption of time: When busyness and lack of leisure time become a status symbol.” Journal of Consumer Research 44.1 (2017): 118-138.

Gershuny, Jonathan. “Busyness as the badge of honor for the new superordinate working class.” Social Research: An International Quarterly 72.2 (2005): 287-314.

Gregg, Melissa. Counterproductive: Time Management in the Knowledge Economy. Durham, NC and London: Duke University Press, 2018.

Koval, C. Z., M. R. VanDellen, G. M. Fitzsimons, and K. W. Ranby. “The Burden of Responsibility: Interpersonal Costs of High Self-Control.” Journal of Personality and Social Psychology 108.5 (2015): 750.

Lam, Bourree. “Being a Go-Getter Is No Fun.” The Atlantic, 22 May 2015,

Newport, Cal. “The Rise and Fall of Getting Things Done.” The New Yorker, 17 Nov. 2020,

Petersen, Anne Helen. “How Email Became Work.” Culture Study, 25 Oct. 2020,

Sacasas, L. M. “Waste Your Time, Your Life May Depend On It.” The Convivial Society 4.8 (2023 May 12). 

Time: It doesn’t have to be this way / Meredith Farkas

Three pocket watches

“What we think time is, how we think it is shaped, affects how we are able to move through it.”

-Jenny Odell Saving Time, p. 270

This is the first of a series of essays I’ve written on time. Here are the others (they will be linked as they become available on Information Wants to be Free):

  • With Work Time at the Center
  • The Productivity Trap
  • Meredith’s Slow Productivity (not to be mistaken for Cal Newport’s Faux Slow Productivity)
  • Queer Time, Crip Time, and Subverting Temporal Norms
  • Community Time and Enoughness: The heart of slow librarianship

What I love about reading Jenny Odell’s work is that I often end up with a list of about a dozen other authors I want to look into after I finish her book. She brings such diverse thinkers beautifully into conversation in her work along with her own keen insights and observations. One mention that particularly interested me in Odell’s book Saving Time (2023) was What Can a Body Do (2020) by Sara Hendren. Her book is about how the design of the world around us impacts us, particularly those of us who don’t fit into the narrow band of what is considered “normal,” and how we can build a better world that goes beyond accommodation. Her book begins with the question “Who is the built world built for?” and with a quote from Albert Camus: “But one day the ‘why’ arises, and everything begins in that weariness tinged with amazement” (1).

“Why” is such a simple word, but asking it can completely alter the way we see the world. There’s so much in our world that we simply take for granted or assume is the only way because some ideology (like neoliberalism) has so deeply limited the scope of our imagination. Most of what exists in our world is based on some sort of ideological bias, and when we ask “why” we crack the world open and allow in other possibilities. Before I read the book Invisible Women (2021) by Caroline Criado Perez, I already knew that there was a bias towards men in research and data collection, as in most things, but I didn’t realize the extent to which the world was designed as if men were the only people who inhabited it and how dangerous and harmful it makes the world for women. What Can a Body Do similarly begins with an exploration of the construction of “normal” and how design based on that imagined normal person can exclude and harm people who aren’t considered normal, particularly those with disabilities. The book is a wonderful companion to Invisible Women in looking at why the world is designed the way it is and how it impacts those it clearly was not built for. I’ll explore that more in a later essay in this series.

One thing I took for granted for a very long time was time itself. I thought of time in terms of clocks and calendars, not the rhythms of my body nor the seasons (unless you count the start and end of each academic term as a season). I believed that time was scarce, that we were meant to use it to do valuable things, and that anything less was a waste of our precious time. I would beat myself up when, over Spring Break, I didn’t get enough practical home or scholarship projects done or if I didn’t knock everything off my to-do list at the end of a work week. I would feel angry and frustrated with myself when my bodily needs got in the way of getting things done (I’m writing this with ice on both knees due to a totally random flare of tendinitis when I’d planned to do a major house cleaning today so I’m really glad I don’t fall into that shooting myself with the second arrow trap as much as I used to). I looked for ways to use my time more efficiently. I am embarrassed to admit that I owned a copy of David Allen’s Getting Things Done and tried a variety of different time management methods over the years that colleagues and friends recommended (though nothing ever stuck besides a boring, traditional running to-do list). I’d often let work bleed into home time so I could wrap up a project because not finishing it would weigh on my mind. I was always dogged by the idea that I wasn’t getting enough done and that I could be doing things more efficiently. It felt like there was never enough time all the time. 

Black and white photo of a man hanging from a clock atop a building. From Harold Lloyd’s Safety Last (1923)

I didn’t start asking questions about time until I was 40, and the first one I asked was a big one: “what is the point of our lives?” Thinking about that opened a whole world of other questions about how we conceive of time, what kinds of time we value, to what end we are constantly trying to optimize ourselves, what is considered productive vs. unproductive time, why we often value work time over personal time (if not in word then in deed), why time often requires disembodiment, etc. The questions tumbled out of me like dominoes falling. And with each question, I could see more and more that the possibility exists to have a different, a better, relationship with time. I feel Camus’ “weariness, tinged with amazement.”

This is an introduction to a series of essays about time: how we conceive of it, how it drives our actions, perceptions, and feelings, and how we might approach time differently. I’ll be pulling ideas for alternative views of time from a few different areas, particularly queer theory, disability studies, and the slow movement. I’m not an expert in all these areas, but I’ll be sure to point you to people more knowledgeable than me if you want to explore these ideas in more depth.

How many of you feel overloaded with work? Like you’re not getting enough done? How many of you are experiencing time poverty: where your to-do list is longer than the time you have to do your work? How many of you feel constantly distracted and/or forced to frequently task-switch in order to be seen as a good employee? How many of you feel like you’re expected to do or be expert in more than ever in your role? How many of you feel like it’s your fault when you struggle to keep up? More of us are experiencing burnout than ever before and yet we keep going down this road of time acceleration, constant growth, and continuous availability that is causing us real harm. People on the whole are not working that many more hours than they used to, but we are experiencing time poverty and time compression like never before, and that feeling bleeds into every other area of our lives. If you want to read more about how this is impacting library workers, I’ll have a few article recommendations at the end of this essay.

My exploration is driven largely by this statement from sociologist Judy Wajcman’s (2014) excellent book Pressed for Time: “How we use our time is fundamentally affected by the temporal parameters of work. Yet there is nothing natural or inevitable about the way we work” (166). We have fallen into the trap of believing that the way we work now is the only way we can work. We have fallen into the trap of centering work temporality in our lives. And we help cement this as the only possible reality every time we choose to go along with temporal norms that are causing us harm. In my next essay, I’m going to explore how time became centered around work and how problematic it is that we never have a definition of what it would look like to be doing enough. From there, I’m going to look at alternative views of time that might open up possibilities for changing what time is centered around and seeing our time as more embodied and more interdependent. My ideas are not the be-all end-all and I’m sure there are thinkers and theories I’ve not yet encountered that would open up even more the possibilities for new relationships with time. To that end, I’d love to get your thoughts on these topics, your reading recommendations, and your ideas for possible alternative futures in how we conceive of and use time. 

Works on Time in Libraries

Bossaller, Jenny, Christopher Sean Burns, and Amy VanScoy. “Re-conceiving time in reference and information services work: a qualitative secondary analysis.” Journal of Documentation 73, no. 1 (2017): 2-17.

Brons, Adena, Chloe Riley, Ean Henninger, and Crystal Yin. “Precarity Doesn’t Care: Precarious Employment as a Dysfunctional Practice in Libraries.” (2022).

Drabinski, Emily. “A kairos of the critical: Teaching critically in a time of compliance.” Communications in Information Literacy 11, no. 1 (2017): 2.

Kendrick, Kaetrena Davis. “The public librarian low-morale experience: A qualitative study.” Partnership 15, no. 2 (2020): 1-32.

Kendrick, Kaetrena Davis and Ione T. Damasco. “Low morale in ethnic and racial minority academic librarians: An experiential study.” Library Trends 68, no. 2 (2019): 174-212.

Lennertz, Lora L. and Phillip J. Jones. “A question of time: Sociotemporality in academic libraries.” College & Research Libraries 81, no. 4 (2020): 701.

McKenzie, Pamela J., and Elisabeth Davies. “Documenting multiple temporalities.” Journal of Documentation 78, no. 1 (2022): 38-59.

Mitchell, Carmen, Lauren Magnuson, and Holly Hampton. “Please Scream Inside Your Heart: How a Global Pandemic Affected Burnout in an Academic Library.” Journal of Radical Librarianship 9 (2023): 159-179.

Nicholson, Karen P. “‘Being in Time’: New Public Management, Academic Librarians, and the Temporal Labor of Pink-Collar Public Service Work.” Library Trends 68, no. 2 (2019): 130-152.

Nicholson, Karen. “On the space/time of information literacy, higher education, and the global knowledge economy.” Journal of Critical Library and Information Studies 2, no. 1 (2019).

Nicholson, Karen P. “‘Taking back’ information literacy: Time and the one-shot in the neoliberal university.” In Critical library pedagogy handbook (vol. 1), ed. Nicole Pagowsky and Kelly McElroy (Chicago: ACRL, 2016), 25-39.

Awesome Works on Time Cited Here

Hendren, Sara. What Can a Body Do?: How We Meet the Built World. Penguin, 2020.

Odell, Jenny. Saving Time: Discovering a Life Beyond Productivity Culture. Random House, 2023.

Wajcman, Judy. Pressed for time: The acceleration of life in digital capitalism. University of Chicago Press, 2020.

Code4libBC Day 1: Talk Notes / Cynthia Ng

Code4libBC Day 1 talk notes: “Giving back to the community through transparency and a public handbook” (see the slides and the whole script). Twitter archiving, post-Twitter. Speaker: Daniel Sifton, VIU; other talk credits: Dalys Darney, Dana McFarland, Sarah Ogden. Leveraging open source tools for many years, archiving tweets on COVID and wildfires…

Presentation: Giving back to the community through transparency and a public handbook / Cynthia Ng

This was presented at Code4libBC 2024. It’s a version of my previous Support Driven talk, edited down and adapted for a library audience. Slides are on GitHub. Introduction: Hi everyone, thanks for joining. My name is Cynthia Ng, although most people know me, and you’ll find me online, as Arty. As a quick note, I have …

Fee-Only Bitcoin / David Rosenthal

Mining a Bitcoin block needs to be costly to ensure that the gains from an attack on the blockchain are less than the cost of mounting it. Miners have two sources of income to defray their costs, the block rewards and the fees for the transactions in the block.

On April 19th the block reward was halved from 6.25BTC to 3.125BTC. This process is repeated every 210,000 blocks (about every 4 years). It limits the issuance of BTC to 21M because around 2140 the reward will be zero; a halving will make it less than a satoshi.
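The 21M cap follows directly from that schedule, and can be sketched in a few lines of Python. This is a float approximation that ignores the integer satoshi arithmetic of the real implementation; the exact cap is a hair under 21M BTC.

```python
# Bitcoin's issuance schedule: the block reward started at 50 BTC and
# halves every 210,000 blocks. Once it falls below one satoshi (1e-8 BTC)
# it is effectively zero, so total issuance converges to ~21M BTC.
reward = 50.0   # initial block reward, in BTC
total = 0.0
while reward >= 1e-8:            # below a satoshi the reward is gone
    total += 210_000 * reward    # BTC issued over one 210,000-block era
    reward /= 2                  # the halving

print(f"total issuance ≈ {total:,.2f} BTC")
```

Running this converges to just under 21,000,000 BTC, which is why "around 2140 the reward will be zero".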

Long before 2140 the block rewards will have shrunk to become insignificant compared to the fees. Below the fold I look at the significance of the change to a fee-only Bitcoin.

Users wishing to transact bid the fee for their transaction in an auction. When demand for transactions is high, fees are high; at other times they are lower. The graph shows that around the halving there was heavy demand for transactions and the average fee per transaction rose to $127. This is an average; the distribution of fees is likely highly skewed.

The lower a transaction's fee, the less likely a miner is to include it in the block they are trying to mine, especially at times of high demand for transactions. Low-fee transactions can wait in the mempool for a long time. The average delay on 30th September 2023 was 25,810 minutes (nearly 18 days), while the median delay was 10 minutes. Clearly, there was a huge flood of very low-fee transactions.
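A toy illustration of that kind of skew, using entirely hypothetical delay numbers (not the actual 2023 data): a small flood of stuck low-fee transactions drags the mean far above the median.

```python
import statistics

# Hypothetical confirmation delays in minutes: 95 transactions confirm in
# about 10 minutes, while 5 low-fee ones sit in the mempool for ~28 days
# (40,000 minutes). The median reflects the typical case; the mean is
# dominated by the handful of stragglers.
delays = [10] * 95 + [40_000] * 5

print(statistics.median(delays))  # → 10
print(statistics.mean(delays))    # → 2009.5
```

The same mechanism explains how a median delay of 10 minutes can coexist with a mean of nearly 18 days.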


As I write, the average fee per transaction is $3.21 while the average cost (reward plus fee) is $65.72, so transactions are 95% subsidized by inflating the currency. Over time, miners reap about 1.5% of the transaction volume. The miners' daily income is around $30M, below average. This is about 2.5E-5 of BTC's "market cap".

Let's assume, optimistically, that this below-average daily fraction of the "market cap" is sufficient to deter attacks, and examine what might happen in 2036 after three more halvings. The block reward will be 0.39BTC. Let's work in 2024 dollars and assume that the BTC "price" exceeds inflation by 3.5%, so in 12 years BTC will be around $98.2K.

To maintain deterrence, miners' daily income will need to be about $50M. Each day there will be about 144 blocks generating 56.16BTC, or about $5.5M, which is 11% of the required miners' income. Instead of 5% of the income, fees will need to cover 89% of it. The daily fees will need to be $44.5M. Bitcoin's blockchain averages around 500K transactions/day, so the average transaction fee will need to be around $90, or around 30 times the current fee.

One might think that, were BTC to proceed properly moonwards, this problem would go away. Let's repeat the calculation assuming BTC = $1M in 12 years. Miners' daily income would need to be around $500M. The daily rewards would be about $55M, so the fees would need to be $445M, the same 11%. Thus the average fee would need to be around $900. The problem scales with the "price".
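The arithmetic of both scenarios can be reproduced in a few lines. This is a sketch under the post's own assumptions (~144 blocks/day, ~500K transactions/day, ~21M BTC supply, and deterrence requiring miners' daily income to stay near 2.5E-5 of the "market cap"); the function and constant names are illustrative, not anything from Bitcoin itself.

```python
BLOCKS_PER_DAY = 144
TXS_PER_DAY = 500_000
SUPPLY_BTC = 21_000_000       # approximate supply by 2036
INCOME_FRACTION = 2.5e-5      # miners' daily income as a fraction of "market cap"

def required_avg_fee(price_usd: float, block_reward_btc: float) -> float:
    """Average fee per transaction needed so fees cover whatever
    the block rewards don't, given the deterrence assumption."""
    required_income = INCOME_FRACTION * SUPPLY_BTC * price_usd
    reward_income = BLOCKS_PER_DAY * block_reward_btc * price_usd
    return (required_income - reward_income) / TXS_PER_DAY

# 2036 scenarios from the post: block reward 0.39BTC after three more halvings
print(round(required_avg_fee(98_200, 0.39)))     # → 92
print(round(required_avg_fee(1_000_000, 0.39)))  # → 938
```

The small differences from the rounded figures in the text ($90, $900) come from rounding in the intermediate steps; the scaling with "price" is the point.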

It seems probable that withdrawal of the 95% subsidy on transactions will cause some problems. Indeed, there is considerable economic research making this point, including:
  • In 2016 Arvind Narayanan's group at Princeton described a related instability in Carlsten et al.'s On the instability of bitcoin without the block reward. Narayanan summarized the paper in a blog post:
    Our key insight is that with only transaction fees, the variance of the miner reward is very high due to the randomness of the block arrival time, and it becomes attractive to fork a “wealthy” block to “steal” the rewards therein.
  • More generally, the analysis of 2018's The Economic Limits Of Bitcoin And The Blockchain by Eric Budish essentially concludes that, for safety, the value of transactions in a block must be less than the sum of the mining reward and the fees it contains.
  • In 2019 Raphael Auer of the Bank for International Settlements published Beyond the doomsday economics of “proof-of-work” in cryptocurrencies:
    The key takeaway of this paper concerns the interaction of these two limitations: proof-of-work can only achieve payment security if mining income is high, but the transaction market cannot generate an adequate level of income. ... the economic design of the transaction market fails to generate high enough fees. A simple model suggests that ultimately, it could take nearly a year, or 50,000 blocks, before a payment could be considered “final”.
The last time I wrote about this issue was in 2021's Taleb On Cryptocurrency Economics.

#ODDStories 2024 @ Cuddalore, India 🇮🇳 / Open Knowledge Foundation

The Open Data Day event “Village Leaders Conclave: Navigating the Climate Crisis with Open Data” was successfully conducted in March at the SWEAD Training Center in Bhuvanagiri, Cuddalore District, Tamil Nadu. The event brought together 101 elected women and men, comprising village leaders and representatives, providing them with a unique platform to address the challenges posed by climate change through the lens of open data.

Event Highlights:

  1. Inaugural Session:
    • The event commenced with an inspiring inaugural session, setting the tone for the conclave’s objectives and emphasizing the critical role of open data in climate resilience.
  2. Interactive Sessions:
    • Engaging sessions were conducted to deepen participants’ understanding of open data and its applications in climate-related challenges.
    • Discussions focused on real-world case studies, showcasing successful implementations of open data solutions in different communities.
  3. Workshops:
    • Practical workshops equipped participants with hands-on experience in leveraging open data tools and platforms for climate monitoring and analysis.
  4. Expert Panels:
    • Renowned experts in the field shared valuable insights, addressing queries from participants and offering strategic guidance on incorporating open data into climate action plans.
  5. Networking Opportunities:
    • Participants had the chance to connect, exchange ideas, and establish a network of leaders committed to driving positive change in their communities.
  6. Collaboration Initiatives:
    • The conclave facilitated collaborative efforts among village leaders, encouraging the development of joint projects and initiatives to tackle climate challenges collectively.
  7. Event Conclusion:
    • The event concluded with a commitment ceremony where participants pledged to implement the knowledge gained in their respective communities.
    • Certificates of participation were distributed, recognizing the dedication of the village leaders to climate resilience.

Outcomes and Achievements:

  1. Knowledge Empowerment:
    • Village leaders gained a comprehensive understanding of the role of open data in climate resilience, empowering them to make informed decisions.
  2. Network Building:
    • A strong network of leaders committed to utilizing open data for sustainable solutions was established, fostering future collaborations.
  3. Community Impact:
    • The event’s knowledge transfer is expected to result in the implementation of effective climate action plans across participant communities.
  4. Inspiration for Change:
    • Participants left the conclave inspired and equipped with the tools and knowledge needed to champion climate resilience in their respective villages.

The Village Leaders Conclave stands as a testament to the power of collaboration, knowledge-sharing, and the strategic use of open data in addressing the challenges posed by the climate crisis at the local level. We extend our gratitude to all participants, sponsors, and collaborators who contributed to the success of this groundbreaking event. Together, we take a step towards a more sustainable and resilient future for our communities.

About Open Data Day

Open Data Day (ODD) is an annual celebration of open data all over the world. Groups from many countries create local events on the day where they will use open data in their communities.

As a way to increase the representation of different cultures, since 2023 we offer the opportunity for organisations to host an Open Data Day event on the best date within a one-week period. In 2024, a total of 287 events happened all over the world between March 2nd-8th, in 60+ countries using 15 different languages.

All outputs are open for everyone to use and re-use.

In 2024, Open Data Day was also a part of the HOT OpenSummit ’23-24 initiative, a creative programme of global event collaborations that leverages experience, passion and connection to drive strong networks and collective action across the humanitarian open mapping movement.

For more information, you can reach out to the Open Knowledge Foundation team by email. You can also join the Open Data Day Google Group to ask for advice or share tips and get connected with others.

Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 14 May 2024 / HangingTogether

The following post is one in a regular series on issues of Inclusion, Diversity, Equity, and Accessibility, compiled by a team of OCLC contributors.

Reader sits cross-legged, face obscured by a book, appearing to levitate amidst library shelves. Photo by Mark Williams on Unsplash.

Asian and Pacific Islander authors face book bans

In the United States, we celebrate May as Asian/Pacific American Heritage Month, yet books by Asians and Pacific Islanders in the United States have been subject to the same wave of challenges and bans in schools and libraries as have books by and about other People of Color and members of the LGBTQ+ community. Writing for ABC News Digital in New York, race and culture reporter Kiara Alfonseca says, “Titles highlighting Asian American cultures have been targets among the long and growing inventory of books singled out by critics, prompting concerns about representation in literature.” In her article “Book bans, threats and cancellations: Asian American authors face growing challenges,” Alfonseca speaks with Samira Ahmed, author of the novels Internment and Hollow Fires; Grace Lin, author of A Big Mooncake for Little Star; and Hena Khan, author of the picture book Under My Hijab.

As Alfonseca writes, groups such as Moms for Liberty claim to challenge books for what they consider to be “objectionable” content, “including violence, sexual content or anti-American sentiment.” However, many of the books simply try to represent identities that have long been underrepresented and/or attempt to address complex political issues. Contributed by Jay Weitz.

Representation in literature provides a mirror for young people

In a guest post in School Library Journal, author Sarah-SoonLing Blackburn writes about her own memories connecting with works that felt like they reflected her experiences, such as when she read Celeste Ng’s novel Everything I Never Told You. She discusses the importance of offering mirrors to young people, options for fiction and other works in which they can see themselves. When these cultural mirrors are absent, it can create a sense of isolation and invisibility.

Books like Blackburn’s Exclusion and the Chinese American story, which helps to expand the juvenile and young adult concepts of the Asian American experience in history, are important contributions – not only for young Asian readers but for all young readers to gain an appreciation for stories from multiple cultures and perspectives. Contributed by Merrilee Proffitt.

CILIP launches library awareness campaign 

“Libraries Change Lives” is a campaign to support public libraries in the United Kingdom by the Chartered Institute of Library and Information Professionals (CILIP). The campaign, which runs 24-28 June 2024, aims to show politicians the value of public libraries in anticipation of the next General Election. CILIP is asking all UK libraries to submit a case study about library activities and events that show the impact of libraries on users. The categories for the case studies on CILIP’s site include “learning and social mobility,” which is one of many ways libraries support traditionally underrepresented groups.

CILIP does not mention the recent independent review of English public libraries by Baroness Elizabeth Sanderson of Welton, but I assume it was one of the factors in creating their new campaign. Sanderson’s report notes the lack of recognition of library work in all levels of government as one of the fundamental challenges to overcome in supporting libraries. The deadline for submitting a case study to CILIP is 24 May 2024. I know many of our Hanging Together readers are not located in the UK, but I hope those of you reading this who work at a UK library will submit a case study. Contributed by Kate James.

The post Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 14 May 2024 appeared first on Hanging Together.

#ODDStories 2024 @ Salvador, Brazil 🇧🇷 / Open Knowledge Foundation

Co-written by Pedro Melhado, Igor Santana and Aline Graziadei

The goal of the YouthMappers UFBA’s Open Data Day event “Communities Mapping Communities: Brazil-Africa Connection” was to strengthen vulnerable communities in Brazil and Africa through the exchange of knowledge facilitated by open mapping data. Specific themes were addressed in three sessions directly linked to the following SDGs:

  • Session 1 – Kenya-Brazil Connection: Mapping for and with favelas, promoting community improvement interventions (SDGs 11 – Sustainable Cities and Communities and 17 – Partnerships for the Goals).
  • Session 2 – Mozambique-Brazil Connection: Use of open data for location and population estimation (SDGs 11 – Sustainable Cities and Communities and 17 – Partnerships for the Goals).
  • Session 3 – Bahia Connection: Mapping vulnerable urban areas promoting resilience to extreme events (SDGs 11 – Sustainable Cities and Communities and 13 – Climate Action).

The event comprised three days of seminars with 16 speakers from Brazil, Kenya, Mozambique, and the USA, two workshops, and one mapathon, including one hybrid day and two online days. All this was moderated by four graduate student members of our YouthMappers UFBA chapter, with the support of seven additional members directly involved in organizing the event. The complete program is available here.

We are very happy with the quality and turnout of the event, which reveal its success: there were 75 registered participants, 58% undergraduates and 27% postgraduates. 35% of them were linked to the field of geography, with others from diverse areas such as Engineering (surveying and cartography, forestry, civil), Architecture and Urbanism, Veterinary Medicine, Epidemiology, Medicine, History, Geology, and High School. Most participants learned about the event through WhatsApp, colleagues, and professors.

On March 2nd, 18 people participated in person, including seven residents of the vulnerable community of Pau da Lima in Salvador, where mapping began on OSM during the mapathon on March 5th. A total of 52 certificates were issued for online or in-person participation in event activities.

For participants who filled out the seminar evaluation form, the event was very beneficial: 81% had a very positive evaluation (and 19% had a positive evaluation), highlighting the speakers, event themes, and interaction with people from the African continent. Regarding areas for improvement, 50% indicated that the timetable was not the best for them (we were limited to the morning due to differences in time zones with African countries), followed by event promotion that could be improved to reach more people. Over 90% intend to participate in the next edition of the event.

In one week, the mapathon has already registered 3,532 edits in the project. But the area to be mapped is quite large, and we continue mapping!

The activities were recorded on the YouthMappers UFBA chapter’s YouTube channel and had 616 views just one week after the event!

The Youth Mappers UFBA chapter thanks everyone who participated in our Open Data Day event, whether by watching, asking questions, speaking, or promoting. We also thank the Polytechnic School of the Federal University of Bahia, the postgraduate programs PPEC and MAASA, and our supporters TOMTOM, the Humanitarian OpenStreetMap Team (HOT), and the Open Knowledge Foundation (OKFN)!

But the journey doesn’t end here! The cultural exchange activities between Brazil and Kenya, “Bringing Cultures Together through Open Mapping,” continue every other Saturday! Folks from the Community Mappers of Kibera (the largest favela in Africa) are getting in touch with the Portuguese language, while those from the social startup CommuniTech, from the Pau da Lima community (both from the outskirts of Salvador), and from the Community Mappers UFBA are getting acquainted with English! The inaugural workshop held during YouthMappers UFBA’s Open Data Day was inspiring, and this group will surely go very far!


We are telling G7 leaders that the AI infrastructure must be open / Open Knowledge Foundation

This week, 13-14 May, sees the start of the Think7 Italy Summit (T7), a meeting of one of the G7 working groups under the Italian presidency. Open Knowledge Foundation is honoured to participate and influence the decisions of the world’s largest economies by offering a Policy Brief in partnership with the Digital Public Goods Alliance (DPGA), Open Future Foundation, the Center for European Policy Network (CEP) and MicroSave Consulting.

The title of our contribution is “Democratic governance of AI systems and datasets”, offered in the context of Task Force 4: Science and Digitalisation for a Better Future.

As the Open Knowledge Foundation, we are concerned about a closed digital future in which only a few elites can seize the power of AI for private purposes. We want more democracy in the way AI is both built and deployed. That’s why we’re pushing for the adoption of public option AI models designed to further the public interest. This approach aligns closely with the principles outlined in the Hiroshima AI Process Comprehensive Policy Framework (responsible AI practices, ethical guidelines, and global collaboration) and the G20 New Delhi Leaders’ Declaration (leveraging DPI for inclusive development).

As the Think7 Summit unfolds, we want to reiterate our message and highlight once again the importance of openness as a design principle for the democratic use and collective social benefit of Artificial Intelligence. As a starting point, we recommend that G7 countries create a future where AI is not locked up for a few, through two actions:

Investing in open, publicly funded datasets: 

  • Support the creation of high-quality open datasets accessible for AI development to address global challenges and serve the public interest – especially when new datasets can fill data blindspots. In addition, deploying these datasets should go hand in hand with adopting governance mechanisms that balance data accessibility for AI development with rights protection and risk mitigation. This requires identifying priority areas and allocating funding to support initiatives that create, refine and make available datasets tailored for AI advancement. 
  • Promote inclusive governance of these datasets and encourage collaboration among stakeholders such as researchers, developers, policymakers, and civil society groups. Datasets with a significant public impact should be managed as a data commons, with principles and policies that ensure equitable data sharing as a digital public good.  This approach enables collective decision-making by either the data subjects or other stakeholders involved in the data governance process while protecting data rights and serving the public interest.

Supporting the creation of Open Source AI: 

  • When allocating funds for AI projects, governments should prioritise supporting open-source initiatives. Additionally, in funding AI development, governments should ensure that any resources generated from this funding, including datasets, are shared as openly as possible.
  • Governments should strengthen the open-source ecosystem by promoting the use of open-source AI solutions in the public sector. This could include encouraging government agencies to use open-source software by establishing procurement policies that favour open-source software and building competencies inside the public administration. 

We hope that the G7 economic leaders will consider our recommendations carefully and commit to building a fairer, more sustainable and open future.

You can register on the T7 website to follow the sessions of each task force online.

Reimagine descriptive infrastructure: dreaming and enacting change together / HangingTogether

[This blog post was co-authored by Mercy Procaccini]

“Language is powerful. It conveys meaning, framing, and sets intentions.” – Reimagine Descriptive Workflows: A Community-informed Agenda for Reparative and Inclusive Descriptive Practice, 2022

Close-up photo of leaves, with a fern frond in the process of unfurling. Photo by Utsman Media on Unsplash.

In 2020, in response to the murder of George Floyd, OCLC leadership and staff charted a path as an organization, pledging: “We will think critically and implement actions to advance racial equity. We will not forget. We will continue to do more.”

An immediate outcome of this commitment was Reimagine Descriptive Workflows (RDW), a community-informed project that took place in 2021. RDW was structured to surface issues and to identify opportunities to effect lasting change in descriptive practices.

Among the goals set by the RDW participants were to:

  • Acknowledge a need to change the current system
  • Connect with others doing similar work
  • Identify opportunities to engage in collaborative problem-solving
  • Develop concrete approaches to enable reimagined descriptive metadata practices.

A report that documents the convening and its findings, Reimagine Descriptive Workflows: A Community-informed Agenda for Reparative and Inclusive Descriptive Practice reflects that “power and bias in collections is hard-coded from the beginning of the descriptive workflow process,” and that powerful naming and labeling systems which include content standards and data communication formats “can create systemic imbalances beyond the inherent problems of labeling and description.”

Acknowledging the need to enact change is only a first step. The biggest opportunity is in taking a hard look at existing structures, acknowledging the work that must be done, and charting a course forward. Acting on racist and oppressive structures requires stepping back to reframe and relearn, and working in community with others.

OCLC has been reflecting on our own role in repairing systems and workflows. OCLC takes seriously the responsibility to “dream and enact change,” and has been actively considering opportunities to recognize historical and current exclusions in the library industry and do our part to disrupt a cycle of harm in library descriptive practices. We can utilize learnings and practices derived from Reimagine Descriptive Workflows in doing this work.

WorldCat ontology

During the development of a fundamental component of OCLC’s linked-data ecosystem, the WorldCat ontology, the OCLC product management team recognized the need to pause standard production workflows to conduct an internal diversity, equity, and inclusion review of the ontology prior to release. The WorldCat ontology contains class and property labels and definitions that are used for production metadata services such as WorldCat Entities and additionally underpin OCLC’s entity editor, OCLC Meridian. The WorldCat ontology was developed to model extant bibliographic descriptions and authority data and builds on earlier OCLC projects, which benefited from the involvement and participation of community partners.

Seeking not to replicate harm that is rooted in legacy knowledge structures and systems, an internal diversity, equity, and inclusion review of the property labels and definitions surfaced opportunities for revision. In response, we sought a broader range of perspectives and expertise by engaging community members in reviewing the ontology.

Dreaming together: planning and carrying out the work

The OCLC team (comprised of staff from the OCLC Research Library Partnership, Global Product Management, and Global Technologies) invited a small group of professionals with a demonstrated commitment to reparative description and who work with organizations that have prioritized this work. In many cases these participants also drew from their lived experiences.

The project team took to heart recommendations from Reimagine Descriptive Workflows that involvement from community members be undertaken with care to minimize extractive practices. In this spirit, we offered an honorarium to participants.*

Given our relatively tight timeline, we were overjoyed that these skilled library and archival professionals made room in their busy schedules for this work. We provided drafts of the ontology and supporting primer document and invited participants to share written comments prior to our virtual discussion. Being able to review participants’ comments, suggestions, and concerns in advance allowed the OCLC team to synthesize common themes and to approach our conversations with care.

We organized participants into small discussion groups (ranging from 2-4 people), a format that has worked in previous projects. This provided the opportunity for multiple voices to be in conversation with us and with one another in each session. From the interview team’s perspective, this gave discussants a chance to uphold one another’s ideas and learn from each other’s expertise, and it allowed us to see where topics gained weight.

Learnings: some examples

Making space for communal ownership and creation

The comments and discussion reflected an interrogation of bibliographic ontology specifications that embed a Western understanding of works as individual acts of creation. Indigenous communities, among others, consider the inception of works (such as authorship, composing, and building) as community—rather than individualistic—endeavors. Our recommendations suggested revisions to acknowledge and account for shared creation and ownership.

Additionally, assertions about individual creation can at times erase or obscure the creation of knowledge or intellectual content by Indigenous and non-Western communities, instead incorrectly attributing the work to a colonial or settler individual/author. One respondent explained:

[T]here has been a lot of knowledge and cultural extraction from Tribal communities. … There is a perfect example… There is a community [and] their Indigenous language has been around for a very long time, but an anthropologist came in and documented it, wrote it down and now they [the anthropologist] are the author. Literally now based on Western copyright law, [the anthropologist is] the owner of the Indigenous language so now the Tribe has to go through that person’s estate to get actual rights to their own language… So that’s … where this term [author] could be problematic. [O]ur language has been documented by anthropologists …but we don’t consider them the creator of that … so, author here is “person responsible for the content” but they’re not, they’re just the person documenting it. 

Discussions of this topic can be found in a variety of spaces (including community-informed publishing and data collection) and around Indigenous data sovereignty. We found this to be a powerful example of the necessity and importance of breaking away from existing frameworks that exclude other ways of describing and organizing information.

The inherent non-neutrality of language

Respondents’ feedback also illuminated problems with descriptive structures and terms that imply neutrality related to the acquisition or ownership of materials, property, and land. The draft ontology terms and definitions in some cases could uphold a narrative that obscures histories of seizure, colonization, and violence. Participants explained that their concerns were not about finding a better term, but an acknowledgment that these actions are not neutral. The recommendation here is to consider language that invites critical reflection regarding the circumstances of ownership and acquisition.

For example, the draft definition of “acquired/acquired by” included the explanatory phrase “otherwise obtained ownership over the agent or place.” One respondent commented:

Consider well-established shift away from validating “ownership” in context of slavery. Acquisition (through purchase or not) in hindsight might not amount to accepted ownership but more to unjust violence, mastery, coercion, detainment, theft, loot. Emphasizing “ownership” might therefore erase those potential transgressions. I would recommend qualifying ownership further.  

This example helped the project team understand how “otherwise obtained” essentially functions as a euphemism here—we understand what it means but use language that obscures the true nature of the activity and presents it as seemingly neutral.

Similarly, in response to “founded by,” one respondent explained that their Tribal community considered words such as “founded, discovered” to be “very colonized, offensive words… it makes it sound like it [place/location] wasn’t in existence before the founding date, or whoever founded it.” 

Kinship relationships are important and expansive

The original definitions for a subset of properties that can be understood as kinship-based relationships (including parent, child, and spouse) emphasized legal and biological bases in defining them. Respondent feedback uniformly suggested that element description should be expanded to relationships that are self-defined. As one respondent explained:

Partner and spouse are not equivalent, even though conceptually, a lot of people put those together. …for many in the [LGBTQ+] community, partner was a poor equivalent to spouse— “This person is my spouse; he’s not my partner” (we don’t have a business relationship)…. Going back to Indigenous communities, spousal relationships did not necessarily have legal status. [Enslaved people], for example, couldn’t get married or have a spousal relationship, unless it was recognized by the state, and of course, those relationships existed, in spite of those prohibitions. 

For the ontology properties “given name” and “family name,” respondents indicated that the way these properties are currently structured feels exclusionary to many cultural practices and groups. In advising the project team of the shortcomings of the current structures, they acknowledged the challenges and provided resources rather than specific guidance.

There [are] just widespread issues with how names are represented, not just “is this a family name versus a given name?” but, which names count, which names are used in what circumstances? Because they’re often really contextually specific. And also, what’s the normative or what is the form of the name that the person uses in terms of both the names and the sequence? In my experience … this disproportionately tends to affect Asians and also scholars who work in Asian languages, because people adopt … their American name, and then they have their birth name and then they have different transliterations that they prefer. Sometimes it’s just, this is how you render it. But it often crosses over the other way where you have Western scholars who are working in Asian languages and they have a very deliberately chosen preferred transliteration that also has a semantic meaning. So it goes a lot of different ways….I think that even if you just make a choice out of product or pragmatic reasons, it needs a lot of scoping and a lot of description … to be used consistently at the very least. 

Moving forward and enacting change

As a result of this process, OCLC is making improvements to the WorldCat ontology. Some revisions have already been implemented, while other issues await better solutions. This work will be ongoing; updates and revisions will continue to be made in dialogue with users and community.

In much of the respondents’ feedback, they acknowledged that the issues were not necessarily with the ontology and the descriptions themselves, but in how their usage might cause harm or obscure necessary truths. They invited OCLC to consider how—through scope notes, primer documents, and programming—to support appropriate and inclusive usage of the ontology. So while OCLC must effect change in the technology systems and structures we develop and maintain, we know that future steps will include working collaboratively with community to illuminate the challenge areas and transform descriptive practice.

In this blog post we’ve shared just a few of our key learnings. We are grateful for the opportunity to learn so much. We (and the WorldCat ontology) have truly benefited from the insights and expertise of our reviewers. The process echoes one of the seeming contradictions that must be balanced in working towards anti-racist and less harmful descriptive practices drawn from the RDW report:

“Language must be precise to demonstrate respect and inclusivity / In a diverse world, there will never be full agreement on the same words.”

This effort, working in community to ensure that the WorldCat ontology shifts away from systems and structures that were developed during the nineteenth century, demonstrates that this work is complex, nuanced—and worth undertaking.

Closing with gratitude

We offer thanks and appreciation to those who contributed to this effort, not only for their time and expertise, but for their generosity in explaining concepts, care for one another, and commitment to speaking up for communities.

  • Pardaad Chamsaz, Metadata Lead for Equity and Inclusion, British Library
  • Iman Dagher, Arabic & Islamic Studies Metadata Librarian, UCLA
  • Christine Fernsebner Eslao, Metadata Technologies Program Manager, Harvard Library Information & Technical Services
  • Selena Ortega-Chiolero, Museum Specialist, Chickaloon Village Traditional Council, the governing body of Nay’dini’aa Na’ Kayax (Chickaloon Native Village)
  • Adolfo R. Tarango, Cataloging and Metadata Librarian, University of British Columbia
  • Thurstan Young, Collection Metadata Standards Manager, British Library

Thanks are also due to the cross-divisional team that undertook this work within OCLC.

  • Rebecca Dean, Lead Data Analyst, OCLC Global Product – ontology development team
  • Jeff Mixter, Senior Product Manager, OCLC Global Product – project co-lead, linked data product lead
  • Charlene Morrison, Senior Data Analyst, OCLC Global Technology – ontology development team
  • Michael Phillips, Vocabulary Specialist, OCLC Global Product – project co-lead, ontology development team
  • Mercy Procaccini, Senior Program Officer, OCLC Research Library Partnership – interview design and reporting team
  • Merrilee Proffitt, Senior Manager, OCLC Research Library Partnership – interview design and reporting team
  • Richard Urban, Senior Program Officer, OCLC Research Library Partnership –  project co-lead, interview design and reporting team
  • Anne Washington, Product Analyst, OCLC Global Product – ontology development team
  • Gina Winkler, Executive Director of Metadata and Digital Services, OCLC Global Product – project sponsor

Would you like updates on OCLC’s linked data products and services? Sign up here!

* While offering an honorarium is intended to minimize extractive practices, it is not a panacea, and responses may vary. Some participants may be unable to accept payment for legal or practical reasons; others may choose to contribute without compensation.

The post Reimagine descriptive infrastructure: dreaming and enacting change together appeared first on Hanging Together.

IIPC 2024 aka EmiLIL in Paris / Harvard Library Innovation Lab

The Perma team has landed back in the US after our trip to the International Internet Preservation Consortium’s Web Archiving Conference. This year the IIPC met in Paris, at the Bibliothèque Nationale de France.

This is a gathering each year of colleagues from around the globe who are working in the web archiving space, ranging from institutions responsible for legal deposits, to researchers working with collections, to people who are building the core tools used for web archiving.

A major theme of the conference was the introduction of AI technologies into the web archiving space.

The Library of Congress is investigating how machine learning can address some of the difficulties associated with searching and accessing PDF collections, which are becoming more and more important to the historical record. You can read their paper, “Grappling with the Scale of Born-Digital Government Publications: Toward Pipelines for Processing and Searching Millions of PDFs”.

Folks at the University of North Texas have been using machine learning from a different angle: to help teams identify collection-relevant materials from large web archive troves. They say their work is close to being ready for libraries to ingest their historical collection policies and run them against their own metadata. You can read their paper, “Identifying Documents In-Scope of a Collection from Web Archives”.

Others were exploring ways to create a map of the web based on semantic similarity instead of traditional hyperlinks, and using LLMs to navigate large news media archives.

For our own part, there was representation from the Perma team at the Tools session on Friday. Kristi and Matteo shared their work on WARC-GPT and our developing concept of the librarianship of AI. Their world tour of talks on WARC-GPT continues this month, and we will post slides and any recordings of sessions we have available when they’ve all wrapped up. But in the meantime, trust us - it was great :)

Some other sessions we enjoyed tuning into included a workshop from friends at Webrecorder who were sharing new QA functionality for Browsertrix, and a great presentation from librarians at the National Library of the Netherlands who had taught themselves R in order to automate their validation and policy-checking workflow when processing new material. Their options for automation were somewhat limited by what they were allowed to run on their government-issued computers. We salute a team dedicated to skill building and working with what their IT departments mandate!

As always, spending time with the international community brought together by IIPC was a pleasure and we look forward to next year in Oslo!

Other uses of a car / Ed Summers

He had come from a country where mathematics and mechanics are natural traits. Cars were never destroyed. Parts of them were carried across a village and readapted into a sewing machine or water pump. The backseat of a Ford was reupholstered and became a sofa. Most people in his village were more likely to carry a spanner or screwdriver than a pencil. A car’s irrelevant parts thus entered a grandfather clock or irrigation pulley or the spinning mechanism of an office chair. Antidotes to mechanized disaster were easily found. One cooled an overheating car engine not with new rubber hoses but by scooping up cow shit and patting it around the condenser. What he saw in England was a surfeit of parts that would keep the continent of India going for two hundred years. (Ondaatje, 1993, p. 188)

This description of the uses of a car, apart from transportation, really spoke to me about how objects can overcome their intended use, and have an unexpected second (or third) act, when you add human care and ingenuity into the mix. Necessity is the mother of reinvention.

For no easy-to-articulate reason I’ve found myself rereading Michael Ondaatje recently. His work was very important to me in my 20s, I think because it was in vogue at the time, after he won the Booker Prize for The English Patient, and it was turned into a film (which won 9 Oscars).

He was born in Sri Lanka, spent his teenage years in England, and moved to Toronto to study, where he finally settled. I was kind of wandering in my 20s too, and I’m sure his writing appealed to me because it spoke to that keen awareness of new places, being uprooted, and putting down roots as best you could.

I’ve also found myself listening to some interviews with him that are available on the web. In many ways these feel like a luxury now, the web was still busy being born in the early 1990s. Hearing him speak, and read from his work, lends a great deal of richness and depth to his texts.

I think the thing I most appreciate about his novels is their poetic quality (he’s an accomplished poet as well). You can read them slowly. The chapters are sized perfectly, often being broken up into shorter segments, that then divide into the paragraphs within. I’m not in a rush to finish, or to get to The End.

His stories are told in intense, fluid detail, but in fragments that get stuck together like a collage, with intentional gaps between them. These narrative gaps invite you in to connect them–to participate in the telling of the story. Ondaatje talks about the importance of collage in some of his interviews. He famously starts with a very specific scene or moment, and then grows the larger structure from it, like a generative seed.

Another thing that came across in his interviews is how important archival research is to his writing. He goes deep learning about a particular topic, time, place, or profession, which then infuses his story. This of course appeals more to me now, after I’ve had a career working in libraries and archives. Ondaatje’s texts have a way of communicating this strange characteristic of surviving documents, and even fragments of documents, to transport and change us (Harris, 2002; Levy, 2001). But somehow he doesn’t do this by talking about documents explicitly. He enacts them, and puts them to work, instead.

I lost track of him after 2000, maybe because I started reading less fiction then, and became absorbed with work, which is something I regret a bit. Note: keep fiction in your life. I did read Anil’s Ghost recently, and I’ve got The Cat’s Table waiting on the top of the pile on my bedside table. Anil’s Ghost is/was amazing. The central character is a forensic pathologist. It’s about evidence, and political violence, and love, and work, and … so many things.


Harris, V. (2002). The archival sliver: power, memory, and archives in South Africa. Archival Science, 2(1-2), 63–86.
Levy, D. (2001). Scrolling forward: Making sense of documents in the digital age. New York: Arcade Publishing.
Ondaatje, M. (1993). The English patient (1st Vintage International ed). New York: Vintage Books.

Running Song of the Day / Eric Hellman

(I'm blogging my journey to the 2024 New York Marathon. You can help me get there.)

Steve Jobs gave me back my music. Thanks Steve!

I got my first iPod a bit more than 20 years ago. It was a 3rd generation iPod, the first version with an all-touch control. I loved that I could play my Bruce, my Courtney, my Heads and my Alanis at an appropriate volume without bothering any of my classical-music-only family. Looking back on it, there was a period of about five years when I didn't regularly listen to music. I had stopped commuting to work by car, and though commuting was no fun, it had kept me in touch with my music. No wonder those 5 years were such a difficult period of my life!

Today, my running and my music are entwined. My latest (and last 😢) iPod already has some retro cred. It's a 6th generation iPod Nano. I listen to my music on 90% of my runs and 90% of my listening is on my runs. I use shuffle mode so that over the course of a year of running, I'll listen to 2/3 of my ~2500 song library. In 2023, I listened to 1,723 songs. That's a lot of running!

Yes, I keep track. I have a system to maintain a 150-song playlist for running. I periodically replace all the songs I've heard in the most recent 2 months (unless I've listened to the song fewer than 5 times - you need at least that many plays to become acquainted with a song!). This is one of the ways I channel certain of my quirkier programmerish tendencies so that I project as a relatively normal person. Or at least I try.
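In the spirit of those programmerish tendencies, the refresh rule can be sketched in a few lines of Python. This is a toy illustration only - the song records, field names, and the exact two-month window are my assumptions for the sketch, not the actual system:

```python
import random
from datetime import datetime, timedelta

PLAYLIST_SIZE = 150
MIN_PLAYS = 5                 # plays needed to become acquainted with a song
RECENT = timedelta(days=60)   # "the most recent 2 months"

def resting(song, now):
    """A song rotates out once it's been heard recently AND often enough."""
    return (song["last_played"] is not None
            and now - song["last_played"] <= RECENT
            and song["play_count"] >= MIN_PLAYS)

def refresh(playlist, library, now=None):
    """Drop well-acquainted, recently heard songs; top back up to 150."""
    now = now or datetime.now()
    keep = [s for s in playlist if not resting(s, now)]
    pool = [s for s in library if s not in playlist and not resting(s, now)]
    return keep + random.sample(pool, PLAYLIST_SIZE - len(keep))
```

Run periodically, this keeps the playlist at 150 songs while letting under-played songs stick around until they've had their five listens.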

Last November, I decided to do something new (for me). I made a running playlist! Carefully selected to have the right cadence and to inspire the run! It was ordered so that particular songs would play at appropriate points of the Ashenfelter 8K on Thanksgiving morning. It started with "Born to Run" and ended with either "Save It for Later", "Breathless" or "It's The End Of The World As We Know It", depending on my finishing time. It worked OK. I finished with Exene. I had never run with a playlist before.

1. "Born to Run". Despite the name it's not the best running song, but it is a great start-me-up song. With 2,661 runners, it took 45 seconds or so before I crossed the starting line, and the first 45 seconds of BTR had me pumped.
2. "American Land". The first part of the race is uphill, so an immigrant song seemed appropriate.
3. "Wake Up" - Arcade Fire. Can't get complacent.
4. "Twist & Crawl" - The Beat. The up-tempo pushed me to the fastest part of the race.
5. "Night". Up and over the hill. "you run sad and free until all you can see is the night".
6. "Rock Lobster" - B-52s. This came up on shuffle last week while I was on the track and it was the perfect beats per minute. That gave me the idea to do a playlist.
7. "Shake It Off" - Taylor Swift. A bit of focused anger helps my energy level.
8. "Roulette". Recommended by the Nuts, and yes it was good. Shouting a short lyric helps me run faster.
9. "Workin' on the Highway". The 4th mile of 5 is the hardest, so "all day long I don't stop".
10. "Your Sister Can't Twist" - Elton John. There's a short nasty hill on this section, but I can rock and roll.
11. "Save It for Later" - The Beat. I could run all day to this, but "sooner or later your legs give way, you hit the ground."
12. "Breathless" - X. If I had hit my goal of 45 minutes, I would have crossed the finish as this started, but I was very happy with 46:12 and a 9:14 pace.
13. "It's The End Of The World As We Know It" - R.E.M. 48 minutes would not have been the end of the world, but I'd feel fine.

Last year, I started to extract a line from the music I had listened to during my run to use as the Strava title for the run. Through September 3, I would choose a line from a Springsteen song (he had to take a health timeout after that). For my New Year's resolution, I promised to credit the song and the artist in my run descriptions as well.

I find now that with many songs, they remind me of the place where I was running when I listened to them. And running in certain places now reminds me of particular songs. I'm training the neural network in my head. I prefer to think of it as creating a web of connections, invisible strings, you might say, that enrich my experience of life. In other words, I'm creating art. And if you follow my Strava, the connections you make to my runs and my songs become part of this little collective art project. Thanks!

Reminder: I'm earning my way into the NYC Marathon by raising money for Amref. 

Elon Musk: Threat Or Menace? Part 5 / David Rosenthal

Much of this series has been based on the outstanding reporting of the Washington Post, and the team's Trisha Thadani is back with Lawsuits test Tesla claim that drivers are solely responsible for crashes. My main concern all along has been that Musk's irresponsible hyping of his flawed technology is not just killing his credulous customers but, far more seriously, innocent bystanders who had no say in the matter. The article includes video of:
  • A driver who believed Autopilot could drive him home despite his being drunk. The car drove the wrong way on the highway and killed another innocent victim of Musk's hype.
  • Autopilot rear-ending a merging vehicle and killing another innocent victim, a 15-year-old.
  • Autopilot slamming into a broken down vehicle on the highway. When the Tesla driver left the wreck she was hit and killed by another car.
  • Autopilot speeding through a T-junction and crashing into a parked truck.
Below the fold I look into Tesla's results, Musk's response, the details revealed by the various lawsuits, and this excellent advice from Elon Musk:
"If somebody doesn’t believe Tesla is going to solve autonomy, I think they should not be an investor in the company."
Elon Musk, 24th April 2024

The Results

In Tesla’s biggest problem: cars, Drew Dickson looks at Tesla's first quarter results:
Of Tesla’s total quarterly sales of $21.3bn, 82 per cent were indeed “automotive revenues” while the rest were energy and services.
Tesla burned through $2.5bn of cash in the quarter. Inventories grew by over 10 per cent to $16bn.
82% of $21.3B is $17.5B, so Tesla has almost an entire quarter of unsold cars on hand.

One problem is that Musk's persona as an extreme right-wing troll has been putting off the key Tesla customer demographic, well-off liberals who care about climate change. Another problem is that Tesla's lineup of models is old and expensive. Tesla used to recognize that they needed a cheaper product but:
A cut-price Model 2 was first teased at the 2023 Tesla AGM, with Musk saying in January that it would be in production towards the end of next year, but the expected spring product announcement never came.

Tesla now states it is “accelerating” plans, though as with the Cybertruck it’s easy to mistake the accelerator for the brakes. The notion that a Model 2 might be built in new factories in Mexico or elsewhere has been replaced with vague commitments to retool existing infrastructure and production lines.
The resources that could have developed a Model 2 or refreshed the existing models instead went to develop the "Incel Camino", the Cybertruck. This isn't just an $82K laughing-stock, but a manufacturing nightmare that will be lucky to sell 20% of Musk's 250K/year projection, especially since it cannot be road-legal in either of Tesla's #2 and #3 markets (China and the EU). It will definitely be a drag on the results for some time. So the Models S (2012), X (2015), 3 (2017) and Y (2020) will have to soldier on for a while.

This aging product line isn't attracting customers:
  • Units sold were down 13 per cent sequentially.
  • Tesla’s price per vehicle, excluding regulatory credits and leasing or finance income, was $38,924.
  • This was down 13 per cent from $44,642 last year, which itself was down 11 per cent from $50,037 the previous year.
Extreme pricing pressure is forcing affordable vehicles on Tesla, irrespective of whether it chooses to launch one. Amid a lack of demand for EVs in general, and Teslas in particular, its quarterly automotive revenues were down nearly 13 per cent over the past year and by over 19 per cent sequentially.

“Clean” automotive margins (which exclude regulatory credits and leasing income) were down from 29.7 per cent in the first quarter of 2022, to 18.3 per cent in the first quarter of 2023, and again to 15.6 per cent in the first quarter of 2024. If you back out the new IRA US tax credits (which Tesla doesn’t seem to disclose) then automotive gross margin looks to have fallen even further, to around 14.1 per cent.
Shrinking margins on shrinking sales hit earnings per share:
GAAP EPS was down 53 per cent year-on-year, accelerating from the 23 per cent drop in the first quarter of 2023. Even using non-GAAP EPS it’s a 47 per cent decrease over the past year.

In the summer of 2022, when the stock was above $300 share, analysts were expecting Q1’24 EPS of $1.80. Instead, they got $0.45. That is a 75 per cent downgrade to expectations.
And the upsell of Fake Self Driving isn't helping, as Craig Trudell reports in Tesla’s Self-Driving Software Is a Perpetual Revenue Letdown:
Tesla released its 10-Q, a quarterly report that provides a more detailed view into the company’s financial position. For several years running, Tesla has provided regular updates in these statements on how much revenue it’s taken in from customers and not yet fully recognized. Some of this deferred revenue relates to a work-in-progress product: Full Self-Driving, or FSD, for short.

Tesla’s deferred automotive revenue amounted to $3.5 billion as of March 31, little changed from the end of last year. Of that amount, Tesla expects to recognize $848 million in the next 12 months — meaning much of the performance obligations tied to what it’s been charging customers for FSD will remain unsatisfied a year from now.
In these filings, Tesla also reports how much deferred revenue it’s actually recognized — and the Austin-based company has consistently undershot its own forecasts. It has recognized $494 million of deferred revenue in the last 12 months, short of the $679 million that it projected a year ago.
Tesla's CFO quit last August:
The carmaker reported this week that its operating margin shrank to 5.5% in the first quarter, the lowest since the last three months of 2020. The measure of profitability was at 16% when Zachary Kirkhorn, Tesla’s then-chief financial officer, said during an earnings call that it was key to the company.

“As a management team here, we’re most focused on what our operating margin is,” he said in January 2023, in response to an investor question on a different earnings metric. “That is what we’re primarily managing to now.”
General Motors' operating margin is 7.35%. But not to worry, Tesla isn't a car company, it's an AI and robotics company:
If the auto business is worth 3 or 4 times the multiple of a Stellantis or Volkswagen, then it would get a forward PE of, say, 20x. That’s more than generous for a business the CEO talks about as a legacy sideline.

Street numbers for Tesla are consistently far too high but even using the 2024 consensus EPS of $2.64, Tesla would be worth just over $50 per share. Using today’s diluted shares (and assuming that they don’t issue more, which they will) that works out to a market cap of $181bn.

Tesla’s fully diluted market cap at pixel time is still $580bn. Simplistically, that means shareholders are already paying around $400bn for corporate experiments in “robotics and AI”, along with anything else Musk has or tries to conjure up.
Tesla will definitely issue more shares, for example after the 13th June shareholder vote when they will reward Musk's corporate experiments in robotics and AI by reinstating the $56B incentive package cancelled by the Delaware court.

Pumping The Stock

About 70% of the stock price is based on Musk hyping the technology. Thus for Musk it is more than twice as important to pump the stock as it is to sell more cars. He has to follow two strategies:
  • Make the results look better in the short term by increasing margins. The obvious way to do this is to cut costs, even though this will reduce profits in the longer term. After all, in the longer term Tesla isn't about selling cars, it is about AI and robotics.
  • Distract people from looking at the results by unleashing the hype cannon.

Cutting Costs

The knee-jerk reaction of US companies to bad quarterly results is to lay off staff, but they generally target the less successful parts. Elon Musk not so much:
Even Tesla's harshest critics must concede that the company's Supercharger network is its star asset. Tesla has more fast chargers in operation than anyone else, and this year opened them up to other automakers, which are adopting the J3400 plug standard.

All of which makes the decision to get rid of senior director of EV charging Rebecca Tinucci—along with her entire team—a bit of a head-scratcher. If I were the driver of a non-Tesla EV expecting to get access to Superchargers this year, I'd probably expect this to result in some friction. Musk told workers that Tesla "will continue to build out some new Supercharger locations, where critical, and finish those currently under construction."
Like most of the recent desperation moves, this was Musk's decision:
The decision to cut the nearly 500-person group, including its senior director, Rebecca Tinucci, was made by Chief Executive Officer Elon Musk in the last week, according to a person familiar with the matter.
In return for government subsidies, Tesla had been turning Superchargers into a separate business:
Access to high-speed charging is critical to EV adoption, and Tesla invested billions of dollars into developing a global network of Superchargers that became the envy of other automakers. It’s also a critical driver of Tesla sales, and the carmaker pointed to the division’s growth during its first-quarter results just last week.

“Starting at the end of February, we began opening our North American Supercharger Network to more non-Tesla EV owners,” Tesla said in its shareholder deck.

The Musk-led company has also signed charging partnerships with carmakers including Stellantis NV, Volvo, Polestar, Kia, Honda, Mercedes-Benz and BMW. It’s not clear who will now oversee Tesla’s partnerships with those companies. GM, Volvo and Polestar were all due to open NACS chargers to their customers in the immediate future, according to Tesla’s website.
But maybe Musk couldn't resist a chance to mess with the competition:
The job eliminations mean Rivian, Ford and others have lost their main points of contact in Tesla’s charging unit shortly before the kickoff of the busy summer driving season. Tinucci was one of the main executives building and managing outside partnerships and was thought of highly, two people who had worked with her inside and outside of Tesla said.
Musk Undercuts Tesla Chargers That Biden Lauded as ‘a Big Deal’ by Craig Trudell suggests a political motive:
In addition to potentially compromising budding partnerships with other carmakers looking to tap Tesla’s chargers, another consequence of Musk’s move may be undercutting Biden’s EV push in the midst of his reelection campaign. Presumptive Republican nominee Donald Trump has repeatedly attacked electric cars on the campaign trail and predicted a “bloodbath” for the auto industry if he isn’t elected.
Faced with a huge short-term threat to his wealth Musk isn't concerned with the longer term, when unlike robotaxis, Superchargers could have been a nice little earner:
Tesla had been building a tidy charging business over more than a decade. BloombergNEF estimates that the company delivered 8% of the public charging electricity demanded globally last year. Before Musk’s surprise decision, the researcher was projecting that Tesla’s annual profit from Supercharging could rise to around $740 million in 2030.

That level of earnings is now likely out of reach, as BNEF’s estimates assumed Tesla would accelerate the pace of installations through the end of the decade. Musk had given indications this was the plan.
Musk may already be having second thoughts:
The move will slow the network’s growth, according to a person familiar with the division, who asked not to be identified discussing private matters. There already are discussions about rehiring some of the people affected in order to operate the existing network and grow it at a much slower rate, the person said.
Way to motivate the team, Elon!

Musk believes the future depends upon robotaxis but:
Many Tesla fans had been holding out hope that Musk would debut a cheap Model 2 EV in recent weeks. Instead, the tycoon promised that robotaxis would save the business, even as both of its partially automated driver assistance systems face recalls and investigations here in the US and in China.

Delivering on that goal is more than just a technical challenge, and it will require the cooperation and approval of state and federal authorities. However, Musk is also dissolving the company's public policy team in this latest cull.
Cutting off communication with the regulators who will have to approve robotaxi service isn't likely to help. And if there were one other technology critical to Tesla's success, it would be batteries:
Earlier this month, Tesla engaged in another round of layoffs that decimated the company and parted ways with longtime executive Drew Baglino, who was responsible for Tesla's battery development.
Jonathan M. Gitlin rounds up reactions in What’s happening at Tesla? Here’s what experts think. He quotes Ed Niedermeyer:
Car companies "go bankrupt because A, they overinvest in factories, and then demand falls off. Which... that fits the profile," said Niedermeyer. "And B, they don't invest in products. Not investing in products is sort of a longer-term cause, and the proximal cause is [that] demand falls, and you've been investing in too many factories, and you get crushed by those fixed costs. So those cases that are common across most auto industry bankruptcies are certainly there."

But with almost $27 billion of cash on hand, that shouldn't happen any time soon. "The thing that is really hard to understand is that if you have tens of billions of dollars in cash but you're losing market share and you're losing margin, losing pricing power, and all the other things that are happening with the business—you don't cut your way out of that problem," Niedermeyer continued. "That's the confusing part about all this. What would you use that cash for if not to solve those problems? And yet, instead, they're cutting.

"One of the things I've said for a really long time, and I think this is what's happening, is that an automaker is not really real until they survived a serious downturn," Niedermeyer said. And while the broader economy looks fine, EV sales are battling a strong negative headwind. "The car game is a survival business. You can capture more upside than the other guy in the good times. And that can be really good for your stock. But if you do that by not investing in the things that protect you in the downturn, it doesn't matter. And you're just another one on the list of defunct automakers,"
Musk isn't listening, because he is still firing people:
On Sunday night, even more Tesla workers learned they were no longer employed by the company as it engaged in yet another round of layoffs. ... The latest round of layoffs has affected service advisers, engineers, and HR.

Hyping The Technology

The Washington Post team's Faiz Siddiqui and Trisha Thadani report that Tesla profit plunges on price cuts, but company unveils plans for affordable models:
CEO Elon Musk, who has a unique penchant for redirecting the conversation, used Tuesday’s earnings call to deflect from the poor numbers, focusing instead on the company’s commitment to artificial intelligence and a fully autonomous car. Details on Tesla’s apparent new offerings — which include the “more affordable models” and the “cybercab” — were scant and did not address how the company would overcome the technological and regulatory hurdles ahead.
Musk has form when it comes to hyping his technologies and companies. His tweeting that funding had been secured to take Tesla private at $420/share led to a settlement with the SEC that is still in place:
The supreme court on Monday rejected an appeal from Elon Musk over a settlement with securities regulators that requires him to get approval in advance of some tweets that relate to Tesla, the electric vehicle company he leads.
The hype is starting to wear thin but not yet with the markets, as Brandon Vigliarolo points out in Musk moves Tesla's goalposts, investors happily move shares higher:
Elon Musk has a strategy and you may have seen it before: When things aren't going well, he'll say something wild to take everyone's eyes off the trouble, and raise share prices with dreams.
The first quarter of 2024 didn't go well for Tesla, either economically or reputationally. As we reported earlier, sales fell, net profit tumbled off the same cliff Tesla's stock price earlier careened over, and production and deliveries decreased as well.

But give Musk a chance to toss out a flash grenade and he'll do just that: This time around with some wild predictions about his automaker producing a "purpose-built robotaxi" dubbed the "Cybercab," and Tesla's latest vision for the future as one in which it is focused on "solving autonomy."
"It's like some combination of Airbnb and Uber, meaning that there will be some number of cars that Tesla owns itself and operates in the fleet … and then there'll be a bunch of cars where they're owned by the end user," Musk said. He added the fleet will likely grow to include "several tens of millions" of vehicles by the end of the decade.
Last year Tesla shipped 1.8M vehicles. There are 6 years left to the "end of the decade". Musk is promising to ship an average of at least 3M vehicles/year, all of which would be enrolled in the robotaxi fleet. Even if this were plausible, one has to question where all the riders would come from for a fleet 2.5 times bigger than Uber's global driver list. Note that in the US 36% of adults have used Uber or Lyft, so the market is already close to saturated. I'm sure we all remember that:
Musk spent plenty of time in the 2010s claiming he'd have one million robotaxis on the road by 2020.
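The fleet arithmetic above is easy to check. A quick sketch, where the 20M target is my assumption for the low end of "several tens of millions" and the 1.8M shipment figure comes from the post:

```python
# Back-of-the-envelope check on the robotaxi-fleet claim.
# 20M is an assumed low end of "several tens of millions".
current_annual_shipments = 1.8e6   # vehicles Tesla shipped last year
years_left = 6                     # to the "end of the decade"
target_fleet = 20e6                # assumed low end of the claim

required_avg = target_fleet / years_left
print(f"Required average: {required_avg / 1e6:.1f}M vehicles/year")
# Over 3M/year -- nearly double current shipments, and that assumes
# every vehicle shipped is enrolled in the robotaxi fleet.
```

Even the most charitable reading of the claim requires sustained shipment growth Tesla has never demonstrated.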
Pumping the stock full of hype is a Musk habit:
Getting in trouble over "Full-Self Driving" claims? Stick a guy in a robot suit and call it Optimus to distract shareholders. Fail to get FSD realized this year - again? Just kick it down the road. Journalists calling him out on his nonsense? Rant about the "woke mind virus" and the media on Twitter.

Of course, Optimus has been nowhere to be seen and was barely mentioned during the call. Likewise, Tesla's dreams of tens of millions of robotaxis on the road in the next six years rests on the need for serious technological breakthroughs the automaker has failed to make despite years of trying. Oh, and a ton of permits if this is to operate in the States, at least.
Vigliarolo isn't alone. In Musk Sells the Tesla Dream, But Don't Ask for Details Liam Denning notices a detail from the earnings call:
There was an odd tweak to the low-cost vehicle strategy Tesla laid out in March 2023, when management talked about cutting costs in half with revolutionary manufacturing methods. Now, Tesla talks about melding aspects of next-generation platforms with its existing ones in the new models, enabling the company to build them on existing manufacturing lines. To be clear, that is an intriguing possibility, offering efficiencies to reduce stubborn costs.

But also to be clear: It won’t deliver a $25,000 Model 2 anytime soon — “this update may result in achieving less cost reduction than previously expected” — and also isn’t what Tesla talked about only a year or so ago. It is a major overhaul of strategy requiring details.
Tesla is starting to have serious competition:
So consumers — some of whom are turned off by Musk’s incessant posting on X, the social platform he owns, and by his controversial political comments — have a lot of choices when it comes to buying an electric car. Tesla’s share of the EV market in the US was roughly 51% in the first quarter, Cox says, down from almost 62% a year earlier.

The competition is even fiercer outside the US, where Chinese carmakers dominate. About half of all EVs sold globally are Chinese brands — BYD, the top brand within China, sold more cars than Tesla did in the last quarter of 2023, though Tesla regained the lead in the following quarter.
To respond to this competition, Tesla has understood for a long time that they needed a $25K Model 2:
Musk first teased about such a car in September 2020, saying a series of innovations Tesla was working on would enable it to make an EV at that price within about three years. As recently as January, Musk said Tesla was “very far along” with work on its lower-cost vehicle.
But as always, Musk's schedule was just a fantasy, and then the need to pump the stock took over:
Then, in early April, Reuters reported that Tesla had shelved plans for the cheaper vehicle to prioritize its robotaxi, creating bedlam among investors. The tension within Tesla over Musk’s desire to focus on the robotaxi is nothing new. It was chronicled by Walter Isaacson, who wrote in his book published in September that the billionaire had “repeatedly vetoed” plans to make a less-expensive model. Musk refused to give any details about a new, more-affordable model when asked about them by analysts on the first-quarter call.
My guess is that it has dawned on Tesla that, without the resources sunk into the Cybertruck, they simply can't build a $25K car and make money, unlike the competition:
China’s EV advantage is in batteries — the most expensive part of an EV. They’re much cheaper in China because of the country’s control of the mining and processing of component materials such as lithium, cobalt, manganese and rare earth metals. UBS analysts say BYD had a 25% cost advantage over North American and European brands in 2023. Its cheapest model goes for $10,000. Tesla’s cheapest Model Y — the world’s best-selling car of any kind last year — is about $35,000 in the US after accounting for federal tax credits.
China's other advantage is in driver assistance technology:
“Chinese EVs are simply evolving at a far faster pace than Tesla,” agrees Shanghai-based automotive journalist and WIRED contributor Mark Andrews, who tested the driver assistance tech available on the roads in China. The US-listed trio of Xpeng, Nio, and Li Auto offer better-than-Tesla “driving assistance features” that rely heavily on lidar sensors, a technology that Musk previously dismissed, but which Tesla is now said to be testing.

The Robotaxi Rescue

According to Musk, the thing that will transform Tesla's profitability is a robotaxi. Let's assume for the moment that, despite depending only upon cameras, Tesla's Fake Self Driving actually worked. In Robotaxi Economics I analyzed the New York Times' reporting on Waymo and Cruise robotaxis in San Francisco and concluded:
These numbers look even worse for Tesla. Last year Matthew Loh reported that Elon Musk says the difference between Tesla being 'worth a lot of money or worth basically zero' all comes down to solving self-driving technology, and the reason was that owners would rent out their Teslas as robotaxis when they weren't using them. This was always obviously a stupid idea; who wants drunkards home-bound from the pub throwing up on their Tesla's seats? But the fact that the numbers don't add up for robotaxis in general, and the fact that Hertz is scaling back its EV ambitions because its Teslas keep getting damaged because half of them are being used by Uber drivers as taxis, make the idea even more laughable.
Even for Waymo, it turns out that replacing a low-wage human with a lot of very expensive technology (Waymo's robotaxis "are worth as much as $200,000"), and higher-paid support staff isn't a path to profitability.

It is true that Tesla's robotaxis would be cheaper than Waymo's, since they won't have the lidar and radar and so on. But these things are what make the difference between Waymo's safety record, which is good enough that regulators allow them to carry passengers, and Tesla's safety record, which is unlikely to impress the regulators.

The regulators have a lot of reasons to be skeptical. Back in 2021 they started investigating Autopilot:
The U.S. government has opened a formal investigation into Tesla’s Autopilot partially automated driving system after a series of collisions with parked emergency vehicles.
NHTSA says it has identified 11 crashes since 2018 in which Teslas on Autopilot or Traffic Aware Cruise Control have hit vehicles at scenes where first responders have used flashing lights, flares, an illuminated arrow board or cones warning of hazards.
Since then the evidence has piled up, as the Washington Post team report:
At least eight lawsuits headed to trial in the coming year — including two that haven’t been previously reported — involve fatal or otherwise serious crashes that occurred while the driver was allegedly relying on Autopilot. The complaints argue that Tesla exaggerated the capabilities of the feature, which controls steering, speed and other actions typically left to the driver. As a result, the lawsuits claim, the company created a false sense of complacency that led the drivers to tragedy.
Musk claimed they would never settle these cases, but:
Tesla this month settled a high-profile case in Northern California that claimed Autopilot played a role in the fatal crash of an Apple engineer, Walter Huang. The company’s decision to settle with Huang’s family — along with a ruling from a Florida judge concluding that Tesla had “knowledge” that its technology was “flawed” under certain conditions — is giving fresh momentum to cases once seen as long shots, legal experts said.
The regulators move slowly but they keep moving:
Meanwhile, federal regulators appear increasingly sympathetic to claims that Tesla oversells its technology and misleads drivers. Even the decision to call the software Autopilot “elicits the idea of drivers not being in control” and invites “drivers to overly trust the automation,” NHTSA said Thursday, revealing that a two-year investigation into Autopilot had identified 467 crashes linked to the technology, 13 of them fatal.
Last December, the NHTSA forced Tesla to recall more than 2M vehicles because Autopilot:
has inadequate driver monitoring and that the system could lead to "foreseeable misuse,"
The agency suspects the recall wasn't adequate:
The National Highway Traffic Safety Administration disclosed Friday that it’s opened a query into the Autopilot recall Tesla conducted in December. The agency is concerned as to whether the company’s remedy was sufficient, in part due to 20 crashes that have occurred involving vehicles that received Tesla’s over-the-air software update.
The recall involved an over-the-air update, but Tesla's attitude to regulation showed through:
the agency writes that "Tesla has stated that a portion of the remedy both requires the owner to opt in and allows a driver to readily reverse it" and wants to know why subsequent updates have addressed problems that should have been fixed with the December recall.
What is the point of a safety recall that is opt-in and reversible? Clearly, it is to avoid denting the credibility of the hype. The NHTSA is not happy:
In a separate filing, NHTSA detailed findings from its investigation that preceded the December recall. The agency found that Autopilot didn’t sufficiently ensure drivers stayed engaged in the task of driving, and that Autopilot invited drivers to be overconfident in the system’s capabilities. Those factors led to foreseeable misuse and avoidable crashes, at least 13 of which involved one or more fatalities, according to the report.

“Tesla’s weak driver-engagement system was not appropriate for Autopilot’s permissive operating capabilities,” NHTSA said. This resulted in a “critical safety gap” between drivers’ expectations and the system’s actual capabilities, according to the agency.
The NHTSA is skeptical that the recall was effective:
But NHTSA says it knows of at least 20 crashes involving Tesla Autopilot that fall into three different categories. It says there have been nine cases of a Tesla having a frontal collision with another vehicle, object, or person, for which there was time for an alert driver to have avoided the crash. Another six crashes occurred when Teslas operating under Autopilot lost control and spun out or understeered into something in a low-grip environment. And five more crashes occurred when the driver inadvertently canceled the steering component of Autopilot without disengaging the adaptive cruise control.

NHTSA also says it tested the post-recall system at its Vehicle Research and Test Center in Ohio and that it "was unable to identify a difference in the initiation of the driver warning cascade between pre-remedy and post-remedy (camera obscured) conditions," referring to the supposedly stronger driver monitoring.
The agency is giving Tesla until July 1st:
to send NHTSA a lot of data, including a database with information for every car it has sold or leased in the US, with information on the number and dates of all Autopilot driver warnings, disengagements, and suspensions for each of those vehicles. (There are currently more than 2 million Teslas on the road in the US.)

Tesla must also provide the cumulative mileage covered by Autopilot, both before and after the recall. NHTSA wants Tesla to explain why it filed an official Part 573 Safety Recall Notice, "including all supporting engineering and safety assessment evidence." NHTSA also wants to know why any non-recall update was not part of the recall in the first place.
Finally, Mike Spector and Chris Prentice report that In Tesla Autopilot probe, US prosecutors focus on securities, wire fraud:
U.S. prosecutors are examining whether Tesla committed securities or wire fraud by misleading investors and consumers about its electric vehicles’ self-driving capabilities, three people familiar with the matter told Reuters.
Reuters exclusively reported the U.S. criminal investigation into Tesla in October 2022, and is now the first to report the specific criminal liability federal prosecutors are examining.

Investigators are exploring whether Tesla committed wire fraud, which involves deception in interstate communications, by misleading consumers about its driver-assistance systems, the sources said. They are also examining whether Tesla committed securities fraud by deceiving investors, two of the sources said.

The Securities and Exchange Commission is also investigating Tesla’s representations about driver-assistance systems to investors, one of the people said.
This is all about Autopilot, but Fake Self Driving has problems too, as the Washington Post team reported in Tesla worker killed in fiery crash may be first ‘Full Self-Driving’ fatality:
Two years ago, a Tesla shareholder tweeted that there “has not been one accident or injury” involving Full Self-Driving, to which Musk responded: “Correct.” But if that was accurate at the time, it no longer appears to be so. A Tesla driver who caused an eight-car pileup with multiple injuries on the San Francisco-Oakland Bay Bridge in 2022 told police he was using Full Self-Driving. And The Post has linked the technology to at least two serious crashes, including the one that killed von Ohain.
The regulators still approve Waymo's cautious and well-engineered robotaxi effort. Uber's and Cruise's robotaxi efforts flamed out. Given the lack of sensors, the history of crashes, the fact that their "autonomy" technology is still at level 2, and the resistance to regulation, why would any regulator approve even the testing, let alone the revenue service of a Tesla robotaxi?

After Robotaxis, What?

Now that the effectiveness of the robotaxi hype is starting to fade, it is time for Musk to roll out the next shiny object. Dan Robinson reports on it in Elon Musk's latest brainfart is to turn Tesla cars into AWS on wheels:
EV carmaker Tesla is considering a wonderful money-making wheeze – use all of that compute power in its vehicles to process workloads for cash, like a kind of AWS on wheels.

The Elon Musk-led outfit said in its recent earnings conference call for calendar Q1 that it had noticed its vehicles spend a considerable amount of their time just sitting there not moving. Many pack in a decent amount of processing power, so why not get them to do something useful and earn some cash for the company as well?

Speaking on the conference call, Musk said that he thought most Teslas were probably used for about a third of the hours in a week.
Seriously? Unless you're a gig worker for Uber or Lyft, who clocks 56 hours/week sitting behind the wheel? I can't believe that Musk is under-estimating the potential here:
"And now that we have already paid for this compute in these cars, it might be wise to use them and not let them be, like, buying a lot of expensive machinery and leaving to them idle. We don't want that. We want to use the computer as much as possible and close to like basically 100 percent of the time to make full use of it," Elluswamy said.

"It takes a lot of intelligence to drive the car anyway. And when it's not driving the car, you just put this intelligence to other uses, solving scientific problems like a human or answering dumb questions for someone else," he added.
"If you get, like, to the 100 million vehicle level, which I think we will at some point get to, and you've got a kilowatt of usable compute – I think you could have on the order of 100 gigawatts of useful compute, which might be more than anyone, more than any company, probably more than any company," he mused.
Tesla is currently selling around 2M vehicles/year, so "at some point" will be sometime in the 2070s, by which time the vast majority of the vehicles Tesla has shipped will have been scrapped, and even if they still work, 50 years of Moore's law will have made all but the last few obsolete.
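The numbers behind that timeline can be sketched directly (the per-car compute figure is as quoted on the call; the run rate is the post's estimate):

```python
# Sanity-check the "100 million vehicles, 100 gigawatts" numbers.
annual_shipments = 2e6     # vehicles/year, roughly the current run rate
fleet_target = 100e6       # "the 100 million vehicle level"
usable_kw_per_car = 1      # "a kilowatt of usable compute", as quoted

years_needed = fleet_target / annual_shipments      # 50 years -> the 2070s
total_gw = fleet_target * usable_kw_per_car / 1e6   # kW -> GW
print(years_needed, total_gw)  # 50.0 100.0
```

The 100 GW headline only holds if every one of those 100M cars is still on the road, still working, and still competitive with datacenter hardware half a century newer.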

Robinson starts thinking about the details:
Of course, all this compute capacity isn't sitting conveniently clustered together in a datacenter. It is distributed here and there, reached via a cellular connection in each Tesla, or possibly via Wi-Fi if the car is on the owner's driveway.

So the model Tesla would be looking at is perhaps more akin to edge computing, such as Heata in the UK, which uses heat from servers in homes to provide domestic hot water and rents out the compute capacity via cloud company Civo.

Among the issues we can see is that Tesla would be effectively using electricity that the car owner has paid for to run any workloads while it is idle, so would they get a cut of the money generated?

Yes, it seems. CFO Vaibhav Taneja said "the capex is shared by the entire world. Sort of everyone owns a small chunk, and they get a small profit out of it maybe."
IDC Senior Research Director for Digital Infrastructure Andrew Buss said the idea sounds technically feasible, but the potential downsides are perhaps too big to justify it being actually implemented.

"They'd not even be edge processing nodes as the code and data would have to be centrally managed and stored and then packaged and sent for processing before being returned once complete," he told The Register.

Other downsides include third-party code and data running on a private asset, Buss said, and if taking power from the battery, this would accelerate the degradation of these, which are the single most expensive and crucial part of a Tesla and need to be kept in as optimal a shape as possible for longevity and consistency of range.

In other words, Tesla might well find that implementing this idea may prove more trouble than it is actually worth for the returns it generates.

And as The Register noted after the earnings conference, Elon has a habit of throwing out wild ideas when things aren't going well to distract the punters and energize investors. This could well be one of them.

#ODDStories 2024 @ Yaoundé, Cameroon 🇨🇲 / Open Knowledge Foundation

The future lies in the hands of the younger generation. It is with this hope that we, the Geosm Family, chose, with the support of the Open Data Day community and following the ODD 2024 calendar, to organise an event on Wednesday 6 March at the Junior Government Bilingual Primary School, on the theme “Build a mapping community for Kids”.

Our aim was to instil in the younger ones a passion for data-related professions and, above all, a better understanding of territories through the use of geographic data.

It was a moment full of joy and enthusiasm that we loved sharing with our younger ones.

Activities organization

The Open Data Day was composed of a series of activities and mobilized a total of 22 pupils from the Class 06 francophone section.

The kids were split into groups of five, and each group chose its own name: “intellos”, “genies”, “excellents” and “juniors”.

List of activities

  • The first activity was a question-and-answer quiz, in which the kids were asked questions on geography-related topics about Cameroon and Africa.
  • Next, the kids were shown a world map and asked to draw and name a country or continent of their choice.
  • The kids finished with a treasure hunt: using a map of their school that we provided, they followed the clues to find the hidden treasure.

At the end of the day, souvenirs were distributed and a prepared meal was shared.

About Open Data Day

Open Data Day (ODD) is an annual celebration of open data all over the world. Groups from many countries create local events on the day where they will use open data in their communities.

As a way to increase the representation of different cultures, since 2023 we offer the opportunity for organisations to host an Open Data Day event on the best date within a one-week period. In 2024, a total of 287 events happened all over the world between March 2nd-8th, in 60+ countries using 15 different languages.

All outputs are open for everyone to use and re-use.

In 2024, Open Data Day was also a part of the HOT OpenSummit ’23-24 initiative, a creative programme of global event collaborations that leverages experience, passion and connection to drive strong networks and collective action across the humanitarian open mapping movement.

For more information, you can reach out to the Open Knowledge Foundation team by email. You can also join the Open Data Day Google Group to ask for advice or share tips and get connected with others.

Limited General Registration Now Open for 2024 In-Person DLF Forum / Digital Library Federation

The Council on Library and Information Resources is pleased to announce that limited general registration is now open for the in-person Digital Library Federation’s (DLF) Forum happening at Michigan State University in East Lansing, MI, July 29-31, 2024.

Space is limited; register here to secure your spot.

The DLF Forum welcomes digital library, archives, and museum practitioners from member institutions and beyond—for whom it serves as a meeting place, marketplace, and congress. Here, the DLF community celebrates successes, learns from mistakes, sets grassroots agendas, and organizes for action. Learn more about the event and review the conference program.

Also, a reminder that the Call for Proposals for our virtual DLF Forum event happening this October is open through May 15. Learn more and submit here.

Subscribe to our newsletter to be sure to hear all the Forum news first.

The post Limited General Registration Now Open for 2024 In-Person DLF Forum appeared first on DLF.

#ODDStories 2024 @ Oguta, Nigeria 🇳🇬 / Open Knowledge Foundation

InspireIT organised an Open Data Day event on 9 and 10 March 2024 in the Oguta Local Government Area of Imo State, Nigeria, on the theme “Climate-Induced Displacement: Understanding Impacts on African Women through Open Data”.

I led a panel discussion on “Utilizing Open Data for Resilience and Adaptation,” which elucidated the causes, dynamics, and consequences of climate-induced displacement, emphasizing its disproportionate impact on women and marginalized communities. I spoke about the importance of leveraging publicly available data to enhance the resilience and adaptive capacity of communities, organizations, and governments in the face of various challenges, such as climate change, and socio-economic disruptions.

On the second day of the event, Udochukwu Chukwu led the field workshop and breakout activities on “Community Voices and Experiences”, gaining valuable insights into local perspectives and challenges related to resilience and adaptation in the Oguta Local Government Area.

The field workshop, conducted mainly in the local language across three villages (Abatu, Ngegwu and Umutogwuma), served as a pivotal platform for amplifying the voices of women. Calls were made for targeted capacity building, financial resources, and technical assistance to enhance women’s resilience and adaptive capacity in the face of climate-induced displacement.

Later, I also led discussions on “Towards Gender-Inclusive Policies”, which explored strategies for mainstreaming gender considerations into policies addressing climate-induced displacement, and lightning talks on “Innovative Solutions and Initiatives”, a series of rapid-fire presentations showcasing creative approaches and successful initiatives for addressing the impacts of climate-induced displacement.

We extend our gratitude to all participants, speakers, and partners for their invaluable contributions to the success of the event.


Revamping / Raffaele Messuti

A few years ago, I developed a small application that allowed you to "frame" a specific part of an IIIF image and share it on the web through simple, concise URLs. But the initial version was rudimentary and only supported IIIF 2, so I've since revamped it using the latest release of the TIFY viewer.

TIFY is a lightweight IIIF viewer written with VueJS. Its standout feature is the automatic reflection of document navigation states (zoom, pan, page) in the URL itself. This unique capability enables users to bookmark and share URLs effortlessly. TIFY operates entirely client-side, eliminating the need for additional services.
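TIFY's actual URL scheme is its own; purely as an illustration of the general technique of keeping viewer state bookmarkable, here is a minimal Python sketch of round-tripping navigation state (page, zoom, pan) through query parameters. All parameter names here are hypothetical, not TIFY's.

```python
from urllib.parse import urlencode, parse_qs

def state_to_url(base: str, page: int, zoom: float, pan: tuple) -> str:
    """Serialize viewer state into query parameters so the URL itself
    captures the current view (parameter names are hypothetical)."""
    query = urlencode({"page": page, "zoom": zoom, "pan": f"{pan[0]},{pan[1]}"})
    return f"{base}?{query}"

def url_to_state(url: str) -> dict:
    """Recover the state from a shared URL."""
    q = parse_qs(url.split("?", 1)[1])
    x, y = q["pan"][0].split(",")
    return {"page": int(q["page"][0]),
            "zoom": float(q["zoom"][0]),
            "pan": (int(x), int(y))}
```

Because every navigation action rewrites the URL, sharing the current view is just copying the address bar.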

However, I like short, simple (and possibly persistent) URLs, ideal for sharing in various documents and messages. Enter, which facilitates remote saving of the current state to generate short URLs with a unique identifier, following this format:{nanoid}
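The {nanoid} placeholder refers to identifiers in the style of the nanoid library: short, URL-safe random strings. A minimal sketch of generating one (the 64-character alphabet and default length of 21 follow nanoid's defaults; the actual server-side implementation may differ):

```python
import secrets

# nanoid's default URL-safe alphabet (64 characters).
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-"

def make_id(size: int = 21) -> str:
    """Generate a short, URL-safe random identifier, nanoid-style.

    secrets.choice draws from a CSPRNG, so IDs are unguessable; at the
    default length, 64**21 possibilities make collisions negligible.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(size))
```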


This application operates without requiring user login, ensuring complete anonymity, and once a link is generated, it cannot be modified. The user interface is minimalistic and will stay that way. The server, a lightweight Go application, stores data in a SQLite database. Its source code is publicly available here.

What's still missing from the previous version (I'm working on restoring these):

  • Opengraph metadata

    This feature provided a preview when sharing links on social networks and messaging platforms. Old example:

  • HTTP headers

    Some HTTP headers expose the IIIF resources, such as the canvas, the image, the label, and the manifest. While I'm unsure of their utility, they might serve as a simpler alternative for machine consumption compared to APIs.

    curl -I{id}
    X-Iiif-Canvas: https://___
    X-Iiif-Image: https://____/35,168,1703,788/,100/0/default.jpg
    X-Iiif-Label: Document title
    X-Iiif-Manifest: https://___/123/manifest.json
    X-Iiif-Page: 11
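A client could pick this metadata up from a HEAD request without touching any IIIF JSON. A small illustrative helper (the header names are taken from the example above; a real client should also treat header names case-insensitively, which this sketch does not):

```python
def iiif_metadata(headers: dict) -> dict:
    """Collect the X-Iiif-* response headers into a small metadata dict,
    keyed by the suffix after the prefix (canvas, image, label, ...)."""
    prefix = "X-Iiif-"
    return {
        name[len(prefix):].lower(): value
        for name, value in headers.items()
        if name.startswith(prefix)
    }
```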

But isn't there an IIIF standard for this?

Indeed, the Content State API 1.0 exists, though it has yet to be integrated into major viewers. I'm considering implementing an export feature in this format for greater interoperability.
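For reference, the Content State API encodes a content state (a JSON annotation) by UTF-8 encoding the JSON, base64url-encoding it, and dropping the `=` padding. A sketch of what such an export could look like, assuming the annotation is held as a plain dict:

```python
import base64
import json

def encode_content_state(annotation: dict) -> str:
    """Encode per the Content State API: compact JSON, UTF-8,
    base64url, with '=' padding removed."""
    raw = json.dumps(annotation, separators=(",", ":")).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

def decode_content_state(encoded: str) -> dict:
    """Reverse the encoding: restore padding, base64url-decode, parse JSON."""
    padded = encoded + "=" * (-len(encoded) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

The encoded string can then travel in a `iiif-content` query parameter for any viewer that supports the spec.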

Bridging the Gap: Digital Rights, Sustainability, and Inclusion at #DRIF24 / Open Knowledge Foundation

In the face of the pressing global challenges posed by climate change, the recent Digital Rights and Inclusion Forum 2024 event hosted by Open Knowledge Ghana brought together a diverse group of stakeholders to explore the intersection of digital rights, sustainability, and inclusion. The panel discussion, titled “Bridging the Gap: Digital Rights, Sustainability, and Inclusion in the Face of Climate Change,” delved into the crucial role that technology, innovation, and digital empowerment can play in addressing the climate crisis.

Unlocking the Power of Open Knowledge

The session kicked off with a thought-provoking lightning talk by Monica Granados, Assistant Director of the Open Climate Campaign at Creative Commons. Granados emphasized the urgent need for open access to knowledge in the fight against climate change. She highlighted the alarming statistic that 57.1% of research outputs from 1980 to 2020 were inaccessible due to paywalls, hindering the progress of scientists, communities, and policymakers.

Granados advocated for a cultural shift towards open knowledge sharing, underscoring the importance of bridging the gap between digital rights, sustainability, and climate action. She outlined the efforts of the Open Climate Campaign to promote the open sharing of research through advocacy, coalition building, policy labs, workshops, and the implementation of robust open access policies.

Empowering Women and Leveraging Technology

The panel discussion, moderated by Maxwell Beganim, the Open Knowledge Foundation Network Anglophone Africa Coordinator, featured esteemed speakers including Francis Acquah Amaning, President of Internet Society Ghana Chapter, Anita Ofori, Executive Director of Women For Sustainability Africa, and Yakubu Adam, Policy, Programmes and Projects Lead at the Institute for Energy Security.
Anita Ofori spoke passionately about the disproportionate impact of climate change on women, emphasizing the need to understand these differences and empower women digitally and economically. She highlighted the importance of closing the gender gap by providing women with opportunities, particularly in the digital space, and the value of collaboration between organizations and grassroots groups to tackle these complex issues.

Francis Acquah Amaning underscored the significance of raising awareness about climate change and leveraging technology to address it. He discussed how digital rights, such as access to information online, are crucial in this endeavor. Acquah Amaning shared examples of how technology can contribute to tackling climate change, from smart meters that help reduce energy consumption to projects like Radionet, which uses AI and Raspberry Pi to help farmers in underserved communities predict rainfall patterns.

The Pivotal Role of ICT in Climate Action

Maxwell Beganim highlighted the critical role of ICT in addressing climate change, emphasizing the need to safeguard the digital ecosystem to protect the rights of activists championing climate action. He acknowledged the contribution of ICT to anthropogenic emissions, from manufacturing to consumer use, and stressed the importance of ICT companies mainstreaming efforts to reduce emissions and utilize renewable energy.

Beganim also discussed how simple actions, such as not charging phones overnight or reducing screen brightness, can contribute to reducing greenhouse gas emissions. He underscored the pivotal role of ICT in climate action, showcasing the potential for technology to both contribute to and help address the challenges posed by climate change.

Fostering Inclusivity and Sustainability

Yakubu Adam shared his impressions of DRIF24, describing it as a remarkable gathering of young innovators committed to addressing inequality and exclusion in Africa’s digital ecosystem. He emphasized the importance of innovation in ensuring inclusivity and sustainability, as outlined in the UN Sustainable Development Goals (SDGs). Adam highlighted how climate change exacerbates global inequality, making it imperative to leverage digital rights and innovation to leave no one behind in achieving these goals.

Collaborative Efforts for Positive Change

The session was marked by engaging discussions and insightful contributions from participants, who actively engaged with the panelists on various aspects of the topic. Abigail Afi Gbadago, the Technical Associate for Open Knowledge Ghana, expertly coordinated the session, ensuring a fruitful exchange of ideas.

As the discussion unfolded, it became evident that bridging the gap between digital rights, sustainability, and inclusion is essential for effectively addressing the challenges posed by climate change. DRIF24 provided a platform for stakeholders to come together, share insights, and collaborate on solutions aimed at promoting digital rights, fostering inclusivity, and advancing social justice in the digital age.

STAPLR on hiatus / William Denton

STAPLR (Sounds in Time Actively Performing Library Reference), my sonification of activity at the help and reference desks at York University Libraries, is on hiatus.

Yesterday we moved from a free and open source self-hosted system to LibAnswers (one of the proprietary hosted services rented out by Springshare, the most well known of which is LibGuides). I will look at how I can adapt STAPLR to use its API.

#ODDStories 2024 @ Bogotá, Colombia 🇨🇴 / Open Knowledge Foundation

Colombia has been present in the celebration of Open Data Day for several years, with participation in different events in various cities. From the OpenStreetMap (OSM) Colombia community, we have found in this date a space to invite citizens to become new contributors to the map, and events with different approaches have been held in the past with that focus. At times, the planning, organisation and subsequent running of an event demanded a lot of work from the volunteers, and the reward of a well-attended event did not always materialise, which generated some reluctance to try again the following year. To avoid that, this year the OSM community decided to join forces and hold a bigger event in Colombia, running the same activity in different cities.

The community of YouthMappers of the National University of Medellin – SAGEMA has been working on OpenStreetMap with topics associated with forestry engineering. One of the works that integrated SAGEMA students with the OSM mapping community was the mapping of trees on the university campus. This activity included not only locating the tree but also taking some field measurements and identifying the species. All this work has involved both groups, and the result has been very satisfactory.

Since the young people from the YouthMappers chapters have come to play an important role in the OSM community, integrating very well and taking the lead in several areas, a parallel ODD event was proposed for Medellín and Bogotá. The tree mapping activity was chosen because it could be simplified and divided up, removing some of the technical complexity of both OpenStreetMap mapping and tree identification, and in this way attract a wider audience. It should be noted that this also fitted perfectly with the ODD 2024 guidelines, which sought to align activities with the UN Sustainable Development Goals.

The entire organisation was done remotely, relying on tools from the OSM LatAm community, such as the HedgeDoc pad, which is a collaborative tool for writing documents. Everything was coordinated in this space, as well as designing the invitation to the community to attend the events.

As tree mapping is a long and complex task, and taking into account that those attending the event probably did not know about mapping, trees or open data, it was decided to divide the whole event into 3 parts:

  1. In the first part, the whole community was invited to come in person to an area to collect tree data. In this space, we wanted to talk about open data, OpenStreetMap, and licenses, and we invited them to install the StreetComplete application, which greatly simplifies mapping, as well as allowing them to take photos and write data in notes. This tool was appropriate for this stage as we needed to take some data on the ground, such as height and trunk diameter at breast height. We decided that the best way to collect the data was with OSM notes, rather than having them map directly with the phone, as this requires explaining the concepts of node, way, tags, changeset, and all these particularities of OSM could detract from the objective of the event and confuse the attendees.
  2. In the second part, we turned OSM notes into points on the map, with a basic tree tag. In this part, mappers were invited who knew the OSM note resolution process, and thus could quickly get the basic information on the map.
  3. The last part consisted of identifying the species and adding the appropriate labels, with the help of experts on the subject of trees. This part, as well as the second, was decided to be done virtually, as the activities carried out lend themselves to mapping from the computer.

As all this was prepared, the dialogue continued through the communication channels of the Colombian community, and it was in this way that members from other cities became interested in participating. So we went from 2 cities to having the event in 6 cities in parallel: Medellín, Bogotá, Yopal, Duitama, Villavicencio and Granada. This was achieved mainly because the burden of planning and organising the event was distributed so that each volunteer could focus on the realisation of the event in his or her city. What’s more, the different organisers came up with different ways to reach out to their communities, and in Yopal even the mayor’s office invited people to the event on their social networks.

On the day of the event, there was a cumulative attendance of more than 80 people across Colombia, with approximately 30 people in Yopal and 20 in Villavicencio. This exceeded our expectations, since coordinating the activity with so many attendees was complex for the organisers in those cities: explaining open data, showing OpenStreetMap, helping with the installation, guiding the tree survey process, and creating notes. On the other hand, this reflects the interest of citizens in getting to know their territory and contributing to its improvement.

In any case, the first part of the event, the most important because open data was collected in the field, had very good results: more than 500 notes were created all over Colombia, equivalent to 500 trees inventoried, and more than 1,200 photos of trees were captured.

As the other two parts required virtual computer work, attendance was much lower, but those who attended understood the potential of how this data collected in the field would be aggregated in the OSM database, and how it could then be used for different analyses. However, due to the large numbers captured in the field in the first stage, we were not able to complete the work in the virtual sessions, and this data will remain for the OSM volunteers who continuously resolve notes in Colombia.

In conclusion, we can say that the results of this event exceeded our expectations, but this success was achieved by the integration of the community. In addition, it was integrated with activities that are already carried out within the community, such as tree data collection and note resolution, so the ODD was an event to attract new contributors and to spread the importance of open data. Finally, as we continually do these activities, attendees have participated in other events and have become interested in OSM, open data, and how they can contribute to the community.


DLF Digest: May 2024 / Digital Library Federation

A monthly round-up of news, upcoming working group meetings and events, and CLIR program updates from the Digital Library Federation. See all past Digests here.

Hello DLF Community! We are delighted to share that the program is now available for our in-person conference at Michigan State University this July. Browse it here and take special note of our featured speakers, Kathleen Fitzpatrick and Germaine Halegoua. May is a busy month for us – registration will open in the coming weeks, and the Call for Proposals for our virtual Forum in October is open through May 15. Our working groups are also busy with meetings all month long. We hope to see you at an event soon!

— Team DLF

This month’s news:

This month’s DLF group events:

DLF Project Managers Group – Digital Repository Management and Ownership with Kim Leaman

Friday, May 17, 1pm ET / 10am PT; Register in advance

Kim will give an introductory overview with examples of her role as a Product Owner and how she fits into the workflows and expectations of the Digital Library Services (DLS) team and their stakeholders. 

About the speaker: Kim Leaman is a Library IT Project Manager who focuses primarily on Princeton University Library’s (PUL) digital projects and initiatives. She collaborates with staff members from various departments to meet the needs and manage the expectations of Princeton’s internal and external stakeholders. Kim is also a Product Owner for PUL’s digital repository and curated collections platform (Digital PUL).

This month’s open DLF group meetings:

For the most up-to-date schedule of DLF group meetings and events (plus NDSA meetings, conferences, and more), bookmark the DLF Community Calendar. Can’t find meeting call-in information? Email us. Reminder: Team DLF working days are Monday through Thursday.


  • Digital Accessibility Working Group (DAWG): Wednesday, 5/1, 2pm ET / 11am PT.
  • Born-Digital Access Working Group (BDAWG): Tuesday, 5/7, 2pm ET / 11am PT.
  • Assessment Interest Group (AIG) Metadata Working Group: Thursday, 5/9, 1:15pm ET / 10:15 PT. 
  • AIG Cost Assessment Working Group: Monday, 5/13, 3pm ET / 12pm PT.
  • AIG User Experience Working Group: Friday, 5/17, 11am ET / 8am PT.
  • Committee for Equity and Inclusion: Monday, 5/20, 3pm ET / 12pm PT.
  • Climate Justice Working Group: Wednesday, 5/29, 12pm ET / 9am PT.  
  • Digital Accessibility Working Group: Policy and Workflows subgroup: Friday, 5/31, 1pm ET / 10am PT. 


Interested in joining a current group, reviving a past one, or do you have a general question? Let us know. DLF Working Groups are open to all, regardless of whether you’re affiliated with a DLF member organization. Team DLF also hosts quarterly meetings with working group leaders and occasionally produces special events or resources for members. Learn more about working groups on our website and check out our Organizer’s Toolkit.

Get Involved / Connect with Us

Below are some ways to stay connected with us and the digital library community: 

The post DLF Digest: May 2024 appeared first on DLF.

Common(s) Cause: Towards a Shared Advocacy Strategy for the Knowledge Commons / Open Knowledge Foundation

Creative Commons, Open Knowledge Foundation, Open Future, and Wikimedia Europe are hosting a day-long side event to Wikimania 2024. The event will take place in Katowice, Poland, on 6 August 2024, the day before Wikimania kicks off on 7 August 2024. 

Wikimania 2024 is the biggest meeting of open movement activists and organizations this year. It offers a rare occasion for activists to meet in person. We are making use of this opportunity to bring together those working in the field of Openness, Free Knowledge, and the Digital Commons to talk about shared advocacy strategies and the political challenges facing the Knowledge Commons. We are counting on the participation of people already planning to attend Wikimania, as well as those who will come especially for our side event. We are expecting around 70 people to join us.

Our goal is to establish relationships needed to design a shared advocacy vision that over time can result in stronger, collaborative advocacy work. To this end, the event will focus on three topics:

  1. Legal and Policy issues
  2. Communication and Global Campaigns
  3. Community activation and Sustainability

Are you planning to attend Wikimania and interested in joining us for this event? Please let us know.

There are few opportunities to bring together the movement’s most engaged participants and discuss shared strategies for advocacy and ways of moving forward together. Wikimania’s 2024 motto is “Collaboration of the Open.” Our one-day side event to Wikimania is an opportunity to bring this motto to life.

2024 In-Person DLF Forum Program Now Available / Digital Library Federation

The Council on Library and Information Resources is pleased to announce that the program is now available for the in-person Digital Library Federation’s (DLF) Forum happening at Michigan State University in East Lansing, MI, July 29-31, 2024.

The program for the in-person event is available here.

We are also pleased to announce our featured speakers, Kathleen Fitzpatrick and Germaine Halegoua. Meet our featured speakers and read about their talks.

The DLF Forum welcomes digital library, archives, and museum practitioners from member institutions and beyond—for whom it serves as a meeting place, marketplace, and congress. Here, the DLF community celebrates successes, learns from mistakes, sets grassroots agendas, and organizes for action. Learn more about the event. Limited general registration for our in-person event opens soon!

Also, a reminder that the Call for Proposals for our virtual DLF Forum event happening this October is open through May 15. Learn more and submit here.

Subscribe to our newsletter to be sure to hear all the Forum news first.

The post 2024 In-Person DLF Forum Program Now Available appeared first on DLF.

We'll run 'til we drop / Eric Hellman

(I'm blogging my journey to the 2024 New York Marathon. You can help me get there.)

 It wasn't the 10 seconds that made me into a runner.

Eric running across a bridge

I started running races again 20 years ago, in 2004. It was a 10K sponsored by my town's YMCA. I had run the occasional race in grad school to join my housemates, and I continued to run a couple of miles pretty regularly to add some exercise to my mostly sitting-at-a-computer lifestyle. I gradually added 10Ks - the local "turkey-trot," because the course went almost by my house, and then a "cherry-blossom" run through beautiful Branch Brook Park. But I was not yet a real runner - tennis was my main sport.

In 2016, things changed. My wife was traveling a lot for work, one son was away at college, and I found myself needing more social interaction. I saw that my local Y was offering a training program for their annual 10K, and I thought I would try it out. I had never trained for a race, ever. The closest thing to training I had ever done was the soccer team in high school. But there was a HUGE sacrifice involved: the class started at 8AM on Saturdays, and I was notorious for sleeping past noon on Saturdays! Surprise, surprise, I loved it. It was fun to have people to run with. I'm on the silent side, and it was a pleasure to be with people who were comfortable with the somewhat taciturn real me.

I trained really hard with that group. I did longer runs than I'd ever done, and it felt great. So by race day, I felt sure that I would smash my PR (not counting the races in my 20's!). I was counting on cutting a couple of minutes off my time. And I did it! But only by a measly 10 seconds. I was so disappointed.

But somehow I had become a runner! It was running with a group that made me a runner. I began to seek out running groups and became somewhat of a running social butterfly.

Fast-forward to five weeks ago, when I was doing a 10-miler with a group of running friends (a 10-miler for me; they were doing longer runs in training for a marathon). I had told them of my decision to do New York this fall, and they were soooo supportive. I signed up for a half marathon to be held on April 27th - many of my friends were training for the associated full marathon. The last 2 miles were really rough for me (maybe because my shoes were newish??) and I staggered home. That afternoon I could hardly walk, and I realized I had strained my right knee. Running was suddenly excruciatingly painful.

By the next day I could get down the stairs and walk with a limp, but running was impossible. The next weekend, I was able to do a slow jog with some pain, so I decided to stick to walking, which was mostly pain-free. I saw a PT who advised me to build up slowly and get plenty of rest. It was working until the next weekend, when I was hurrying to catch a train and unthinkingly took a double step in Penn Station and re-sprained the knee. It was worse than before and I had only 3 weeks until the half marathon!

The past three weeks have been the hardest thing I've had to deal with in my running "career". I've had a calf strain, IT-band strains, back strains, sore quads, inter-tarsal neuromas and COVID get in the way of running, but this was the worst. Because of my impatience.

Run-walk (and my running buddies) were what saved me. I slowly worked my way from 2 miles at a 0.05-to-0.25 mile run-to-walk ratio up to 4 miles at 0.2-to-0.05 mile run-to-walk, with 2 days of rest between each session. I started my half marathon with a plan to run 2 minutes and walk 30 seconds until the knee told me to stop the running bits. I was hoping for a 3-hour half.

The knee never complained (the rest of the body complained, but I'm used to that!!). I finished with the very respectable time of 2:31:28, faster than 2 of my previous 11 half marathons. One of my friends took a video of me staggering over the finish.

 I'm very sure I don't look like that in real life.

Here's our group picture, marathoners and half-marathoners. Together, we're real runners.

After this weekend, my biggest half marathon challenge to date, I have more confidence than ever that I'll be able to do the New York Marathon in November - in one piece - with Team Amref. (And with your contributions towards my fund-raising goal, as well.)

We're gonna get to that place where we really wanna go and we'll walk in the sun

Jim Thorpe Half Marathon 2024 results. 

My half on Strava.

Call for Host of the 2025 DLF Forum / Digital Library Federation

Apply Here

The Digital Library Federation (DLF) cordially invites libraries, museums, cultural heritage organizations, and academic institutions (or a combination of collaborating organizations) to submit expressions of interest in hosting the in-person 2025 DLF Forum.

The DLF Forum is a multi-day immersive experience dedicated to learning, networking, and skill-building. Having evolved since its inception in 1999, the Forum has traditionally been hosted in hotel venues. Embracing deliberate, experimental change in conference structure after participant feedback, we are excited to announce our intention to hold the event at a cultural heritage or academic organization in 2025, following the 2024 event at Michigan State University (Summer 2024) and online (Fall 2024).

We are open to hosting the event anytime between late spring and the end of fall 2025, before the holiday season. Prospective hosts should be located in the United States or Canada. 

DLF will oversee the coordination of the volunteer Planning Committee and conference logistics, assuming fiscal responsibility for the event. Host sites are expected to provide 2-3 designated lead staff members to offer location-specific support as outlined below. Additionally, hosts will be responsible for covering indirect costs. We welcome submissions for any capacity over 200 attendees. 

Evaluation of applicants will be based on their capacity to fulfill these requirements, as well as considerations such as venue space, food and beverage options, local lodging, and transportation options. We welcome collaborative applications from multiple organizations.

Hosting a national conference like the DLF Forum can offer numerous benefits for a cultural heritage or academic organization. Some key advantages include: 

  • Increased visibility. Hosting a well-known national conference provides an opportunity to showcase your organization’s facilities, capabilities, and expertise to a wide audience of professionals, potentially leading to new partnerships, collaborations, and opportunities. 
  • Networking opportunities. Hosting a national conference facilitates networking with professionals from diverse backgrounds and geographical locations. 
  • Enhanced reputation. Hosting a successful conference demonstrates the hosting organization’s leadership, organizational skills, and commitment to advancing the field. 
  • Community engagement. Hosting a national conference can foster community engagement by involving local collaborators, such as businesses, academic institutions, government agencies, and community organizations. 
  • Economic impact. Hosting a conference can have a positive economic impact on the local community, generating revenue for local businesses, hotels, restaurants, and other service providers. 
  • Professional development opportunities. Hosting a conference provides opportunities for staff members of the hosting organization to gain valuable experience in event planning, project management, and leadership roles. 
  • Knowledge sharing and learning. Hosting a conference allows the organization to contribute to the advancement of knowledge and best practices in its field. 

We eagerly anticipate receiving your proposals to make the 2025 DLF Forum a memorable and enriching experience for all attendees. 

The deadline to apply to host the 2025 DLF Forum is Monday, June 10, 2024.

Host Requirements and Roles 

If your organization does not meet all of the desired requirements but is interested in the possibility of hosting the DLF Forum, we’d still love to hear from you! Please feel free to apply anyway. If you’d like to discuss the feasibility of hosting this event at your organization before applying, please reach out to Team DLF. We welcome collaborative applications from multiple organizations.

Meeting Space 

  • All meeting, meals, and reception spaces should be ADA-compliant. 
  • Space for 1-2 plenary (general) sessions that can accommodate 200-500 attendees each time, including a stage, projector, and AV equipment, preferably theater style or organized in table rounds. In-house livestreaming or the ability to bring in an outside vendor is a plus.
  • Spaces for concurrent program sessions over 2 days that can accommodate 200-500 people across 5-7 spaces in various room formats, such as theater and classroom, and include appropriate AV capabilities. 
  • Desired, but not required: Space for a reception that can accommodate all attendees and includes some seating and/or standing table options. 
  • Spaces for breaks and meals that are in a central location and easily accessible. If on a university campus, access to dining halls would suffice.
  • If lunch will not be provided, restaurants and other food options should be located within close walking distance of the conference venue. 
  • Space near plenary and concurrent sessions for registration setup for Team DLF and local volunteers to check in registered attendees, provide information, and display sponsor signage. 
  • Space and tables for sponsors (up to 7) to exhibit during the event, preferably in spaces where breaks and/or meals take place. Many sponsors bring their own signage as well. 

Food and Beverage

  • Local catering options (onsite catering, campus dining hall, local restaurants or catering businesses, etc.) should be able to meet a wide variety of dietary needs. 

Technical Requirements

  • Robust wi-fi for conference participants. 
  • Tech support is available during the event to help address any issues in plenary or breakout rooms.
  • Support for presenters to use their own laptops, possibly including access to necessary connecting cables. 
  • For the plenary session room: a projector with a large screen; microphones for speakers and audience Q&A; and the in-house ability to provide live streaming and recording of speakers and slides, OR the ability for DLF to contract services with an outside provider.
  • For breakout rooms: a projector with a screen; one microphone for speakers; one microphone for audience Q&A.


Lodging

  • The host site should have a variety of hotels within 2 miles. Public transportation between hotels and venues is a plus.
  • Not required, but desired: If the host site is an academic institution, lodging would be made available on campus in dorm rooms (to be paid for by participants). 


Transportation

  • The venue is in close/reasonable proximity to a major airport (within 90 minutes). 
  • Public transportation options and/or shuttles are available to/from the airport. 
  • Public transportation options around campus (if an academic institution) and/or around the city/town.

Host Staff Roles 

  • The host provides 2-3 primary contacts for the conference who meet with Team DLF regularly, serve on the Planning Committee, and provide knowledge and help make local arrangements like reserving session rooms. 
  • The primary contacts are also the onsite point people for local issues for the duration of the conference. 
  • The primary contacts are also able to connect the planning team with relevant other departments, such as marketing/communications, room bookings, etc.

Costs and Expenses 

  • DLF covers the cost of running the conference, including catering. 
  • DLF manages all sponsorships. 
  • The host site provides complimentary access to general sessions and meeting rooms as well as basic AV and wi-fi in those spaces.
  • Host sites will be expected to cover indirect costs. 

If your organization does not meet all of the desired requirements but is interested in the possibility of hosting the DLF Forum, we’d still love to hear from you! Please feel free to apply anyway. If you’d like to discuss the feasibility of hosting this event at your organization before applying, please reach out to Team DLF at


Please apply through this form by June 10, 2024.


Sample Conference Schedule 

Day 1 

Welcome Reception (evening event)

Attracts almost all registered attendees. Could be sponsored by our Platinum partner and requires a table for this one sponsor only. 


Day 2, 9:00am-5:00pm

Plenary Session: Opening plenary event for all registered attendees. Consists of information from Team DLF and a speaker or panel. 

AM Coffee Break

Sessions: 6 concurrent sessions 

Lunch Break 

Sessions: 6 concurrent sessions

PM Coffee Break 

Networking event 


Day 3, 9:00am-5:00pm

Sessions: 6 concurrent sessions

(15 minute transition) 

Sessions: 6 concurrent sessions 

AM Coffee Break 

Networking event 

Lunch Break 

Sessions: 6 concurrent sessions

PM Coffee break 

Closing plenary: Closing plenary for all registered attendees. Consists of closing information from Team DLF and a speaker or panel. 

The post Call for Host of the 2025 DLF Forum appeared first on DLF.

AI and Copyright (for libraries) / Ed Summers

A colleague in Slack (thanks Snowden) shared that the Coalition for Networked Information recently had a good session on Copyright and AI for librarians. It was run by Jonathan Band, who is an attorney for the Library Copyright Alliance, and Timothy Vollmer who does scholarly communications at Berkeley. The presentation is an hour long and worth watching, or listening to as you do something else, like walking your dog…as you do.

Below are the brief notes I jotted down afterwards.

Band helpfully pointed out that discussions about AI and copyright often center on the question of whether fair use applies, and that these discussions often tangle up three separate issues that are useful to think about on their own.

  1. Can ingestion for training AI constitute infringement?
  2. Can AI output infringe?
  3. Is AI output copyrightable?

For ingestion (1), it seems like EU copyright law and the new AI Act will (like the GDPR) have a lot of sway elsewhere around the world, since businesses will want to operate their AI services in Europe.

AI companies will be required to disclose what content they used to build their models, and also to provide a way for publishers to opt out (e.g. robots.txt or newly developing standards) from having their content used to train generative AI models. There are also provisions in EU Copyright that protect non-commercial ingestion of copyrighted material.

However, even if these legal instruments help shape how generative AI services get deployed, a law on the books in the EU presumably can't be invoked in a court case outside of the EU.

For output infringement (2), things seem largely up in the air, and the NYTimes v. Microsoft case is one to watch, since the Times is arguing that OpenAI's ChatGPT can generate near-verbatim text from its copyrighted materials. Unlike other quasi-AI tools (e.g. Google Search), ChatGPT doesn't link to cited material (because it doesn't know how to), which negatively impacts web publishers like the NYTimes since it deprives them of clicks (and revenue).

There are multiple other court cases testing the waters around output infringement, and apparently the same law firm is representing the plaintiffs in many of them.

As for 3, the US Copyright Office has a report coming out later this year, but it's likely that it will present the factors to consider without providing explicit guidance. So questions about whether AI-generated content can be copyrighted, and by whom (the creators of the tool, the creator of the prompt, etc.), will likely be decided in court cases. Band didn't mention whether any of these are pending.

Nearer the end, Timothy Vollmer shared some interesting points about vendors who are making libraries sign contracts that prevent library patrons from ingesting content for research purposes, something that EU copyright law explicitly allows. He had some good suggestions for how to push back on these by demanding alternate contract language that doesn't infringe on fair use.

I think some coordination amongst libraries to push for consistent legal language here would help ensure that library patrons aren't negatively impacted, and that libraries aren't held liable for breach of contract in cases where fair use should apply. I'm not sure where that work is happening, but presumably Vollmer would be a good person to reach out to in order to find out.

All this prompted me to add a robots.txt file to this website, based on directives I saw in the NYTimes robots.txt. Band said that legal questions about ingestion may hinge on whether a publisher has indicated that they did not want their content used in generative AI tools. I don’t realistically expect to be suing any of these companies, but I decided to do it in solidarity because I’m pretty skeptical of this generative AI technology.
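For anyone inclined to do the same, a minimal robots.txt along these lines might look like the following. This is a sketch modeled on the kinds of directives in the NYTimes robots.txt; the User-Agent tokens shown are ones these companies have published for their crawlers, but it's worth checking each crawler's current documentation before relying on them:

```text
# Block known generative AI training crawlers site-wide
User-agent: GPTBot
Disallow: /

# Common Crawl's bot; its corpus is widely used for model training
User-agent: CCBot
Disallow: /

# Opts out of Google's AI training without affecting Google Search
User-agent: Google-Extended
Disallow: /

# All other crawlers may proceed as usual
User-agent: *
Disallow:
```

Note that robots.txt is purely advisory: compliance by crawlers is voluntary, which is part of why more declarative and enforceable opt-out mechanisms would be welcome.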

I’m really interested to hear if we get more declarative ways of controlling whether content is used for generative AI instead of bluntly blocking particular bots by User-Agent. Some companies may want to crawl a page once and repurpose content for different things (search index, llm, etc) without requiring multiple fetches from the multiple bots. I wonder what will happen over at the IETF in this area? The use of robots.txt has proven problematic for use cases around web archiving (crawl and replay) in the past, so a fresh approach would be helpful I think.

Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 30 April 2024 / HangingTogether

The following post is one in a regular series on issues of Inclusion, Diversity, Equity, and Accessibility, compiled by a team of OCLC contributors.

Libraries Northern Ireland defends stocking LGBTQ+ children’s books following complaints 

Blurred image of a woman in profile. Photo by Teslariu Mihai on Unsplash.

On 2 April 2024, the Gay Community News website, which is headquartered in Dublin, Ireland, reported that the Chief Executive of Libraries NI (Northern Ireland) and the chairperson of the Library Board released a statement to defend the presence of LGBTQ+ children’s books in their collection. In the statement, they argued that libraries should “meet the needs of the entire community” by providing “a range of library materials and resources reflecting the diversity of the population and library customers.” The statement comes after Communities Minister Gordon Lyons requested a meeting with the Libraries NI chiefs to discuss the children’s books dealing with LGBTQ+ themes available in their library branches. Minister Lyons said that having these books as part of the collection was “concerning,” adding that “parents should not need to worry” about whether the titles their kids can find in the libraries are age-appropriate or not. Sinn Fein Member of the Legislative Assembly (MLA) Colm Gildernew accused Lyons and others of “an unwarranted and disgraceful attack on an entire section of our children.” 

Public libraries everywhere are being forced to debate the importance of maintaining collections that represent diverse communities. It is very important that we continue to support those libraries that provide inclusive materials for their users. Contributed by Morris Levy

Creating inclusive spaces in cultural heritage institutions with AI 

Dr. Piper Hutson of Lindenwood University (OCLC Symbol: MOQ) discusses how artificial intelligence (AI) can help cultural heritage spaces provide more cognitive inclusion in her webinar “Inclusive Design in Cultural Heritage: Embracing Sensory Processing and Neurodiversity.” The webinar (presented live on 8 April 2024) is available as a recording on the Balboa Park Online Collaborative YouTube channel. Hutson’s design considerations for neurodiversity include wayfinding that is interactive and multi-sensory so that users are not dependent on finding and interpreting signs and maps posted in the museum. AI chatbots can cater to sensory profiles by providing personalized recommendations for visiting museums and reduce anxiety by giving visitors information about less crowded times and sensory spots ahead of time. [A preview of this webinar was covered in the 2 April edition of Advancing IDEAs.] 

Hutson’s research on cognitive inclusion in museums may be applicable to libraries. Visiting a library today can be an overwhelming experience for a person with sensory processing issues. Although we think of libraries as quiet spaces, they are often not. Sound and visual stimuli can be overwhelming for neurodivergent people, and sources of these stimuli in libraries include public programs, computers, and copy machines. The same AI technology that helps neurodivergent people navigate museums can help them navigate libraries. The current literature about using AI in libraries seems focused on improving technical services workflows and academic honesty, both of which are important. Given the importance of the library as a public space, we should also consider using AI to improve the library space for neurodivergent users. Contributed by Kate James. 

Support for menopause in the workforce 

Although librarianship is a predominantly female profession, even in 2024 men still hold a disproportionate number of leadership roles.  Librarian Bobbi L. Newman, who writes the award-winning blog “Librarian by Day,” posits that one reason for that discrepancy could be “The challenges associated with menopause, such as the need for flexible work arrangements and the stigma and stereotypes associated with menopause, [which] may affect women’s career paths or opportunities for advancement within the field.”  Newman’s post, “Supporting Menopause in Libraries for Workplace Wellbeing,” was prompted by a 9 April 2024, BBC report by Megan Tatum entitled “Without support, many menopausal workers are quitting their jobs.”  Tatum cites the impact of menopause, as well as the premenopausal transition known as perimenopause, on workers, especially within the male-dominated world of work.  As Newman puts it, “After all, the workplace is not supportive of childbirth, post-childbirth physical or emotional issues, or childrearing; why would it be supportive of menopausal issues? And, of course, there is the real risk that disclosing menopause symptoms could increase age and gender-related discrimination at work.” 

Newman makes common sense suggestions for supporting library workers who are experiencing perimenopause and menopause, all of which dovetail with practices that responsible institutions will already have in place (or under development) for improving productivity, enhancing worker health and well-being, promoting inclusivity and equity, and increasing worker retention.  An explicit recognition of menopause as a wellness issue can make the library workplace more welcoming and comfortable for everyone. Contributed by Jay Weitz

The post Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 30 April 2024 appeared first on Hanging Together.

The 50th Asilomar Microcomputer Workshop / David Rosenthal

Last week I attended the 50th Asilomar Microcomputer Workshop. For a crew of volunteers to keep a small, invitation-only, off-the-record workshop going for half a century is an amazing achievement. A lot of the credit goes to the late John H. Wharton, who chaired it from 1985 to 2017 with one missing year. He was responsible for the current format, and the eclecticism of the program's topics.

Brian Berg has written a short history of the workshop for the IEEE entitled The Asilomar Microcomputer Workshop: Its Origin Story, and Beyond. The workshop was started by "Three Freds and a Ted," and one of the Freds, Fred Coury, has also written about it here. Six years ago David Laws wrote The Asilomar Microcomputer Workshop and the Billion Dollar Toilet Seat for the Computer History Museum.

I have attended almost all of them since 1987. I have been part of the volunteer crew for many, including this one, and have served for some years on the board of the 501(c)(3) behind the workshop.

This year's program featured a keynote from Yale Patt, and a session from four of his ex-students, Michael Shebanow, Wen-mei Hwu, Onur Mutlu and Wen-Ti Liu. Other talks came from Alvy Ray Smith based on his book A Biography of the Pixel, Mary Lou Jepsen on OpenWater, her attempt to cost-reduce diagnosis and treatment, and Brandon Holland and Jaden Cohen, two high-school students on applying AI to the Prisoner's Dilemma. I interviewed Chris Malachowsky about the history of NVIDIA. And, as always, the RATS (Rich Asilomar Tradition Session) in which almost everyone gives a 10-minute talk lasted past midnight.

The workshop is strictly off-the-record unless the speaker publishes it elsewhere, so I can't discuss the content of the talks.