Planet Code4Lib

The Tech We Want Summit: Full Programme Announcement / Open Knowledge Foundation

The Tech We Want Summit is just around the corner. Today we are announcing the full programme for day one, Thursday 17 October, featuring 28 speakers from all around the world across 5 panels and 2 keynotes.

The following day, there will be 15 demos from projects that are already making the technology we want (to be announced soon).

As of today, more than 400 people have already registered for what promises to be an incredible day of discussions.

At the Open Knowledge Foundation, we’ve been thrilled to realise that our discomfort is also the discomfort of so many people. We need to rethink the way we build technology and together develop new practical ways to build software that is useful, simple, long-lasting and focused on solving people’s real problems.

Browse the images below or click on the button to see the full programme:

A huge thank you to the content partners who agreed to join us on this journey!

I Fondled Salvador Dalí's Earrings / Eric Hellman

 Content Warning: AI

My Uncle Henry was a Professor of Chemistry at NYU. He lived, for the most part, in his sister-in-law Barbara's 7-story townhouse on East 67th street in Manhattan. He acted as the caretaker of this mansion when Barbara went off living her socialite life in Paris or wherever. My family would stay in the townhouse whenever we came to New York to visit my favorite uncle.

This is how my parents ended up being at a fancy party attended by Salvador Dalí. It seems that Barbara had commissioned a portrait of herself, and the occasion of the party was the painting's unveiling. I was there too; I was a few months old. The great painter was amused to see a baby at this party and the baby was extremely amused at this strange looking adult. More accurately, I was captivated by his shiny earrings and reached out to play with them as though they were a mobile hanging in my crib. Or so I have been told. So many times.

A surrealist figure resembling Salvador Dalí, dressed in an eccentric outfit with a curled mustache and large, ornate earrings. A baby is playfully tugging on the ornate earrings.
Dalí and Eric as hallucinated by DALL-E

My dad was presented to Dalí as a brilliant young engineer, which he was. Dad was born in Gary, Indiana, but moved to Sweden with his family when he was 7 years old. (That's a whole 'nother story!) After graduation from the Royal Institute of Technology in Stockholm, he decided to take a job with Goodyear Aerospace in Akron, Ohio, because that way he didn't have to serve in the Swedish Army and give up his American citizenship. He worked on semiconductor devices before anyone had ever heard of semiconductors.

Maybe brilliant engineers were exotic creatures in that fancy New York City party circuit, because Salvador Dalí buttonholed my dad. He wanted my dad to invent something for him. The conversation went something like this (imagine me sitting in Dalí's lap, not paying attention to the conversation at all):

Dalí: "Tell me, young man, do you invent things?"

Dad: "As a matter of fact, I'm working on what they call a buffered amp..."

Dalí: "Never mind that, I have an idea I want you to work on..."

Dad: "Yes?"

Dalí: "I want you to invent a paint gun..."

Dad: "That doesn't sound too hard..."

Dalí: "... that will paint what I see in my mind."

Dad: "??"

Dalí: "I paint, but the paintings are never what I want."

Dad: "That's not how..."

Dalí: "I want to press a button and have the paint go in the right place."

Dad: "Well maybe someday..."

Dalí: "You start working on it, let me know how it goes"

Eric: "Waaaaaaaaa!"

Apparently, the paint gun was a bit of an obsession with Dalí. He created a technique called "bulletism" that involved using an antique gun (an "arquebus") to shoot vials of paint at a canvas. A couple of months after the fancy party, he appeared on the Ed Sullivan Show firing a paint gun at a canvas!

Sixty-four years later, we sort of know how to build Dalí's mind-reading paint gun. We have technologies that let us see the brain think (functional brain imaging combined with deep learning), and technologies that can make pictures from human thoughts (when expressed as LLM prompts). It's now easy to imagine a device that uses your brain to control an AI image generator (see the image above!). Such a device could take advantage of the brain's plasticity to give Dalís of the future the power to make images from activity that exists only in their brains.

People are arguing about whether AI can make art. There's even a copyright case in which the US Copyright Office is saying, effectively, that you can't copyright what you tell an AI to create.

It seems clear to me, at least, that AI, wielded as a tool, can make art, in the same way that a Stradivarius, wielded by a musician, can make art, or that a camera, wielded by a photographer, can make art, or that a computer program, wielded by a poet, can make art.

Salvador Dalí was just ahead of his time. 

Notes:

  1. While OpenAI's "DALL-E" is supposed to be a combination of "Dalí" and "WALL-E", I've not been able to find any mention of Dalí's interest in brain-computer interfaces!
  2. I couldn't find an image of the painting "Portrait of Bobo Rockefeller" on the web; a study for the painting is in the Dalí Museum in Spain. Dalí had a policy of not allowing his subjects to see their portrait before it was unveiled, and my understanding is that Barbara was never really fond of the painting. It had a prominent place in her living room though.
  3. Researchers have studied the use of brain-scanning techniques to develop brain-computer interfaces for uses such as the development of speech prostheses that convert brain activity into intelligible speech. 
  4. Openwater is combining infrared and acoustic imaging to see brain activity for neurological diagnosis. But they can see the potential for mind reading with the help of deep-learning pattern recognition. Founder Mary Lou Jepsen says “I think the mind-reading scenarios are farther out, but the reason I'm talking about them early is because they do have profound ethical and legal implications.”
Comments: I encourage comments on the Fediverse or on Bluesky. I've turned off commenting here.

Reminder: I'm earning my way into the NYC Marathon by raising money for Amref Health Africa. 

LibraryThing in Your Language—Even British! / LibraryThing (Thingology)

We’ve made some exciting changes and improvements to LibraryThing’s member-driven translations, first developed in 2006.

Try it out: Spanish, German, Dutch, French, Italian or British English! (Change back by clicking the name of the language you’re in at the top right of the screen.)

CataloGUE to your heart’s content!

It’s Working!

This blog post explains the changes, and why we made them. But the best justification is already evident: Members are finding and using LibraryThing in their language more than ever! Some 5% of members are already using our new “English (UK)” option. Another 5% are using LibraryThing in a (non-English) language.

Best of all, new, non-English members are up 50%, and I suspect we are reeling in some new English members too! (It’s hard to tell, because TriviaThing is also reeling in new members.)

Goodbye All Those Domains

The core change is a big one: We’re phasing out our non-English domains, like LibraryThing.fr, LibraryThing.de and tr.LibraryThing.com, in favor of members choosing their preferred language on LibraryThing.com. Nothing is being taken away here—we’re just changing where you go! In fact, we’re adding some features (see below).

We’re getting rid of the non-English domains to improve your experience of the site. First, search engines never fully understood what we were doing, so English-language people were coming to LibraryThing off Google searches, and finding themselves on a site in Danish, or Catalan! (They’d leave.)

More importantly, we’re doing it to reduce our “non-human traffic”—the search engines and AI bots that make up more than 50% of LibraryThing’s traffic. The AI bots have been particularly wild, with rogue bots hitting us night and day. Unfortunately, having some 50 separate domains meant 50 targets. Reducing this traffic will help us serve you—the “human” traffic—faster and better.

Feature Changes

Here’s a rundown of the changes:

  • Language Switcher. Every page now shows your language. Click it to change your language, or to help us translate non-English languages.
  • British English. Do the American “catalog” and “color” annoy you? We’ve added a new language, British English, called “English (UK)” in our language menu. Apparently you want it, because already 5% of members are using it!
  • Domain Forwarding. If you go to an old domain, like LibraryThing.fr, you’ll be forwarded to LibraryThing.com and asked if you want French or English.
  • Home Pages for Every Language. While you can change language on any page, each language also has its own, dedicated home page, like LibraryThing.com/t/fr (French), LibraryThing.com/t/de (German), or LibraryThing.com/t/gb (UK English). You can find them by changing languages before you sign in. You’ll also get them when you sign out. If you want to avoid changing languages again, bookmark your page.
  • Language Detection. When you go to a website like LibraryThing, your browser actually tells us your preferred language. Some websites just follow that, but we know a lot of our members straddle languages. So if, when you first come to LibraryThing, we detect a disconnect between what your browser wants and what you’re using, we ask you if you want to switch. (A rough sketch of this kind of detection follows this list.)
  • Better Translation Pages. Our Translations page is better in various small ways. If you are using a non-English language, it has new options to see and edit only machine-translated text.
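
For the curious, here is a minimal sketch of the kind of language detection described above, written in plain JavaScript. This is our illustration, not LibraryThing’s actual code, and the Accept-Language parsing is deliberately simplified:

function preferredLanguage(acceptLanguage) {
  // Parse entries like "fr-FR,fr;q=0.9,en;q=0.8", defaulting each
  // weight to 1.0, and return the highest-weighted language tag.
  return acceptLanguage
    .split(",")
    .map((entry) => {
      const [tag, q] = entry.trim().split(";q=");
      return { tag: tag.toLowerCase(), q: q ? parseFloat(q) : 1.0 };
    })
    .sort((a, b) => b.q - a.q)[0].tag;
}

function shouldOfferSwitch(acceptLanguage, siteLanguage) {
  // Compare only the primary subtag, so "fr-FR" counts as "fr".
  return preferredLanguage(acceptLanguage).split("-")[0] !== siteLanguage;
}

shouldOfferSwitch("fr-FR,fr;q=0.9,en;q=0.8", "en")  // => true: offer a switch to French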

Member Translated, with Help

Since 2006, translation has been in the hands of members. This hasn’t changed. But we’ve gone ahead and had a translation program have a go at untranslated text. Members can, of course, change these translations, and we’ve given them special tools to do so.

The change is minimal for most of LibraryThing’s popular languages:

  • Spanish — 99.2% translated, 16.3% by machine
  • German — 99.5% translated, 1.5% by machine
  • Dutch — 99.3% translated, 2.3% by machine
  • French — 99.3% translated, 4.2% by machine
  • Italian — 99.6% translated, 0.4% by machine

For less-used languages, the percent is much higher:

  • Maori — 92.9% translated, 71.1% by machine
  • Korean — 92.5% translated, 88.9% by machine
  • Armenian — 92.1% translated, 90.9% by machine
  • Tagalog — 91.4% translated, 89.5% by machine
  • Welsh — 91.1% translated, 75.3% by machine

While human translation is best, these versions were seas of untranslated, yellow text. It’s a Catch-22—you can’t get new Armenian members if the site isn’t translated, and you can’t get it translated without Armenian members. (1)

Problems and Improvements

We are working on a few improvements:

  • Multiple Accounts. Some members appreciated being able to have one account on one language site, and another on another. I think it’s clear we need to add a “Switch account” feature, like Facebook and some other sites have.
  • AI is Meh. We are aware that machine translation isn’t ideal. If we have time, we will try it again, feeding in appropriate human-translated text, so we can be consistent on terms like “tags.” For now, however, if a translation annoys you—maybe that’s the prod you need to fix it!
  • Cookies? The way we implemented languages, cookies, has various implications—some good, some bad. You can read more about this here.
  • Account-level Language Setting. If you want to set your account language, go to Account Settings. As many members have a dissonance between their account language and the language they actually use, you won’t be switched when you log in, but you will be asked if you want to switch.

For more on this change, and a lot of great suggestions, read Talk > New Features > Big language changes.


1. There’s actually a wrinkle here in that it’s not about the total number of translated strings, but how often they are used. A site with only 50% of its strings translated could still be quite useful—if they were the RIGHT strings. Unfortunately, many languages had untranslated home pages. Nobody is going to join a site like that!

The Myth of Black Box AI: Why Explainable, Configurable AI Is the Effective Alternative / Lucidworks

Discover how composable AI is addressing the issues of black box AI. Learn why transparency and adaptability are key to future-proofing your AI strategy.

The post The Myth of Black Box AI: Why Explainable, Configurable AI Is the Effective Alternative first appeared on Lucidworks.

Getting rspec/capybara browser console output for failed tests / Jonathan Rochkind

I am writing some code that does some smoke tests with Capybara, driving a browser over some JavaScript code. Frustratingly, it was failing when run in CI on GitHub Actions, in ways that I could not reproduce locally. (Of course it ended up being a configuration problem on CI, which you’d expect in this case.) But this fact made me really want to see browser console output — especially errors, for failed tests, so I could get a hint of what was going wrong beyond “Well, the JS code didn’t load”.

I have some memory of being able to configure a setting in some past Capybara setup to make error output in the browser console automatically fail a test and print the output. But I can’t find any evidence of this on the internet, and I’m pretty sure there is no way to do it with my current use of selenium-webdriver and headless Chrome to run Capybara tests.

So I worked out this hacky way to add any browser console output to the failure message on failing tests only. It requires using some “private” rspec API, but this is all I could figure out. I would be curious if anyone has a better way to accomplish this goal.

Note that my goal is a bit different than “make a test fail if there’s error output in the browser console”, although I’m potentially interested in that too. Here I wanted: for a test that’s already failing, to get the browser console output, if any, to show up in the failure message.

# hacky way to inject browser logs into failure message for failed ones
after(:each) do |example|
  if example.exception
    browser_logs = page.driver.browser.logs.get(:browser).collect { |log| "#{log.level}: #{log.message}" }

    if browser_logs.present?
      # pretty hacky internal way to get browser logs into
      # existing long-form failure message, when that is
      # stored in exception associated with assertion failure
      new_exception = example.exception.class.new("#{example.exception.message}\n\nBrowser console:\n\n#{browser_logs.join("\n")}\n")
      new_exception.set_backtrace(example.exception.backtrace)

      example.display_exception = new_exception
    end
  end
end

I think by default, with Selenium headless Chrome, you get browser console output that includes only error/warn log levels, not info. But if you aren’t getting what you want, or want more, you need to register a custom Capybara driver with a custom loggingPrefs config, which may look something like this:

Capybara.javascript_driver = :my_headless_chrome

Capybara.register_driver :my_headless_chrome do |app|
  Capybara::Selenium::Driver.load_selenium
  browser_options = ::Selenium::WebDriver::Chrome::Options.new.tap do |opts|
    opts.args << '--headless'
    opts.args << '--disable-gpu'
    opts.args << '--no-sandbox'
    opts.args << '--window-size=1280,1696'

    opts.add_option('goog:loggingPrefs', browser: 'ALL')
  end
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: browser_options)
end

Editorial / Code4Lib Journal

Welcome to a new issue of Code4Lib Journal! We hope you like the new articles. We are happy with Issue 59, although putting it together was a challenge for the Editorial Board. This was in no small part because Issue 58 was so tumultuous, including a crisis over our unintentional publication of personally identifiable information, a subsequent internal review by the Editorial Board, an Extra Editorial, and much self-reflection. All of this (quite rightly) slowed down our work. Several Editorial Board members resigned, which left us with a much smaller team to handle a larger workload. As a volunteer-run organization without a revenue stream, Code4Lib Journal is a labor of love that we all complete off the side of our overfilled desks. It was demoralizing to feel that we had lost the support of many in our community. A lot of us were tempted to quit rather than try to pick up and carry on. So, although we have published Issue 59 later than planned, and with a different coordinating editor, we made it. This issue is testament to the perseverance of my colleagues on the Editorial Board, and to the wonderful articles contributed by our community.

Response to PREMIS Events Through an Event-Sourced Lens / Code4Lib Journal

The PREMIS Editorial Committee (EC) read Ross Spencer’s recent article “PREMIS Events Through an Event-sourced Lens” with interest. The article was a useful primer to the idea of event sourcing and in particular was an interesting introduction to a conversation about whether and how such a model could be applied to Digital Preservation systems. However, the article makes a number of specific assertions and suggestions about PREMIS, with which we on the PREMIS EC disagree. We believe these are founded on an incorrect or incomplete understanding of what PREMIS actually is, and as significantly, what it is not. The aim of this article is to address those specific points.

Customizing Open-Source Digital Collections: What We Need, What We Want, and What We Can Afford / Code4Lib Journal

After 15 years of providing access to our digital collections through CONTENTdm, the University of Louisville Libraries changed direction, and migrated to Hyku, a self-hosted open-source digital repository. This article details the complexities of customizing an open-source repository, offering lessons on balancing sustainability via standardization with the costs of developing new code to accommodate desired features. The authors explore factors in deciding to create a Hyku instance and what we learned in the implementation process. Emphasizing the customizations applied, the article illustrates our unexpected detours and necessary considerations to get to “done.” This narrative serves as a resource for institutions considering similar transitions.

Cost per Use in Power BI using Alma Analytics and a Dash of Python / Code4Lib Journal

A trio of personnel at University of Oregon Libraries explored options for automating a pathway to ingest, store, and visualize cost per use data for continuing resources. This paper presents a pipeline for using Alma, SUSHI, COUNTER5, Python, and Power BI to create a tool for data-driven decision making. By establishing this pipeline, we shift the time investment from manually harvesting usage statistics to interpreting the data and sharing it with stakeholders. The resulting visualizations and collected data will assist in making informed, collaborative decisions.

Launching an Intranet in LibGuides CMS at the Georgia Southern University Libraries / Code4Lib Journal

During the 2021-22 academic year, the Georgia Southern University Libraries launched an intranet within the LibGuides CMS (LibGuides) platform. While LibGuides had been in use at Georgia Southern for more than 10 years, it was used most heavily by the reference librarians. Library staff in other roles tended not to have accounts, nor to have used LibGuides. Meanwhile, the Libraries needed a structured intranet, and the larger university did not provide enterprise-level software intended for intranet use. This paper describes launching the intranet: determining what software features were necessary, reworking software and user permissions to provide these features, managing change by restructuring permissions within an established and heavily used software platform, and training to introduce library employees to the intranet. Now, more than a year later, the intranet is used within the libraries for important functions like training, sharing information about resources available to employees, coordinating events and programming, and providing structure to a document repository in Google Shared Drive. Employees across the libraries use the intranet to complete necessary work more efficiently. This article steps through the desired features and software settings in LibGuides to support its use as an intranet.

The Dangers of Building Your Own Python Applications: False-Positives, Unknown Publishers, and Code Licensing / Code4Lib Journal

Making Python applications is hard, but not always in the way you expect. In an effort to simplify our archival workflows, I set out to discover how to make standalone desktop applications for our archivists and processors to make frequently used workflows easier and more intuitive. Coming from an archivists’ background with some Python knowledge, I learned how to code things like Graphical User Interfaces (GUIs), to create executable (binary) files, and to generate software installers for Windows. Navigating anti-virus software flagging your files as malware, Microsoft Windows throwing warning messages about downloading software from unknown publishers (rightly so), and disentangling licensing changes to a previously freely-available Python library all posed unexpected hurdles that I’m still grappling with. In this article, I will share my journey of creating, distributing, and dealing with the aftereffects of making Python-based applications for our users and provide advice on what to look out for if you’re looking to do something similar.

Converting the Bliss Bibliographic Classification to SKOS RDF using Python RDFLib / Code4Lib Journal

This article discusses the project undertaken by the library of Queens’ College, Cambridge, to migrate its classification system to RDF applying the SKOS data model using Python. Queens’ uses the Bliss Bibliographic Classification alongside 18 other UK libraries, most of which are small libraries of the colleges at the Universities of Oxford and Cambridge. Though a flexible and universal faceted classification system, Bliss faces challenges due to its unfinished state, leading to the evolution in many Bliss libraries of divergent, in-house adaptations of the system to fill in its gaps. For most of the official, published parts of Bliss, a uniquely formatted source code used to generate a typeset version is available online. This project focused on converting this source code into a SKOS RDF linked-data format using Python: first by parsing the source code, then using RDFLib to write the concepts, notation, relationships, and notes in RDF. This article suggests that the RDF version has the potential to prevent further divergence and unify the various Bliss adaptations and reflects on the limitations of SKOS when applied to complex, faceted systems.

Simplifying Subject Indexing: A Python-Powered Approach in KBR, the National Library of Belgium / Code4Lib Journal

This paper details the National Library of Belgium’s (KBR) exploration of automating the subject indexing process for their extensive collection using Python scripts. The initial exploration involved creating a reference dataset and automating the classification process using MARCXML files. The focus is on demonstrating the practicality, adaptability, and user-friendliness of the Python-based solution. The authors introduce their unique approach, emphasizing the semantically significant words in subject determination. The paper outlines the Python workflow, from creating the reference dataset to generating enriched bibliographic records. Criteria for an optimal workflow, including ease of creation and maintenance of the dataset, transparency, and correctness of suggestions, are discussed. The paper highlights the promising results of the Python-powered approach, showcasing two specific scripts that create a reference dataset and automate subject indexing. The flexibility and user-friendliness of the Python solution are emphasized, making it a compelling choice for libraries seeking efficient and maintainable solutions for subject indexing projects.

It Was Ten Years Ago Today / David Rosenthal

Ten years ago today I posted Economies of Scale in Peer-to-Peer Networks. My fundamental insight was:
  • The income to a participant in a P2P network of this kind should be linear in their contribution of resources to the network.
  • The costs a participant incurs by contributing resources to the network will be less than linear in their resource contribution, because of the economies of scale.
  • Thus the proportional profit margin a participant obtains will increase with increasing resource contribution.
  • Thus the effects described in Brian Arthur's Increasing Returns and Path Dependence in the Economy will apply, and the network will be dominated by a few, perhaps just one, large participant.
In the name of blatant self-promotion, below the fold I look at how this insight has held up since.

Experience in the decade since has shown that this insight was correct.
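
To make the arithmetic concrete, here is a small illustrative sketch in JavaScript. The numbers are invented, and the 0.8 exponent is simply a stand-in for "costs grow sublinearly because of economies of scale":

// Illustrative only: income is linear in contributed resources,
// cost is sublinear, so the profit margin grows with scale.
const income = (r) => 10 * r;
const cost = (r) => 12 * r ** 0.8;
const margin = (r) => (income(r) - cost(r)) / income(r);

for (const r of [1, 10, 100, 1000]) {
  console.log(`resources=${r} margin=${(margin(r) * 100).toFixed(1)}%`);
}
// resources=1 margin=-20.0%
// resources=10 margin=24.3%
// resources=100 margin=52.2%
// resources=1000 margin=69.9%

A participant ten times larger doesn't just earn ten times more: it keeps a larger fraction of everything it earns, which is the increasing-returns dynamic described above.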

The insight applies to Proof Of Work networks; for the entire decade Bitcoin mining has been dominated by five or fewer mining pools. As I write this, AntPool, ViaBTC and F2Pool have had more than 50% of the hashrate over the last week. Even within those pools, the vast expense of mining rigs, the data centers to put them in, and the power to feed them makes economies of scale essential.


The insight applies to Proof Of Stake networks at two levels:
  • Block production: over the last month almost half of all blocks have been produced by beaverbuild.
  • Staking: Yueqi Yang noted that:
    Coinbase Global Inc. is already the second-largest validator ... controlling about 14% of staked Ether. The top provider, Lido, controls 31.7% of the staked tokens,
    That is 45.7% of the total staked controlled by the top two.
In addition all these networks lack software diversity. For example, as I write the top two Ethereum consensus clients have nearly 70% market share, and the top two execution clients have 82% market share.

Economies of scale and network effects mean that liquidity in cryptocurrencies is also highly concentrated. In Decentralized Systems Aren't I wrote:
There have been many attempts to create alternatives to Bitcoin, but of the current total "market cap" of around $2.5T Bitcoin and Ethereum represent $1.75T or 70%. The top 10 "decentralized" coins represent $1.92T, or 77%, so you can see that the coin market is dominated by just two coins. Adding in the top 5 coins that don't even claim to be decentralized gets you to 87% of the total "market cap".

The fact that the coins ranked 3, 6 and 7 by "market cap" don't even claim to be decentralized shows that decentralization is irrelevant to cryptocurrency users. Numbers 3 and 7 are stablecoins with a combined "market cap" of $134B. The largest stablecoin that claims to be decentralized is DAI, ranked at 24 with a "market cap" of $5B.
Protocol              Revenue $M   Market Share %
Lido                  304          55.2
Uniswap V3            55           10.0
Maker DAO             48           8.7
AAVE V3               24           4.4
Top 4                              78.2
Venus                 18           3.3
GMX                   14           2.5
Rari Fuse             14           2.5
Rocket Pool           14           2.5
Pancake Swap AMM V3   13           2.4
Compound V2           13           2.4
Morpho Aave V2        10           1.8
Goldfinch             9            1.6
Aura Finance          8            1.5
Yearn Finance         7            1.3
Stargate              5            0.9
Total                 551
Similar effects apply to "Decentralized Finance". In DeFi Is Becoming Less Competitive a Year After FTX’s Collapse Battered Crypto Muyao Shen wrote:
Based on the [Herfindahl-Hirschman Index], the most competition exists between decentralized finance exchanges, with the top four venues holding about 54% of total market share. Other categories including decentralized derivatives exchanges, DeFi lenders, and liquid staking, are much less competitive. For example, the top four liquid staking projects hold about 90% of total market share in that category,
Based on 180 days of revenue data for DeFi projects from Shen's article, I compiled this table, showing that the top project, Lido, had 55% of the revenue, the top two had 2/3, and the top four projects had 78%.

Because these systems, if successful, cannot be decentralized, the cryptosphere doesn't care about the fact that they aren't. In Deconstructing ‘Decentralization’: Exploring the Core Claim of Crypto Systems, Prof. Angela Walch explains what the label "decentralized" is actually used for:
the common meaning of ‘decentralized’ as applied to blockchain systems functions as a veil that covers over and prevents many from seeing the actions of key actors within the system. Hence, Hinman’s (and others’) inability to see the small groups of people who wield concentrated power in operating the blockchain protocol. In essence, if it’s decentralized, well, no particular people are doing things of consequence.

Going further, if one believes that no particular people are doing things of consequence, and power is diffuse, then there is effectively no human agency within the system to hold accountable for anything.
In other words, it is a means for the system's insiders to evade responsibility for their actions.

Teaching: one year in / Lorcan Dempsey

Teaching: one year in

I am one year into my two-year term as a Distinguished Practitioner in Residence at the Information School at the University of Washington. I have been fascinated to see academic life from the inside, as it were, even though I am a visitor rather than fully domiciled. A bonus has been how much we have enjoyed Seattle, the city, and its amazing watery, mountainy and islandy hinterland.

I have been teaching two courses, one that I created myself from scratch, and one almost oven-ready that I adapted. The new course was on library collaboration and partnerships, a topic that has always seemed to me to be underexamined. I am also about to begin a small research project looking at some of the characteristics of collaboration; I am lucky, in that regard, that the Orbis Cascade Alliance is on my doorstep here. The other course was on management, which is mandatory for all MLIS students and viewed with ambivalence by some.

At this mid-way point, I thought I would reflect a little on my newfound teaching experience, understanding that what I say is not necessarily unique or surprising.

Teaching and baking

Teaching for the first time presents a steep learning curve. Starting out by developing a new course was in hindsight somewhat optimistic. Once the baking analogy occurred to me, I could not forget it:

Teaching for the first time while developing a new course is like being in the kitchen on your own, with no recipe, baking a loaf of bread for the very first time. Except that you have an audience who observe your every move, and you cannot throw it away if it doesn't work out.

But I was not quite on my own. Several people helped, and I am especially grateful to my colleague and Associate Teaching Professor, Chance Hunt, who generously and empathetically stepped in to calm my churning and to help clear a path, and to Sue Morgan, Teaching and Learning Specialist in Learning Technologies at the iSchool, who patiently helped me climb the Canvas learning curve. They each saved me from some foolishness; what remained was mostly my own.

A major takeaway was that I needed to talk less!

This is a short piece, so here are some brief takeaways:

  • I succumbed early on to the common newbie hubris of imagining that I was there to communicate my knowledge and experience. However, what I was really there for was to facilitate learning. My first course outing needed more interaction and engagement with issues, and rather less of my powerpoint, thoughtful and tasteful though it was. A major takeaway was that I needed to talk less!
  • Understanding that good teaching is both learnable and a craft, I had spent some time in preparation reading around the topic. However, without the goalposts of experience, I was overwhelmed by the pedagogical firehose. To reduce confusing superabundance, I returned to The new college classroom by Cathy Davidson as a pragmatic guide, its main recommendation being that I had met Cathy at the interesting Amical Conference shortly beforehand.
  • That said, one can only do so much in one course, and one of the things I most enjoyed was digging into collaboration and thinking about how to sketch out the collaborative space. I also really liked how it made me reconnect with different types of libraries. I enjoyed, for example, exploring public libraries as social infrastructure and the importance of social capital, based on Klinenberg's Palaces for the People. I was struck again by the differential attention in the literature to different kinds of libraries, noticing how community college libraries, for example, did not receive as much attention as other academic libraries.
  • I was curious to see how my assigned readings were received. I do not wholly trust my judgements here, as they are impressionistic, supported by limited end-of-course-feedback. In general, the more abstract or theoretical pieces were less popular than ones which communicated experiences, issues or problems. Not too surprising, perhaps, but I did also wonder about the overall balance between theory and practice. I noted how, especially in the collaboration course, practitioner perspectives in the literature greatly outweighed LIS academic ones.
  • I am very grateful to the guest speakers who gave generously of their time, expertise and opinions. They brought energy into the class. I am thinking of invites now for this year. A couple of things really struck me in terms of learning. The first was to communicate that libraries are social organizations, with all that that means in terms of relationships, decision-making, persuasion and influence. I emphasized this throughout. The second is that collaboration and partnership is central in many ways to what libraries do from an operational point of view, but is also so important for creating the networks and communities of practice that do so much to foster learning and innovation. Guest speakers did much to communicate the experiential reality of these two points, alongside the description of libraries, services and initiatives.
I visited Seward Park Library, which features in Klinenberg's Palaces for the People, when in NYC over the summer.

Students

The main reason I took this position was that I knew it would be good to be challenged and engaged by different perspectives. I was interested in what animated and interested those beginning a career in libraries, archives or related organization. This aspect of the work has been very rewarding, and I have learned so much. I am also encouraged by the energy and sense of purpose I encountered among so many students, and I know they will make an impact. Here are some thoughts ...

  • In both of my classes, student career interests broke down approximately evenly between academic and public libraries. I was also interested in the strong archival interest, partly overlapping with the academic interest (in archives and/or special collections) and partly motivated by community archives, or specialist archives of various types. It did strike me in discussions that archives work often appealed where it made direct connections with particular communities of interest, distinctive materials, or reparative recognition and remembrance.
  • There was a strong focus among students on social justice and on the agency of libraries and librarians in their communities. This is a clear emphasis of the UW iSchool and course options reflect this, as does the general ethos. This was refreshing to see, acknowledging also that political and advocacy skills will be very important in the library environment students are entering.
  • In the management class (I taught in parallel and in close coordination with Chance) we asked students at the beginning of the year how many were interested in being managers. I was struck that the interest was not stronger, although a repeat question at the end of the course did suggest that some opinions had shifted. Some students went into the class thinking of management solely in terms of staff supervision. Following Linda Hill, I tried to consistently emphasize that management involves managing oneself and one's network of relationships, in addition to one's team. And of course, organizational management, including strategy, marketing, organizational culture, and so on, is also central and may be new to some. I observed how several students overcame a prior antipathy to the idea of marketing and 'brand' to realize that a broad approach to positioning the library favorably within its community was actually very important.
  • Teaching presents the classic curse of knowledge situation. In summary, it is very difficult to unknow something that you already know, and it can be difficult to imagine what it is like not to know it. This creates a potential communication gap. To close this gap we have to step outside our own usual standpoint. For example, I was initially guided by my own interests, which are often organizational and strategic, but soon recognised the strong class interest in operational issues. Talking about collaboration between libraries also relies somewhat on knowledge of the object of collaboration (for example, a shared ILS or shared content negotiation and licensing), and this opens out into other issues, open access or concerns around ebook licensing, for example. Similarly, in a management class, one can expect that students may have a variety of organizational and supervisory experiences, but, naturally enough, less acquaintance with some of the ways in which libraries are organized or funded. In the management class, for example, I tried to emphasize that the library is not (usually) a stand-alone entity: typically it is accountable to a city, university, or some other parent organization. Even where it reports to a board, the local government agency may appoint some board members. I learned that I need to work hard to traverse this gap, thinking about what is covered, using more analogies and examples, and ensuring that participants are comfortable in discussion and questions. Guest speakers are very important here, bringing varied and rich experiences into the class.
  • I tend to resist generational (and other) classifications, but I was struck by how direct and candid student feedback could be. I was grateful for the often thoughtful and constructive suggestions for improvement. This is certainly true of the management course. It is especially true of the collaboration one, given it was my first outing and it was new material. I am currently looking at some refocusing based on class observations. I might even lose some slides!

Systems and services

I have written much about library systems and services. My perspective tends to be informed by my own usage, by conversations with librarians, and also by the fact that I have worked for organizations that have built systems and services that libraries rely on, both in production and in R&D mode. I was fascinated to be in a somewhat different position here, as a faculty member and teacher needing to use library resources in the construction of courses and in my other work.

I have had a more tangential relationship to instructional technology, but was looking forward to exploring Canvas and some other tools. Given the firehose note above, I did stick to the core and to a small number of tools.

Here are some slightly random observations about my experience.

  • It was a great pleasure to have the resources of a large library available again. Browsing in the stacks may not be quite the adventure it once was, but being able to prospect large reservoirs of print and electronic resources is a joy. While a significant proportion of articles may now be available open access, having access to a large, licensed collection makes a big difference. What was more novel for me was being able to access a large number of ebooks at the chapter level, both for my own use and to add to course readings. The 'access gap' between those able to use a well-resourced library and those who do not have such access is still very wide.
  • As library consortia were a central emphasis in the collaboration course, I was very interested to see the benefits of borrowing through the shared Orbis Cascade Alliance system in action: I appreciated receiving items from other Alliance members. The University has just joined the BTAA, so the library will be participating in BTAA library initiatives in due course. Although I may have finished my term by then, I will be interested to see how this works out alongside the Orbis Cascade Alliance, especially given my work with the BTAA in a previous life.
The 'access gap' between those able to use a well-resourced library and those who do not have such access is still very wide.
  • I enjoyed the opportunity to interact with library colleagues. I also admired how library liaison Alyssa Deutschler solicitously worked with iSchool colleagues, readily provided expert advice, participated in events and instruction, and generally modeled the value of the relational library.
  • I have written much about the friction in the library 'discovery to delivery' (D2D) chain over the years. It is a challenge, bringing a heterogeneous set of resources into a (more or less) unified environment of use, and creating the required connections across discovery, authorization, and fulfilment options. A lot of plumbing is involved, and unfortunately some of this still shows. When things work well, it is very impressive (and the deployment of LibKey helps here); however, the experience is occasionally not very well-seamed, let alone seamless. Given the state of the art of the involved technologies, the small number of available products to get this work done, and the general reliance on the same set of vendors, this experience is much the same in most libraries. It is not something specific to the University of Washington or to the Orbis Cascade Alliance. While I appreciate the complexities, and have experienced the heavy lift on the system/data provider side also, it is disappointing that things are not better by now.
  • This may be one of the places where AI might be helpful in terms of better connecting that D2D workflow, and I look forward to trying out the Primo Research Assistant in my current environment. In general I have been interested to observe the careful way in which AI is being introduced into discovery and other products offered to libraries.
  • This experience of the library D2D apparatus matters for a variety of reasons. One in particular is on my mind, as I have been thinking about perceptions of libraries in the academy in the context of LIS Forward (an initiative of iSchools looking at the future of Library and Information Science education and research within iSchools and the academy). While the D2D setup is actually quite impressive when you know a little about the moving parts (proxy, knowledge base, LibKey, etc), it does not necessarily seem that way to the non-initiate. It does not showcase the library as a technological leader or innovator; quite the reverse in fact, as the library discovery experience feels like it is from an earlier period. Faculty or students cannot be expected to know about the technologies or products that are available to the library to build this experience.
  • I was actually pleasantly surprised by Canvas. I thought it did a good job of supporting some complex workflows in an integrated way. While it may constrain the more adventurous, I appreciated the integrated approach and the continuity across courses. Of course, it can behave inconsistently (sometimes save is automatic, sometimes not). I appreciated that it got the job done and that once you were sufficiently high up the learning curve, you could put your energies elsewhere. As I noted above, I was grateful here to receive a lot of help from the Learning Technologies unit.
  • I really like Papers from ReadCube, a Digital Science company. I have used Mendeley and Zotero in the past, but my incentives were not strong enough to climb very far along the learning curve. In my current role, the incentives are stronger, but I have actually also found Papers more straightforward to use. It also works very nicely with the library systems infrastructure mentioned above, and, when all the connections work well, it is like magic to move from publisher page to well formatted citation and stored PDF smoothly. One frustration was the treatment of books. It does not automagically pick up a citation from Amazon or WorldCat, for example, which seems like a miss all around. There was recently an AI upgrade which added some capabilities, including asking questions of a PDF. (This is a small example of how we will interact in more ways with documents in the future.)
  • I found the annotation tool Hypothesis very useful, and while not universally loved by students, it did add a dimension to reading and discussion, especially if used in moderately sized groups. Again, it is nice the way it manages a workflow smoothly.
However, without the goalposts of experience, I was overwhelmed by the pedagogical firehose. ... I found developing my first course quite stressful as I did not know what I did not know. 
  • I found developing my first course quite stressful as I did not know what I did not know. I made the mistake of trying to build the initial outline in Canvas. This made it difficult to see the course as a whole, and difficult to change the staging of topics, speakers and exercises. Probably the best advice I received during the year was when Chance suggested I shift this planning to a whiteboard. In fact I ended up using stickies stuck to the whiteboard at work, and duplicated on the bedroom wall in our small rental. It was simple ... and liberating.
Sophisticated and streamlined course design process.

Education and career preparation

My experience so far has caused me to reflect quite a bit on library education.

The MLS has always been a challenge, given the variety of skills and specialties at play in libraries. This has become even more so as libraries continue to evolve from being transactional and collections-centered to being relational and community-centered. This means that they manifest interesting educational and research issues, from the technical, to the management, to the social and political.

In one program, how do you balance coverage of broad technical skills and appreciation, nurturing community – whether it is in busy urban settings or around student success and retention –, the management of complex social and political organizations, and the (inappropriately named) soft skills which are so central to so many aspects of work (teamwork, advocacy, empathy, self care, ...)?

It also prompts reflection on the relationship between research and practice, and on how the library community generates ideas and innovation. There are plenty of topics to return to.

It is a critical time for libraries, and so a critical time for library education and research. I am looking forward to year two!

Acknowledgements: Thanks to Alyssa Deutschler, Chance Hunt, Sue Morgan, Denise Pan, and Lauren Pressley for their helpful review of an earlier draft. I am especially grateful to Gabrielle Garcia (UW MLIS '24) for a thoughtful reading and helpful suggestions. While their feedback improved the final piece, all opinions are my own and they do not necessarily agree with all I say!

Pictures: I took all the pictures, and also made a lot of soda bread during the pandemic.

Other entries mentioned:

So-called soft skills are hard
So-called soft skills are important across a range of library activities. Existing trends will further amplify this importance. Describing these skills as soft may be misleading, or even damaging. They should be recognized as learnable and teachable, and should be explicitly supported and rewarded.
Libraries and the curse of knowledge
It is important to know what you know, so that you can avoid the curse of knowledge and communicate effectively.
Operationalizing a collective collection
While collective collections have been much discussed, less attention has been paid to how to operationalize them in consortial settings. This post introduces work done with the BTAA to explore this challenge.

Author Interview: Danielle Trussoni / LibraryThing (Thingology)

Danielle Trussoni

LibraryThing is pleased to sit down this month with bestselling author Danielle Trussoni, who made her debut in 2006 with Falling Through the Earth, a memoir chronicling her relationship with her father that was chosen as one of the Ten Best Books of the Year by The New York Times Book Review. Trussoni’s first novel, Angelology, was published four years later, going on to become a New York Times and international bestseller. It was translated into over thirty languages, and was followed in 2013 by a sequel, Angelopolis, which was also a bestseller. Trussoni has also published a second memoir, The Fortress: A Love Story (2016), and a stand-alone novel, The Ancestor (2020), and writes a monthly horror column for the New York Times Book Review. The Puzzle Master, a thriller involving a brilliant puzzle maker and an ancient mystery, was published in 2023, and a sequel, The Puzzle Box, is due out shortly from Random House. Trussoni sat down with Abigail to answer some questions about this new book.

The Puzzle Box continues the story of puzzle maker Mike Brink, a savant who came to his abilities through a traumatic brain injury. How did the idea for this character and his adventures first come to you? Did you always know you wanted to write more about Mike, or did you find that you had more to tell after finishing The Puzzle Master?

The idea for this character didn’t arrive in a lightning flash. Mike Brink developed through slowly working backward from the puzzle that I wanted to be at the center of this novel. I had developed a puzzle drawn by the character of Jesse Price, a woman who has been in prison for 30 years for killing her boyfriend. She hasn’t spoken to anyone for five years but creates a cipher. Mike Brink arrives to solve it. At first, Mike was just a regular puzzle solver. And then I began to research real people with extraordinary abilities and stumbled upon Savant Syndrome. He seemed like the perfect vehicle for solving complex and fun mysteries.

I always knew that I wanted to write more about Mike Brink. I feel that this character has an almost endless supply of fascinating angles to write about. I could see writing about him for a long time!

Your hero has Sudden Acquired Savant Syndrome. What does this mean, and what significance does it have, to the story you wish to tell?

Savant Syndrome is an actual disorder that has occurred only a handful of times (there are between 50 and 75 documented cases). It occurs when there is damage to the brain, and a kind of hyper-plasticity occurs, allowing the person to develop startling mental abilities. Some people become incredibly good at playing music, for example. Other people develop an ability with languages. But Mike Brink develops an ability to see patterns, solve puzzles, and make order out of chaos. Once I began to read about this skill—it’s really a kind of superpower!—I knew that this ability would be perfect for a hero of a mystery novel.

The Puzzle Box involves the Japanese royal family, a puzzle created by Emperor Meiji, and a notable samurai family. What kind of research did you need to do to tell this story, and what were some of the most interesting things you learned, in the process?

First of all, I lived in Japan for over two years. That experience was in the back of my mind as I developed the characters and the story of this book. That said, as I wrote The Puzzle Box, I found I wanted to see the places that appear in the novel: the Imperial Palace in Tokyo, the puzzle box museum in Hakone, and the many locations in Kyoto. So, I went to Japan for two weeks in 2023 to do on-the-ground research at these locations.

The historical elements of the book, especially the storyline about the Emperor Meiji and the Empresses of Japan, were a different story. I read a lot about the Imperial family, their origins, the discussions and controversies surrounding succession. A big part of my process is to read as much as I can find about something in my work and then carve out the most striking details.

How do you come up with the central puzzles in your books? Are they wholly original creations, or are they taken from or inspired by known puzzles?

The ideas for the puzzles are completely original, and necessarily have to do with the story I’m trying to tell. Each of the puzzles in The Puzzle Master and The Puzzle Box acts as a gateway to information that helps move the story forward. So I start with story. Then, I speak with the REAL puzzle geniuses, who help me imagine what kind of puzzles are possible. I work with two constructors, Brendan Emmett Quigley and Wei-Hwa Huang, who have worked for The New York Times Games Page (Wei-Hwa is a four-time World Puzzle Champion). They are incredibly smart and really understand what I’m trying to accomplish with my storytelling. Because the puzzles are not just gimmicks or diversions: they are essential to the plot of the novel.

What is different about writing a sequel, when compared to the first book in a series? Were there particular writing or storytelling challenges, or aspects that you enjoyed?

The Puzzle Box is designed as a stand-alone novel and can be read without reading The Puzzle Master. Still, Mike Brink is the hero of both novels, and there are other characters and storylines that show up in both books. I loved being able to go back to characters that I’d already spent time with, and found that because they were familiar, I could go deeper into their minds and feelings. The complications of Mike Brink’s superpower are a challenge for him. How he lives with his gift—and how he can continue to solve puzzles and find happiness—is the primary question of this series.

What can we expect next from you? Do you think you’ll write more about Mike? Are there any other writing projects you are working on?

I hope to write more books in this series, and of course Mike would be returning. I always have three or four novels on the back burner, and sometimes it’s hard for me to know which one will be the next to be written. Sometimes I need to wait and see.

Tell us about your library. What’s on your own shelves?

I am a lover of hardcover books, and so my shelves are packed with contemporary fiction in hardcover. I live in San Miguel de Allende, Mexico, and it isn’t easy to get new books, but I’ve managed to find a way!

What have you been reading lately, and what would you recommend to other readers?

I used to write a book column for The New York Times Book Review, and a lot of my reading was for the column. But since I stopped writing it last year, I have been reading for pleasure. I’m revisiting books I loved in my twenties—And Then There Were None by Agatha Christie, for example—and I’m reading contemporary thrillers such as The Winner by Teddy Wayne and Look in the Mirror by Catherine Steadman. I have Richard Price’s Lazarus Man, which is out in a few months, on my most anticipated list. There is never enough time to read everything I want, but what I’m reading is exactly what I love most in fiction: sharp, evocative prose that carries me through an engrossing, surprising story. Give me those two things and I’m hooked.

Warning: Slow Blogging Ahead / David Rosenthal

Vicky & I have recently acquired two major joint writing assignments with effective deadlines in the next couple of months. And I am still on the hook for a Wikipedia page about the late Dewayne Hendricks. This is all likely to reduce the flow of posts on this blog for a while, for which I apologize.

keyword-like arguments to JS functions using destructuring / Jonathan Rochkind

I am, unusually for me, spending some time writing some non-trivial JavaScript, using ES modules.

In my usual environment of ruby, I have gotten used to really preferring keyword arguments to functions for clarity. More than one positional argument makes me feel bad.

I vaguely remembered there is a new-fangled way to exploit modern JS features to do this in JS, including default values, but was having trouble finding it. Found it! It involves using “destructuring”. Putting it here for myself, and in case this text gives someone else (perhaps another rubyist) better hits for their Google searches than I was getting!

function freeCar({name = "John", color, model = "Honda"} = {}) {
  console.log(`Hi ${name}, you get a ${color} ${model}`);
}

freeCar({name: "Joe", color: "Green", model: "Lincoln"})
// Hi Joe, you get a Green Lincoln

freeCar({color: "RED"})
// Hi John, you get a RED Honda

freeCar()
// Hi John, you get a undefined Honda

freeCar({})
// Hi John, you get a undefined Honda
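
One detail worth calling out: the trailing = {} default on the parameter is what makes the bare freeCar() call work at all. Without it, JavaScript would try to destructure undefined and throw a TypeError. The per-key defaults (name, model) then fill in whatever the caller omits, while keys without defaults (color) simply come through as undefined.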

The Open Data Editor is now ready for the pilot phase / Open Knowledge Foundation

This week saw the release of version 1.1.0 of the Open Data Editor (ODE), the Open Knowledge Foundation’s new app that makes it easier for people with little to no technical skills to work with data. The app is now ready to enter a crucial phase of user testing. In October, we are starting a pilot programme with early adopters who will provide much-needed feedback and report bugs before the first official stable release, planned for December 2024.

(See below for how you can get involved.)

The Open Data Editor helps you find errors in your datasets and correct them in no time – a process called “data validation” in industry jargon. It also checks that your spreadsheet or dataset has all the necessary information for other people to use it. ODE thus increases the quality of the data that is produced and consumed, and supports what is known, again in technical jargon, as “data interoperability”.
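
Under the hood, that validation comes from the Frictionless framework. For the technically curious, here is a minimal sketch of the same kind of check in Python (assuming frictionless 5.x and a hypothetical local file named data.csv):

from frictionless import validate

# Validate a tabular file; the report collects every problem found
report = validate("data.csv")

if report.valid:
    print("No errors found")
else:
    # Each entry lists the row number, field number, and error type
    for error in report.flatten(["rowNumber", "fieldNumber", "type"]):
        print(error)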

Thanks to funding from the Patrick J. McGovern Foundation, our team has been working since the beginning of the year to create this no-code tool that makes data manipulation easier for non-technical people, such as journalists, activists, and public administrators. This work seeks to put into practice our new vision of open technologies, which OKFN will present and discuss at the upcoming The Tech We Want Summit.

It’s been an intense journey, which we briefly recap in this post.

What is in the latest version

  1. A large number of functionalities were removed from the app to focus the Open Data Editor on being a table validation tool.
  2. Key UX changes made the application simpler to use: for example, a new button layout, new logic for uploading data, and new dialogue boxes explaining some of the tool’s more complex behaviour.
  3. The code was simplified, made more accessible, and documented to facilitate contributions from the community.
  4. Different data communities engaged in discussions about how the Open Data Editor can help them in their everyday work with data.

ODE in figures

  • 8 months of work
  • ~100 issues solved on GitHub
  • 5 team members working together
  • 3 presentations to strategic communities

What we have done so far

A February of Sharing
The project plan was shared with the Open Knowledge Network in the monthly call, to gather input and feedback.

A March of Listening
After testing the app and reviewing all the documentation, we conducted interviews with data practitioners to understand the challenges they face when working with data.

An April of New Directions
The patterns and insights that emerged from the interviews were organised to review the application’s concept note and define a new vision for the product. Initially, ODE provided a wide range of options: working with maps, images, articles, scripts and charts. From the interviews, we learned that people working with data spend a lot of time understanding tables and trying to identify problems in them before they can analyse the data at a later stage. Therefore, we decided to refocus ODE as a tool for checking errors in tables.

A May of Cleaning
Through a survey, we started asking questions about certain terms used in the application, such as the word ‘Validate’. We realised that a translation for non-technical users was required, rather than simply reusing the vocabulary of Frictionless, the framework used behind the scenes to detect errors in tables.
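
To illustrate the kind of translation involved: the error types below are real Frictionless error codes, but the plain-language wording is a hypothetical mapping for the sake of example, not ODE’s actual copy.

# Hypothetical plain-language glosses for real Frictionless error types;
# ODE's actual wording may differ
FRIENDLY = {
    "missing-cell": "A row has fewer values than the header row",
    "extra-cell": "A row has more values than the header row",
    "blank-label": "A column has no name in the header row",
    "duplicate-label": "Two columns share the same name",
}

def friendly_message(error_type: str) -> str:
    # Fall back to the raw type for anything not yet translated
    return FRIENDLY.get(error_type, f"Problem detected: {error_type}")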

During that month we also started to remove many features from the application that did not align with the new product vision. The road was not particularly easy. As is always the case in coding, several things were interconnected and we had to make many decisions at every step. The whole process led us to deeper reflections about how to build civic technology. 

As part of that reflection, we decided to openly share the mistakes, pitfalls, and key learnings from our development journey. The title of our talk at csv,conf,v8 in Puebla, Mexico, was ‘The tormented journey of an app’.

A June of Interfacing
At this time our UX specialist joined the team to focus on making adjustments to clearly communicate the functionalities of the Open Data Editor.

Intending to create a truly intuitive application that addresses existing UX issues, we redefined key workflows such as launch and onboarding, validation, file import, file management, and datagrid operations. Leveraging prior user research and agile software methodology, we went through multiple iterations and refinements. This process involved brainstorming, validating ideas, rapid prototyping, updating UX copy, A/B testing, and technical feasibility reviews with the development team.

We also developed a new design system built on Google’s Material UI framework – a single source of truth of vibrant colours and patterns aligned with OKFN’s branding – delivering a fresh, modern, and cohesive user experience that extends seamlessly from our website to the application.

July-September for Rebuilding
The cleanup process of the application continued. But this time the changes in the user interface led to new complexities: reworked workflows, new bugs introduced by the changes, and so on. It was a time strongly focussed on development.

In August, we opened up this process in the panel ‘Frictionless data for more collaboration’ at Wikimania 2024 in Katowice, Poland. The community of Wikimedians and open activists discussed data friction and learned how ODE can help enhance data quality and FAIRness.

At the end of August, we started working with Madelon Hulsebos, professor at CWI Amsterdam and an expert in Table Representation Learning (TRL). She is currently helping us think about the integration of artificial intelligence (AI) in the Open Data Editor by raising great questions and providing key ideas.

What is next

👉🏼 Address two key and complex components of the app: the metadata and error panels. Adapting both to non-technical users requires more in-depth conversations and decisions, since the Frictionless framework constrains some customisation options (see the sketch after this list).

👉🏼 Pilots: To further improve the ODE, we need to receive feedback and recommendations from real users. Therefore, from October until December, two external organisations will be incorporating ODE into their data workflow to test the application, documenting their experience and reporting challenges to improve it.

👉🏼 User testing sessions: In October, we will hold a series of sessions to receive feedback from our community and from other potential users of Open Data Editor. 

👉🏼 Codebase testing: In an effort to bring more contributors into the project, in October and November we will have 4 external developers testing the codebase and solving some code issues selected by the core team.

👉🏼 Documentation review: In November, we will hold two sessions to review all the documentation with a selected group of people. This way we will make sure the documentation is as easy to understand as possible for a broad audience.

👉🏼 Translations: In December, the user interface and the documentation will be translated into three languages other than English. 

👉🏼 AI integration: We are discussing ideas on how to make the integration transparent to users. In addition, our AI consultant will provide guidance on how new integrations should look in the future.

👉🏼 Online Course: By December, we will also release a free online course on how to use the Open Data Editor to enhance data FAIRness.
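
As referenced in the metadata item above, the metadata panel surfaces what Frictionless infers about a table. A minimal sketch of that inference step in Python (again assuming frictionless 5.x and a hypothetical data.csv) looks roughly like this:

from frictionless import describe

# Infer metadata (format, encoding, column names and types) from the file
resource = describe("data.csv")

# The inferred schema is what a metadata panel would need to present
print(resource.schema)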

Now we are counting on you! You can apply to take part in the Open Data Editor testing sessions. Please register using this form or by clicking the button below.

Are you a developer? We are also looking for developers interested in testing the codebase and contributing to the project by pushing PRs to solve 3 issues selected by our core team. If you’re interested in open data tools, this is your chance to get involved and make a difference. You can read about the programme here.

You can also email us at info@okfn.org, follow the GitHub repository or join the Frictionless Data community. We meet once a month.

Read more

Are you a developer? Help us test the Open Data Editor! / Open Knowledge Foundation

The Open Knowledge Foundation is looking for four developers with Python and React JS skills to test the Open Data Editor (ODE) desktop application between October and November and help us improve its functionality. 

If you’re interested in open data tools, this is your chance to get involved and make a difference in an application that is being developed and finalised.

You will:

  • Test the app and report any issues (including documentation problems) via GitHub Issues.
  • Push PRs to solve three issues selected by our core team.
  • Have a follow-up call with the core team to report on your experience.

In return, you’ll receive a $1,000 mini-grant for your contributions!

Are you interested? Let us know by filling out this form:


About us

The Open Knowledge Foundation (OKFN) is the world’s ultimate reference in open digital infrastructure and the hub of the open movement. As a global not-for-profit, we have been establishing and advocating for open standards for the last 20 years. We provide services, tools and training for institutions to adopt openness as a design principle.

Our mission is to be global leaders for the openness of all forms of knowledge and secure a fair, sustainable, and open future for all. We envision an open world where all non-personal information is open and free for everyone to use, build on, and share, and creators and innovators are fairly recognised and rewarded. Together, we seek to unlock the knowledge and data needed to solve the most pressing problems of our times.

Learn more:


About the application

Open Data Editor is the new app developed by the Open Knowledge Foundation that makes it easier for people with little to no technical skills to work with data. It helps users validate and describe their data in no time, increasing the quality of the data they produce and consume. It is being developed in the context of Frictionless Data, an initiative at OKFN producing a collection of standards and software for the publication, transport, and consumption of data.


Read more

Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 1 October 2024 / HangingTogether

The following post is one in a regular series on issues of Inclusion, Diversity, Equity, and Accessibility, compiled by a team of OCLC contributors.

Woman in colorful dress with her face painted smiles into the camera while marching in a parade. Library of Congress, Prints & Photographs Division, photograph by Carol M. Highsmith [LC-DIG-highsm-20611]. Image is in the public domain.

Changes to Title II, and impact on libraries 

On 24 April 2024, the United States Department of Justice published a final rule updating the regulations for Title II of the Americans with Disabilities Act (ADA). The rule emphasizes the need for web content and mobile applications provided by state and local governments, including public higher education institutions, to be accessible to people with disabilities. The change will affect several aspects of higher education online resources, including registration systems, online learning platforms, financial aid information, and websites, among other services. Public higher education institutions must meet WCAG 2.1 Level AA standards, which cover compatibility with screen readers, alt text for images, and accessible interactive elements; this can include course materials and library resources. Institutions have two to three years, depending on the size of the community they serve, to improve access in all digital spaces across campus. 24 April 2026 is the earliest date (two years after the ruling) by which changes must be implemented.

In a recent conversation with UX (User Experience) Librarian and Library Assessment colleagues, I learned that library staff are being called on to serve as university representatives on accessible web design by sitting on task forces and consulting with departments. While I find it encouraging that universities are acting on this mandate well before the April 2026 deadline, I know that accessible design and accessibility testing are not a one-person job and require marshalling resources far beyond UX. Working with students with disabilities is a meaningful way to engage in real change. I encourage my colleagues out there searching for support and buy-in to find student associations on their campus that can assist with design and test prototypes along the way. Not only do these students have the most to gain, but where resources for designing for cognitive or learning disabilities are lacking, testers with lived experience may be even more valuable. Contributed by Lesley A. Langa.

Readings related to National Hispanic Heritage Month 

In the United States, National Hispanic Heritage Month is commemorated from 15 September through 15 October. That period covers the independence days of Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua on 15 September; of Mexico on 16 September; and of Chile on 18 September; as well as Indigenous Peoples’ Day, Día de la Raza on 12 October. For the 2024 celebration, Washington State’s Seattle Public Library (OCLC Symbol: UOK) has compiled two timely reading lists.  “Hispanic Heritage Month 2024: Recent Fiction for Adults” features twenty-nine novels and story collections published in 2023 or 2024. The companion list, “Latine/Latinx Nonfiction” consists of twenty-five histories, memoirs, and poetry collections. 

The fiction list includes authors that are familiar to me as well as ones that are new. The genres are as varied as the Hispanic world itself, from a compilation of translated Latin American horror stories to a fictionalized history of the construction of the Panama Canal to a Victorian-era historical romance. The nonfiction titles include the autobiography of dancer and actor Chita Rivera, Juan González’s history of Latinos in America, and the graphic memoir of artist and illustrator Edel Rodriguez. Contributed by Jay Weitz.

Implementing DEI in stages for success 

Ella F. Washington’s article, “The Five Stages of DEI Maturity” (November-December 2022 issue of Harvard Business Review), outlines five stages companies usually follow when incorporating DEI programs: aware, compliant, tactical, integrated, and sustainable. Washington describes how “a typical journey through these stages includes connecting top-down strategy and bottom-up initiatives around DEI, developing an organization-wide culture of inclusion, and ultimately, creating equity in both policy and practice.” The author provides a description with examples of each stage, noting that in a 2022 survey almost one-third of companies were in the compliant stage, where they can become stuck without a change in organizational culture.

I read this article a while ago and rediscovered it through a citation in another article. Washington’s article was exactly what I needed that day, as I think about DEIA in my work goals for the coming year. At the integrated stage, an organization asks what structures to create for sustainable efforts and challenges existing practices. This requires buy-in from the entire organization. As one person in a large organization, I need to integrate my work with others across the organization to create sustainable DEI programs. Understanding this gives me focus and reminds me that DEI is everyone’s work. Contributed by Kate James. 

The post Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 1 October 2024 appeared first on Hanging Together.

October 2024 Early Reviewers Batch Is Live! / LibraryThing (Thingology)

Win free books from the October 2024 batch of Early Reviewer titles! We’ve got 197 books this month, and a grand total of 3,718 copies to give out. Which books are you hoping to snag this month? Come tell us on Talk.

If you haven’t already, sign up for Early Reviewers. If you’ve already signed up, please check your mailing/email address and make sure they’re correct.

» Request books here!

The deadline to request a copy is Friday, October 25th at 6PM EDT.

Eligibility: Publishers do things country-by-country. This month we have publishers who can send books to the US, Canada, the UK, Australia, Germany, France, Sweden, Poland, Netherlands, Ireland and more. Make sure to check the message on each book to see if it can be sent to your country.

(Cover gallery of this month’s Early Reviewer titles, from Pictures of You through Maya and Waggers: Mega Gossip.)

Thanks to all the publishers participating this month!

aka Associates, Alcove Press, Ashwood Press, Baker Books, Bethany House, CarTech Books, City Owl Press, Entrada Publishing, Fawkes Press, Harbor Lane Books, LLC., IngramSpark, Inhabit Media Inc., Ink & Quill Press, Legacy Books Press, Lerner Publishing Group, Middleton Books, Modern Marigold Books, New Vessel Press, NeWest Press, Paper Phoenix Press, Prosper Press, PublishNation, Purple Moon Publishing, Revell, Riverfolk Books, RIZE Press, Running Wild Press, LLC, Somewhat Grumpy Press, Susan Schadt Press, Thinking Ink Press, Three Rooms Press, Tundra Books, Tuxtails Publishing, LLC, Type Eighteen Books, What on Earth!, Wise Media Group, Yorkshire Publishing, and Zibby Books.

DLF Digest: October 2024 / Digital Library Federation

DLF Digest logo

A monthly round-up of news, upcoming working group meetings and events, and CLIR program updates from the Digital Library Federation. See all past Digests here.

Happy Fall, DLF Community! We’re so excited for October because it’s the month of the Virtual DLF Forum. Over 500 registrants will experience 34 sessions led by 102 speakers, including Featured Speaker Andrea Jackson Gavin, over two days, October 22-23. Join us — registration ends October 15.

— Team DLF


This month’s news:


This month’s DLF group events:

DLF Data and Digital Scholarship Working Group and Research Libraries UK — Critical AI Literacy Skills 

Tuesday 29 October 16:00 – 17:30 GMT, 12:00 – 13:30 EDT, 09:00 – 10:30 PDT; Register in advance here

This highly interactive session will follow previous successful joint meetings between members of CLIR’s Digital Library Federation Data and Digital Scholarship Working Group (DDS) and RLUK’s Digital Scholarship Network (DSN).

The meeting will explore the topic of ‘Critical AI Literacy skills’ and will include speakers from across the US and UK research library and information communities.

Potential breakout topics for this session include:

  • Responsible AI in Libraries
  • Alternative AI techniques (Adversarial or Defensive AI)
  • Data Transparency in AI models

It will also include opportunities to meet with fellow professionals, share skills (and the Skills Directory) and knowledge, hear from skills experts, and receive updates regarding the continued collaboration between the DDS and DSN. 

Who should attend: You do not need to have attended a previous joint meeting in order to attend this session, and the meeting is open to all members of the DLF and the DSN.

This event will be highly interactive and involve lots of delegate participation. Come energised to share your experiences, specialisms, and skills needs in a dynamic, transatlantic skills exchange. Although it is a free event, all delegates are required to register.

Further information: Visit the collaboration’s OSF page for useful resources from previous meetings (inc. shared notes from our last meeting): click here.


This month’s open DLF group meetings:

For the most up-to-date schedule of DLF group meetings and events (plus NDSA meetings, conferences, and more), bookmark the DLF Community Calendar. Meeting dates are subject to change, especially if meetings fall on a holiday. Can’t find meeting call-in information? Email us at info@diglib.org. Reminder: Team DLF working days are Monday through Thursday.

AIG = Assessment Interest Group

  • Born-Digital Access Working Group: Tuesday, 10/01, 2pm ET / 11am PT
  • Digital Accessibility Working Group: Wednesday, 10/02, 2pm ET / 11am PT
  • AIG Cost Assessment Working Group: Monday, 10/14, 3pm ET / 12pm PT
  • AIG User Experience Working Group: Friday, 10/18, 11am ET / 8am PT
  • Digital Accessibility Policy and Workflows Subgroup: Friday, 10/25, 1pm ET / 10am PT
  • Digital Accessibility Working Group – IT Subgroup: Monday, 10/28, 1:15pm ET / 10:15am PT 
  • Committee for Equity and Inclusion: Monday, 10/28, 3pm ET / 12pm PT
  • Climate Justice Working Group: Wednesday, 10/30, 12pm ET / 9am PT

DLF groups are open to ALL, regardless of whether or not you’re affiliated with a DLF member organization. Learn more about our working groups on our website. Interested in scheduling an upcoming working group call or reviving a past group? Check out the DLF Organizer’s Toolkit. As always, feel free to get in touch at info@diglib.org


Get Involved / Connect with Us

Below are some ways to stay connected with us and the digital library community: 

The post DLF Digest: October 2024 appeared first on DLF.

NDSA Welcomes One New Member in Quarter 3 of 2024 / Digital Library Federation

In September 2024, the NDSA Leadership voted unanimously to welcome one new applicant into the membership. Please join me in welcoming our new member! You can review our full list of members here.

The Archive and Heritage Digital Curation Group

In their application, The Archive and Heritage Digital Curation Group noted that they are “a specialised consulting services that ensure that your archival processes meet regulatory compliance and standards.” They continued to note that, “We provide comprehensive support in archiving, records management, metadata management, disaster preparedness, and environment scanning to safeguard your valuable collections. Both from a IT systems and Archives and Records Management perspective. We also do digitisation from equipment to physical digitisation, collection building and storage.” 

 

The post NDSA Welcomes One New Member in Quarter 3 of 2024 appeared first on DLF.

Threats, hopes and tales from the Open Knowledge Network gathering in Katowice, Poland / Open Knowledge Foundation

On 5 August 2024, representatives from the Open Knowledge Network gathered at the Metallurgy Museum in Chorzów (Katowice, Poland) for a day of strategic thinking. The annual gathering is a special occasion, an opportunity for us to come together, share our knowledge, listen attentively, and forge meaningful connections. This gathering has been a way to embark on a journey of discovery, collaboration, and the creation of something greater than ourselves. 

Who was there? The attendees were Haydée Svab from Open Knowledge Brazil, Nikesh Balami from Open Knowledge Nepal, Charalampos Bratsas and Lazaros Ioannidis from Open Knowledge Greece, Beat Estermann from Opendata.ch, Poncelet Ileleji from Jokkolabs Banjul, Dénes Jäger from Open Knowledge Germany, Susanna Ånäs from Open Knowledge Finland, Sandra Palacios from BUAP, the regional coordinator for Europe Esther Plomp, and, for OKFN, Renata Ávila, Patricio del Boca, Lucas Pretti, and Sara Petti. The meeting was facilitated by Jérémie Zimmermann.

The date and location were strategically chosen so we could all attend Wikimania 2024 together, which incidentally had collaboration as its main theme this year.

How we can increase collaborations within the Network (and beyond!) and how we can make those collaborations more effective is indeed something we talked about extensively during the gathering. 

We are more and more convinced that in-person gatherings and celebrations of our movement provide excellent opportunities to break the silos and foster collaboration. Bring people together in a room, let them spend some time together, discuss the topics that are close to their heart, share their experience, and magic will happen. You don’t believe me? To this day, projects and collaborations are still being born out of connections made at the mythical Open Knowledge Festivals a decade ago (Helsinki 2012, Berlin 2014). And blimey, how much we need those celebrations in these gloomy days of decaying institutions, proliferating disinformation, and corporate diktats!

We also asked ourselves how we could make collaborations more strategic, for example by taking advantage of emerging topics and opportunities. And as someone reminded us at one of the last Network calls, threats can be opportunities, so it’s worth having a look at what is out there. We spent a considerable amount of time in Katowice delving into what we feel is threatening open knowledge at the moment, and therefore requires our attention.

We all agreed that large segments of the population still lack the skills to build, use, or understand open data and open technologies, and are therefore excluded from benefiting from open resources, which exacerbates existing inequalities and hinders the potential for widespread knowledge sharing. Limited access to quality education and digital literacy creates significant barriers to engaging with open knowledge. We have known this for a while; it is why the School of Data started more than a decade ago. But the gap is still there. We have discussed this during one of our last 100+ conversations. Can we do more?

Of course we all know there’s only so much we can do without funding. We acknowledged that open knowledge initiatives often struggle with insufficient funding, which limits our ability to develop, maintain, and scale sustainable projects. Without adequate resources, many open projects fail to reach their potential, ceding ground to well-funded proprietary solutions that prioritise control over accessibility.

The problem with funding is also linked to the fact that the funders’ agenda is often dominated by trends set by Big Tech, so we sometimes end up doing things because of those trends, instead of the things we would rather do but that, alas, don’t attract money. This is something we talked about extensively during a digital commons gathering in Berlin last year; if you are interested, you can read the report Problematizing Strategic Tension Lines in the Digital Commons.

Big Tech is also imposing non-sustainable business models that prioritise profit over sustainability and human-centric development, leading to closed ecosystems that lock users into proprietary platforms, stifle innovation in open-source alternatives, and undermine the broader goal of equitable access to knowledge. These business models concentrate power and resources in the hands of a few, to the detriment of the many. One example? The concentration of data ownership by a few global entities, currently leading to data colonialism, where resources are extracted from communities without benefiting them. This creates a monoculture of knowledge production, controlled by monopolies that dictate who can access, use, or benefit from data, undermining local autonomy and diverse perspectives, and ultimately exacerbating social injustice. The exact opposite of what open knowledge stands for.

And since we are talking about the opposite of what open actually stands for, we are all very worried by the misuse of the word “open”, associated with practices that are far from open – what we commonly call open washing. (Need a little reminder of what open really is? Go and have a look at the Open Definition.) This false openness can be weaponised to spread misinformation, serve as propaganda, or reinforce a moral hegemony, all while distorting the general understanding of open knowledge and undermining genuine efforts toward transparency and accountability. Failing to understand what open really is can result in widespread misconceptions, for example the false idea that openness conflicts with personal privacy, security, or data protection.

Last but not least, once again some of our Network members were not able to join us for this meeting because of restrictive immigration policies and closed borders. These barriers create inequities in who can contribute to and benefit from open knowledge initiatives, reinforcing global inequalities and restricting the exchange of ideas.

After discussing what we felt were the most alarming threats to open knowledge, we reminded ourselves that threats can actually be opportunities, and therefore indulged in dreaming about how we could solve some of those challenges as a collective. Telling each other stories under the Polish blue sky, we started realising that storytelling is an essential part of the work we have to do. If we want to stay relevant, and convince people that open knowledge is key to solving the most pressing issues of our time, we need to communicate our values more effectively, and communicate them to a broader audience too: reach out to new people and bring them into our community. We need to remind people outside our bubble of the benefits of open knowledge, such as transparency, collaboration, and innovation. Actually, remind is not the right word. We have to tell them, because some of those outside our community actually don’t know.

So here’s our story about how we in the open movement solved the problems and faced the threats highlighted above as a collective. Hope you enjoy it. Note that this story is open-ended, and you can contribute to its making if you want to.

The Tales of Jokkolandia

In 2024, Jokkolandia faced its darkest hour. Devastating floods, social unrest, and raging fires swept across the land, reducing everything to ruins—except for its people. Despite the chaos, the spirit of the Jokkolandians remained unbroken. They gathered together and decided to rebuild their country from scratch. This time, they would do it differently.

All resources were centralised, regulated by a diverse committee of young and old people. The elders, with their memories of what Jokkolandia once was, provided wisdom and perspective. The younger generation, brimming with fresh ideas, brought innovation and new energy. Together, they began a thorough process of revision, questioning what had worked in the past and what had failed. From these reflections, they designed a new society based on collaboration, where the community actively monitored technology, ensuring that it served the people rather than the other way around.

This crisis fostered an unshakeable bond of solidarity and accountability. In Jokkolandia, data governance became a collective responsibility, and every decision was made democratically. Institutions and infrastructure were rebuilt to nurture a participatory democracy, with the free movement of people across borders. There were no visas, and individuals from all over the world flocked to Jokkolandia, including members of the Open Knowledge Network, who found a welcoming home in this utopia. Health coverage was universal, and everyone had their basic needs met.

Rejecting the exploitative models of Big Tech, Jokkolandia built its own solutions. They developed their own digital public infrastructure (DPI), entirely homegrown, and collectively destroyed the stranglehold of tech monopolies. Open hardware and open knowledge became a way of life, and even the Zapatistas and Yanomami communities came together to build amazon.open, a digital interlocal infrastructure governed by local cooperatives. This network of interconnected nodes allowed people to trade resources and fulfill their needs in a decentralised and equitable way.

Meanwhile, the billionaires of the old world, now irrelevant, were sent to Mars, where their lives were broadcast as a satirical reality show. When their time on Mars was over, they returned to Earth with their fortunes devalued, joining the same cooperative nodes they had once dominated. Jokkolandia had achieved a society where knowledge was shared openly, and every individual could access what they needed to thrive. However, one challenge remained: how to dismantle the lingering power dynamics of “knowledge is power” and ensure true equality in the exchange of information.

Neighbouring Jokkolandia was Openlandia, a well-established democracy that had once been ruled by dinosaurs—figures clinging to outdated ideas and detached from the realities of their people. As they began losing popularity, the dinosaurs asked themselves a critical question: how could they stay relevant? The answer came through engaging young people. Inspired by the strategy of Humanitarian OpenStreetMap, they brought young minds into schools to map their local communities, addressing real, tangible problems like fixing broken infrastructure and improving connectivity.

The dinosaurs realised that to secure their future, they had to engage with the next generation and those with a multiplier effect—educators. They started working on the intersection of openness, education, and communication, creating compelling stories about how open knowledge could solve everyday issues. However, Openlandia still faced challenges in connectivity, particularly in rural areas, so they also embraced offline communication strategies, like local radio broadcasts, to reach everyone. Their focus on collective messaging helped restore their relevance, but a new question loomed: were they becoming the dinosaurs they had once fought to overcome?

As both Jokkolandia and Openlandia grappled with the future, the global community had made strides to overcome capitalism. The world had introduced a global basic income, curbing hyper-consumerism and redirecting military spending toward societal good. The rich were taxed heavily, and nationalistic spending was drastically reduced. Yet the question remained: how could we balance societal needs when it was impossible to bring everyone to the same level without overwhelming the planet’s resources?

Studies revealed that society was divided—some acted for the collective good, others for selfish reasons, and the majority simply followed the dominant trend. By establishing a cooperative society that managed resources transparently and equitably, Jokkolandia set a collective standard that most people naturally followed. But as this new world took shape, new questions emerged: What would be the nature of power in a society governed by cooperatives and open communities? And what kinds of problems would arise when collective governance met individual needs?

In this new age, Jokkolandia and its neighbours strived to answer these questions, continuously evolving as they sought to balance openness, fairness, and the complexities of human nature. The journey toward true equality, openness, and shared knowledge had only just begun.


Would you like to tell us your story? Drop us a line! And remember you can always join the Open Knowledge Network.

Would you like to be part of the Network?

Our Network connects activists and change-makers of the open knowledge movement from more than 40 countries around the world, who through their work are advancing open and equitable access to knowledge for all, every day.

We believe knowledge is power, and that with key information accessible and openly available, power can be held to account, inequality challenged, and inefficiencies exposed.

You can check all current members on the Network page and our Global Directory. Or browse through the Project Repository to find out what each member has been working on. For current updates, subscribe to Open Knowledge News, our monthly newsletter. 

Our groups can always benefit from more friendly faces and fresh ideas — we will be happy to hear from you! Please contact us at network[at]okfn.org if you, as an individual or organisation, would like to be a part of Open Knowledge and join our global network.

Advocacy and resourcing in special collections: Priorities, challenges, and advice from an OCLC RLP leadership roundtable / HangingTogether

Image of a hand, palm turned upward, with a lightbulb, dollar sign, and gear floating above it. “Resources” by angorro on Noun Project.

This post is one in a series documenting findings from the RLP Leadership Roundtable discussions.  

Rarely do we work in an environment where we have all the resources we need or would like; therefore, advocacy skills are an essential part of any special collections leader’s portfolio. Our recent OCLC Research Library Partnership (RLP) Special Collections Leadership Roundtable discussions focused on this vital topic of advocacy and resourcing. The rich discussion included how participants identify and make the case for their needs and address the challenges they face, with the hope that sharing obstacles and strategies can help everyone, including those reading this post!   

Across four discussion sessions, we had people join us from 28 RLP institutions.

Boston College, Boston University, Clemson University, Colorado State University, Emory University, Getty Research Institute, The Huntington, Library of Congress, Monash University, Montana State University, New-York Historical Society, New York Public Library, Northeastern University, Penn State, Rockefeller Archive Center, Stony Brook University, Syracuse University, University of California, Irvine, University of Kansas, University of Michigan, University of Nevada, Reno, University of Pittsburgh, University of Sydney, University of Texas at Austin, University of Toronto, University of Washington, Vanderbilt University, and Virginia Tech.

Participants commented on a set of framing questions:

  • What are your top priorities for additional resources, whether they be staff, collections, technology, or something else?    
  • Where do you expect decreasing resources and increasing resources in the next five years?   
  • Can you share an advocacy success story? An advocacy challenge?  

After sharing responses to the questions, we opened the floor for discussion; high-level takeaways are summarized here.  

Priorities

Staff are needed across all parts of operations 

Nearly every institution in our conversations identified staffing as the overwhelming resource priority, with a range of needs within this category. Many are advocating for building their staff to better respond to contemporary program needs, including expanding teaching programs, shifting to greater remote access since the pandemic, supporting born-digital archival collecting, and increasingly emphasizing community connections and relationship stewardship in collection development. While in many cases there is a need for new positions or filling open lines, there is also significant need to build new skills in existing staff. One participant described being in the “long moment between physical and digital [collecting],” with a need for staff that can support both simultaneously.  

A staffing priority among many institutions is advocating for permanent staffing lines. Participants described some success in funding temporary positions, especially to deal with cataloging and processing backlogs. But it is significantly more challenging to secure ongoing funding to support permanent positions. With increasing attention to responsible and resource-sensitive stewardship and a continued desire to not just address backlogs but prevent their accrual, permanent positions dedicated to technical service needs were identified as necessary for building sustainable programs. An additional challenge for partners in academic institutions where librarians and archivists are faculty is that funds for new faculty lines are controlled by the provost, a role outside of the library.  

More space for collections and teaching 

Space was also a top priority for participants. For many, additional collection storage space was needed as institutions near capacity for their physical collections. Digital storage was also a priority, and in many cases was cited as challenging to advocate for because it is relatively invisible.

Space for teaching was identified as a priority nearly as often as collection storage space. As more programs are increasing their teaching engagement, special collections classroom space is at a premium. More than one institution in the conversation described having to turn away faculty requests for instruction because of space issues, not due to a lack of staff or collections resources. 

Support for changing access and engagement needs 

Shifts in the way that students and researchers engage with collections fueled priorities identified by some participants. During the pandemic closures, many archives and special collections responded to user needs with new or scaled-up services that provided online access to collections, such as virtual reference services, digitization on demand, and teaching for remote or online classes. Despite the end of pandemic restrictions, user needs and expectations for access to collections seem to have shifted: users now seek to maintain what were meant to be temporary accommodations. Discussants described trying to operationalize some of those changes, though significant uncertainty remains.

Similarly, participants described a need to advocate for truly integrated teaching and learning, continuing a shift toward curriculum-aligned teaching in special collections and away from “white glove show-and-tells.” This kind of teaching and student engagement, along with alignment with institutional mission and goals, provides good fodder for advocacy stories. However, it is also staff time intensive, which in turn creates greater need for advocacy.  

Twin challenges: organizational turnover, funding misalignment  

Participants described seeing significant churn in leadership roles in the library and their larger organizations. Such changes can pose a major advocacy challenge. New leadership in the institution often means starting from scratch with articulating the value of special collections, as well as learning what a new administrator cares about to best communicate that value in terms that are meaningful to them. One participant described advocacy as a long road that requires sustaining relationships over time, and consistency of messaging to leadership about what needs are a priority. Leadership changes mean starting that relationship building all over again. 

Several people identified a mismatch between their endowment funds and their current programmatic needs. Traditionally, endowed funds in special collections are largely designated for purchasing new collections, not for stewarding them via cataloging, processing, conservation treatment, or digitization. A few archives have had success in going back to endowment donors or their descendants and reshaping allowed uses to better align with modern needs. A similar concern exists for expendable gifts from donors—that people want to fund things they view as exciting or novel, which isn’t always what a repository needs. As one participant put it, donors want to fund projects or purchases that are “sexy, and the backlog isn’t sexy.” 

Success stories and advice

Storytelling versus metrics 

Discussion turned to how people are quantifying need and outcomes, and what kinds of metrics are kept, reported, and useful in their advocacy efforts. The consistent theme across our roundtable sessions wasn’t the collection of a particular statistic, but the ways people were working to make those statistics meaningful to different audiences. Participants are thinking about which metrics matter to a specific person or role, or what best illustrates their point. For instance, one person had success in getting her University Librarian to engage with addressing their backlog after she started describing it in FTE hours (actually years) rather than linear feet, and sharing year-over-year collection growth numbers rather than just reporting for the past year. Two participants in New York City shared that they talk about the backlog in terms of how many Empire State Buildings it would span if you placed the boxes end to end, a vivid image that can transform an abstract number into a compelling story. Others shared stories of enhancing metrics with rich detail, for example, augmenting teaching statistics with a story about the deep engagement or knowledge production that teaching facilitates.
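
To see why the FTE framing lands, consider an illustrative back-of-the-envelope calculation (all numbers invented for the example): a 5,000-linear-foot backlog processed at 4 hours per foot represents 20,000 hours of work, or nearly 10 FTE-years at roughly 2,080 working hours per year. “Ten years of one person’s full-time labor” tells a story that “5,000 feet” does not.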

Development colleagues, from awkward to aligned 

Throughout our discussions, people asked for advice from one another, and much of that conversation centered on working with colleagues in development or advancement offices. The relationship between special collections and fundraising colleagues is an important one. When it functions well, both parties can help each other reach their goals. When it doesn’t, it can be counterproductive—allocating resources to low priority projects, committing the institution to collections that aren’t in line with mission and collecting scope, or just failing to raise funds needed for important work. Many participants asked variations on a question that basically boiled down to: how do you work with development to raise money for things that you actually need, and avoid getting saddled with projects you don’t want? The advice people offered was practical and actionable, and underscored the positive role of clear and open communication.  

Several participants created project briefs for colleagues in development: 1–2-page descriptions of funding needs that highlight why that work is important or exciting, or what it would enable. Creating briefs for projects at different price points that align with typical gift amounts can help development colleagues match donors with projects. Supporting fundraising colleagues in this way can help establish a reputation of being easy to work with, so that special collections is sought as a partner in future opportunities.

Helping colleagues outside the archive understand the heavy lift of caring for archival collections was seen as an especially important communication goal. Development colleagues may place pressure on archives to take in collections as part of a major gift agreement, which can end up being burdensome to special collections. One participant aptly described them as “gifts that eat.” Educating colleagues about processing and stewardship work can inform the somewhat awkward conversations with these colleagues, and make sure those kinds of collections, if they must come in, at least come with money for their care and feeding.

Another important communication tool is the call or contact report, shared out whenever you speak with a potential or major donor. These reports keep everyone in the loop about what was discussed, support consistency of messaging, keep people from stepping on each other’s toes, and are useful when handing off relationship stewardship to someone new.

Participants also talked about making sure development colleagues know about events in special collections so that they can leverage them in their work. Special collections may still need to support one-off visits from donors, but piggybacking on existing events can save time and show the archive off in exactly the way you want to emphasize.

Next Steps

This set of roundtable discussions was rich and fruitful, and it was quite rewarding to see RLP partners share advice and strategies with each other for more successful advocacy. If you are interested in further advocacy advice from the field, I encourage you to join us on October 29 for an RLP Works in Progress webinar with Beth Myers from Smith College. She will be talking about how they used OCLC Total Cost of Stewardship tools to shift the way they worked with donors to build a more equitable, flexible funding model that centers the real cost of archival and rare book management.

Our next round of gatherings of the Special Collections Leadership Roundtables will be in October; these will dig more deeply into a topic surfaced in these discussions—how institutions are responding to the evolving public services landscape in archives and special collections. If you are a member of an RLP institution and want to make sure you are represented in the roundtables, please reach out to me so we can make sure you are included!  

The post Advocacy and resourcing in special collections: Priorities, challenges, and advice from an OCLC RLP leadership roundtable appeared first on Hanging Together.

Dewayne Hendricks RIP / David Rosenthal

Dewayne Hendricks, my friend of nearly four decades, passed away last Friday at age 74. His mentors were Buckminster Fuller and Paul Baran. He was a pioneer of wireless Internet connectivity, a serial entrepreneur, curator of an influential e-mail list, and, for the last 30 years, a member of the organizing committee of the Asilomar Microcomputer Workshop.

For someone of his remarkable achievements he has left very little impression on the Web. An example is his LinkedIn profile. Below the fold I collect the pieces of his story that I know or have been able to find from his other friends. If I can find more I will update this post. Please feel free to add information in the comments.

Wayne State University

Dewayne was a student at Wayne State, where he got into systems programming for the IBM 370. They ran the Michigan Terminal System on a 512K 370/155. He tried to run the University of Newcastle's CMTS, an experimental version of MTS that didn't use dynamic address translation, and ran into performance problems. He worked with Larry Chace at the University of Illinois to get it running on their 1+2M 360/75. After Chace rewrote it to swap pages to their 2301 drum storage it ran well.

Southern Illinois University

While at Southern Illinois University in the '70s he worked for Buckminster Fuller and continued his involvement in IBM systems programming, now for VM/370. Melinda Varian, historian of IBM's groundbreaking SHARE user group in that era, wrote in VM and the VM Community: Past, Present, and Future:
Dewayne Hendricks reported at SHARE XLII, in March, 1974, that he had successfully implemented MVT-CP handshaking for page faulting, so that when MVT running under VM took a page fault, CP would allow MVT to dispatch another task while CP brought in the page. At the following SHARE, Dewayne did a presentation on further modifications, including support for SIOF and a memory-mapped job queue. With these changes, his system would allow multi-tasking guests actually to multi-task when running in a virtual machine. Significantly, his modifications were available on the Waterloo Tape.

Dewayne became the chairman of the Operating Systems Committee of the SHARE VM Project. Under his guidance, the Committee prepared several detailed requirements for improvements to allow guest systems to perform better. At SHARE XLV, in 1975, the Committee presented IBM with a White Paper entitled Operating Systems Under VM/370, which discussed the performance problems of guests under VM and the solutions that customers had found for these problems. Many of the solutions that Dewayne and others had found, such as PAGEX, made their way into VM fairly quickly, apparently as the result of customers’ persistence in documenting them. By SHARE 49, Dewayne was able to state that, “It is now generally understood that either MFT or MVT can run under VM/370 with relative batch throughput greater than 1.” That is to say, they had both been made to run significantly faster under VM than on the bare hardware. Dewayne and others did similar work to improve the performance of DOS under VM.

Amateur Radio

Dewayne was a major figure in developing and sustaining the use of amateur packet radio:
He has been involved with radio since receiving his amateur radio operator's license as a teen. He currently holds official positions in several national non-profit amateur radio organizations and is a director of the Wireless Communications Alliance, an industry group representing manufacturers in the unlicensed radio industry.
In particular:
Back in 1986, he ported the popular KA9Q Internet Protocol package to the Macintosh, allowing the Macintosh platform to be used in packet radio networks. Today, thousands of amateur radio operators worldwide use the NET/Mac system he developed to participate in the global packet radio Internet. This system continues to be developed and deployed by the amateur radio service.
Dewayne was a member of the Amateur Radio Digital Communications Grants Evaluation Team from 2021 until his death. ARDC grants around $5M/year:
ARDC makes grants to projects and organizations that are experimenting with new ways to advance both amateur radio and digital communication science. Experimentation by amateur radio operators has benefited society in many ways, including the development of the mobile phone and wireless internet technology. ARDC envisions a world where all such technology is available through open source hardware and software, and where anyone has the ability to innovate upon it. To see examples of the types of grants we make, go to https://www.ardc.net/grants/.

Tetherless Access

One of the many ahead-of-their-time companies Dewayne started was Tetherless Access, which he co-founded with Charlie Brown in 1990 to develop wireless Metropolitan Access Networks. It went public on NASDAQ in 1996 and folded two years later. The idea was to use the 900MHz unlicensed spectrum to distribute Internet connectivity from a base station via point-to-point links, in contrast to Metricom's Ricochet service, started by Dewayne's mentor Paul Baran, which four years later used mesh network technology in the same spectrum.

Tetherless Access launched a testbed network which:
  • Started in Fall ‘96
  • Covered a 35 mi area in the South Bay
  • Delivered bandwidth ranging from ISDN speeds to 30 Mbps
  • Used both licensed and unlicensed equipment (Part 15 and 97)

NSF Projects

Dewayne was involved in a number of NSF-funded experiments in using wireless to connect remote communities:
Prior to forming Dandin Group, he was the General Manager of the Wireless Business Unit for Com21, Inc. He joined Com21 following an opportunity to participate as the Co-Principal Investigator in the National Science Foundation’s Wireless Field Tests for Education project. The project successfully connected remote educational institutions to the Internet. The test sites ranged from rural primary schools in Colorado, USA to a University in Ulaan Bataar, Mongolia.
Com21 was founded by Dewayne's mentor Paul Baran.

Ulan Bator rooftop (courtesy Glenn Tenney)
The PI for connecting Mongolia to the Internet for the first time in 1996 was the "Cursor Cowboy", Colonel Dave Hughes, an equally remarkable character who was a pioneer of bulletin boards starting in 1981. The NSF funded a 256Kbit/s satellite dish, the State Department shipped it to the US embassy in Ulan Bator via the "diplomatic pouch", and Dewayne and Glenn Tenney travelled via Beijing to deploy 900MHz links across the city.

The Dandin Group, Dewayne's next company, was:
a partner in the Advanced Networking Project with Minority Serving Institutions (AN-MSI) an EDUCAUSE project funded by the National Science Foundation. The project's purpose is to provide improved communication services, including Internet access, to underserved minority and tribal-nation institutions. Because these institutions are frequently in remote locations which currently lack communication infrastructures, Internet-linked services delivered by wireless networks offer the most appropriate and cost-effective approach to connecting their communities to the world and to each other.
The project description is here. NSF Awards $6 Million to Help Minority Schools Prepare for Advanced Computer Networks is EDUCAUSE's press release:
National Science Foundation (NSF) Director Rita Colwell announced last week at EDUCAUSE '99 that the foundation has awarded almost $6 million over four years to help institutions of higher learning that traditionally serve minority communities prepare for the next generation of information technology and computer networks. The grant will be administered by EDUCAUSE.

Developing Countries

Dewayne was not just active in getting Internet service to under-served communities with the NSF. The bio on his website states:
Tetherless Access was one of the first companies to develop and deploy Part 15 unlicensed wireless metropolitan area data networks using the TCP/IP protocols. He has participated in the installation of these networks in other parts of the world including: Kenya, Tonga, Mexico, Canada and Mongolia.
Amara Angelica reported in Tonga first to go wireless for telecommunications:
"We’re replacing the entire existing telecom infrastructure with a wireless IP [Internet protocol] network," says Dewayne Hendricks, CEO of Fremont-based Dandin Group and former general manager of Com21’s wireless business unit. "Since the country is a monarchy, there was only one guy to convince, Crown Prince Tupouto’a, and then we just went for it."

Hendricks’ firm plans to replace Tonga Telecom’s aging landline system—which still uses mechanical relays—with a broadband wireless network for data, video and telephony (using voice over IP). It will run at 30Mbps with user access at 2Mbps and 10Mbps by the end of next year. "We can get all the spectrum we want," Hendricks says.

The prince’s objective, Hendricks says, is to convert the country’s largely agricultural workforce, which has an astonishing 95 percent literacy rate, into knowledge workers, such as programmers. The government launched the Royal School of Science for Distance Learning last year, using Internet connections to allow students to take courses at international universities. There are just fewer than 100,000 people in Tonga scattered across 170 islands.

"We’re going to an Internet-style mesh network," says Hendricks. MMDS, which some carriers are using to deliver broadband services, won’t scale well for an IP network, he says. Hendricks, a technical advisor to the FCC on ultrawideband (UWB) technology, is considering UWB for the network.
Tonga had about 11,000 households and 6,500 phone customers, with an 8-year wait to get a phone. The goals of the project were to deliver 30Mbit/s IP to each home for a customer end budget of $450.

FCC Technological Advisory Council

Dewayne was one of the inaugural members of the Federal Communications Commission's Technological Advisory Council, launched on April 30th 1999, together with luminaries such as Vint Cerf, AT&T CTO David Nagel, CERFnet founder Susan Estrada and many others. He remained a member through the fourth TAC formed in 2005.

Wired Article

In the January 2002 edition of Wired, Brent Hurtig's Digital Cowboy focused on Dewayne's work on the reservation:
At Turtle Mountain Chippewa Reservation in North Dakota, he's installing a wireless network. In its initial form, the system will meet FCC requirements governing frequency, power, and transmission technology. But not for long. Hendricks' mission is to build the best system possible - even if it's illegal - and he intends to use every tool at his disposal. Should the FCC crack down, the tribal leaders will hoist the flag of Native American sovereignty, asserting that they can do whatever they want with the sky above their reservation.
Dewayne's work on the reservation, in Tonga and elsewhere was an attempt to demonstrate the problems with the obsolete US spectrum allocation policy:
There's no sensible reason why Americans shouldn't have inexpensive, ubiquitous, high-performance broadband access, Hendricks says. Using technologies that are already available or in fast-track development, everyone could enjoy reliable, fully symmetrical wireless at T1 speed or better. No more digital divide. No more last-mile problem. No more compromises. The only things standing in the way are the FCC, Congress, and "other people who just don't get it."

EE380 Talks

Dewayne gave three talks to Stanford's EE380 symposium. The first was apparently given "in the '90s on wireless MANs", of which I have so far found no record.

The second was on 3rd May 2000 entitled Wiring Tonga: From the Ground Up and the Sky Down. The abstract was:
One of the biggest barriers today standing in the way of deployment of advanced wireless communications systems turns out not to be the technology, but restrictions related to regulatory policies. This presentation will discuss the nature of these barriers and how they have affected the development of wireless data systems over the years.

The speaker will also discuss on-going work in which he is involved to use advanced wireless technology to deploy multiservice IP systems as part of infrastructure-development projects in the Kingdom of Tonga and with Native American groups in the US, and how such projects are able to deal with the limitations imposed by conventional regulatory barriers.
The slides are here.

The third was on 5th March 2014 and entitled Inventing a New Internet: Learning from Icarus. The abstract read:
From a future historical perspective, are we descendants of Icarus? Is our Internet like Icarus' wings? Are our protocols, ciphers and codes, brilliant capabilities built on immature engineering, which like Icarus' wax and feathers, are capable of taking us to great heights, but systematically flawed? For a brief historical moment, humanity has flown high like Icarus, on a vulnerable first generation Internet platform, which has been used for securing and using distributed ideas, arts, media, science, commerce, and machines. Promising brilliant futures with the arrival of networked things, autonomous personalized services and immersive media. But now our first generation Internet, built on a fragile global network of vulnerable codes and protocols, is falling apart, like Icarus' wings, through a triple shock from:
  • Massive dotcom data stalker economy built on mining of terabytes of personal data.
  • Ubiquitous criminal penetration of financial and identity networks, on our devices, in the cloud.
  • Pervasive state intruders at all levels and every encrypted hardware and software node.
Humans eventually conquered the barriers to flight and learned to build durable and resilient aircraft. Similarly, humans must learn to build a more reliable, private and secure Internet for communications, innovation and commerce. We will share our thoughts on how we might go about the design of a more durable and resilient Internet:
  • How prepared is the Internet for future human benefit?
  • What are the attributes of a future more durable internet?
  • What are the existing assets that could be harnessed?
  • What needs to be developed?
Dewayne's slides for this talk are here. Video of the talk is on YouTube.

dewayne-net

For many years Dewayne curated, with impeccable taste, dewayne-net, an e-mail list to which he sent links, most of which he found himself and some contributed by his friends. A typical e-mail would have the title of the linked post, a link, and enough of the content to encourage recipients to read the whole thing. The last e-mails were two sent on 19th August; as it happens, both were links that I had sent him earlier. I have been one of the more frequent contributors, although only perhaps 20% of my contributions passed the curatorial filter. Prof. Dave Farber's IP list is a similar and, I believe, even longer-standing list; he and Dewayne exchanged links fairly often.

As an example of the list in full flow, let's take April 2022. That month he sent 66 e-mails, many about the COVID pandemic and the war in Ukraine, obviously both top of mind at the time. But they also covered satellite tracking of commercial aircraft, the Kessler syndrome, the problems of the US patent system, cybercrime, microplastics, banned math textbooks in Florida, and Elon Musk's purchase of Twitter. I am already greatly missing this window into Dewayne's eclectic set of interests.

Dewayne on YouTube

Adapting Machine Translation Engines to the Needs of Cultural Heritage Metadata / Information Technology and Libraries

The Europeana digital library features cultural heritage collections from over 3,000 European institutions described in 37 languages. However, most textual metadata describe the records in a single language, the data providers’ language. Improving Europeana’s multilingual accessibility presents challenges due to the unique characteristics of cultural heritage metadata, often expressed in short phrases and using in-domain terminology. This work presents the EuropeanaTranslate project’s approach and results, aimed at translating Europeana metadata records from 23 EU languages into English. Machine Translation engines were trained on a cleaned selection of bilingual and synthetic data from Europeana, including multilingual vocabularies and relevant cultural heritage repositories. Automatic translations were evaluated through standard metrics and human assessments by linguists and domain cultural heritage experts. The results showed significant improvements when compared to the generic engines used before the in-domain training as well as the eTranslation service for most languages. The EuropeanaTranslate engines have translated over 29 million metadata records on Europeana.eu. Additionally, the MT engines and training datasets are publicly available via the European Language Grid Catalogue and the ELRC-SHARE repository.

Exploring the Impact of Generative Artificial Intelligence on Higher Education Students’ Utilization of Library Resources / Information Technology and Libraries

In the field of higher education, generative artificial intelligence (GenAI) has become a revolutionary influence, shaping how students access and use library resources. This study explores the intricate balance of both positive and negative effects that GenAI might have on the academic library experience for higher education (HE) students. The key aspects of enhanced discovery and retrieval, personalization and engagement, streamlined research processes, and digital literacy and information evaluation potentially offered through using generative AI will be considered. These prospective advantages to HE students will be examined through the theoretical framework of the Technology Acceptance Model (TAM) introduced by Davis et al. in 1986, which suggests that perceived usefulness and perceived ease of use are key factors in determining user acceptance and utilization of technology. The adoption of GenAI by higher education students will be analyzed from this viewpoint before assessing its impact on their use of library resources.

Responsible AI Practice in Libraries and Archives / Information Technology and Libraries

Artificial intelligence (AI) has the potential to positively impact library and archives collections and services—enhancing reference, instruction, metadata creation, recommendations, and more. However, AI also has ethical implications. This paper presents an extensive literature review and analysis that examines AI projects implemented in library and archives settings, asking the following research questions: RQ1: How is artificial intelligence being used in libraries and archives practice? RQ2: What ethical concerns are being identified and addressed during AI implementation in libraries and archives? The results of this literature review show that AI implementation is growing in libraries and archives and that practitioners are using AI for increasingly varied purposes. We found that AI implementation was most common in large, academic libraries. Materials used in AI projects usually involved digitized and born-digital text and images, though they also included web archives, electronic theses and dissertations (ETDs), and maps. AI was most often used for metadata extraction and reference and research services. Just over half of the papers included in the literature review mentioned ethics or values related issues in their discussions of AI implementation in libraries and archives, and only one-third of all resources discussed ethical issues beyond technical issues of accuracy and human-in-the-loop. Case studies relating to AI in libraries and archives are on the rise, and we expect subsequent discussions of relevant ethics and values to follow suit, particularly growing in the areas of cost considerations, transparency, reliability, policy and guidelines, bias, social justice, user communities, privacy, consent, accessibility, and access. As AI comes into more common usage, it will benefit the library and archives professions to not only consider ethics when implementing local projects, but to publicly discuss these ethical considerations in shared documentation and publications.

It Takes a Village / Information Technology and Libraries

The introduction of Large Language Models (LLM) to the chatbot landscape has opened intriguing possibilities for academic libraries to offer more responsive and institutionally contextualized support to users, especially outside of regular service hours. While a few academic libraries currently employ AI-based chatbots on their websites, this service has not yet become the norm and there are no best practices in place for how academic libraries should launch, train, and assess the usefulness of a chatbot. In summer 2023, staff from the University of Delaware’s Morris Library information technology (IT) and reference departments came together in a unique partnership to pilot a low-cost AI-powered chatbot called UDStax. The goals of the pilot were to learn more about the campus community’s interest in engaging with this tool and to better understand the labor required on the staff side to maintain the bot. After researching six different options, the team selected Chatbase, a subscription-model product based on ChatGPT 3.5 that provides user-friendly training methods for an AI model using website URLs and uploaded source material. Chatbase removed the need to utilize the OpenAI API directly to code processes for submitting information to the AI engine to train the model, cutting down the amount of work for library information technology and making it possible to leverage the expertise of reference librarians and other public-facing staff, including student workers, to distribute the work of developing, refining, and reviewing training materials. This article will discuss the development of prompts, leveraging of existing data sources for training materials, and workflows involved in the pilot. It will argue that, when implementing AI-based tools in the academic library, involving staff from across the organization is essential to ensure buy-in and success. Although chatbots are designed to hide the effort of the people behind them, that labor is substantial and needs to be recognized.

The Jack in the Black Box / Information Technology and Libraries

This essay reviews the design and deployment of a critical generative AI and information literacy assignment along with its inspirations for instructional librarians in American colleges today.

Activating Our Intelligence / Information Technology and Libraries

How can we sift through the AI challenges and create a balanced approach drawing on the library’s strengths?  This column presents a reflection on how we can inspire and foster our intelligence and potential to discern useful information about AI and our use of it.  

"Gimme Some Truth" / Information Technology and Libraries

For the past 70 years, researchers and experimental musicians have been working with computer-synthesized music, forming a collaborative relationship with generative artificial intelligences known as human–AI co-creation. The last several years have shown that musical artists are quickly adopting AI tools to produce music for AI music competitions and for commercial production of songs and albums. The United States Copyright Office, in response to this trend, has released its latest policy revisions to clearly define what is eligible for copyright registration. Soon after, the Program for Cooperative Cataloging (PCC) also released new guidelines, providing recommendations for how library catalogers should address AI-generated materials. In both cases, they reject the notion of considering AI as a contributor. The language in each of these policies, however, is self-contradicting, showing that they are ill equipped to address generative AI. This study leverages critical textual analysis and qualitative content analysis and uses case examples to probe the manner in which these policies regard generative AI. Recommendations are made for addressing shortcomings in the PCC’s policies, and moral philosophical frameworks such as virtue ethics and consequentialism support arguments for supplementing catalog item records with information from authoritative external sources, deviating from this policy for the sake of truth-seeking.

Combining a custom Wordpress RSS feed with a webhook to make a Discord bot / Hugh Rundle

Apart from the infamous one about 3D printers (I was right), and the one that featured on the orange site, my most popular post is a fairly technical one about GitHub webhooks. It seems to be especially popular with Russians coming from Yandex, so before I start this post that is also about webhooks, here's a message for my Russian friends:

Пожалуйста, постарайтесь остановить бесчеловечную войну вашей страны против народа Украины. (Please try to stop your country's inhuman war against the people of Ukraine.)

With that out of the way, this is a post about some things I learned recently about WordPress RSS feeds, and Discord webhooks. If you came here for the library content, this one might not be for you.

The goal

newCardigan wanted to automatically post to a particular channel when we release a new podcast episode, and to a different forum channel when we publish a new event. I'm going to go into more detail on the latter in a future post, because it has some extra complications, but in this post we'll talk about:

  1. How to customise a WordPress RSS feed
  2. How to create a webhook-powered Discord bot

Creating a custom WordPress RSS feed

The feed

I could have used the default RSS feed, but by default WordPress doesn't include featured images in the RSS feed. Or at least, it doesn't include them in a separate value anywhere. We also (deliberately) don't include featured images in the actual post - they only appear in social cards and in listings of posts. For the Discord bot I wanted to include the featured image, and this meant I needed to be able to grab it in a reliable, programmatic way. Adding the image to the RSS enclosures value seemed like the best bet.

Customising a WordPress RSS (or Atom) feed is reasonably straightforward, but it's a bit hard to find any guidance on how to do it amongst all the noise of WordPress plugins - a problem I've experienced a lot when it comes to WordPress. There are a few different ways to customise feeds but I think the safest and most sensible way to do it is via a theme. In this case, you need to do two things:

  • create a new file to control the feed
  • update your functions.php file

First, you need to look in the /wp-includes directory of your existing WordPress install. This is where all the default feed templates will be. I wanted a custom RSS 2.0 feed, so I copied /wp-includes/feed-rss2.php into a new file I put inside my theme files at feeds/feed-featured-image-rss.php.

Now you can edit the new file that defines your RSS feed. The changes I wanted to make were pretty simple - I just wanted to add the featured image file as an enclosure. There is already a line in the default feed that processes enclosures for a post:

<?php rss_enclosure(); ?>

However - this only picks up video or audio files included in the post. So we need to add the image manually. Just before this line, I added a few extra lines of code:

<?php
// Get the featured image URL, its path on disk, and its size in
// bytes - a valid <enclosure> element requires a length, so guard
// against a missing file. Note the MIME type below is hardcoded,
// which assumes the featured images are PNGs.
$thumbnail = get_the_post_thumbnail_url(get_the_ID(), 'full');
$filepath = get_attached_file(get_post_thumbnail_id(get_the_ID()));
$filesize = $filepath ? wp_filesize($filepath) : 0;
?>
<?php if ($filesize > 0) : ?>
<enclosure url="<?php echo $thumbnail; ?>" length="<?php echo $filesize; ?>" type="image/png" />
<?php endif; ?>

To break that down:

  1. get the "thumbnail" file aka featured image
  2. get the filepath of the thumbnail - we need this to ascertain the file size in bytes
  3. get the filesize in bytes - we need this to create a valid enclosure value in the RSS feed
  4. if the file actually exists, add the enclosure line - including the URL of the featured image file, the length in bytes, and a MIME type

Because - like category - we can add multiple enclosures to a feed by just listing them one after the other, this enclosure doesn't interfere with the audio file for our podcast post.
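
To make that concrete, here's roughly what an item in the customised feed ends up looking like, with the image enclosure (added by our template) sitting alongside the audio enclosure (added by Blubrry). The URLs and byte lengths are made up for illustration:

<item>
  <title>cardiCast example episode</title>
  <link>https://newcardigan.org/example-episode/</link>
  <enclosure url="https://newcardigan.org/wp-content/uploads/featured.png" length="123456" type="image/png" />
  <enclosure url="https://newcardigan.org/wp-content/uploads/episode.mp3" length="23456789" type="audio/mpeg" />
</item>

The ordering matters later: when we read this feed with Python, the image will be enclosures[0] and the audio enclosures[1].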

Loading the feed

Now we have a custom feed file, but we need to tell WordPress that it exists and how to find it. This happens in our theme's functions.php file:

// custom rss
function create_customfeed() {
    load_template( TEMPLATEPATH . '/feeds/feed-featured-image-rss.php');
}
add_action('do_feed_featured_image_feed', 'create_customfeed', 10, 1);

do_feed_ is a special WordPress action hook prefix, so what this function does is create a custom feed called featured_image_feed, using the template at /feeds/feed-featured-image-rss.php. You can load the feed at example.com/feed?featured_image_feed

A cool thing about this is that the custom feed works everywhere any feed works. Which means my new custom feed works in all of these locations:

https://newcardigan.org/feed?featured_image_feed # all posts
https://newcardigan.org/category/cardicast/feed?featured_image_feed # only cardiCast posts
https://newcardigan.org/category/cardiparties/feed?featured_image_feed # only cardiParty posts

Creating a Discord Webhook

Whilst there are a lot of legitimate complaints about the use of Discord in open source projects, their own documentation is really excellent.

You can create "bots" in different ways in Discord, but what I was looking for was a simple concept I've used before - a webhook. Webhooks are essentially the inverse of how an API call usually works: instead of making a request to an API endpoint to GET information when you want it, a webhook POSTs information to an endpoint when new information is available.

In this case, we want to post a message to a particular Discord channel whenever there is a new cardiCast episode. I found an incredibly helpful guide to Discord webhooks from birdie0. This includes a section on embeds, including how to add a custom colour, images and more.

To do this, I set up a fairly simple Python script. First, we check the RSS feed using feedparser:

import feedparser

f = feedparser.parse("https://newcardigan.org/category/cardicast/?feed=featured_image_feed")
p = f.entries[0]  # the most recent post in the feed

Then we check if we've already seen this post. If it's new, we make some content, and an embed. The content is any text you like, using Discord Markdown. An embed appears like quoted text or one of those URL embed images you see around the web.
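
A minimal version of that "have we already seen this?" check, assuming we persist the last-seen entry id to a local file (the filename is my invention), could look like this:

from pathlib import Path

SEEN_FILE = Path("last_seen.txt")  # hypothetical state file

# feedparser exposes an entry's guid as .id; fall back to its link
entry_id = getattr(p, "id", p.link)
last_seen = SEEN_FILE.read_text().strip() if SEEN_FILE.exists() else None

if entry_id != last_seen:
    # New episode: build the content and embed, POST to the webhook,
    # then record this entry so we don't announce it twice
    SEEN_FILE.write_text(entry_id)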

Content

For our content we want to use a custom emoji, a title, link to the episode page, description, and a user category mention ("ping"). We assigned the first RSS feed entry to p, so title, link, and description are pretty easy, as they come from our RSS feed:

f"**{p.title}**]({p.link})\n\n{p.description}"

So far so good. For emoji and group mentions, however, we can't just enter them as text in the same way we would if we were in the Discord app. If I were using Discord directly, I could type :newCardigan: and the custom newCardigan emoji (our logo) would appear in my message. But if I do that in webhook content it will render as plain text - the code rather than the emoji. The same thing will happen with @mentions.

To get it to display the emoji, or to activate the mention, we need to find the unique code. We do this by finding somewhere discreet in the Discord app and prepending a backslash (\) to the mention or emoji:

\:newCardigan:
\@cardiParty ping

Instead of rendering the emoji or ping, Discord will present you with the ID code you need:

<:newCardigan:1280097925149626419>

I don't really want the world to know what the ID code is for mentioning people who have signed up to be alerted to new cardiCast episodes, so I hid that code behind an environment variable I call cardicast_ping. So now our content looks like this:

f"[<:newCardigan:1280097925149626419> **{p.title}**]({p.link})\n\n{p.description}\n\n<@&{cardicast_ping}>\n\n"

Embeds

In addition to content, we can have multiple embeds in a webhook. I only wanted one, so it looks like this:

[
    {
        "title": "Listen to this episode right now",
        "url": p.enclosures[1].href,
        "color": 16741516,
        "image": {
            "url": p.enclosures[0].href
            }
    }
]

Like emoji and mentions, colours get a code in Discord. In this case it's not a special Discord code, but rather a decimal value - the hex RGB colour converted to base 10. Most other web apps use hex, RGB or HSL codes, but our friend birdie0 comes to the rescue, telling us to check spycolor to find the decimal value of whatever colour we want to use.

You'll notice we're referencing two enclosures here. The first value (0) is the new enclosure we added in the custom RSS feed, representing the featured image file. The second value (1) is added to our podcast post using the Blubrry extension, and is the link to the actual podcast audio file for this episode.
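
Assembled into a single webhook call, reusing the pieces above (a sketch, with the same placeholder webhook URL), the whole thing looks roughly like this:

payload = {
    "content": (
        f"[<:newCardigan:1280097925149626419> **{p.title}**]({p.link})"
        f"\n\n{p.description}\n\n<@&{cardicast_ping}>\n\n"
    ),
    "embeds": [
        {
            "title": "Listen to this episode right now",
            "url": p.enclosures[1].href,  # the podcast audio file (from Blubrry)
            "color": 16741516,
            "image": {"url": p.enclosures[0].href},  # our featured image enclosure
        }
    ],
}

response = requests.post(WEBHOOK_URL, json=payload)
response.raise_for_status()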

When we put it all together, it looks like this:

Screenshot of the webhook as displayed in Discord

You can check out the full code on my Forgejo repository.


On metrics and power structures / Jez Cope

Recently I was at the 2024 Make Data Count Summit, at Wellcome Trust HQ in London. It's a follow-on from a project of the same name which has now become a more sustained long-term effort with a growing community, focused on metrics to measure the impact of data sharing and reuse.

There was a lot for me to absorb, process and reflect on, but one thing I noticed was that some examples gave me a really icky feeling while others had me mentally cheering them on, and I've been reflecting a little on why that is. My first analysis is that it boils down to this: is this metric being used to exert control over others, or otherwise perpetuate unjust power structures?

So for example, a few of the panellists were senior academic leaders with hiring & promotion responsibility who very matter-of-factly said that, while they understood the higher goals, the fact was that they had a limited pot of money and could only fund either A or B but not both, so using "objective" metrics is both more fair and more efficient. That one had the ick factor.

On the other hand, one of the panellists (representing a research funder) was very clear that they were not using any measures of data sharing & reuse to evaluate grantholders or allocate future funding, but only to evaluate whether their own policies and initiatives to encourage good data practice were working as intended. No ick for me there.

What's the difference? In the first case, we see an example of a group with significant power making use of metrics to exercise power over others. A generous interpretation is that they are themselves subject to the power of those who assign their departmental budgets, and using an "objective", "data-driven" process at least allows them to be as fair as possible. A less generous interpretation is that use of metrics saves them time at the expense of those further down the power gradient, while soothing their own conscience and forestalling any argument with a veneer of objectivity lent by the use of data.

I don't believe anyone at the event is deserving of that second, rather cynical (who, me?) interpretation. My point is that the power differential exists and, intentional or not, this way of using metrics both obscures and perpetuates that, which is undemocratic.

It also falls into the system trap of Success to the Successful, since it awards money and power on the basis of previous success. The more funding an academic receives and the higher their rank within their institution, the easier it is for them to ensure the next research activity meets whatever criteria are set, whether by having more capacity (their own, or that of research assistants and students) to do the necessary work or simply by having influence over the criteria themselves. Reinforcing feedback loops are incredibly powerful, and the only remedies are to break the loop or introduce an opposing balancing loop.

There were some really interesting suggestions that the way to solve this is to accept that people will try to game the system and on that basis introduce systems where gaming them still results in desired behaviour. This is interesting enough to be worth trying, but I'm skeptical of its utility in practice. Trying to predict and fix all the different unintended consequences of such an intervention requires you to out-think human nature itself. On top of that, I'm increasingly uncomfortable with the framing about individuals "gaming the system": since it's the system that permits and rewards undesirable behaviour, it's inevitable that at some point that behaviour will take place.

It seems to me that it's preferable to change the system to be able to absorb a reasonable amount of disruption while still delivering the desired outcomes. Maybe that's just a different framing of the same idea, but it seems important to be looking at the system as a whole along with its goals and those of its constituent parts. Just as important, many of those constituent parts are people towards whom we should be directing some compassion.

I've wandered off-point now (quelle surprise) so let's sum up: I think quantitative metrics are an important source of information. It's important to be aware of the shortcomings of any given metric and to only use it as one signal in a broader analysis of both quantitative and qualitative evidence. Metrics are best used by the people whose policy they are measuring, as an important feedback loop in the development of that policy. I strongly believe they should never be used to control the behaviour of others where the measurer has power over the measured, although measuring up the power gradient can be a way of holding power to account.

Economist Charles Goodhart summed it up nicely with what is commonly referred to as Goodhart's Law, usually stated as

"When a measure becomes a target, it ceases to be a good measure"1


[1] I actually quite like Goodhart's own phrasing, in a nerdy way: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes"

Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 17 September 2024 / HangingTogether

Exhibit on lowrider culture at San José State University

A yellow 1961 Chevrolet Impala SS Convertible with lowered rear wheels, an example of a lowrider car. Theusaba, CC BY 3.0 (https://creativecommons.org/licenses/by/3.0), via Wikimedia Commons.

Estella Inda, research services and social sciences librarian at the San José State University King Library (OCLC Symbol: CSJ), recounts her frustration as a student when she found that the library had scant resources related to an important part of her culture, lowriders. Worse, the resources that were available showed the activity – which involves cruising, displaying, and admiring customized automobiles – in a negative light. The memory of this experience stayed with Inda, who recently curated Forever Cruising, an exhibit at the SJSU King Library dedicated to lowrider culture.

I had the opportunity to see the exhibit at the King Library last month, and immediately connected with the colorful exhibit and my own memories of lowriders in my southern California childhood. Inda’s advice about authentically connecting with communities to build collections and represent missing narratives is important: “. . . first do your research and find your focus. Second, be honest about what you are trying to do and why you believe having a cultural exhibition is important. . . . . as long as you are respectful and take the time to acknowledge the importance of the story that is being told through the exhibition, it will be impactful. It can also establish lasting community relationships that can be built on in the future.” Contributed by Merrilee Proffitt.

AI and opportunities for accessibility

EDUCAUSE, the nonprofit community that brings together education and technology, has provided a valuable service in summarizing some of the promises of artificial intelligence (AI) in “The Impact of AI in Advancing Accessibility for Learners with Disabilities.” Rob Gibson, Dean at the Wichita State University Campus of Applied Sciences and Technology (OCLC Symbol: KSWAT), has written a useful summary of current and future prospects for AI to enhance accessibility and inclusion in educational institutions. Categories of promise include the automated generation of image and audio descriptions, support for cognitive and physical disabilities, inclusive design, coding tools, and a whole variety of applications for translating, captioning, lip reading, and speech recognition.

The pros and cons of artificial intelligence have been debated deeply in recent years. I have been highly skeptical of it on grounds of morality, legality, real threats to privacy and intellectual property, and disregard for human creativity – you get the idea. But one may grudgingly admit that under the right circumstances (and with humane controls), it has the potential to provide genuine assistance to students, educators, library users, and others with disabilities. Rob Gibson’s article offers a brief but informative perspective on some of the developments in AI that can now, or may soon, enable more accessible learning experiences. Contributed by Jay Weitz.

The post Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 17 September 2024 appeared first on Hanging Together.