Planet Code4Lib

Author Interview: Andrew K. Clark / LibraryThing (Thingology)

Andrew K. Clark

LibraryThing is pleased to present our interview with novelist and poet Andrew K. Clark, whose work has been published in The American Journal of Poetry, UCLA’s Out of Anonymity, Appalachian Review, Rappahannock Review, and The Wrath Bearing Tree. Deeply influenced by his upbringing and family history in western North Carolina, Clark received his MFA from Converse College and made his book debut in 2019 with the poetry collection Jesus in the Trailer. His first novel, Where Dark Things Grow, a work of magical realism set in the Southern Appalachian Mountains in the 1930s, is due out this month from Cowboy Jamboree Press, and is available in our current monthly batch of Early Reviewer giveaways. Clark sat down with Abigail to answer some questions about his new book.

Where Dark Things Grow follows the story of a teenage boy with a troubled home life, who finds something magical and uses it to embark on a course of revenge. How did the story idea first come to you? Did it start with the character of Leo, with the theme of revenge, or with something else?

The novel came from a short story I wrote about my grandfather’s childhood growing up in Southern Appalachia and grew from there. I’ve always been drawn to magical realism and supernatural stories, so I was interested in mixing a sort of hardscrabble Appalachian setting with those more fantastical elements. Initially the story started with Leo, but as I got into the difficulties he faced, I realized he, like all of us, has a choice: to respond to adversity with anger or with resilience. His story is finding his way to resilience after a dark turn toward revenge and violence borne out of his family’s struggles, what he sees happening to missing young women, and a lack of empathy from the community.

Tell us more about wulvers. What are they, where do they come from, and what kinds of stories and traditions are associated with them?

One of the decisions I made early on in writing the novel was that I would use folklore elements from my own cultural heritage, as much as possible. So wulvers come from Scottish folklore. I use them quite differently than they appear in the lore, mixing in elements of horror and even the notion of direwolves from the Game of Thrones books. In Scottish tradition, wulvers are benevolent, and there are stories of them doing things like leaving fish on the windowsill of families that were struggling, that sort of thing. So in my novel there is a benevolent wulver, but there is also a dark, sinister one causing mischief. One thing from the folklore that stuck with me is that wulvers can walk on their hind legs, much like a human, so mine do this when they want to seem imposing.

What made you decide to set Where Dark Things Grow during the 1930s, at the height of the Great Depression? Is there something significant about that period, in terms of the story you wanted to tell?

My grandparents grew up during the Great Depression in Southern Appalachia, and that period of time has always fascinated me. My grandfather was a storyteller in the Appalachian tradition (my people came to Western NC in 1739), so I grew up hearing a lot of stories, including what it was like to grow up in the 1930s. One thing that always interested me is that Asheville is seen as this wealthy Gilded Age kind of place in literature and popular culture, but for my grandparents, the Great Depression brought almost no change to their lives – they were very poor before it started and so they didn’t feel the pain that some did. As a matter of fact, my grandfather would say their lives got better because of the Great Depression, because my great-grandfather got a job with the TVA. I always knew I wanted to write a story about a teenager growing up in this time period, and that story grew into Where Dark Things Grow.

You have described yourself as deeply rooted in the region of western North Carolina, where your ancestors have lived since before the American Revolution. In what ways has this geographic and cultural background influenced your storytelling? Which parts of your story are universal, and which parts could only happen in Southern Appalachia?

What’s often said about Appalachian writers is that the landscape is a central character in the story. That’s true for Where Dark Things Grow, and so I don’t think it could happen anyplace else, in the same way. The major themes of the novel – revenge, the corrupting influence of power, criminal behavior (human trafficking), the struggle between good and evil, friendship and family – are universal and could be present in any setting. I think at the heart of every story is this sense of conflict, and so in that way, even if my reader doesn’t have reference points for Southern Appalachia, they can connect to the story and see themselves in the characters.

Your first book was a collection of poetry, and you have published individual poems in numerous publications. What was it like to write a novel instead? Does your writing process differ, when approaching different genres? Are there things that are the same?

I think one thing I carry to my prose is a focus on the structure and sound of the individual sentence. I always admire a well-crafted sentence in a book I’m reading. So in that focus on language, there doesn’t seem to be as much of a difference as one might think. What’s different is that a single poem captures a more singular feeling or, in the case of a narrative poem, a single scene. In fiction, scenes build on each other and excavate themes more deeply over time. What I do find is that I feel comfortable with the novel form and the poem form; I am not as comfortable with the in-between, short stories, if that makes sense. If I have that little to say, it feels more natural to distill it down into a poem. That said, I love short fiction, and read a lot of short story collections. In some ways a poetry collection or short story collection is a perfect vehicle for our modern attention-challenged brains. But I love to get immersed in a world, in the lives of characters, the way I can with a novel. I think I’ll always write both.

What’s next for you? Are you working on more poetry, do you intend to write more novels, or branch out still further?

One thing I am happy about for readers is that my second novel, Where Dark Things Rise, is coming next fall from Quill and Crow Publishing House. It is a loose sequel to Where Dark Things Grow, which was published by Cowboy Jamboree Press. These two novels took about seven to eight years to write, and while the first book is set in the 1930s, the second is set in the 1980s, both in the Asheville / Western North Carolina area. I have started a third novel, which is quite different but also in the horror / magical realism genre. I have some poems assembled for a second poetry collection as well.

Tell us about your library. What’s on your own shelves?

My taste is pretty eclectic. You’ll find a lot of southern fiction by writers like William Gay, Ron Rash, Taylor Brown, Daniel Woodrell, S.A. Cosby, etc. You’ll also find a lot of magical realism novels: Murakami, Marquez, Toni Morrison, Jesmyn Ward, Robert Gwaltney, etc. And of course horror novels by Andy Davidson, Paul Tremblay, Stephen King, Stephen Graham Jones, Nathan Ballingrud, etc. I also have a couple of shelves dedicated to poetry books. Some favorites: Ilya Kaminsky, Kim Addonizio, Jessica Jacobs, Tyree Daye, bell hooks, Anne Sexton, W.S. Merwin, Ada Limón – I could go on and on.

What have you been reading lately, and what would you recommend to other readers?

One of my favorites this year is Taylor Brown’s Rednecks, about the West Virginia mine wars of the 1910s and 1920s. It’s a rich narrative; one of the most compelling historical fiction novels I’ve read. I’d also recommend The Hollow Kind by Andy Davidson, which mixes historical fiction elements, horror, and folklore in a delightful way. The Red Grove by Tessa Fontaine is a 2024 favorite, and definitely has elements of magical realism. For poetry, I’m really digging Bruce Beasley’s Prayershreds right now.

The 2024 DLF Forum is a Wrap! / Digital Library Federation

What a Journey! Thank You for Joining Us at the Second DLF Forum of the Year

We’ve just concluded our second DLF Forum of the year, following the in-person event at Michigan State University in July. A heartfelt thank you to everyone who joined us virtually this week!

We were thrilled to welcome nearly 700 digital library, archives, and museum professionals from member institutions and beyond. With over 100 speakers and 35 sessions, including an insightful talk by Featured Speaker Andrea Jackson Gavin, the event was full of valuable discussions and collaborations.

A special thanks to our incredible Program Committee for their hard work in reviewing and selecting sessions for both the virtual and in-person programs, and to our generous sponsors who provided essential support, from technology to coffee breaks and swag. We couldn’t have done it without you!

If you weren’t able to register for the Virtual Forum, here are some ways to see what happened:

Subscribe to the DLF Forum newsletter to hear news and updates about the forthcoming 2025 DLF Forum.

The post The 2024 DLF Forum is a Wrap! appeared first on DLF.

The Tech We DON’T Want: Bring your scary tech story to our Halloween / Open Knowledge Foundation

It started as an inside joke: ‘Why don’t we have a Halloween party with the tech we don’t want?’ We could talk about bugs, bad code, closed and proprietary stacks, disappearing dependencies, PDFs, things we generally hate.

The idea got people excited. And then we thought, ‘Why not open it up to anyone who wants to come?’

So we asked some AI we hate to create a poster (which turned out awful). And here we are.

Next Thursday, October 31st, 11:00 CEST, bring your scary tech story and celebrate Halloween – or Buggyween – at this open meeting with the Open Knowledge Foundation team. Fresh off the inspiration of last week’s The Tech We Want Summit, we thought it would be a great opportunity to unload all the worst we see out there in a session that’s its opposite.

Let’s make a toast with bad coffee and sweets to the technologies we don’t want (like Zoom!).

Open Data Commons in the age of AI and Big Data / Open Knowledge Foundation

Text originally published by CNRS, Paris

Earlier this year, the Centre for Internet and Society, CNRS convened a panel at CPDP.ai. The panel brought together researchers and experts of digital commons to try and answer the question at the heart of the conference – to govern AI or to be governed by AI?

The panel was moderated by Alexandra Giannopoulou (Digital Freedom Fund). Invited panelists were Melanie Dulong de Rosnay (Centre Internet et Société, CNRS), Renata Avila (Open Knowledge Foundation), Yaniv Benhamou (University of Geneva) and Ramya Chandrasekhar (Centre Internet et Société, CNRS).

The common(s) thread running across all our interventions was that AI is bringing forth new types of capture, appropriation and enclosure of data that limit the realisation of its collective societal value. AI development entails new forms of data generation as well as large-scale re-use of publicly available data for training, fine-tuning and evaluating AI models. In her introduction, Alexandra referred to the MegaFace dataset, a dataset created by a consortium of research institutions and commercial companies containing 3 million CC-licensed photographs sourced from Flickr. This dataset was subsequently used to train facial-recognition AI systems. She referred to how this type of re-use illustrates the new challenges for the open movement – how to encourage open sharing of data and content, while protecting privacy and artists’ rights, and while preventing data extractivism.

There are also new actors in the AI supply chain, as well as new configurations between state and market actors. Non-profit actors like OpenAI are leading the charge in consuming large amounts of planetary resources as well as entrenching more data extractivism in the pursuit of different types of GenAI applications. In this context, Ramya spoke about the role of the state in the agenda for more commons-based governance of data. She noted that the state is no longer just a sanctioning authority, but also a curator of data (such as open government data which is used for training AI systems), as well as a consumer of these systems themselves. EU regulation needs to engage more with this multi-faceted role of the state.

Originally, the commons held the promise of preventing capture and enclosure of shared resources by the state and by the market. The theory of the commons was applied to free software, public sector information, and creative works to encourage shared management of these resources.

But now, we also need to rethink how to make the commons relevant to data governance in the age of Big Data and AI. Data is most definitely a shared resource, but the ways in which value is extracted from data, and the actors who share in this value, are determined by new constellations of power between state and market actors.

Against this background, Yaniv and Melanie spoke about the role that licenses can continue to play in instilling certain values to data sharing and re-use, as well as serving as legal mechanisms for protecting privacy and intellectual property of individuals and communities in data. They presented their Open Data Commons license template. This license expands original open data licenses, to include contractual provisions relating to copyright and privacy. The license contemplates four mandatory elements (that serve as value signals):

  • Share-alike pledge (to ensure circularity of data in the commons)
  • Privacy pledge (to respect legal obligations for privacy at each downstream use)
  • Right to erasure (to enable individuals to exercise this right at every downstream use)
  • Sustainability pledge (to ensure that downstream re-uses undertake assessments of the ecological impact of their proposed data re-use)

The license then contemplates new modular elements that each licensor can choose from – including the right to make derivatives, the right to limit use to an identified set of re-users, and the right to charge a fee for re-use where the fee is used to maintain the licensor’s data sharing infrastructure. They also discussed the need for trusted intermediaries like data trusts (drawing inspiration from Copyright Management Organisations) to steward data of multiple individuals/communities, and manage the Open Data Commons licenses.

Finally, Renata offered some useful suggestions from the perspective of civil society organisations. She spoke about the Open Data Commons license as a tool for empowering individuals and communities to share more data, but be able to exercise more control over how this data is used and for whose benefit. This license can enable the individuals and communities who are the data generators for developing AI systems to have more say in receiving the benefits of these AI systems. She also spoke about the need to think about technical interoperability and community-driven data standards. This is necessary to ensure that big players who have more economic and computational resources do not exercise disproportionate control over accessing and re-using data for development of AI, and that other smaller as well as community-based actors can also develop and deploy their own AI systems.

All panelists spoke about the urgent need to not just conceive of, but also implement viable solutions for community-based data governance that balances privacy and artists’ rights with innovation for collective benefit. The Open Data Commons license presents one such solution, which the Open Knowledge Foundation proposes to develop and disseminate further, to encourage its uptake. There is significant promise in initiatives like the Open Data Commons license to ensure inclusive data governance and sustainability. It’s now the time for action – to implement such initiatives, and work together as a community in realising the promises of data commons.

Mapping Civil Society Organisations on Open Data in Francophone Africa: A Regional Meeting with Open Knowledge Foundation / Open Knowledge Foundation

On 14 October 2024, between 12:30 and 13:30, a crucial regional meeting for the coordination of French-speaking African countries in the Open Knowledge Network was held online. This virtual meeting brought together various stakeholders in the field of open data in French-speaking Africa, with the main aim of mapping the civil society organisations active in this field. The initiative was spearheaded by Narcisse Mbunzama, the Regional Coordinator for Francophone Africa, who led the presentation and discussions.

Objectives and context of the meeting

The meeting aimed to understand and assess the current landscape of civil society organisations engaged in open data across Francophone African countries. The idea was to create a mapping of these organisations to better understand their activities, structures, missions, as well as the challenges and opportunities they face.

A central point of this discussion was the exploration of the sources of funding for these organisations, as well as their relationships and collaborations with the Open Knowledge Network.

Presentation by Mr Narcisse Mbunzama: An Overview of Open Data

During this session, Mr Narcisse Mbunzama delivered a detailed presentation that gave participants a clear view of the current state of open data initiatives in French-speaking African countries. The presentation highlighted a number of civil society organisations already active in this field, while also highlighting the specific dynamics in each country.

It emerged that while some organisations have succeeded in developing innovative and impactful projects, they often face a lack of financial support and recognition at the international level. The presentation also identified a major challenge: the lack of formal collaboration between these local organisations and the Open Knowledge Foundation, as well as the lack of local chapters and individuals affiliated with the Open Knowledge Foundation in many French-speaking countries.

Challenges and opportunities for civil society organisations

The discussions revealed a number of challenges to the growth and impact of open data initiatives in the region. Some of the key barriers identified include:

  1. Lack of sustainable funding: The majority of civil society organisations rely on one-off funding, which limits their ability to develop long-term projects and make strategic plans for the sustainable development of open data.
  2. Lack of structured collaboration: Participants highlighted the lack of formal links between local organisations and the global Open Knowledge Network. This hinders the spread of good practice in open data.
  3. Lack of awareness of the Open Knowledge Foundation: In many French-speaking African countries, the existence of the Open Knowledge Foundation and its role in promoting open data is not well known. This limits the involvement of local players who could otherwise benefit from this global network.

However, the meeting also highlighted significant opportunities, including:

The rise of local initiatives: Several countries in French-speaking Africa are seeing a surge in innovative initiatives and projects promoting the use of open data in various sectors, such as governance, education and health.

Potential for collaboration: There is a strong desire among local organisations to collaborate and connect with the Open Knowledge Network to share resources, expertise and solutions adapted to local contexts.

Strengthening Collaboration and the Membership Process

A key part of the meeting was devoted to discussing the Open Knowledge Network membership process for organisations and individuals in French-speaking African countries. Mr Mbunzama explained the steps involved in joining the network, which include registering as a member and setting up local chapters to represent the Network in their respective countries.

Setting up local chapters was seen as a crucial step in strengthening the presence and impact of the Open Knowledge Foundation in the region. This would not only support local organisations but also facilitate better coordination and cooperation between open data initiatives across French-speaking African countries.

Next Steps and Future Call for Meetings

At the end of the meeting, it was proposed to issue a new call for a follow-up meeting that would focus on implementing the ideas discussed. This call, the date of which will be announced later, aims to deepen discussions on strategic partnerships and explore practical ways in which local organisations can work with the Open Knowledge Foundation to promote the adoption of open data.

The long-term goal is to build a strong and connected community of open data stakeholders in Francophone Africa, capable of overcoming local challenges while aligning with international standards. This will not only help to increase transparency and access to information in the region, but also promote sustainable development through policies based on reliable data that is accessible to all.

Conclusion

This regional meeting was a significant step towards a better understanding and integration of open data initiatives in French-speaking African countries. It laid the foundations for a more structured collaboration between local organisations and the Open Knowledge Network. By building on this momentum, it will be possible to create a robust and inclusive ecosystem that will support efforts towards transparency, innovation and sustainable development in the region.

Joining the Network and collaborating with the Open Knowledge Foundation is a crucial step for local organisations. They will be able to benefit from global expertise and shared resources to maximise the impact of their initiatives on the ground. The next meeting will be an opportunity to deepen these exchanges and define concrete actions to promote open data throughout the French-speaking region of Africa.

Open Data Editor: Our Open Source Dependency Just Disappeared / Open Knowledge Foundation

As the title says, both the repository and website of ReactDataGrid, an important dependency for our Open Data Editor, have suddenly disappeared—404 errors, DNS not resolving, just gone. Normally, we would create an issue in the repository (which we did), explore alternatives, allocate time and resources, and replace it. However, given the context of The Tech We Want initiative we’re currently running, I’d like to share a few additional thoughts.

Thinking of Open Source as Infrastructure

Interestingly, just a couple of days ago, I watched a conference talk titled Building the Hundred-Year Web Service with htmx by Alexander Petros that explores the analogy between physical infrastructure (bridges) and web pages. Now, this situation feels to me like a bridge in my city has vanished, and here I am in my car, staring at an empty gap, not understanding what happened or how to get to the other side. It feels strange and unexpected, something that shouldn’t happen: how can this bridge that I cross every day not be here anymore? My brain does not compute at the moment.

While I know that dependencies or projects disappearing isn’t the norm, this situation still gives me the unsettling feeling that the open-source ecosystem may not be as stable or reliable as I’d like to believe. I may be overreacting to this one example, but then my thoughts quickly turn to the recent takeover of Advanced Custom Fields, and then to the back-and-forth licensing issues with Elasticsearch and, more recently, Redis, to name a few examples (my overthinking can keep going on).

I don’t have any clear answers or suggestions at this point, but I am left with a sense of unreliability. One lesson for me here is that just because something is open source and hosted on GitHub doesn’t mean it will always be accessible. Is GitHub becoming a critical piece of the internet infrastructure on which the whole ecosystem relies? I’d say yes. But what are the consequences of that? Is it good or bad? Should we be concerned? Should we panic? Should we design a plan B? I don’t think so, but I do think it’s worth discussing or at least writing these questions somewhere.
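The incident does suggest one routinely cited hedge (not something this post itself prescribes): pin exact dependency versions and commit the lockfile, so a vanished registry entry or repository cannot silently change what gets installed while a replacement is found. A minimal sketch for an npm project, with a hypothetical package name and version:

```
{
  "dependencies": {
    "some-data-grid": "4.1.23"
  }
}
```

Pinning will not resurrect a deleted package, but together with a committed `package-lock.json` (and, for the truly cautious, a vendored tarball produced with `npm pack`), it keeps existing installs reproducible until a migration is done.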

And what about the Open Data Editor?

Our goal with The Tech We Want is to promote the creation of software that can endure over time. So, having this happen just before an important release is doubly ironic and funny.

That said, due to recent changes in the project’s goals, we were already planning to migrate to a simpler stack with fewer dependencies and less turbulent release cycles (more on this later). The sudden disappearance of one of our core dependencies only reinforces the idea that we should aim to build simpler, less dependent technologies.

Read more

The Tech We Want Summit: Review the recordings of what was a great community moment in 2024 / Open Knowledge Foundation

💙 What can we say apart from THANK YOU? 💙

The Tech We Want Summit was a great moment in our year, bringing together our beloved community of technologists, practitioners and creators for two days to show that a different technology stack is possible (and we’re already doing it) – one that’s more useful, simpler, more durable and focused on solving people’s real problems.

At the Open Knowledge Foundation, we are grateful and motivated to continue promoting a fair, sustainable, and open future through technology.

Many thanks to the speakers and hundreds of participants from all over the world!

You can view the recordings by clicking on the links below:

Our team is now working on the summit documentation, which will be published in the coming weeks. Each panel will have its video edited with a summary and notes of what was discussed. We’ll be in touch with the community soon about the next steps in this initiative.

Some top-level stats:

🗣 43 Speakers in total
🌐 23 Countries represented
🤓 15 Demos of the tech we want
🌟 711 Participants
📺 14 Hours of live streaming
🤗 13 Content partners

Huge thanks again to our content partners:

2023 NDSA Storage Survey Report Published / Digital Library Federation

The NDSA is pleased to announce the release of the 2023 Storage Infrastructure Survey Report, available at https://doi.org/10.17605/OSF.IO/9QP4W 

From October 24 to November 22, 2023, the 2023 NDSA Storage Infrastructure Survey Working Group conducted a 51-question survey designed to gather information on the technologies and practices used in preservation storage infrastructure. 

This effort builds upon three previous surveys, conducted in 2011, 2013, and 2019. The survey encouraged responses from NDSA and non-NDSA members to gain a broader understanding of storage practices within the digital preservation community. The survey received 138 complete responses, with most coming from the United States, but it did have a global reach. The 2023 survey also incorporated two new questions on storage and environmental impact.

Some major takeaways from the report include:

  • The amount of preservation storage required for all managed copies appeared to stabilize relative to previous surveys. Fewer organizations reported higher allocations of storage, but the anticipated need for storage over the next three years remains elevated. 
  • Only 28% of respondents currently participate in a cooperative system – down from 45% in 2019 – and 63% indicate they are not considering a distributed storage cooperative. The use of commercial cloud storage providers rose from 46% in 2019 to 55% in 2023. 
  • Heavy use of an onsite storage element was reported by academic institutions (91%), archives (88%), and government agencies (71%). The survey also shows that use of onsite storage is most often combined with use of either independently managed offsite storage or commercial cloud storage managed by the organization.
  • The leading offsite storage provider used by 56% of the responding academic institutions is Amazon Web Services. For responding archives, Amazon Web Services (36%) and Preservica (21%) are the most prevalent. Non-profits, museums, historical societies and public libraries use Amazon Web Services 45% of the time.
  • 52% of respondents said their organization is considering their environmental impact during storage planning. 

The proposed schedule for the Storage Infrastructure Survey is every three years, allowing for ongoing tracking and analysis of approaches to preservation storage over time. The next Storage Infrastructure Working Group is scheduled to kick off in 2026. Interested in participating? A call for group members will go out in late 2025 or early 2026.

~ NDSA 2023 Storage Infrastructure Survey Working Group

The post 2023 NDSA Storage Survey Report Published appeared first on DLF.

New OCLC Research report on open access discovery launched / HangingTogether

Our research report on Improving Open Access Discovery for Academic Library Users has just been published. It is a study into strategies to make scholarly, peer-reviewed open access (OA) publications more discoverable for library users. The findings are based on research conducted at seven academic library institutions in the Netherlands. We interviewed library staff about their efforts around OA discovery and surveyed library users about their experiences with OA. The synthesis of these findings provides new insights into the opportunity to improve OA discovery.

From OA availability to discoverability: bridging the gap

Cover of the OCLC Research report titled "Improving Open Access Discovery for Academic Library Users". The cover is an aerial view of a rural Dutch landscape.

From the very beginning we co-designed and carried out the OA discovery study in collaboration with two Dutch academic library consortia—Universiteitsbibliotheken en Nationale Bibliotheek (UKB) and Samenwerkingsverband Hogeschoolbibliotheken (SHB)—which have been, and still are, instrumental in the progress toward full OA to Dutch scholarly publications. Precisely because they were at the forefront of the shift to OA and investing heavily in OA publishing, they had arrived at a point where they wanted to assess the discoverability of OA publications and address the emerging gap between OA availability and discoverability.

This gap was first revealed by findings from the 2018-2019 OCLC Global Council survey of open content activities in libraries worldwide. The results clearly indicated an imbalance in academic library investment: more effort went into making previously closed content open than into promoting the discovery of open content. Yet, most respondents indicated that the latter was equally important to them. Also noteworthy was the near unanimity with which respondents indicated that OCLC had a role in supporting libraries to make open content discoverable. This was an encouraging acknowledgment of the importance of OCLC’s role in the open access ecosystem.

A series of knowledge sharing consultations with the Dutch academic library community in 2021 confirmed this perceived gap and the need to better understand the role of OA in user discovery behavior. As a result, UKB, SHB, and OCLC decided to carry out a research study that would investigate how expectations and behaviors of academic students, teachers, researchers, and professors could inform libraries’ efforts in making OA discoverable. This was the genesis of the Open Access Discovery project.

The making of the OA discovery landscape: libraries have a role to play

Library staff we interviewed described the emergence of a complex landscape for making OA publications discoverable. New players were eagerly staking out their territory while librarians did what they thought was best, but OA publications did not fit in their traditional processes. There were no guidelines, best practices, or benchmarks for adding OA publications to their collections and integrating them into user workflows. Although national collaborations and new processes were in place to create and expose metadata for institutionally authored OA publications, library staff faced challenges with publication deposits and metadata quality.

Our interviewees were not convinced that their efforts were making a difference for their users, but our report shows they were.

While they were correct in believing that the library was not the first place that users searched, the library search page was in the top three most searched systems. Users’ survey responses paint a somewhat confused picture of the role that OA plays in their discovery journey. Respondents did not find OA publications very easy to search for and access, and nearly half reported not knowing much about OA. However, most relied on OA alternatives when they encountered barriers to full-text access. Although OA was not their first consideration, the increasing number of OA publications downstream affected their processes of discovery, access, and use. These findings led to the following observation in the report:

“Library staff’s outreach and instruction had been primarily focused on increasing users’ awareness of publishing OA. Users needed additional instruction on discovering, evaluating, and using these new types of publications.”

Introducing the report to the Dutch library community

Handing over the report to SHB and UKB representatives.

It was with pleasure and pride that Ixchel Faniel and I presented the final report, with findings and main takeaways, to UKB and SHB representatives at the OCLC Contactdag on 8 October 2024, in Amersfoort, the Netherlands. Contactdag is an annual gathering of professionals from Dutch academic and public libraries interested in the latest news about OCLC’s strategic direction and product development. It is also a forum where they share practices and innovative project results.

In my short remarks introducing the OA discovery report, I shared the main takeaway for the Dutch library community as follows:

“If you’re wondering whether your library’s investment in OA discovery is worth it, the answer is a resounding YES!”

The cover of the report—a photo of a Dutch polder landscape—is a nod to the Dutch setting of our research. It also serves as an analogy to the hard work needed to make OA publications discoverable. A polder is created by digging ditches and building dams and dikes to drain tracts of lowland of water. As I told the audience, similarly to the polder, “there is still much work to be done. OA is still uncharted territory that needs to be explored and cultivated. We cannot afford to sit and watch!”

Next steps: working smarter together

Break-out group at the workshop session on improving OA discovery, during the OCLC Contactdag, 8 October 2024.

During the afternoon session of the OCLC Contactdag, participants discussed findings, challenges, opportunities, and next steps in break-out groups. Many recognized the dilemmas around OA discovery, as reflected in the report. They also were interested in using the findings to strategize how to proceed with improving OA discoverability.

A recurring theme was the need to collaborate. Participants discussed the potential benefits of working together on selecting OA titles by subject area and increasing users’ awareness of OA resources. They wanted to share practices on exposing institutional metadata, cooperating on metadata harvesting, and partnering with OCLC to improve the quality of metadata. They also talked about greater engagement, on campus and nationally, with recent Diamond OA publishing initiatives to advocate for discovery metadata that worked well both for library workflows and user needs. These ideas illustrate the need for cross-stakeholder collaboration from OA publishing to discovery and align nicely with the closing words from our report:

Truly improving the discoverability of OA publications requires all of the stakeholders involved to consider the needs of others within the lifecycle.

Read the report to learn more about bridging the gap between the availability and discovery of OA publications. https://oc.lc/oa-discovery

The post New OCLC Research report on open access discovery launched appeared first on Hanging Together.

pincushion / Ed Summers

Websites go away. Everything goes away, so it would be kind of weird if websites didn’t too, right? But not all web content disappears at the same rate. Some parts of the web are more vulnerable than others. Some web content is harder for us to lose, because it is evidence of something happening, it tells a story that can’t be found elsewhere, or it’s an integral part of a memory practice that depends on it.

Web archiving is one way of working with this loss. When building web archives, web content is crawled and stored so that a “replay” application (like the Wayback Machine) can make the content accessible as a “reborn digital” resource (Brügger, 2018). But with web archives the people doing this work are typically not the same people who created the content, which can lead to ethical quandaries that are difficult to untangle (Summers, 2020).

Furthermore, as we’ve seen recently with the Cyberattack on the British Library, the DDoS attacks on the Internet Archive, and lawsuits that threaten their existence, web archives themselves are also vulnerable single points of failure. Can web applications be built differently, so that they better allow our content to persist after the website itself is no more?

As part of the Modeling Sustainable Futures: Exploring Decentralized Digital Storage for Community-Based Archives project I’ve been helping Shift Collective think about how decentralized storage technologies could fit in with the sustainability of their Historypin platform. This work has been funded by the Filecoin Foundation for a Decentralized Web, so we have naturally been looking at how Filecoin and IPFS might fit in as part of the technical answer here (Voss et al., 2023).

But perhaps a more significant question than what specific technology to use is how memory practices are changing to adapt to the medium of the web, and how much these changes can be guided in a direction that benefits the people who care about preserving their communities’ knowledge. We sometimes call these people librarians or archivists, but as the Records Continuum Model points out, many are involved in the work, including the individual users of websites who have invested their time, energy and labor in adding resources to them (McKemmish, Upward, & Reed, 2010).

For the last 15 years Historypin users have uploaded images, audio and video, and placed them as “pins” on a map. These pins can then be described, organized into collections, and further contextualized with metadata. Unsurprisingly, Historypin is a web application. It uses a server side application framework (Django), a database (MySQL), file storage (Google Cloud Storage), a client side JavaScript framework (Angular), and depends on multiple third party platforms like Youtube, Vimeo and Soundcloud for media hosting and playback.

What does it mean to preserve this assemblage? Historypin is a complex, running system, that is deeply intertwingled with the larger web. How could decentralized storage possibly help here? Can the complexity of the running software be reduced or removed? Can its network of links out to other platforms be removed without sacrificing the content itself?

Taking inspiration from recent work on Flickr Foundation’s Data Lifeboat, and some ideas from their technical lead Alex Chan, we’ve been prototyping a similar concept called a pincushion as a place to keep Historypin content safe, in a way that is functionally separate from the running web application. In an ideal local-first world, our web applications wouldn’t be so dependent on being constantly connected to the Internet, and the platforms that live and die there. But until we get there, having a local-last option is critically important.

The basic idea is that users should be able to download and view their data without losing the context they have added. We want a pincushion to represent a user’s collections, pins, images, videos, audio, tags, locations, comments…and we want users to be able to view this content when Historypin is no longer online, or even when the user isn’t online. Maybe the pincushion is discovered on an old thumbdrive in a shoebox under the bed.

This means that the resources being served dynamically by the Historypin application need to be serialized as files, and specifically as files that can be viewed directly in a browser: HTML, CSS, JavaScript, JPEG, PNG, MP3, MP4, JSON. Once a user’s content can be represented as a set of static files, it can easily be distributed and copied, and opportunities for replicating it using technologies like IPFS become much more realistic.

pincushion is a small Python command line tool which talks to the Historypin API to build a static website of the user’s content. It’s not realistic to expect users to install and use pincushion, although they can if they want. Instead we expect that pincushion, or something like it, will ultimately run as part of Historypin’s system deployment, and will generate archives on demand when a user requests it.
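To make the idea concrete, here is a rough sketch of this kind of API-to-static-site build. The pin structure, file layout, and function names are made up for illustration and are not pincushion’s actual internals; the point is simply rendering API records as plain HTML files with relative links:

```python
import json
from html import escape
from pathlib import Path


def render_pin(pin: dict) -> str:
    """Render one pin record as a standalone HTML page that works the same
    from a local filesystem, a web server, or IPFS: no JavaScript, and
    only relative links."""
    media = ""
    if pin.get("media_file"):
        # media has already been fetched into the archive's media/ folder
        media = f'<video controls src="../media/{escape(pin["media_file"])}"></video>'
    return (
        "<!DOCTYPE html><html><head>"
        f"<title>{escape(pin['title'])}</title>"
        '<link rel="stylesheet" href="../style.css">'
        "</head><body>"
        f"<h1>{escape(pin['title'])}</h1>"
        f"<p>{escape(pin.get('description', ''))}</p>"
        f"{media}"
        '<p><a href="../index.html">Back to collections</a></p>'
        "</body></html>"
    )


def build_site(pins: list[dict], out_dir: Path) -> None:
    """Write one static page per pin, plus the raw JSON for reuse."""
    (out_dir / "pins").mkdir(parents=True, exist_ok=True)
    for pin in pins:
        (out_dir / "pins" / f"{pin['id']}.html").write_text(render_pin(pin))
    (out_dir / "pins.json").write_text(json.dumps(pins, indent=2))
```

Keeping the raw JSON alongside the rendered pages means future tools can re-render the archive even if the HTML conventions change.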

At this point pincushion is a working prototype, but already a few design principles present themselves:

  1. Web v1.0: A pincushion is just HTML, CSS and media files. No JavaScript framework, or asset bundling is used. Anchor tags with relative paths are used to navigate between pages, all of which are static pages. These pages work when you load them locally from your filesystem, when you are disconnected from the Internet, or when the pages are mounted on the web somewhere…and also from IPFS.
  2. Bet on the Browser: A pincushion archive relies on modern browsers’ native support for video and audio files. The pincushion utility uses yt-dlp at build time to extract media from platforms like Youtube, Vimeo and Soundcloud and persist it as static MP4 or MP3 files. Perhaps the browser isn’t going to last forever, but so far it has proven to be remarkably backwards compatible as the web has evolved. If the browser goes away, then it’s unlikely we’ll know what HTML, CSS and image files are anymore. Preserving web content depends on evolving and maintaining the browser.
  3. Progressive Enhancement: A pincushion is designed to be viewed locally in your browser by opening an index.html from your file system. You can even do this when you aren’t connected to the Internet. But since you can zoom and pan to any region of the Earth on a map, it’s pretty much impossible to display a map offline. So some functionality, like viewing a pin on a map, is only available when the browser is “online”.

These pincushion archives can be gigabytes in size, so I don’t want to link to one right here. But perhaps a few screenshots can help give a sense of how this works. Let’s take a look at the archive belonging to Jon Voss, one of Historypin’s founders:

The “homepage” displaying Jon’s collections
A specific collection showing a set of pins
A video pin in a collection
Viewing the pin next to other pins on a map
Other pins tagged with “mission”

So pretty simple stuff right? Intentionally so. In fact the archives load fine off of these:

Thumbdrives with pincushion archives on them for a workshop.

The truth is that this idea of making snapshots of your data available for download isn’t particularly new. Data Portability has been around as an aspirational and sometimes realizable goal for some time. Since 2018 the EU’s General Data Protection Regulation (GDPR) has made it a requirement for platforms operating in the EU to allow their data to be downloaded. This has raised the level of service for everyone. Thanks EU!

Before the GDPR, Twitter set itself apart with Grailbird, a fully functioning local web application for viewing a user’s tweets. Similarly, Hannah Donovan’s work on the Vine archive, and before that on the This Is My Jam archive (which sadly seems offline now), provided early examples of how web applications could be preserved in a read-only state (Summers & Wickner, 2019).

However, just because you can download the data doesn’t mean it’s easy to use. Some of these archives are only JSON or CSV data files with minimal documentation. Others add only a teensy bit of window dressing that lets you browse to the data files, but doesn’t really let you look at the actual items. Sometimes the media files are still URLs out on the live web.

The pincushion tool is a working prototype that will hopefully guide how we provide user data. But we are also looking to the Flickr Data Lifeboat project to see if there are any emerging practices for how to create these archive downloads. A few things that we are thinking about:

  1. Could we add a client-side search option using Pagefind or something like it?
  2. Can we enhance our HTML files with RDFa or Microdata to express metadata in a machine readable way?
  3. What types of structural metadata, such as a manifest, should we include to indicate the completeness and validity of the data?
  4. To what degree does it make sense to include other people’s content in an archive, for example someone’s comments on your pins, or pins that have been added to your collection?
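On the structural-metadata question, one simple shape a manifest could take (a sketch under my own assumptions, not an established practice or pincushion’s actual format) is a JSON file of per-file checksums and sizes:

```python
import hashlib
import json
from pathlib import Path


def build_manifest(archive_dir: Path) -> dict:
    """Record a SHA-256 checksum and byte size for every file in the
    archive, so a future reader can verify the archive is complete and
    uncorrupted long after the original website is gone."""
    files = {}
    for path in sorted(archive_dir.rglob("*")):
        if path.is_file() and path.name != "manifest.json":
            files[str(path.relative_to(archive_dir))] = {
                "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
                "bytes": path.stat().st_size,
            }
    manifest = {"file_count": len(files), "files": files}
    (archive_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest
```

A fixity manifest like this also pairs naturally with content-addressed storage such as IPFS, where the checksum of a file and its address are closely related.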

References

Brügger, N. (2018). The archived web. MIT Press.
McKemmish, S., Upward, F., & Reed, B. (2010). Records continuum model. In M. Bates & M. N. Maack (Eds.), Encyclopedia of library and information sciences. Taylor & Francis.
Summers, E. (2020). Appraisal talk in web archives. Archivaria, 89. Retrieved from https://archivaria.ca/index.php/archivaria/article/view/13733
Summers, E., & Wickner, A. (2019). Archival Circulation on the Web: The Vine-Tweets Dataset. Journal of Cultural Analytics, 4(2). Retrieved from https://culturalanalytics.org/article/11048-archival-circulation-on-the-web-the-vine-tweets-dataset
Voss, J., Johnson, L., Jules, B., Collier, Z., Brown-Hinds, P., Castle, B., … Summers, E. (2023). A Shift Collective Report | December 2023 Modeling Sustainable Futures Proposing a Risk Assessment and Harm Reduction Model for Community-Based Archives Using Decentralized Digital Storage (p. 25). New Orleans: Shift Collective. Retrieved from https://inkdroid.org/papers/shift-ffdw-2023.pdf

Come Join the 2024 Halloween Hunt! / LibraryThing (Thingology)

It’s October, and that means the return of our annual Halloween Hunt!

We’ve scattered a hauntourage of ghosts around the site, and it’s up to you to try and find them all.

  • Decipher the clues and visit the corresponding LibraryThing pages to find a ghost. Each clue points to a specific page on LibraryThing. Remember, they are not necessarily work pages!
  • If there’s a ghost on a page, you’ll see a banner at the top of the page.
  • You have just two weeks to find all the ghosts (until 11:59pm EDT, Thursday October 31st).
  • Come brag about your hauntourage of ghosts (and get hints) on Talk.

Win prizes:

  • Any member who finds at least two ghosts will be awarded a ghost Badge.
  • Members who find all 12 ghosts will be entered into a drawing for one of five LibraryThing (or TinyCat) prizes. We’ll announce winners at the end of the hunt.

P.S. Thanks to conceptDawg for the ghostly flamingo illustration!

Submitting a Notable Nomination: Suggestions from the Excellence Award Working Group / Digital Library Federation

The National Digital Stewardship Alliance (NDSA) is an organization with a diverse international membership sharing a commitment to digital stewardship and preservation. Its Excellence Awards Working Group (EAWG) is just as diverse and just as committed. Since 2012 this team has come together to select awardees who have demonstrated significant engagement with the theory and practice of long-term digital preservation stewardship at a level of national or international importance. EAWG members understand the importance of innovation and risk-taking in developing successful digital preservation tools and activities. This means that excellent digital stewardship can take many forms; therefore, eligibility for these awards has been left purposely broad.

I started as a member of the EAWG in 2019 and took part in discussions that led to the group’s move to presenting awards biennially in the odd-numbered years, to interleave them with the Digital Preservation Coalition’s Digital Preservation Awards. I have been co-chairing the group since January 2023, and, although the timing for awards may have changed, our standards have not. Any person, any institution, or any project meeting the criteria for any of the Excellence Awards’ six categories can be nominated. Neither nominators nor nominees need to be NDSA members or to be affiliated with member institutions. Self-nomination is accepted and encouraged, as are submissions reflecting responses to the needs or accomplishments of historically marginalized and underrepresented communities. It is truly inspiring to receive the nominations each year and learn about exciting work that is happening in the field of digital stewardship and preservation that we may never have known about otherwise.

Basic spreadsheet shared by Excellence Awards Working Group members to review, discuss, and select awardees.

Award categories are: Individual, Educator, Future Steward, Organization, Project, and Sustainability. The criteria for each category specified on the EAWG webpage will help nominators select the “big bucket” their nominations will best fit, and every nomination must support the specific contributions named with evidence of their significance. Yet individual nominations focus on individual efforts. So, what can a nominator include to encourage EAWG members to recognize the importance of the nominee’s contributions? Let’s look at a few things that can help a nomination stand out.

 

  • Firsts
    • Efforts producing—or even on their way to producing—something absolutely fresh for the field of digital stewardship are worth nominating. This could be work to produce new tools, connections, workflows, methods, strategies, and more. Nominations for the new developments could offer information showing such aspects as: how this output is new; why it is notably original; what its impact or expected impact will be; and what potential it will have for widespread use. Past nominations have included phrases such as “facilitate the creation of a field that is easier, kinder, smarter, and faster,” “establish tangible solutions to put into practice,” “drawing on the collective experience of those in the field,” and “open resources that have been created and shared.”
  • A New Angle on the Known
    • Another perspective on fresh outputs is that of rethinking the known. This work could offer updated preservation formats, updated tools, or even enhancements for providing access or improving discoverability. Nominations for such work could offer information evidencing: how this update is an improvement; why it is important to the field; what benefit it will provide; and how wide a range of digital stewards can implement it. Nominations for this type of work have included phrases like: “re-thinking this for the next generation,” “ensuring the outputs were shared with the greater community and not created within an academic silo,” “advance future generations of digital stewards,” and “enhancing tools and standards our field has used for decades.”
  • Hot Topics
    • Significant work being done in areas of high interest to the digital stewardship and preservation communities is certainly worth nominating. Recently, such areas of interest have included DEI initiatives, study on the environmental impact of digital stewardship, and the use of artificial intelligence. Nominations reflecting efforts in such areas have incorporated aspects including: multidisciplinary connections, research and training methodologies, the promotion of integrating diverse perspectives, and strategies to increase awareness of a specific digital preservation challenge. Such efforts have been described as “uplifting while educating,” “improving experience for new digital preservationists through work on documentation, information-sharing, and tools development,” and “actively seeks out venues to spread the message.”
  • Widespread Impact
    • Another type of work worthy of nominating is that which will bring a positive impact to a significant portion of the field of digital stewardship. This impact will often include the characteristics of recognized reusability or adaptability and could be seen via open access to code, guides to a topic or practice, or policies that were developed. It could possibly be achieved through outreach activities or collaborations. Nominations describing such work have noted details such as: “demystifying often-challenging material required for working in digital preservation,” “bolsters others offering leadership and growth opportunities,” “informs digital preservation best practices,” “shaped the design and implementation of open-source software,” and “engaged with the preservation community as speakers, writers, and collaborators.”

These are just a few suggestions on nominating your colleagues and their work. There are certainly more areas, perspectives, and outputs that could be recognized. For more ideas, links to announcements for past winners can be found at the bottom of the Excellence Awards Working Group webpage. Remember, there is no perfect nomination expected by the EAWG. All submissions are received, reviewed, and discussed by all group members equally. Working group members realize that this is an opportunity to celebrate the achievements of our colleagues, and the selection has never been easy. Yet during my time with the group, we have ensured that no final selection has been solidified without the unanimous support of the members.

The EAWG will be seeking nominations again next year. Until then, we will be offering other blogs and video clips to help digital stewards and preservationists better understand our work. We also hope this information will encourage them to nominate their colleagues or themselves. We look forward to your submissions! 

Written by Kari May, Excellence Awards Working Group, Co-Chair

 

The post Submitting a Notable Nomination: Suggestions from the Excellence Award Working Group appeared first on DLF.

Jennifer Ferretti to Depart CLIR’s Digital Library Federation, Takes on New Role as Director of Archives at Texas After Violence Project / Digital Library Federation

See the original post on the Council on Library and Information Resource’s News page.

Jennifer Ferretti, who has served as the Director of CLIR’s Digital Library Federation (DLF) since 2021, will be stepping down from her role to become the Director of Archives at the Texas After Violence Project. Ferretti, who joined DLF after six years at the Maryland Institute College of Art, will conclude her tenure on October 24.

During her tenure, Ferretti played a pivotal role in expanding and diversifying the DLF community. She fostered important connections with institutions such as liberal arts colleges, museums, Historically Black Colleges and Universities (HBCUs), civic data groups, and archives, all while prioritizing inclusivity and social justice in digital library practices. Her leadership in building these relationships was instrumental in furthering DLF’s mission of collaboration and knowledge sharing among digital library professionals.  This work has been particularly significant within the GLAM community, where inclusivity and diverse perspectives are essential for shaping the future of cultural preservation and access to knowledge.

Under Ferretti’s guidance, DLF hosted Toward Radical Imagination, a two-day online event that centered HBCUs and digital libraries, celebrating the conclusion of the Authenticity Project, an IMLS-funded mentorship program, which was co-led with HBCU Library Alliance. This program was critical in elevating diverse voices and broadening DLF’s reach in underrepresented communities.

Ferretti also established new mentorship models within DLF, focusing on building stronger networks among practitioners at different career stages. Additionally, she worked to promote and strengthen DLF’s Working Groups, which made significant strides under her leadership. One example is DLF’s recent strategic growth grant from the Society of American Archivists Foundation, which further empowered Working Groups to address key issues in digital preservation and library services.

CLIR’s president Chuck Henry said, “Jenny’s vision and strategic acumen have anchored DLF during a period of evolution and growth, while helping to reshape and broaden our understanding of a digital library and its capacity for more equitable, open, and multivocal knowledge in service to the public good. On a personal note, I am grateful for the opportunity of learning from Jenny’s wise and empathetic leadership, and wish her the very best.”

Information on the transition and interim leadership will be shared in the coming weeks. For more on her accomplishments and vision, visit the Toward Radical Imagination event page here and read about DLF’s Working Groups here.

The post Jennifer Ferretti to Depart CLIR’s Digital Library Federation, Takes on New Role as Director of Archives at Texas After Violence Project appeared first on DLF.

Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 15 October 2024 / HangingTogether

The following post is one in a regular series on issues of Inclusion, Diversity, Equity, and Accessibility, compiled by a team of OCLC contributors.

A variety of colorful umbrellas photographed from below. Photo by Ricardo Resende on Unsplash.

Books for Indigenous Peoples’ Day 2024

“Indigenous Peoples’ Day” is not yet a federal holiday in the United States, but since 2021, the administration of President Joe Biden has recognized it as an official observance. This year, the commemoration is on 14 October 2024, but there is no reason to confine the recognition to a single day. In the American Library Association’s “Booklist,” Tennessee-based author and citizen of the Cherokee Nation, Christine Hartman Derr has compiled a list of fourteen recent books for kids to “honor North America’s first people.” “Essentials: Celebrating Indigenous Peoples’ Day” includes picture books, chapter books, novels, novels in verse, graphic novels, thrillers, and a few stories with recipes, appropriate for pre-school through high school ages.

As Derr notes, “There are more than 500 Native Nations on the land presently known as the U.S., each with its own culture, traditions, language, and ways of being. Native cultures and people are often relegated to museums and history books—but Indigenous people are still here, with inherent sovereignty and thriving cultures.” Derr’s eclectic list has author and character representations from the Blackfeet, Cherokee, Choctaw, Muscogee, and Ojibwe, among other cultures. Because a few of the books are in series, they can introduce readers to further adventures. Contributed by Jay Weitz.

University of Nevada Las Vegas and We Need to Talk

Since 2020, UNLV Libraries (OCLC Symbol: UNL) has hosted the We Need to Talk series, facilitated panel discussions organized by UNLV’s Oral History Research Center that feature “university and community experts discussing issues on race and seeking solutions for a more inclusive society.” The library now has nineteen recorded episodes in the series, which focus on a range of topics: Black and Asian inclusion, Indigenous focused segments, as well as episodes focusing on Muslim and Queer community members, and more. In addition to the archived segments which are available for viewing, UNLV librarian Brittani Sterling has created a LibGuide to accompany the series.

The LibGuide created by Sterling to support We Need to Talk gives additional depth to each episode, including definitions for terms, pointers to key selected resources in the library’s collection, and avenues for researchers to discover their own materials. In keeping with the local focus of the series, Sterling also puts the spotlight on community organizations and resources. The inclusion of Sterling’s LibGuide for the event series helps to demonstrate how a combination of domain and community knowledge can enhance an already wonderful resource. Contributed by Merrilee Proffitt.

Incorporating neuroinclusivity in library apprenticeship programs

“You never tell a patron no” and “Make sure you are smiling to welcome people” are examples of the training document instructions that University of Washington-Tacoma (OCLC Symbol: WAU) librarians Johanna M. Jacobsen Kiciman and Alaina C. Bull inherited in their learning employment program for MLIS students. In the article “Apprenticeships, MLIS Students, and Neurodiversity: Centering the Humanity of Student Workers, Part 1” (College & Research Libraries News, Volume 85, Number 9, October 2024) the authors discuss their redesign of the program, focusing on a framework of inclusivity and belonging that better prepares students for their first librarian jobs. Bull, who self-identifies as neurodivergent, explains, “The amount of emotional and physical labor it takes to perform neurotypical expectations is a huge part of burnout for people under the neurodivergent umbrella.” The authors refocused the program to teach students skills that would enable them to set appropriate boundaries to keep them safe while acquiring library experience.

Part two of this article will appear in the November issue of College & Research Libraries News, and I am looking forward to learning more about how the authors refocused the program. Their quotations from the old training documentation are very familiar—the kind of oversimplifications that can be disastrous for neurodivergent people who think literally. The assumption that “everyone” will understand what is meant places a burden on the student workers who should be learning about the profession rather than deciphering unclear instructions. Refocusing the training to be neuroinclusive better positions everyone for success, strengthening the future diversity of our profession. Contributed by Kate James.

The post Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 15 October 2024 appeared first on Hanging Together.

The Tech We Want Summit: Full Programme Announcement / Open Knowledge Foundation

The Tech We Want Summit is just around the corner. Today we are announcing the full programme for day one, Thursday 17 October, featuring 28 speakers from all around the world across 5 panels and 2 keynotes.

The following day, there will be 15 demos from projects that are already making the technology we want (to be announced soon).

As of today, more than 400 people have already registered for what promises to be an incredible day of discussions.

At the Open Knowledge Foundation, we’ve been thrilled to realise that our discomfort is also the discomfort of so many people. We need to rethink the way we build technology and together develop new practical ways to build software that is useful, simple, long-lasting and focused on solving people’s real problems.

Browse the images below or click on the button to see the full programme:

A huge thank you to the content partners who agreed to join us on this journey!

I Fondled Salvador Dalí's Earrings / Eric Hellman

 Content Warning: AI

My Uncle Henry was a Professor of Chemistry at NYU. He lived, for the most part, in his sister-in-law Barbara's 7-story townhouse on East 67th street in Manhattan. He acted as the caretaker of this mansion when Barbara went off living her socialite life in Paris or wherever. My family would stay in the townhouse whenever we came to New York to visit my favorite uncle.

This is how my parents ended up being at a fancy party attended by Salvador Dalí. It seems that Barbara had commissioned a portrait of herself, and the occasion of the party was the painting's unveiling. I was there too; I was a few months old. The great painter was amused to see a baby at this party and the baby was extremely amused at this strange looking adult. More accurately, I was captivated by his shiny earrings and reached out to play with them as though they were a mobile hanging in my crib. Or so I have been told. So many times.

A surrealist figure resembling Salvador Dalí, dressed in an eccentric outfit with a curled mustache and large, ornate earrings. A baby is playfully tugging on the ornate earrings
Dalí and Eric as hallucinated by DALL-E

My dad was presented to Dalí as a brilliant young engineer, which he was. Dad was born in Gary, Indiana, but moved to Sweden with his family when he was 7 years old. (That's a whole 'nother story!) After graduation from the Royal Institute of Technology in Stockholm, he decided to take a job with Goodyear Aerospace in Akron, Ohio, because that way he didn't have to serve in the Swedish Army and give up his American citizenship. He worked on semiconductor devices before anyone had ever heard of semiconductors.

Maybe brilliant engineers were exotic creatures in that fancy New York City party circuit, because Salvador Dalí buttonholed my dad. He wanted my dad to invent something for him. The conversation went something like this (imagine me sitting in Dalí's lap, not paying attention to the conversation at all):

Dalí: "Tell me, young man, do you invent things?"

Dad: "As a matter of fact, I'm working on what they call a buffered amp..."

Dalí: "Never mind that, I have an idea I want you to work on..."

Dad: "Yes?"

Dalí: "I want you to invent a paint gun..."

Dad: "That doesn't sound too hard..."

Dalí: "... that will paint what I see in my mind."

Dad: "??"

Dalí: "I paint, but the paintings are never what I want."

Dad: "That's not how..."

Dalí: "I want to press a button and have the paint go in the right place."

Dad: "Well maybe someday..."

Dalí: "You start working on it, let me know how it goes."

Eric: "Waaaaaaaaa!"

Apparently, the paint gun was a bit of an obsession with Dalí. He created a technique called "bulletism" that involved using an antique gun (an "arquebus") to shoot vials of paint at a canvas. A couple of months after the fancy party, he appeared on the Ed Sullivan show firing a paint gun at a canvas! 

Sixty-four years later, we sort of know how to build Dalí's mind-reading paint gun. We have technologies that let us see the brain think (functional brain imaging combined with deep learning), and technologies that can make pictures from human thoughts (when expressed as LLM prompts). It's now easy to imagine a device that uses your brain to control an AI image generator (see the image above!). Such a device could take advantage of the brain's plasticity to give Dalís of the future the power to make images from activity that exists only in their brains.

People are arguing about whether AI can make art. There's even a copyright case in which the US Copyright Office is saying, effectively, that you can't copyright what you tell an AI to create.

It seems clear to me, at least, that AI, wielded as a tool, can make art, in the same way that a Stradivarius, wielded by a musician, can make art, or that a camera, wielded by a photographer, can make art, or that a computer program, wielded by a poet, can make art.

Salvador Dalí was just ahead of his time. 

Notes:

  1. While OpenAI's "DALL-E" is supposed to be a combination of "Dalí" and "WALL-E", I've not been able to find any mention of Dalí's interest in brain-computer interfaces!
  2. I couldn't find an image of the painting "Portrait of Bobo Rockefeller" on the web; a study for the painting is in the Dalí Museum in Spain. Dalí had a policy of not allowing his subjects to see their portrait before it was unveiled, and my understanding is that Barbara was never really fond of the painting. It had a prominent place in her living room though.
  3. Researchers have studied the use of brain-scanning techniques to develop brain-computer interfaces for uses such as the development of speech prostheses that convert brain activity into intelligible speech. 
  4. Openwater is combining infrared and acoustic imaging to see brain activity for neurological diagnosis. But they can see the potential for mind reading with the help of deep-learning pattern recognition. Founder Mary Lou Jepsen says “I think the mind-reading scenarios are farther out, but the reason I'm talking about them early is because they do have profound ethical and legal implications.”
Comments. I encourage comments on the Fediverse or on Bluesky. I've turned off commenting here.

Reminder: I'm earning my way into the NYC Marathon by raising money for Amref Health Africa. 

LibraryThing in Your Language—Even British! / LibraryThing (Thingology)

We’ve made some exciting changes and improvements to LibraryThing’s member-driven translations, first developed in 2006.

Try it out: Spanish, German, Dutch, French, Italian or British English! (Change back by clicking the name of the language you’re in at the top right of the screen.)

CataloGUE to your heart’s content!

It’s Working!

This blog post explains the changes, and why we made them. But the best justification is already evident: Members are finding and using LibraryThing in their language more than ever! Some 5% of members are already using our new “English (UK)” option. Another 5% are using LibraryThing in a (non-English) language.

Best of all, new, non-English members are up 50%, and I suspect we are reeling in some new English members too! (It’s hard to tell, because TriviaThing is also reeling in new members.)

Goodbye All Those Domains

The core change is a big one: We’re phasing out our non-English domains, like LibraryThing.fr, LibraryThing.de and tr.LibraryThing.com, in favor of members choosing their preferred language on LibraryThing.com. Nothing is being taken away here—we’re just changing where you go! In fact, we’re adding some features (see below).

We’re getting rid of the non-English domains to improve your experience of the site. First, search engines never fully understood what we were doing, so English-language people were coming to LibraryThing off Google searches, and finding themselves on a site in Danish, or Catalan! (They’d leave.)

More importantly, we’re doing it to reduce our “non-human traffic”—the search engines and AI bots that make up more than 50% of LibraryThing’s traffic. The AI bots have been particularly wild, with rogue bots hitting us night and day. Unfortunately, having some 50 separate domains meant 50 targets. Reducing this traffic will help us serve you—the “human” traffic—faster and better.

Feature Changes

Here’s a rundown of the changes:

  • Language Switcher. Every page now shows your language. Click it to change your language, or to help us translate non-English languages.
  • British English. Do the American “catalog” and “color” annoy you? We’ve added a new language, British English, called “English (UK)” in our language menu. Apparently you want it, because already 5% of members are using it!
  • Domain Forwarding. If you go to an old domain, like LibraryThing.fr, you’ll be forwarded to LibraryThing.com and asked if you want French or English.
  • Home Pages for Every Language. While you can change language on any page, each language also has its own, dedicated home page, like LibraryThing.com/t/fr (French), LibraryThing.com/t/de (German), or LibraryThing.com/t/gb (UK English). You can find them by changing languages before you sign in. You’ll also get them when you sign out. If you want to avoid changing languages again, bookmark your page.
  • Language Detection. When you go to a website like LibraryThing, your browser actually tells us your preferred language. Some websites just follow that, but we know a lot of our members straddle languages. So if, when you first come to LibraryThing, we detect a disconnect between what your browser wants and what you’re using, we ask you if you want to switch.
  • Better Translation Pages. Our Translations page is better in various small ways. If you are using a non-English language, it has new options to see and edit only machine-translated text.
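For the curious, the “your browser actually tells us your preferred language” signal mentioned under Language Detection is the Accept-Language request header, which lists language tags with optional quality weights. A minimal, hypothetical Ruby sketch of parsing it (not LibraryThing’s actual code) could look like this:

```ruby
# Parse an Accept-Language header into language tags ordered by their
# "q" (quality) weight, highest preference first. A tag without an
# explicit q value defaults to q=1, per the HTTP spec.
def preferred_languages(header)
  header.split(",").map do |part|
    tag, q = part.strip.split(";q=")
    [tag, (q || "1").to_f]
  end.sort_by { |_, q| -q }.map(&:first)
end
```

A site could then compare `preferred_languages(header).first` against the language a visitor is currently using and offer a switch when the two disagree.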

Member Translated, with Help

Since 2006, translation has been in the hands of members. This hasn’t changed. But we’ve gone ahead and had a translation program have a go at untranslated text. Members can, of course, change these translations, and we’ve given them special tools to do so.

The change is minimal for most of LibraryThing’s popular languages:

  • Spanish — 99.2% translated, 16.3% by machine
  • German — 99.5% translated, 1.5% by machine
  • Dutch — 99.3% translated, 2.3% by machine
  • French — 99.3% translated, 4.2% by machine
  • Italian — 99.6% translated, 0.4% by machine

For less-used languages, the percent is much higher:

  • Maori — 92.9% translated, 71.1% by machine
  • Korean — 92.5% translated, 88.9% by machine
  • Armenian — 92.1% translated, 90.9% by machine
  • Tagalog — 91.4% translated, 89.5% by machine
  • Welsh — 91.1% translated, 75.3% by machine

While human translation is best, these versions were seas of untranslated, yellow text. It’s a Catch-22—you can’t get new Armenian members if the site isn’t translated, and you can’t get it translated without Armenian members.(1)

Problems and Improvements

We are working on a few improvements:

  • Multiple Accounts. Some members appreciated being able to have one account on one language site, and another on another. I think it’s clear we need to add a “Switch account” feature, like Facebook and some other sites have.
  • AI is Meh. We are aware that machine translation isn’t ideal. If we have time, we will try to do it again, feeding in appropriate human-translated text, so we can be consistent on terms like “tags.” For now, however, if the translation annoys you—maybe that’s the prod we need to give you?
  • Cookies? The way we implemented languages (with cookies) has various implications—some good, some bad. You can read more about this here.
  • Account-level Language Setting. If you want to set your account language, go to Account Settings. As many members have a dissonance between their account language and the language they actually use, you won’t be switched when you log in, but you will be asked if you want to switch.

For more on this change, and a lot of great suggestions, read Talk > New Features > Big language changes.


1. There’s actually a wrinkle here in that it’s not about the total number of translated strings, but how often they are used. A site with only 50% of its strings translated could still be quite useful—if they were the RIGHT strings. Unfortunately, many languages had untranslated home pages. Nobody is going to join a site like that!

The Myth of Black Box AI: Why Explainable, Configurable AI Is the Effective Alternative / Lucidworks

Discover how composable AI is addressing the issues of black box AI. Learn why transparency and adaptability are key to future-proofing your AI strategy.

The post The Myth of Black Box AI: Why Explainable, Configurable AI Is the Effective Alternative first appeared on Lucidworks.

Getting rspec/capybara browser console output for failed tests / Jonathan Rochkind

I am writing some code that uses capybara to run browser smoke tests of some JavaScript code. Frustratingly, it was failing when run in CI on GitHub Actions, in ways that I could not reproduce locally. (Of course it ended up being a configuration problem on CI, which you’d expect in this case.) That made me really want to see browser console output — especially errors — for failed tests, so I could get a hint of what was going wrong beyond “Well, the JS code didn’t load”.

I have some memory of a setting in some past capybara setup that made browser console error output automatically fail a test and print the output. But I can’t find any evidence of this on the internet, and I’m pretty sure there is no way to do it with my current setup of selenium-webdriver and headless Chrome for capybara tests.

So I worked out this hacky way to add any browser console output to the failure message on failing tests only. It requires using some “private” rspec API, but this is all I could figure out. I would be curious if anyone has a better way to accomplish this goal.

Note that my goal is a bit different than “make a test fail if there’s error output in browser console”, although I’m potentially interested in that too, here I wanted: for a test that’s already failing, get the browser console output, if any, to show up in failure message.

# hacky way to inject browser logs into failure message for failed examples
after(:each) do |example|
  if example.exception
    browser_logs = page.driver.browser.logs.get(:browser).collect { |log| "#{log.level}: #{log.message}" }

    if browser_logs.present?
      # pretty hacky internal way to get browser logs into
      # the existing long-form failure message, when that is
      # stored in the exception associated with the assertion failure
      new_exception = example.exception.class.new("#{example.exception.message}\n\nBrowser console:\n\n#{browser_logs.join("\n")}\n")
      new_exception.set_backtrace(example.exception.backtrace)

      # display_exception= is private rspec API
      example.display_exception = new_exception
    end
  end
end
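For the separate goal of making a test fail outright on console errors, one approach (just a sketch, untested against a real driver) is to isolate the filtering logic in a plain helper that an `after(:each)` hook could call and raise on. `LogEntry` below is a hypothetical stand-in for the log objects selenium-webdriver returns, which respond to `level` and `message`:

```ruby
# Hypothetical stand-in for selenium-webdriver's log entry objects.
LogEntry = Struct.new(:level, :message)

# Returns nil when there are no error-level entries, otherwise a
# message an after(:each) hook could raise to fail the example.
def console_error_report(logs)
  errors = logs.select { |log| log.level == "SEVERE" }
  return nil if errors.empty?
  "Unexpected browser console errors:\n#{errors.map(&:message).join("\n")}"
end
```

A hook could then do `report = console_error_report(page.driver.browser.logs.get(:browser))` and `raise report if report` for examples that haven’t already failed.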

I think by default, with selenium headless Chrome, you should get browser console output that includes only error/warn log levels, not info. If you aren’t getting what you want, or want more, you need to register a custom Capybara driver with a custom loggingPrefs config, which may look something like this:

Capybara.javascript_driver = :my_headless_chrome

Capybara.register_driver :my_headless_chrome do |app|
  Capybara::Selenium::Driver.load_selenium
  browser_options = ::Selenium::WebDriver::Chrome::Options.new.tap do |opts|
    opts.args << '--headless'
    opts.args << '--disable-gpu'
    opts.args << '--no-sandbox'
    opts.args << '--window-size=1280,1696'

    opts.add_option('goog:loggingPrefs', browser: 'ALL')
  end
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: browser_options)
end

Editorial / Code4Lib Journal

Welcome to a new issue of Code4Lib Journal! We hope you like the new articles. We are happy with Issue 59, although putting it together was a challenge for the Editorial Board. This was in no small part because Issue 58 was so tumultuous, including a crisis over our unintentional publication of personally identifiable information, a subsequent internal review by the Editorial Board, an Extra Editorial, and much self-reflection. All of this (quite rightly) slowed down our work. Several Editorial Board members resigned, which left us with a much smaller team to handle a larger workload. As a volunteer-run organization without a revenue stream, Code4Lib Journal is a labor of love that we all complete off the side of our overfilled desks. It was demoralizing to feel that we had lost the support of many in our community. A lot of us were tempted to quit rather than try to pick up and carry on. So, although we have published Issue 59 later than planned, and with a different coordinating editor, we made it. This issue is testament to the perseverance of my colleagues on the Editorial Board, and to the wonderful articles contributed by our community.

Response to PREMIS Events Through an Event-Sourced Lens / Code4Lib Journal

The PREMIS Editorial Committee (EC) read Ross Spencer’s recent article “PREMIS Events Through an Event-sourced Lens” with interest. The article was a useful primer to the idea of event sourcing and in particular was an interesting introduction to a conversation about whether and how such a model could be applied to Digital Preservation systems. However, the article makes a number of specific assertions and suggestions about PREMIS, with which we on the PREMIS EC disagree. We believe these are founded on an incorrect or incomplete understanding of what PREMIS actually is, and as significantly, what it is not. The aim of this article is to address those specific points.

Customizing Open-Source Digital Collections: What We Need, What We Want, and What We Can Afford / Code4Lib Journal

After 15 years of providing access to our digital collections through CONTENTdm, the University of Louisville Libraries changed direction, and migrated to Hyku, a self-hosted open-source digital repository. This article details the complexities of customizing an open-source repository, offering lessons on balancing sustainability via standardization with the costs of developing new code to accommodate desired features. The authors explore factors in deciding to create a Hyku instance and what we learned in the implementation process. Emphasizing the customizations applied, the article illustrates our unexpected detours and necessary considerations to get to “done.” This narrative serves as a resource for institutions considering similar transitions.

Cost per Use in Power BI using Alma Analytics and a Dash of Python / Code4Lib Journal

A trio of personnel at University of Oregon Libraries explored options for automating a pathway to ingest, store, and visualize cost per use data for continuing resources. This paper presents a pipeline for using Alma, SUSHI, COUNTER5, Python, and Power BI to create a tool for data-driven decision making. By establishing this pipeline, we shift the time investment from manually harvesting usage statistics to interpreting the data and sharing it with stakeholders. The resulting visualizations and collected data will assist in making informed, collaborative decisions.

Launching an Intranet in LibGuides CMS at the Georgia Southern University Libraries / Code4Lib Journal

During the 2021-22 academic year, the Georgia Southern University Libraries launched an intranet within the LibGuides CMS (LibGuides) platform. While LibGuides had been in use at Georgia Southern for more than 10 years, it was used most heavily by the reference librarians. Library staff in other roles tended not to have accounts, nor to have used LibGuides. Meanwhile, the Libraries had a need for a structured intranet, and the larger university did not provide enterprise-level software intended for intranet use. This paper describes launching an intranet: determining what software features are necessary and reworking software and user permissions to provide them, managing change by restructuring permissions within an established and heavily used software platform, and training to introduce library employees to the intranet. Now, more than a year later, the intranet is used within the libraries for important functions: training, sharing information about resources available to employees, coordinating events and programming, and providing structure to a document repository in Google Shared Drive. Employees across the libraries use the intranet to more efficiently complete necessary work. This article steps through desired features and software settings in LibGuides to support use as an intranet.

The Dangers of Building Your Own Python Applications: False-Positives, Unknown Publishers, and Code Licensing / Code4Lib Journal

Making Python applications is hard, but not always in the way you expect. In an effort to simplify our archival workflows, I set out to discover how to make standalone desktop applications for our archivists and processors to make frequently used workflows easier and more intuitive. Coming from an archivist’s background with some Python knowledge, I learned how to code things like Graphical User Interfaces (GUIs), to create executable (binary) files, and to generate software installers for Windows. Navigating anti-virus software flagging your files as malware, Microsoft Windows throwing warning messages about downloading software from unknown publishers (rightly so), and disentangling licensing changes to a previously freely-available Python library all posed unexpected hurdles that I’m still grappling with. In this article, I will share my journey of creating, distributing, and dealing with the aftereffects of making Python-based applications for our users and provide advice on what to look out for if you’re looking to do something similar.

Converting the Bliss Bibliographic Classification to SKOS RDF using Python RDFLib / Code4Lib Journal

This article discusses the project undertaken by the library of Queens’ College, Cambridge, to migrate its classification system to RDF applying the SKOS data model using Python. Queens’ uses the Bliss Bibliographic Classification alongside 18 other UK libraries, most of which are small libraries of the colleges at the Universities of Oxford and Cambridge. Though a flexible and universal faceted classification system, Bliss faces challenges due to its unfinished state, leading to the evolution in many Bliss libraries of divergent, in-house adaptations of the system to fill in its gaps. For most of the official, published parts of Bliss, a uniquely formatted source code used to generate a typeset version is available online. This project focused on converting this source code into a SKOS RDF linked-data format using Python: first by parsing the source code, then using RDFLib to write the concepts, notation, relationships, and notes in RDF. This article suggests that the RDF version has the potential to prevent further divergence and unify the various Bliss adaptations and reflects on the limitations of SKOS when applied to complex, faceted systems.

Simplifying Subject Indexing: A Python-Powered Approach in KBR, the National Library of Belgium / Code4Lib Journal

This paper details the National Library of Belgium’s (KBR) exploration of automating the subject indexing process for their extensive collection using Python scripts. The initial exploration involved creating a reference dataset and automating the classification process using MARCXML files. The focus is on demonstrating the practicality, adaptability, and user-friendliness of the Python-based solution. The authors introduce their unique approach, emphasizing the semantically significant words in subject determination. The paper outlines the Python workflow, from creating the reference dataset to generating enriched bibliographic records. Criteria for an optimal workflow, including ease of creation and maintenance of the dataset, transparency, and correctness of suggestions, are discussed. The paper highlights the promising results of the Python-powered approach, showcasing two specific scripts that create a reference dataset and automate subject indexing. The flexibility and user-friendliness of the Python solution are emphasized, making it a compelling choice for libraries seeking efficient and maintainable solutions for subject indexing projects.

It Was Ten Years Ago Today / David Rosenthal

Ten years ago today I posted Economies of Scale in Peer-to-Peer Networks. My fundamental insight was:
  • The income to a participant in a P2P network of this kind should be linear in their contribution of resources to the network.
  • The costs a participant incurs by contributing resources to the network will be less than linear in their resource contribution, because of the economies of scale.
  • Thus the proportional profit margin a participant obtains will increase with increasing resource contribution.
  • Thus the effects described in Brian Arthur's Increasing Returns and Path Dependence in the Economy will apply, and the network will be dominated by a few, perhaps just one, large participant.
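The shape of this argument can be checked with a toy model: take income as linear in contributed resources and cost as growing like resources^0.8 (an arbitrary sublinear exponent standing in for economies of scale, not a measured value); the proportional profit margin then rises monotonically with scale:

```ruby
# Toy model: income linear in contributed resources, cost sublinear.
# Returns the proportional profit margin (profit / income).
def margin(resources, income_rate: 1.0, cost_coeff: 1.0, cost_exponent: 0.8)
  income = income_rate * resources
  cost = cost_coeff * resources**cost_exponent
  (income - cost) / income
end
```

With these illustrative parameters, margin(10) < margin(100) < margin(1000): the bigger the participant, the fatter the margin, which is the increasing-returns dynamic Arthur describes.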
In the name of blatant self-promotion, below the fold I look at how this insight has held up since.

Experience in the decade since has shown that this insight was correct.

The insight applies to Proof Of Work networks; for the entire decade Bitcoin mining has always been dominated by five or fewer mining pools. As I write this AntPool, ViaBTC and F2Pool have had more than 50% of the hashrate over the last week. Even within those pools, the vast expense of mining rigs, the data centers to put them in, and the power to feed them make economies of scale essential.


The insight applies to Proof Of Stake networks at two levels:
  • Block production: over the last month almost half of all blocks have been produced by beaverbuild.
  • Staking: Yueqi Yang noted that:
    Coinbase Global Inc. is already the second-largest validator ... controlling about 14% of staked Ether. The top provider, Lido, controls 31.7% of the staked tokens,
    That is 45.7% of the total staked controlled by the top two.
In addition all these networks lack software diversity. For example, as I write the top two Ethereum consensus clients have nearly 70% market share, and the top two execution clients have 82% market share.

Economies of scale and network effects mean that liquidity in cryptocurrencies is also highly concentrated. In Decentralized Systems Aren't I wrote:
There have been many attempts to create alternatives to Bitcoin, but of the current total "market cap" of around $2.5T, Bitcoin and Ethereum represent $1.75T, or 70%. The top 10 "decentralized" coins represent $1.92T, or 77%, so you can see that the coin market is dominated by just two coins. Adding in the top 5 coins that don't even claim to be decentralized gets you to 87% of the total "market cap".

The fact that the coins ranked 3, 6 and 7 by "market cap" don't even claim to be decentralized shows that decentralization is irrelevant to cryptocurrency users. Numbers 3 and 7 are stablecoins with a combined "market cap" of $134B. The largest stablecoin that claims to be decentralized is DAI, ranked at 24 with a "market cap" of $5B.
Protocol              Revenue ($M)  Market share (%)
Lido                  304           55.2
Uniswap V3            55            10.0
Maker DAO             48            8.7
AAVE V3               24            4.4
Top 4                               78.2
Venus                 18            3.3
GMX                   14            2.5
Rari Fuse             14            2.5
Rocket Pool           14            2.5
Pancake Swap AMM V3   13            2.4
Compound V2           13            2.4
Morpho Aave V2        10            1.8
Goldfinch             9             1.6
Aura Finance          8             1.5
Yearn Finance         7             1.3
Stargate              5             0.9
Total                 551
Similar effects apply to "Decentralized Finance". In DeFi Is Becoming Less Competitive a Year After FTX’s Collapse Battered Crypto Muyao Shen wrote:
Based on the [Herfindahl-Hirschman Index], the most competition exists between decentralized finance exchanges, with the top four venues holding about 54% of total market share. Other categories including decentralized derivatives exchanges, DeFi lenders, and liquid staking, are much less competitive. For example, the top four liquid staking projects hold about 90% of total market share in that category,
Based on 180 days of revenue data for DeFi projects from Shen's article, I compiled this table, showing that the top project, Lido, had 55% of the revenue, the top two had two-thirds, and the top four projects had 78%.
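The concentration measures used here are simple to compute. As an illustrative sketch, top-N share and the Herfindahl-Hirschman Index in Ruby, with the revenue column ($M) taken from the table above (shares recomputed from the raw figures will differ slightly from the table's rounded column):

```ruby
# Percentage market shares from a list of revenues.
def shares(revenues)
  total = revenues.sum.to_f
  revenues.map { |r| 100.0 * r / total }
end

# Combined share of the n largest participants.
def top_n_share(revenues, n)
  shares(revenues).sort.reverse.first(n).sum
end

# Herfindahl-Hirschman Index: sum of squared percentage shares
# (10,000 is a monopoly; lower is more competitive).
def hhi(revenues)
  shares(revenues).sum { |s| s * s }
end

# Revenue column ($M) from the DeFi table above.
defi_revenues = [304, 55, 48, 24, 18, 14, 14, 14, 13, 13, 10, 9, 8, 7, 5]
```

On these figures the top project has a bit over half the revenue and the top four a bit over three-quarters, consistent with the rounded percentages quoted above.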

Because these systems, if successful, cannot be decentralized, the cryptosphere doesn't care about the fact that they aren't. In Deconstructing ‘Decentralization’: Exploring the Core Claim of Crypto Systems Prof. Angela Walch explains what the label "decentralized" is actually used for:
the common meaning of ‘decentralized’ as applied to blockchain systems functions as a veil that covers over and prevents many from seeing the actions of key actors within the system. Hence, Hinman’s (and others’) inability to see the small groups of people who wield concentrated power in operating the blockchain protocol. In essence, if it’s decentralized, well, no particular people are doing things of consequence.

Going further, if one believes that no particular people are doing things of consequence, and power is diffuse, then there is effectively no human agency within the system to hold accountable for anything.
In other words, it is a means for the system's insiders to evade responsibility for their actions.

Teaching: one year in / Lorcan Dempsey

Teaching: one year in

I am one year into my two-year term as a Distinguished Practitioner in Residence at the Information School at the University of Washington. I have been fascinated to see academic life from the inside, as it were, even though I am a visitor rather than fully domiciled. A bonus has been how much we have enjoyed Seattle, the city, and its amazing watery, mountainy and islandy hinterland.

I have been teaching two courses, one that I created myself from scratch, and one almost oven-ready that I adapted. The new course was on library collaboration and partnerships, a topic that has always seemed to me to be underexamined. I am also about to begin a small research project looking at some of the characteristics of collaboration; in that regard, I am lucky that the Orbis Cascade Alliance is on my doorstep here. The other was on management, a course that is mandatory for all MLIS students, and which is viewed with ambivalence by some.

At this mid-way point, I thought I would reflect a little on my newfound teaching experience, understanding that what I say is not necessarily unique or surprising.

Teaching and baking

Teaching for the first time presents a steep learning curve. Starting out by developing a new course was in hindsight somewhat optimistic. Once the baking analogy occurred to me, I could not forget it:

Teaching for the first time while developing a new course is like being in the kitchen on your own, with no recipe, baking a loaf of bread for the very first time. Except that you have an audience who observe your every move, and you cannot throw it away if it doesn't work out.

But I was not quite on my own. Several people helped, and I am especially grateful to my colleague and Associate Teaching Professor, Chance Hunt, who generously and empathetically stepped in to calm my churning and to help clear a path, and to Sue Morgan, Teaching and Learning Specialist in Learning Technologies at the iSchool, who patiently helped me climb the Canvas learning curve. They each saved me from some foolishness; what remained was mostly my own.

A major takeaway was that I needed to talk less!

This is a short piece, so here are some brief takeaways:

  • I succumbed early on to the common newbie hubris of imagining that I was there to communicate my knowledge and experience. However, what I was really there for was to facilitate learning. My first course outing needed more interaction and engagement with issues, and rather less of my powerpoint, thoughtful and tasteful though it was. A major takeaway was that I needed to talk less!
  • Understanding that good teaching is both learnable and a craft, I had spent some time in preparation reading around the topic. However, without the goalposts of experience, I was overwhelmed by the pedagogical firehose. To reduce confusing superabundance, I returned to The new college classroom by Cathy Davidson as a pragmatic guide, its main recommendation being that I had met Cathy at the interesting Amical Conference shortly beforehand.
  • That said, one can only do so much in one course, and one of the things I most enjoyed was digging into collaboration and thinking about how to sketch out the collaborative space. I also really liked how it made me reconnect with different types of libraries. I enjoyed, for example, exploring public libraries as social infrastructure and the importance of social capital based on Klinenberg's Palaces for the People. I was struck again by the differential attention in the literature to different kinds of libraries, noticing how community college libraries, for example, did not receive as much attention as other academic libraries.
  • I was curious to see how my assigned readings were received. I do not wholly trust my judgements here, as they are impressionistic, supported by limited end-of-course-feedback. In general, the more abstract or theoretical pieces were less popular than ones which communicated experiences, issues or problems. Not too surprising, perhaps, but I did also wonder about the overall balance between theory and practice. I noted how, especially in the collaboration course, practitioner perspectives in the literature greatly outweighed LIS academic ones.
  • I am very grateful to the guest speakers who gave generously of their time, expertise and opinions. They brought energy into the class. I am thinking of invites now for this year. A couple of things really struck me in terms of learning. The first was to communicate that libraries are social organizations, with all that that means in terms of relationships, decision-making, persuasion and influence. I emphasized this throughout. The second is that collaboration and partnership is central in many ways to what libraries do from an operational point of view, but is also so important for creating the networks and communities of practice that do so much to foster learning and innovation. Guest speakers did much to communicate the experiential reality of these two points, alongside the description of libraries, services and initiatives.
I visited Seward Park Library, which features in Klinenberg's Palaces for the People, when in NYC over the summer

Students

The main reason I took this position was that I knew it would be good to be challenged and engaged by different perspectives. I was interested in what animated and interested those beginning a career in libraries, archives, or related organizations. This aspect of the work has been very rewarding, and I have learned so much. I am also encouraged by the energy and sense of purpose I encountered among so many students, and I know they will make an impact. Here are some thoughts ...

  • In both of my classes the majority of student career interests broke down approximately evenly between academic and public libraries. I was also interested in the strong archival interest, partly overlapping with the academic interest (in archives and/or special collections) and partly motivated by community archives, or specialist archives of various types. It did strike me in discussions that archives work often appealed where it made direct connections with particular communities of interest, distinctive materials, or reparative recognition and remembrance.
  • There was a strong focus among students on social justice and on the agency of libraries and librarians in their communities. This is a clear emphasis of the UW iSchool and course options reflect this, as does the general ethos. This was refreshing to see, acknowledging also that political and advocacy skills will be very important in the library environment students are entering.
  • In the management class (I taught in parallel and in close coordination with Chance) we asked students at the beginning of the year how many were interested in being managers. I was struck that the interest was not stronger, although a repeat question at the end of the course did suggest that some opinions had shifted. Some students went into the class thinking of management solely in terms of staff supervision. Following Linda Hill, I tried to consistently emphasize that management involves managing oneself and one's network of relationships, in addition to one's team. And of course, organizational management, including strategy, marketing, organizational culture, and so on, is also central and may be new to some. I observed how several students overcame a prior antipathy to the idea of marketing and 'brand' to realize that a broad approach to positioning the library favorably within its community was actually very important.
  • Teaching presents the classic curse of knowledge situation. In summary it is very difficult to unknow something that you already know, and it can be difficult to imagine what it is like not to know it. This creates a potential communication gap. To close this gap we have to step outside our own usual standpoint. For example, I was initially guided by my own interests which are often organizational and strategic, but soon recognised the strong class interest in operational issues. Talking about collaboration between libraries also relies somewhat on knowledge of the object of collaboration (for example a shared ILS or shared content negotiation and licensing) and this opens out into other issues, open access or concerns around ebook licensing, for example. Similarly, in a management class, one can expect that students may have a variety of organizational and supervisory experiences, but, naturally enough, less acquaintance with some of the ways in which libraries are organized or funded. In the management class, for example, I tried to emphasize that the library is not (usually) a stand-alone entity - typically it is accountable to a city, university, or some other parent organization. Even where it reports to a board, the local government agency may appoint some board members. I learned that I need to work hard to try to traverse the gap, thinking about what is covered, using more analogies or examples, for example, and ensuring that participants are comfortable in discussion and questions. Guest speakers are very important here, bringing varied and rich experiences into the class.
  • I tend to resist generational (and other) classifications, but I was struck by how direct and candid student feedback could be. I was grateful for the often thoughtful and constructive suggestions for improvement. This is certainly true of the management course. It is especially true of the collaboration one, given it was my first outing and it was new material. I am currently looking at some refocusing based on class observations. I might even lose some slides!

Systems and services

I have written much about library systems and services. My perspective tends to be informed by my own usage, by conversations with librarians, and also by the fact that I have worked for organizations that have built systems and services that libraries rely on, both in production and in R&D mode. I was fascinated to be in a somewhat different position here, as a faculty member and teacher needing to use library resources in the construction of courses and in my other work.

I have had a more tangential relationship to instructional technology, but was looking forward to exploring Canvas and some other tools. Given the firehose note above, I did stick to the core and to a small number of tools.

Here are some slightly random observations about my experience.

  • It was a great pleasure to have the resources of a large library available again. Browsing in the stacks may not be quite the adventure it once was, but being able to prospect large reservoirs of print and electronic resources is a joy. While a significant proportion of articles may now be available open access, having access to a large, licensed collection makes a big difference. What was more novel for me was being able to access a large number of ebooks at the chapter level, both for my own use and to add to course readings. The 'access gap' between those able to use a well-resourced library and those who do not have such access is still very wide.
  • As library consortia were a central emphasis in the collaboration course, I was very interested to see the benefits of borrowing through the shared Orbis Cascade Alliance system in action: I appreciated receiving items from other Alliance members. The University has just joined the BTAA, so the library will be participating in BTAA library initiatives in due course. Although I may have finished my term by then, I will be interested to see how this works out alongside the Orbis Cascade Alliance, especially given my work with the BTAA in a previous life.
  • I enjoyed the opportunity to interact with library colleagues. I also admired how library liaison Alyssa Deutschler solicitously worked with iSchool colleagues, readily provided expert advice, participated in events and instruction, and generally modeled the value of the relational library.
  • I have written much about the friction in the library 'discovery to delivery' (D2D) chain over the years. It is a challenge, bringing a heterogeneous set of resources into a (more or less) unified environment of use, and creating required connections across discovery, authorization, and fulfilment options. A lot of plumbing is involved, and unfortunately some of this still shows. When things work well, it is very impressive (and the deployment of LibKey helps here); however, the experience is occasionally not very well-seamed, let alone seamless. Given the state of the art of the involved technologies, the small number of available products to get this work done, and the general reliance on the same set of vendors, this experience is much the same in most libraries. It is not something specific to the University of Washington or to the Orbis Cascade Alliance. While I appreciate the complexities, and have experienced the heavy lift on the system/data provider side also, it is disappointing that things are not better by now.
  • This may be one of the places where AI might be helpful in terms of better connecting that D2D workflow, and I look forward to trying out the Primo Research Assistant in my current environment. In general I have been interested to observe the careful way in which AI is being introduced into discovery and other products offered to libraries.
  • This experience of the library D2D apparatus matters for a variety of reasons. One in particular is on my mind, as I have been thinking about perceptions of libraries in the academy in the context of LIS Forward (an initiative of iSchools looking at the future of Library and Information Science education and research within iSchools and the academy). While the D2D setup is actually quite impressive when you know a little about the moving parts (proxy, knowledge base, LibKey, etc), it does not necessarily seem that way to the non-initiate. It does not showcase the library as a technological leader or innovator; quite the reverse in fact, as the library discovery experience feels like it is from an earlier period. Faculty or students cannot be expected to know about the technologies or products that are available to the library to build this experience.
  • I was actually pleasantly surprised by Canvas. I thought it did a good job of supporting some complex workflows in an integrated way. While it may constrain the more adventurous, I appreciated the integrated approach and the continuity across courses. Of course, it can behave inconsistently (sometimes save is automatic, sometimes not). I appreciated that it got the job done and that once you were sufficiently high up the learning curve, you could put your energies elsewhere. As I noted above, I was grateful here to receive a lot of help from the Learning Technologies unit.
  • I really like Papers from ReadCube, a Digital Science company. I have used Mendeley and Zotero in the past, but my incentives were not strong enough to climb very far along the learning curve. In my current role, the incentives are stronger, but I have actually also found Papers more straightforward to use. It also works very nicely with the library systems infrastructure mentioned above, and, when all the connections work well, it is like magic to move from publisher page to well formatted citation and stored PDF smoothly. One frustration was the treatment of books. It does not automagically pick up a citation from Amazon or WorldCat, for example, which seems like a miss all around. There was recently an AI upgrade which added some capabilities, including asking questions of a PDF. (This is a small example of how we will interact in more ways with documents in the future.)
  • I found the annotation tool Hypothesis very useful, and while not universally loved by students, it did add a dimension to reading and discussion, especially if used in moderately sized groups. Again, it is nice the way it manages a workflow smoothly.
  • I found developing my first course quite stressful as I did not know what I did not know. I made the mistake of trying to build the initial outline in Canvas. This made it difficult to see the course as a whole, and difficult to change the staging of topics, speakers and exercises. Probably the best advice I received during the year was when Chance suggested I shift this planning to a whiteboard. In fact I ended up using stickies stuck to the whiteboard at work, and duplicated on the bedroom wall in our small rental. It was simple ... and liberating.
Sophisticated and streamlined course design process.

Education and career preparation

My experience so far has caused me to reflect quite a bit on library education.

The MLS has always been a challenge, given the variety of skills and specialties at play in libraries. This has become even more so as libraries continue to evolve from being transactional and collections-centered to being relational and community-centered. This means that they manifest interesting educational and research issues, from the technical to the managerial to the social and political.

In one program, how do you balance coverage of broad technical skills and appreciation; nurturing community, whether in busy urban settings or around student success and retention; the management of complex social and political organizations; and the (inappropriately named) soft skills which are so central to so many aspects of work (teamwork, advocacy, empathy, self care, ...)?

It also prompts reflection on the relationship between research and practice, and on how the library community generates ideas and innovation. There are plenty of topics to return to.

It is a critical time for libraries, and so a critical time for library education and research. I am looking forward to year two!

Acknowledgements: Thanks to Alyssa Deutschler, Chance Hunt, Sue Morgan, Denise Pan, and Lauren Pressley for their helpful review of an earlier draft. I am especially grateful to Gabrielle Garcia (UW MLIS '24) for a thoughtful reading and helpful suggestions. While their feedback improved the final piece, all opinions are my own and they do not necessarily agree with all I say!

Pictures: I took all the pictures, and also made a lot of soda bread during the pandemic.

Other entries mentioned:

So-called soft skills are hard
So-called soft skills are important across a range of library activities. Existing trends will further amplify this importance. Describing these skills as soft may be misleading, or even damaging. They should be recognized as learnable and teachable, and should be explicitly supported and rewarded.
Libraries and the curse of knowledge
It is important to know what you know, so that you can avoid the curse of knowledge and communicate effectively.
Operationalizing a collective collection
While collective collections have been much discussed, less attention has been paid to how to operationalize them in consortial settings. This post introduces work done with the BTAA to explore this challenge.

Author Interview: Danielle Trussoni / LibraryThing (Thingology)

Danielle Trussoni

LibraryThing is pleased to sit down this month with bestselling author Danielle Trussoni, who made her debut in 2006 with Falling Through the Earth, a memoir chronicling her relationship with her father that was chosen as one of the Ten Best Books of the Year by The New York Times Book Review. Trussoni’s first novel, Angelology, was published four years later, going on to become a New York Times and international bestseller. It was translated into over thirty languages, and was followed in 2013 by a sequel, Angelopolis, which was also a bestseller. Trussoni has also published a second memoir, The Fortress: A Love Story (2016), and a stand-alone novel, The Ancestor (2020), and writes a monthly horror column for the New York Times Book Review. The Puzzle Master, a thriller involving a brilliant puzzle maker and an ancient mystery, was published in 2023, and a sequel, The Puzzle Box, is due out shortly from Random House. Trussoni sat down with Abigail to answer some questions about this new book.

The Puzzle Box continues the story of puzzle maker Mike Brink, a savant who came to his abilities through a traumatic brain injury. How did the idea for this character and his adventures first come to you? Did you always know you wanted to write more about Mike, or did you find that you had more to tell, after finishing The Puzzle Master?

The idea for this character didn’t arrive in a lightning flash. Mike Brink developed through slowly working backward from the puzzle that I wanted to be at the center of this novel. I had developed a puzzle drawn by the character of Jesse Price, a woman serving a thirty-year prison sentence for killing her boyfriend. She hasn’t spoken to anyone for five years but creates a cipher. Mike Brink arrives to solve it. At first, Mike was just a regular puzzle solver. And then I began to research real people with extraordinary abilities and stumbled upon Savant Syndrome. He seemed like the perfect vehicle for solving complex and fun mysteries.

I always knew that I wanted to write more about Mike Brink. I feel that this character has an almost endless supply of fascinating angles to write about. I could see writing about him for a long time!

Your hero has Sudden Acquired Savant Syndrome. What does this mean, and what significance does it have, to the story you wish to tell?

Savant Syndrome is an actual disorder that has occurred only a handful of times (there are between 50 and 75 documented cases). It occurs when there is damage to the brain, and a kind of hyper-plasticity develops, allowing the person to acquire startling mental abilities. Some people become incredibly good at playing music, for example. Other people develop an ability with languages. But Mike Brink develops an ability to see patterns, solve puzzles, and make order out of chaos. Once I began to read about this skill—it’s really a kind of superpower!—I knew that this ability would be perfect for a hero of a mystery novel.

The Puzzle Box involves the Japanese royal family, a puzzle created by Emperor Meiji, and a notable samurai family. What kind of research did you need to do to tell this story, and what were some of the most interesting things you learned, in the process?

First of all, I lived in Japan for over two years. That experience was in the back of my mind as I developed the characters and the story of this book. That said, as I wrote The Puzzle Box, I found I wanted to see the places that appear in the novel: the Imperial Palace in Tokyo, the puzzle box museum in Hakone, and the many locations in Kyoto. So, I went to Japan for two weeks in 2023 to do on-the-ground research at these locations.

The historical elements of the book, especially the storyline about the Emperor Meiji and the Empresses of Japan, were a different story. I read a lot about the Imperial family, their origins, the discussions and controversies surrounding succession. A big part of my process is to read as much as I can find about something in my work and then carve out the most striking details.

How do you come up with the central puzzles in your books? Are they wholly original creations, or are they taken from or inspired by known puzzles?

The ideas for the puzzles are completely original, and necessarily have to do with the story I’m trying to tell. Each of the puzzles in The Puzzle Master and The Puzzle Box acts as a gateway to information that helps move the story forward. So I start with story. Then, I speak with the REAL puzzle geniuses, who help me imagine what kind of puzzles are possible. I work with two constructors, Brendan Emmett Quigley and Wei-Hwa Huang, who have worked for The New York Times Games Page (Wei-Hwa is a four-time World Puzzle Champion). They are incredibly smart and really understand what I’m trying to accomplish with my storytelling. Because the puzzles are not just gimmicks or diversions: they are essential to the plot of the novel.

What is different about writing a sequel, when compared to the first book in a series? Were there particular writing or storytelling challenges, or aspects that you enjoyed?

The Puzzle Box is designed as a stand-alone novel and can be read without reading The Puzzle Master. Still, Mike Brink is the hero of both novels, and there are other characters and storylines that show up in both books. I loved being able to go back to characters that I’d already spent time with, and found that because they were familiar, I could go deeper into their minds and feelings. The complications of Mike Brink’s superpower are a challenge for him. How he lives with his gift—and how he can continue to solve puzzles and find happiness—is the primary question of this series.

What can we expect next from you? Do you think you’ll write more about Mike? Are there any other writing projects you are working on?

I hope to write more books in this series, and of course Mike would be returning. I always have three or four novels on the back burner, and sometimes it’s hard for me to know which one will be the next to be written. Sometimes I need to wait and see.

Tell us about your library. What’s on your own shelves?

I am a lover of hardcover books, and so my shelves are packed with contemporary fiction in hardcover. I live in San Miguel de Allende, Mexico, and it isn’t easy to get new books, but I’ve managed to find a way!

What have you been reading lately, and what would you recommend to other readers?

I used to write a book column for The New York Times Book Review, and a lot of my reading was for the column. But since I stopped writing it last year, I have been reading for pleasure. I’m revisiting books I loved in my twenties—And Then There Were None by Agatha Christie, for example—and I’m reading contemporary thrillers such as The Winner by Teddy Wayne and Look in the Mirror by Catherine Steadman. I have Richard Price’s Lazarus Man, which is out in a few months, on my most anticipated list. There is never enough time to read everything I want, but what I’m reading is exactly what I love most in fiction: sharp, evocative prose that carries me through an engrossing, surprising story. Give me those two things and I’m hooked.

Warning: Slow Blogging Ahead / David Rosenthal

Vicky & I have recently acquired two major joint writing assignments with effective deadlines in the next couple of months. And I am still on the hook for a Wikipedia page about the late Dewayne Hendricks. This is all likely to reduce the flow of posts on this blog for a while, for which I apologize.

keyword-like arguments to JS functions using destructuring / Jonathan Rochkind

I am, unusually for me, spending some time writing some non-trivial Javascript, using ES modules.

In my usual environment of ruby, I have gotten used to really preferring keyword arguments to functions for clarity. More than one positional argument makes me feel bad.

I vaguely remembered there is new-fangled way to exploit modern JS features to do this with JS, including default values, but was having trouble finding it. Found it! It involves using “destructuring”. Putting it here for myself, and in case this text gives someone else (perhaps another rubyist) better hits for their google searches than I was getting!

function freeCar({name = "John", color, model = "Honda"} = {}) {
  console.log(`Hi ${name}, you get a ${color} ${model}`);
}

freeCar({name: "Joe", color: "Green", model: "Lincoln"})
// Hi Joe, you get a Green Lincoln

freeCar({color: "RED"})
// Hi John, you get a RED Honda

freeCar()
// Hi John, you get a undefined Honda

freeCar({})
// Hi John, you get a undefined Honda
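One thing the destructuring-with-defaults pattern doesn't give you out of the box is required arguments. A companion idiom (my addition, not from the post above; the `required` helper name is just a convention) is to give a parameter a default that throws, since defaults are only evaluated when the caller omits the argument:

```javascript
// Helper whose only job is to throw when a "required" default is triggered.
const required = (name) => {
  throw new Error(`Missing argument: ${name}`);
};

// Same keyword-style signature as before, but `color` is now mandatory;
// this variant returns a string rather than logging, to make it easy to check.
function freeCar({ name = "John", color = required("color"), model = "Honda" } = {}) {
  return `Hi ${name}, you get a ${color} ${model}`;
}

freeCar({ color: "Blue" }); // "Hi John, you get a Blue Honda"
// freeCar() or freeCar({}) now throws: Missing argument: color
```

Because default values are evaluated lazily, `required("color")` runs only when `color` is missing, which makes this a cheap way to get ruby-keyword-argument-style required parameters.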

The Open Data Editor is now ready for the pilot phase / Open Knowledge Foundation

This week saw the release of version 1.1.0 of the Open Data Editor (ODE), the Open Knowledge Foundation’s new app that makes it easier for people with little to no technical skill to work with data. The app is now ready to enter a crucial phase of user testing. In October, we are starting a pilot programme with early adopters who will provide much-needed feedback and report bugs before the first official stable release, planned for December 2024.

(See below how you can get involved)

The Open Data Editor helps you find errors in your datasets and correct them in no time – a process called “data validation” in industry jargon. It also checks that your spreadsheet or dataset has all the information necessary for other people to use it. ODE increases the quality of the data that is produced and consumed, and guarantees, again in technical jargon, “data interoperability”.
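To make “data validation” concrete: a validator walks a table row by row and checks each cell against an expected schema. The sketch below is purely illustrative (my own, in plain JavaScript; ODE itself is built on the Frictionless framework, and `validateTable` and the sample schema are hypothetical):

```javascript
// Minimal sketch of row-by-row table validation against a simple schema.
// The schema maps field names to expected `typeof` results.
function validateTable(rows, schema) {
  const errors = [];
  rows.forEach((row, i) => {
    for (const [field, type] of Object.entries(schema)) {
      if (!(field in row)) {
        errors.push(`row ${i + 1}: missing field "${field}"`);
      } else if (typeof row[field] !== type) {
        errors.push(`row ${i + 1}: "${field}" should be ${type}`);
      }
    }
  });
  return errors;
}

const schema = { city: "string", population: "number" };
const rows = [
  { city: "Puebla", population: 1576259 },
  { city: "Katowice", population: "n/a" }, // wrong type: caught below
];

validateTable(rows, schema);
// → [ 'row 2: "population" should be number' ]
```

A real validator (like the Frictionless engine underneath ODE) also handles richer types, constraints, and header problems, but the shape of the job is the same: produce a list of located, human-readable errors.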

Thanks to funding from the Patrick J. McGovern Foundation, our team has been working since the beginning of the year to create this no-code tool that makes data manipulation easier for non-technical people, such as journalists, activists, and public administrators. This work seeks to put into practice our new vision of open technologies, which OKFN will present and discuss at the upcoming The Tech We Want Summit.

It’s been an intense journey, which we briefly recap in this post.

What is in the latest version

  1. A large number of functionalities were removed from the app to transform the Open Data Editor into a table validation tool.
  2. Key UX changes made the application simpler to use. Examples: new button layout, new logic for uploading data and new dialogue boxes explaining some complex things about the tool.
  3. Code improved: it’s now simplified, more accessible and documented to facilitate contributions from the community.
  4. Different data communities engaged in discussions about how the Open Data Editor can help them in their everyday work with data.

ODE in figures

8

months of work

~100

issues solved on Github

5

team members working together

3

presentations to strategic communities

What we have done so far

A February of Sharing
The project plan was shared with the Open Knowledge Network in the monthly call, to gather input and feedback.

A March of Listening
After testing the app and reviewing all the documentation, interviews were conducted with data practitioners to understand the challenges they face when working with data. 

An April of New Directions
The patterns and insights that emerged from the interviews were organised to review the application’s concept note and define a new vision for the product. Initially, ODE provided a wide range of options: working with maps, images, articles, scripts and charts. From the interviews, we learned that people working with data spend a lot of time understanding tables and trying to identify problems in them before they can analyse the data at a later stage. Therefore, we decided to refocus ODE as a tool for checking errors in tables.

A May of Cleaning
Through a survey, we started asking questions about certain terms used in the application, such as the word ‘Validate’. We realised that a translation for non-technical users was required instead of simply using the vocabulary from Frictionless, the framework used behind the scenes to detect errors in tables. 

During that month we also started to remove many features from the application that did not align with the new product vision. The road was not particularly easy. As is always the case in coding, several things were interconnected and we had to make many decisions at every step. The whole process led us to deeper reflections about how to build civic technology. 

As part of that reflection, we decided to openly share the mistakes, pitfalls, and key learnings from our development journey. The title of our talk at csv,conf,v8 in Puebla, Mexico, was ‘The tormented journey of an app’.

A June of Interfacing
At this time our UX specialist joined the team to focus on making adjustments to clearly communicate the functionalities of the Open Data Editor.

Intending to create a truly intuitive application that addresses existing UX issues, key workflows were redefined, such as processes like Launch and Onboarding, Validation, File Import, File Management, and Datagrid Operations. Leveraging prior user research and agile software methodology, we went through multiple iterations and refinements. This process involved brainstorming, validating ideas, rapid prototyping, updating UX copies, A/B testing, and technical feasibility reviews with the development team. 

Built on Google’s Material UI framework, a new design system was also developed – the single source of truth comprising vibrant colours and patterns aligned with the OKFN’s branding – delivering a fresh, modern, and cohesive user experience, seamlessly extending from our website to the application.

July-September for Rebuilding
The cleanup process of the application continued. But this time the changes in the user interface led to new complexities: changes in workflows, new bugs with the implemented changes, etc. It was a time strongly focussed on development. 

In August, we opened up this process in the panel ‘Frictionless data for more collaboration’ at Wikimania 2024 in Katowice, Poland. The community of Wikimedians and open activists discussed data friction and learned how ODE can help enhance data quality and FAIRness.

At the end of August, we started working with Madelon Hulsebos, professor at CWI Amsterdam and an expert in Table Representation Learning (TRL). She is currently helping us think about the integration of artificial intelligence (AI) in the Open Data Editor by raising great questions and providing key ideas.

What is next

👉🏼 Address two key and complex components of the app: the metadata and the error panels. Adapting both elements to non-technical users requires more in-depth conversations and decisions since the Frictionless Framework creates some constraints for customisation options.

👉🏼 Pilots: To further improve the ODE, we need to receive feedback and recommendations from real users. Therefore, from October until December, two external organisations will be incorporating ODE into their data workflow to test the application, documenting their experience and reporting challenges to improve it.

👉🏼 User testing sessions: In October, we will hold a series of sessions to receive feedback from our community and from other potential users of Open Data Editor. 

👉🏼 Codebase testing: In an effort to bring more contributors into the project, in October and November we will have 4 external developers testing the codebase and solving some code issues selected by the core team.

👉🏼 Documentation review: In November, we will hold two sessions to review all the documentation with a selected group of people. This way we will make sure the documentation is as easy to understand as possible for a broad audience.

👉🏼 Translations: In December, the user interface and the documentation will be translated into three languages other than English. 

👉🏼 AI integration: We are now discussing ideas and having conversations on how to make the integration transparent to users. In addition, our AI consultant will provide guidance on how new integrations should look in the future.

👉🏼 Online Course: By December, we will also release a free online course on how to use the Open Data Editor to enhance data FAIRness.

Now we are counting on you! You can apply to take part in the Open Data Editor testing sessions. Please register using this form or by clicking the button below.

Are you a developer? We are also looking for developers interested in testing the codebase and contributing to the project by pushing PRs to solve three issues selected by our core team. If you’re interested in open data tools, this is your chance to get involved and make a difference. You can read about the programme here.

You can also email us at info@okfn.org, follow the GitHub repository or join the Frictionless Data community. We meet once a month.

Read more

Are you a developer? Help us test the Open Data Editor! / Open Knowledge Foundation

The Open Knowledge Foundation is looking for four developers with Python and React JS skills to test the Open Data Editor (ODE) desktop application between October and November and help us improve its functionality. 

If you’re interested in open data tools, this is your chance to get involved and make a difference in an application that is in its final stages of development.

You will:

  • Test the app and report any issues (including documentation problems) via GitHub Issues.
  • Push PRs to solve three issues selected by our core team.
  • Have a follow-up call with the core team to report on your experience.

In return, you’ll receive a $1,000 mini-grant for your contributions!

Are you interested? Let us know by filling out this form:


About us

The Open Knowledge Foundation (OKFN) is the world’s ultimate reference in open digital infrastructure and the hub of the open movement. As a global not-for-profit, we have been establishing and advocating for open standards for the last 20 years. We provide services, tools and training for institutions to adopt openness as a design principle.

Our mission is to be global leaders for the openness of all forms of knowledge and secure a fair, sustainable, and open future for all. We envision an open world where all non-personal information is open and free for everyone to use, build on, and share, and creators and innovators are fairly recognised and rewarded. Together, we seek to unlock the knowledge and data needed to solve the most pressing problems of our times.

Learn more:


About the application

Open Data Editor is the new app developed by the Open Knowledge Foundation that makes it easier for people with little to no technical skills to work with data. It helps users validate and describe their data in no time, increasing the quality of the data they produce and consume. It is being developed in the context of Frictionless Data, an initiative at OKFN producing a collection of standards and software for the publication, transport, and consumption of data.


Read more

Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 1 October 2024 / HangingTogether

The following post is one in a regular series on issues of Inclusion, Diversity, Equity, and Accessibility, compiled by a team of OCLC contributors.

Woman in a colorful dress with her face painted smiles into the camera while marching in a parade. Library of Congress, Prints & Photographs Division, photograph by Carol M. Highsmith [LC-DIG-highsm-20611]. Image is in the public domain.

Changes to Title II, and impact on libraries 

On 24 April 2024, the United States Department of Justice published a final rule updating the regulations for Title II of the Americans with Disabilities Act (ADA). The rule requires that web content and mobile applications provided by state and local governments, including public higher education institutions, be accessible to people with disabilities. This change will affect several aspects of higher education online resources, including registration systems, online learning platforms, financial aid information, and websites, among other services. Public higher education institutions must ensure conformance with WCAG 2.1 Level AA standards, including screen reader compatibility, alt text for images, and accessible interactive elements. This can include course materials and library resources. Depending on the size of the community served, institutions have two to three years to improve access in all digital spaces across campus; 24 April 2026 is the earliest date (two years after the ruling) by which changes must be implemented. 

In a conversation with UX (User Experience) Librarian and Library Assessment colleagues recently, I learned that library staff are being called on to serve as university representatives on accessible web design by sitting on task forces and consulting with departments. While I find it encouraging that universities are acting on this mandate well before the April 2026 deadline, I know that accessible design and accessibility testing are not a one-person job and require marshalling resources far beyond UX. Working with students with disabilities is a meaningful way to engage in real change. I encourage my colleagues out there searching for support and buy-in to find student associations on their campus that can assist with design and test prototypes along the way. Not only do these students have the most to gain; because resources for designing for cognitive or learning disabilities can be lacking, testers with lived experience may be even more valuable in cases like this. Contributed by Lesley A. Langa. 

Readings related to National Hispanic Heritage Month 

In the United States, National Hispanic Heritage Month is commemorated from 15 September through 15 October. That period covers the independence days of Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua on 15 September; of Mexico on 16 September; and of Chile on 18 September; as well as Indigenous Peoples’ Day, Día de la Raza on 12 October. For the 2024 celebration, Washington State’s Seattle Public Library (OCLC Symbol: UOK) has compiled two timely reading lists.  “Hispanic Heritage Month 2024: Recent Fiction for Adults” features twenty-nine novels and story collections published in 2023 or 2024. The companion list, “Latine/Latinx Nonfiction” consists of twenty-five histories, memoirs, and poetry collections. 

There are authors that are familiar, as well as new to me on the fiction list. The genres are as varied as the Hispanic world itself, from a compilation of translated Latin American horror stories to a fictionalized history of the construction of the Panama Canal to a Victorian-era historical romance. The nonfiction titles include the autobiography of dancer and actor Chita Rivera, Juan González’s history of Latinos in America, and the graphic memoir of artist and illustrator Edel Rodriguez. Contributed by Jay Weitz. 

Implementing DEI in stages for success 

Ella F. Washington’s article, “The Five Stages of DEI Maturity” (November-December 2022 issue of Harvard Business Review), outlines five stages companies usually follow when incorporating DEI programs: aware, compliant, tactical, integrated, and sustainable. Washington describes how “a typical journey through these stages includes connecting top-down strategy and bottom-up initiatives around DEI, developing an organization-wide culture of inclusion, and ultimately, creating equity in both policy and practice.” The author provides a description with examples of each stage, noting that in a 2022 survey almost one-third of companies were in the compliant stage, and that companies can become stuck in this stage without a change in organizational culture. 

I read this article a while ago and rediscovered it through a citation in another article. Washington’s article was exactly what I needed that day, as I think about DEIA in my work goals for the coming year. At the integrated stage, an organization asks what structures to create for sustainable efforts and challenges existing practices. This requires buy-in from the entire organization. As one person in a large organization, I need to integrate my work with others across the organization to create sustainable DEI programs. Understanding this gives me focus and reminds me that DEI is everyone’s work. Contributed by Kate James. 

The post Advancing IDEAs: Inclusion, Diversity, Equity, Accessibility, 1 October 2024 appeared first on Hanging Together.

October 2024 Early Reviewers Batch Is Live! / LibraryThing (Thingology)

Win free books from the October 2024 batch of Early Reviewer titles! We’ve got 197 books this month, and a grand total of 3,718 copies to give out. Which books are you hoping to snag this month? Come tell us on Talk.

If you haven’t already, sign up for Early Reviewers. If you’ve already signed up, please check your mailing/email address and make sure they’re correct.

» Request books here!

The deadline to request a copy is Friday, October 25th at 6PM EDT.

Eligibility: Publishers do things country-by-country. This month we have publishers who can send books to the US, Canada, the UK, Australia, Germany, France, Sweden, Poland, Netherlands, Ireland and more. Make sure to check the message on each book to see if it can be sent to your country.

[Cover gallery of this month’s Early Reviewer titles, from Pictures of You through Maya and Waggers: Mega Gossip.]

Thanks to all the publishers participating this month!

aka Associates, Alcove Press, Ashwood Press, Baker Books, Bethany House, CarTech Books, City Owl Press, Entrada Publishing, Fawkes Press, Harbor Lane Books LLC, IngramSpark, Inhabit Media Inc., Ink & Quill Press, Legacy Books Press, Lerner Publishing Group, Middleton Books, Modern Marigold Books, New Vessel Press, NeWest Press, Paper Phoenix Press, Prosper Press, PublishNation, Purple Moon Publishing, Revell, Riverfolk Books, RIZE Press, Running Wild Press LLC, Somewhat Grumpy Press, Susan Schadt Press, Thinking Ink Press, Three Rooms Press, Tundra Books, Tuxtails Publishing LLC, Type Eighteen Books, What on Earth!, Wise Media Group, Yorkshire Publishing, Zibby Books

DLF Digest: October 2024 / Digital Library Federation

A monthly round-up of news, upcoming working group meetings and events, and CLIR program updates from the Digital Library Federation. See all past Digests here

Happy Fall, DLF Community! We’re so excited for October because it’s the month the Virtual DLF Forum is happening. More than 500 registrants will experience 34 sessions led by 102 speakers, including Featured Speaker Andrea Jackson Gavin, over two days, October 22-23. Join us! Registration ends October 15.

— Team DLF


This month’s news:


This month’s DLF group events:

DLF Data and Digital Scholarship Working Group and Research Libraries UK — Critical AI Literacy Skills 

Tuesday 29 October 16:00 – 17:30 GMT, 12:00 – 13:30 EDT, 09:00 – 10:30 PDT; Register in advance here

This highly interactive session will follow previous successful joint meetings between members of CLIR’s Digital Library Federation, Data and Digital Scholarship working group (DDS) and RLUK’s Digital Scholarship Network (DSN).

The meeting will explore the topic of ‘Critical AI Literacy skills’ and will include speakers from across the US and UK research library and information communities.

Potential breakout topics for this session include:

  • Responsible AI in Libraries
  • Alternative AI techniques (Adversarial or Defensive AI)
  • Data Transparency in AI models

It will also include opportunities to meet with fellow professionals, share skills (and the Skills Directory) and knowledge, hear from skills experts, and receive updates regarding the continued collaboration between the DDS and DSN. 

Who should attend: You do not need to have attended a previous joint meeting in order to attend this session and the meeting is open to all members of the DLF and the DSN.

This event will be highly interactive and involve lots of delegate participation. Come energised to share your experiences, specialisms, and skills needs in a dynamic, transatlantic skills exchange. Although a free event, all delegates are required to register.   

Further information: Visit the collaboration’s OSF page for useful resources from previous meetings (inc. shared notes from our last meeting): click here.


This month’s open DLF group meetings:

For the most up-to-date schedule of DLF group meetings and events (plus NDSA meetings, conferences, and more), bookmark the DLF Community Calendar. Meeting dates are subject to change, especially if meetings fall on a holiday. Can’t find meeting call-in information? Email us at info@diglib.org. Reminder: Team DLF working days are Monday through Thursday.

AIG = Assessment Interest Group

  • Born-Digital Access Working Group: Tuesday, 10/01, 2pm ET / 11am PT
  • Digital Accessibility Working Group: Wednesday, 10/02, 2pm ET / 11am PT
  • AIG Cost Assessment Working Group: Monday, 10/14, 3pm ET / 12pm PT
  • AIG User Experience Working Group: Friday, 10/18, 11am ET / 8am PT
  • Digital Accessibility Policy and Workflows Subgroup: Friday, 10/25, 1pm ET / 10am PT
  • Digital Accessibility Working Group – IT Subgroup: Monday, 10/28, 1:15pm ET / 10:15am PT 
  • Committee for Equity and Inclusion: Monday, 10/28, 3pm ET / 12pm PT
  • Climate Justice Working Group: Wednesday, 10/30, 12pm ET / 9am PT

DLF groups are open to ALL, regardless of whether or not you’re affiliated with a DLF member organization. Learn more about our working groups on our website. Interested in scheduling an upcoming working group call or reviving a past group? Check out the DLF Organizer’s Toolkit. As always, feel free to get in touch at info@diglib.org


Get Involved / Connect with Us

Below are some ways to stay connected with us and the digital library community: 

The post DLF Digest: October 2024 appeared first on DLF.

NDSA Welcomes One New Member in Quarter 3 of 2024 / Digital Library Federation

As of September 2024, the NDSA Leadership has unanimously voted to welcome one new applicant into the membership. Please join me in welcoming our new member! You can review our full list of members here.

The Archive and Heritage Digital Curation Group

In their application, The Archive and Heritage Digital Curation Group noted that they are “a specialised consulting services that ensure that your archival processes meet regulatory compliance and standards.” They continued to note that, “We provide comprehensive support in archiving, records management, metadata management, disaster preparedness, and environment scanning to safeguard your valuable collections. Both from a IT systems and Archives and Records Management perspective. We also do digitisation from equipment to physical digitisation, collection building and storage.” 


The post NDSA Welcomes One New Member in Quarter 3 of 2024 appeared first on DLF.

Threats, hopes and tales from the Open Knowledge Network gathering in Katowice, Poland / Open Knowledge Foundation

On 5 August 2024, representatives from the Open Knowledge Network gathered at the Metallurgy Museum in Chorzów (Katowice, Poland) for a day of strategic thinking. The annual gathering is a special occasion, an opportunity for us to come together, share our knowledge, listen attentively, and forge meaningful connections. This gathering has been a way to embark on a journey of discovery, collaboration, and the creation of something greater than ourselves. 

Who was there? The attendees of the meeting were Haydée Svab from Open Knowledge Brazil, Nikesh Balami from Open Knowledge Nepal, Charalampos Bratsas and Lazaros Ioannidis from Open Knowledge Greece, Beat Estermann from Opendata.ch, Poncelet Ileleji from Jokkolabs Banjul, Dénes Jäger from Open Knowledge Germany, Susanna Ånäs from Open Knowledge Finland, Sandra Palacios from BUAP, the regional coordinator for Europe Esther Plomp, and for OKFN Renata Ávila, Patricio del Boca, Lucas Pretti, and Sara Petti. The meeting was facilitated by Jérémie Zimmermann.

The date and location were strategically chosen so we could all attend Wikimania 2024 together, which incidentally had collaboration as its main theme this year. 

How we can increase collaborations within the Network (and beyond!) and how we can make those collaborations more effective is indeed something we talked about extensively during the gathering. 

We are more and more convinced that in-person gatherings and celebrations of our movement provide excellent opportunities to break the silos and foster collaboration. Bring people together in a room, let them spend some time together, discuss the topics that are close to their heart, share their experience, and magic will happen. You don’t believe me? Projects and collaborations are still being born out of connections made at the mythical Open Knowledge Festivals a decade ago (Helsinki 2012, Berlin 2014). And blimey, how much we need those celebrations in these gloomy days of decaying institutions, proliferating disinformation, and corporate diktats! 

We also asked ourselves how we could make collaborations more strategic, for example by taking advantage of emerging topics and opportunities. And as someone reminded us at one of the last Network calls, threats can be opportunities, so it’s worth having a look at what is out there. We spent a considerable amount of time in Katowice delving into what we feel is threatening open knowledge at the moment, and therefore requires our attention. 

We all agreed that large segments of the population still lack the skills to build, use, or understand open data and open technologies, and are therefore excluded from benefiting from open resources, which exacerbates existing inequalities and hinders the potential for widespread knowledge sharing. Limited access to quality education and digital literacy creates significant barriers to engaging with open knowledge. We have known this for a while; it is why the School of Data started more than a decade ago, but the gap is still there. We have discussed this during one of our last 100+ conversations. Can we do more? 

Of course we all know there’s only so much we can do without funding. We acknowledged that open knowledge initiatives often struggle with insufficient funding, which limits our ability to develop, maintain, and scale sustainable projects. Without adequate resources, many open projects fail to reach their potential, leaving ground to well-funded proprietary solutions that prioritise control over accessibility.

The problem with funding is also linked to the fact that funders’ agendas are often dominated by trends set by Big Tech, so we sometimes end up doing things because of those trends, instead of the things we would rather do but which, alas, don’t attract money. This is something we talked about extensively during a digital commons gathering in Berlin last year; if you are interested, you can read the report Problematizing Strategic Tension Lines in the Digital Commons.

Big Tech is also imposing non-sustainable business models that prioritise profit over sustainability and human-centric development, leading to closed ecosystems that lock users into proprietary platforms, stifle innovation in open-source alternatives, and undermine the broader goal of equitable access to knowledge. These business models concentrate power and resources in the hands of a few, to the detriment of the many. One example of this? The concentration of data ownership by a few global entities, currently leading to data colonialism, where resources are extracted from communities without benefiting them. This creates a monoculture of knowledge production, controlled by monopolies that dictate who can access, use, or benefit from data, undermining local autonomy and diverse perspectives, and ultimately exacerbating social injustice. The exact opposite of what open knowledge stands for.

And since we are talking about the opposite of what open actually stands for, we are all very worried by the misuse of the word “open” in association with practices that are far from open (need a little reminder of what open really is? Go and have a look at the Open Definition), what we commonly call open washing. This false openness can be weaponised to spread misinformation, serve as propaganda, or reinforce a moral hegemony, distorting the general understanding of open knowledge and undermining genuine efforts toward transparency and accountability. Failing to understand what open really is can result in widespread misconceptions, for example the false idea that openness conflicts with personal privacy, security, or data protection.

Last but not least, once again some of our Network members were not able to join us for this meeting because of restrictive immigration policies and closed borders. These barriers create inequities in who can contribute to and benefit from open knowledge initiatives, reinforcing global inequalities and restricting the exchange of ideas.

After discussing what we felt are the most alarming threats to open knowledge, we reminded ourselves that threats can actually be opportunities, and therefore indulged in dreaming about how we could solve some of those challenges as a collective. Telling each other stories under the Polish blue sky, we started realising that storytelling is an essential part of the work we have to do. If we want to stay relevant, and convince people that open knowledge is key to solving the most pressing issues of our time, we need to communicate our values more effectively, and we need to communicate them to a broader audience too: reach out to new people and bring them into our community. We need to remind people outside our bubble of the benefits of open knowledge, such as transparency, collaboration, and innovation. Actually, remind is not the right word. We have to tell them, because some of those outside our community actually don’t know.

So here’s our story about how we in the open movement solved the problems and faced the threats highlighted above as a collective. We hope you enjoy it. Note that this story is open-ended, and you can contribute to its making if you want to. 

The Tales of Jokkolandia

In 2024, Jokkolandia faced its darkest hour. Devastating floods, social unrest, and raging fires swept across the land, reducing everything to ruins—except for its people. Despite the chaos, the spirit of the Jokkolandians remained unbroken. They gathered together and decided to rebuild their country from scratch. This time, they would do it differently.

All resources were centralised, regulated by a diverse committee of young and old people. The elders, with their memories of what Jokkolandia once was, provided wisdom and perspective. The younger generation, brimming with fresh ideas, brought innovation and new energy. Together, they began a thorough process of revision, questioning what had worked in the past and what had failed. From these reflections, they designed a new society based on collaboration, where the community actively monitored technology, ensuring that it served the people rather than the other way around.

This crisis fostered an unshakeable bond of solidarity and accountability. In Jokkolandia, data governance became a collective responsibility, and every decision was made democratically. Institutions and infrastructure were rebuilt to nurture a participatory democracy, with the free movement of people across borders. There were no visas, and individuals from all over the world flocked to Jokkolandia, including members of the Open Knowledge Network, who found a welcoming home in this utopia. Health coverage was universal, and everyone had their basic needs met.

Rejecting the exploitative models of Big Tech, Jokkolandia built its own solutions. They developed their own digital public infrastructure (DPI), entirely homegrown, and collectively destroyed the stranglehold of tech monopolies. Open hardware and open knowledge became a way of life, and even the Zapatistas and Yanomami communities came together to build amazon.open, a digital interlocal infrastructure governed by local cooperatives. This network of interconnected nodes allowed people to trade resources and fulfill their needs in a decentralised and equitable way.

Meanwhile, the billionaires of the old world, now irrelevant, were sent to Mars, where their lives were broadcast as a satirical reality show. When their time on Mars was over, they returned to Earth with their fortunes devalued, joining the same cooperative nodes they had once dominated. Jokkolandia had achieved a society where knowledge was shared openly, and every individual could access what they needed to thrive. However, one challenge remained: how to dismantle the lingering power dynamics of “knowledge is power” and ensure true equality in the exchange of information.

Neighbouring Jokkolandia was Openlandia, a well-established democracy that had once been ruled by dinosaurs—figures clinging to outdated ideas and detached from the realities of their people. As they began losing popularity, the dinosaurs asked themselves a critical question: how could they stay relevant? The answer came through engaging young people. Inspired by the strategy of Humanitarian OpenStreetMap, they brought young minds into schools to map their local communities, addressing real, tangible problems like fixing broken infrastructure and improving connectivity.

The dinosaurs realised that to secure their future, they had to engage with the next generation and those with a multiplier effect—educators. They started working on the intersection of openness, education, and communication, creating compelling stories about how open knowledge could solve everyday issues. However, Openlandia still faced challenges in connectivity, particularly in rural areas, so they also embraced offline communication strategies, like local radio broadcasts, to reach everyone. Their focus on collective messaging helped restore their relevance, but a new question loomed: were they becoming the dinosaurs they had once fought to overcome?

As both Jokkolandia and Openlandia grappled with the future, the global community had made strides to overcome capitalism. The world had introduced a global basic income, curbing hyper-consumerism and redirecting military spending toward societal good. The rich were taxed heavily, and nationalistic spending was drastically reduced. Yet the question remained: how could we balance societal needs when it was impossible to bring everyone to the same level without overwhelming the planet’s resources?

Studies revealed that society was divided—some acted for the collective good, others for selfish reasons, and the majority simply followed the dominant trend. By establishing a cooperative society that managed resources transparently and equitably, Jokkolandia set a collective standard that most people naturally followed. But as this new world took shape, new questions emerged: What would be the nature of power in a society governed by cooperatives and open communities? And what kinds of problems would arise when collective governance met individual needs?

In this new age, Jokkolandia and its neighbours strived to answer these questions, continuously evolving as they sought to balance openness, fairness, and the complexities of human nature. The journey toward true equality, openness, and shared knowledge had only just begun.


Would you like to tell us your story? Drop us a line! And remember you can always join the Open Knowledge Network.

Would you like to be part of the Network?

Our Network connects activists and change-makers of the open knowledge movement from more than 40 countries around the world, who advance open and equitable access to knowledge for all, every day, through their work.

We believe knowledge is power, and that with key information accessible and openly available, power can be held to account, inequality can be challenged, and inefficiencies can be exposed.

You can check all current members on the Network page and in our Global Directory, or browse the Project Repository to find out what each member has been working on. For current updates, subscribe to Open Knowledge News, our monthly newsletter.

Our groups can always benefit from more friendly faces and fresh ideas — we will be happy to hear from you! Please contact us at network[at]okfn.org if you, as an individual or organisation, would like to be a part of Open Knowledge and join our global network.