Planet Code4Lib

Evergreen 3.5.0 released / Evergreen ILS

The Evergreen community is proud to announce the release of Evergreen 3.5.0. Evergreen is highly-scalable software for libraries that helps library patrons find library materials and helps libraries manage, catalog, and circulate those materials, no matter how large or complex the libraries.

Evergreen 3.5.0 is a major release that includes the following new features of note:

  • Support for PostgreSQL 10.
  • Improvements to the billing and payment aging feature, including two new global flags and additional columns in the aged payment database table.
  • A new Action Trigger hook that fires on patron self-registration.
  • Various updates to the Angular staff interface, including:
    • Porting the Enriched/Full MARC editor.
    • Adding a Patron View tab to the experimental Angular catalog.
    • A new Catalog Preferences interface.
    • Search query highlighting in the experimental Angular catalog.
  • Various circulation improvements, including:
    • A new hold sort order.
    • Improvements to the Angular staff catalog hold placement form.
    • Adding the ability for staff and patrons to update existing hold requests when changing notification preferences for holds.
    • In Hatch, a new ability to “print” receipts to file.
  • The ability to add custom CSS to the OPAC via a library setting.

The release is available on the Evergreen downloads page. For more information on what’s included in Evergreen 3.5.0, please consult the release notes.

Evergreen 3.5.0 requires PostgreSQL 9.6 or later and OpenSRF 3.2.0 or later.

Evergreen 3.5.0 includes contributions from at least 18 individuals and 11 institutions.

The release managers Bill Erickson and Chris Sharp would like to extend their appreciation for everybody’s help and patience with the release.

Sinking our Teeth into Metadata Improvement / Library Tech Talk (U of Michigan)

Advertisement for porcelain teeth by S.S. White Dental Manufacturing company

Like many attempts at revisiting older materials, working with a couple dozen volumes of dental pamphlets started very simply but ended up being an interesting opportunity to explore the challenges of making the diverse range of materials held in libraries accessible to patrons in a digital environment. And while improving metadata may not sound glamorous, having sufficient metadata for users to be able to find what they are looking for is essential for the utility of digital libraries.

Bill Shannon RIP / David Rosenthal

Last Thursday my friend Bill Shannon lost a long battle with cancer. The Mercury News has his obituary. I thought to create a Wikipedia page for him as I did for my friend John Wharton. But, true to Bill's unassuming nature, he left almost no footprint on the Web. The lack of reliable sources attesting to his notability made such a page impossible. The brief account below the fold, compiled with invaluable assistance from many of his friends, will have to do instead. Comments with memories of Bill are welcome.

The image is Bill's card from the deck of playing cards the Usenix Association created for the 25th anniversary of the Unix operating system in 1994.

Bill started programming early, spending hours in the computer lab at his high school, and with a Boy Scout group that used computers at NASA's Lewis Research Center in Cleveland.

He graduated from Case Western with a Masters in Computer Engineering. His 1981 thesis involved a significant degree of difficulty: porting Unix V7 to the Harris/6, a 24-bit word-addressed machine with an 18-bit address space and a 16-bit program counter. He used a compiler built by Sam Leffler for his companion thesis, and the shell ported by Rob Gingell.

He joined Digital Equipment's Unix Engineering Group with Armando Stettner. They worked on bringing up AT&T's UNIX/32V and 4BSD on Vaxen:
Shannon and Stettner worked on low-level CPU and device driver support initially on UNIX/32V but quickly moved to concentrate on working with the University of California, Berkeley's 4.0BSD. Berkeley's Bill Joy came to New Hampshire to work with Shannon and Stettner to wrap up a new BSD release, incorporating the UEG CPU support and drivers, and to do some last minute development and testing on other configurations available at DEC's facilities. As an aside, the three brought up a final test version on the main VAX used by the VMS development group. No comments were heard from the VMS developers whose terminals greeted them the next morning with a Unix login prompt... UEG's machine was the first to run the new Unix, labeled 4.5BSD.
Tim Bray was there:
The DEC Unix group was pretty small; it featured the talents of Armando Stettner and Bill Shannon. They ran this one big VAX that was really central in the UUCP what-came-before-the-Internet network, called decvax. Bill, a classically geeky redhead, held our hands and helped debug the customer's weird ioctl() calls and get the thing running,
decvax was really important and it was Bill and Armando's baby:
Name of site:
decvax

What the site is all about:
Main system for the DEC Unix Engineering Group

Name of contact person at site:
Bill Shannon

Electronic mail address of contact person:
decvax!shannon

U.S. Mail address of contact person:
Bill Shannon
MK1-1/D29
Digital Equipment Corporation
Continental Blvd.
Merrimack, NH 03054
decvax was key to the explosion of UUCP networking and USENET news groups:
The explosion was the direct responsibility of Armando Stettner and Bill Shannon of Digital Equipment Corporation. Someone at the USENIX meeting complained about the telephone bills run up by transcontinental calls. Armando and Bill said that if they could get a feed to decvax in New Hampshire, they'd pick up the Berkeley phone bill. (Stettner subsequently covered the news feeds to Europe, Japan, and Australia.)
Courtesy of Karen Shannon
This all led to the classic New Hampshire "Live Free Or Die" UNIX license plate:
Shannon's New Hampshire "Live Free or Die" Unix license plate has become not only a geek icon, but a rallying cry for people who related to the source-code-sharing ideas that helped Unix spread.
While working at DEC he met Bill Joy, and was eager to become employee #11 at Sun Microsystems. He was a fixture there through its entire history, and was one of the survivors when it was acquired by Oracle.

Courtesy of Karen Shannon
When Bill and Karen moved to the Bay Area they discovered that the license plate UNIX was taken, so he upgraded to the blue California VMUNIX license plate that adorned his M-series BMWs.

Starting in 1982 Bill Shannon was largely responsible for transforming Bill Joy's special pre-release of 4.2BSD (called 4.1c) into the first version of SunOS. He also made significant contributions to the virtual memory code of 4.2. Shannon had a VAX/750 to run 4.1c on while porting it to the Sun/1. The rumor is that Shannon asked for the VAX/780 he was used to, but Joy said a /750 is all you get: I want you incentivized to get a Sun/1 running it faster than the /750 ASAP. I was an early user of Shannon's work when I joined Carnegie-Mellon's Andrew project.

His work led to the immortal kernel panic() that Tim Bray encountered:
My only contact was once in 1988, standing in front of a Sun box at the University of Waterloo when it crashed, the console printing out Shannon says this can't happen.
Among Bill's contributions was C Style and Coding Standards for SunOS. The abstract provided an elegant justification for venturing into this often controversial area:
This document describes a set of coding standards and recommendations that are local standards for programs written in C for the SunOS product. The purpose of these standards is to facilitate sharing of each other’s code, as well as to enable construction of tools (e.g., editors, formatters) that, by incorporating knowledge of these standards, can help the programmer in the preparation of programs.
Bill was perhaps the central figure in the activities at Sun that evolved Sun UNIX to SunOS and onward. He was the anchor of the culture Sun developed that led to an extremely productive and creative period in which Sun emerged from start-up to industry leader. He was the primary author of only a few papers, but an author of or contributor to the work that many papers described. At the summer 1987 Usenix conference this was captured best in his public acknowledgement by Howard Chartock as the "Sheriff of Kernel County", intended as a label of respect and admiration. He was a leading light of the group of SunOS Distinguished Engineers through the AT&T deal that, after years of struggle, merged SunOS with System V to create Solaris. Bill then led the effort to re-target Java to be an application server environment as Java EE.

I have Scott McNealy's permission to quote what he wrote to Bill:
There are 235,000 former Sun folks and countless others who partnered with us in the amazing journey we had. There are a few who were there at the beginning and created the culture that was unique and special to the Valley and to Tech. You were a defining cog and driver of that culture. Your modest, soft spoken but determined voice for what was technically right, economically right, but more importantly what was humanly right was always there at just the right time and tone.

It is no exaggeration to say your influence on the Valley and Tech is oversized and under appreciated, though us Sun folks know what you have done for us all. And no one feels more gratitude for your contributions than me. Always in awe of your tech prowess and prolific work ethic. And your loyalty to all of us.

I am proud to say I worked for you. And learned from you.

Your efforts will live beyond us all. Sun culture has spread everywhere internationally. Our technology still drives the network computing landscape. And your work and impact will outlive all of us who worked at or with Sun.
Source
An enduring mystery surrounds the identities of the perpetrators of Sun's notorious early April Fool's pranks, such as:
  • Eric Schmidt's office in pond (1985)
  • Volkswagen in Eric Schmidt's office (1986)
  • Bill Joy's brand new Ferrari in pond (1987)
  • Golf course in Scott McNealy & Bernie Lacroute's offices (1988)
But there are press reports placing Bill at the "scenes of the crimes".

When I joined Sun in September 1985 to work on window systems with James Gosling I was immediately inducted into the Thursday "windows lunch". Bill was a founding member of this group of engineers, which was initially devoted to complaining about (and trying to improve) Sun's vestigial early user interface.

'86 trip courtesy of Rob Gingell
In the early days, the "windows lunch" also organized an annual ski week at Vail. Part of the ritual was the drive from Denver to Vail and back in a pair of rented Lincoln Continentals. Somehow we believed they were the ideal vehicles to tackle snow and ice. These trips were a testament to Bill's driving skill.

More than a third of a century later, five of us are still meeting for lunch every Thursday, no longer talking much about user interface systems, and now greatly missing Bill.

The subject line of Bill's farewell email declared:

 public static final void goodbye() { /**NORETURN*/ }


Evergreen Community Spotlight: Benjamin Murphy / Evergreen ILS

The Evergreen Outreach Committee is pleased to announce that June’s Community Spotlight is Benjamin Murphy, who works for the State Library of North Carolina as the NC Cardinal Program Manager. Benjamin joined NC Cardinal in 2017, having previously worked as a Systems Integration Librarian at the State Library of North Carolina’s Government and Heritage Library. He participated in their migration to Evergreen and attended the 2016 Raleigh Evergreen Conference.

Benjamin has focused many of his efforts at NC Cardinal on growing the consortium. “We’ve seen continual growth in my time here,” he says. “We’re now at 50% of public libraries in North Carolina, and continuing to grow.” NC Cardinal has worked for several years on major consolidation projects, like cleaning up circulation modifiers, locations, and permissions.

While Benjamin is at the beginning of his broader community involvement, he has jumped in with both feet. Attendees at the 2020 Online Conference might have participated in one of the sessions hosted by NC Cardinal. Benjamin and his colleague April Durrence hosted several conference sessions, including the keynote. He particularly wanted to thank April and his other colleagues Courtney Brown and Llewellyn Marshall for their support and hard work.

In his role as a consortial leader, Benjamin has connected with other consortial leaders in the Evergreen community to share ideas. “It’s always good to have mentors, and it’s also good to mentor people. It’s helpful to brainstorm with other consortial leaders, share experiences, lessons learned, and counsel each other in terms of how to approach things.” When Benjamin was new in his position, he reached out to other consortial leaders for guidance, and now is in a position where he offers his guidance to newer community consortial leaders.

Benjamin touts the in-person conference as a great place to connect with members of the community, and put faces with names and/or IRC handles. He notes that it’s easy to be hesitant as a new community member, and that getting to know people via the conference has opened up avenues for learning and discussion.

“Imposter syndrome is real, [but] don’t let the things that you don’t know prevent you from making use of the knowledge of other people,” Benjamin says. “A lot of people in tech and libraries are introverts – so that doesn’t come naturally. When you put them in that context of shared similarity, it provides a way to engage and make connections, relationships, and friendships.”

Do you know someone in the community who deserves a bit of extra recognition? Please use this form to submit your nominations. We ask for your email in case we have any questions, but all nominations will be kept confidential.

Any questions can be directed to Andrea Buntz Neiman via abneiman@equinoxinitiative.org or abneiman in IRC.

Samvera Community Manager / Samvera

Samvera is hiring for its inaugural Community Manager. We are seeking a highly organized individual who wants to join this grass-roots, open source community that creates best-in-class repository solutions for digital content stewarded by Libraries, Archives, and Museums. We are a vibrant and welcoming community of information and technology professionals who share challenges, build expertise, and create sustainable, best-in-class solutions, making the world’s digital collections accessible now and into the future.

The Community Manager will help support and manage the activities of the Samvera Community partners and adopters. The individual will be a highly effective coordinator, communicator, facilitator, and administrator working across the Samvera Community’s groups to ensure consistency and coordinated development initiatives.

This position will be employed by Emory University and report to the Chair of the Samvera Steering Group. More details about this two-year term, fully remote position, including minimum qualifications and how to apply, are available here: https://staff-emory.icims.com/jobs/53613/samvera-community-manager/job

The post Samvera Community Manager appeared first on Samvera.

The Open Human Genome, twenty years on / Open Knowledge Foundation

On 26th June 2000, the “working draft” of the human genome sequence was announced to great fanfare. Its availability has gone on to revolutionise biomedical research. But this iconic event, twenty years ago today, is also a reference point for the value and power of openness and its evolution.

Biology’s first mega project

Back in 1953, it was discovered that DNA was the genetic material of life. Every cell of every organism contains a copy of its genome, a long sequence of DNA letters, containing a complete set of instructions for that organism. The first genome of a free-living organism – a bacterium – was only determined in 1995 and contained just over half a million letters. At the time, sequencing machines determined 500-letter fragments, 100 at a time, with each run taking hours. Since the human genome contains about three billion letters, sequencing it was an altogether different proposition, going on to cost on the order of three billion dollars.

A collective international endeavour, and a fight for openness

It was sequenced through a huge collective effort by thousands of scientists across the world in many stages, over many years. The announcement on 26th June 2000 was only of a draft – but still sufficiently complete to be analysed as a whole. Academic articles describing it wouldn’t be published for another year, but the raw data was completely open, freely available to all.

It might not have been so, as some commercial forces, seeing the value of the genome, tried to shut down government funding in the US and privatise access. However, openness won out, thanks largely to the independence and financial muscle of Wellcome (which paid for a third of the sequencing at the Wellcome Sanger Institute) and the commitment of the US National Institutes of Health. Data for each fragment of DNA was released onto the internet just 24 hours after it had been sequenced, with the whole genome accessible through websites such as Ensembl.

Openness for data, openness for publications

Scientists publish. Other scientists try to build on their work. However, as science has become increasingly data rich, access to the data has become as important as publication. In biology, long before genomes, there were efforts by scientists, funders and publishers to link publication with data deposition in public databases hosted by organisations such as EBI and NCBI. However, publication can take years and if a funder has made a large grant for data generation, should the research community have to wait until then?

The Human Genome Sequence, with its 24-hour data release model, was at the vanguard of “pre-publication” data release in biology. Initially the human genome was seen as a special case – scientists worried about raw unchecked data being released to all or that others might beat them to publication if such data release became general – but gradually the idea took hold. Dataset generators have found that transparency has generally been beneficial to them and that community review of raw data has allowed errors to be spotted and corrected earlier. Pre-publication data release is now well established where funders are paying for data generation that has value as a community resource, including most genome-related projects. And once you have open access data, you can’t help thinking about open access publication too. The movement to change the academic publishing business model to open access dates back to the 1990s, but long before open access became mandated by funders and governments it became the norm for genome-related papers.

Big data comes to biology, forcing it to grow up fast

Few expected the human genome to be sequenced so quickly. Even fewer expected the price to sequence one to have dropped to less than $1000 today, or to only take 24 hours on a single machine. “Next Generation” sequencing technology has led to million-fold reductions in price and similar gains in output per machine in less than 20 years. This is the most rapid improvement in any technology, far exceeding the improvements in computing in the same period. The genomes of tens of thousands of different organisms have been sequenced as a result.  Furthermore, the change in output and price has made sequencing a workhorse technology throughout biological and biomedical research – every cell of an organism has an identical copy of its genome, but each cell (37 trillion in each human) is potentially doing something different, which can also be captured by sequencing. Public databases have therefore been filling up with sequence data, doubling in size as much as every six months, as scientists probe how organisms function. Sequence is not the only biological data type being collected on a large scale, but it has been the driver to making biology a big data science.

Genomics and medicine, openness and privacy

Every individual’s genome is slightly different and some of those differences may cause disease. Clinical geneticists have been testing individual genes of patients to find the cause of rare diseases for more than twenty years, but sequencing the whole genome to simplify the hunt is now affordable and practical. Right now our understanding of the genome is only sufficient to inform clinical care for a small number of conditions, but it’s already enough for the UK NHS to roll out whole genome sequencing as part of the new Genomic Medicine Service, after testing this in the 100,000 genomes project. It is the first national healthcare system in the world to do this.

How much could your healthcare be personalised and improved through analysis of your genome? Right now, an urgent focus is on whether genome differences affect the severity of COVID-19 infections. Ultimately, understanding how the human genome works and how DNA differences affect health will depend on research on the genomes of large numbers of individuals alongside their medical records. Unlike the original reference human genome, this is not open data but highly sensitive, private, personal data.

The challenge has become to build systems that can allow research but are trusted by individuals sufficiently for them to consent to their data being used. What was developed for the 100,000 genomes project, in consultation with participants, was a research environment that functions as a reading library – researchers can run complex analysis on de-identified data within a secure environment but cannot take individual data out. They are restricted to just the statistical summaries of their research results. This Trusted Research Environment model is now being looked at for other sources of sensitive health data.

The open data movement has come a long way in twenty years, showing the benefits to society of organisational transparency that results from data sharing and the opportunities that come from data reuse. The Reference Human Genome Sequence as a public good has been part of that journey. However, not all data can be open, even if the ability to analyse it has great value to society. If we want to benefit from the analysis of private data, we have to find a middle ground which preserves some of the strengths of openness, such as sharing analytical tools and summary results, while adapting to constrained analysis environments designed to protect privacy sufficiently to satisfy the individuals whose data it is.

Professor Tim Hubbard is a board member of the Open Knowledge Foundation and was one of the organisers of the sequencing of the human genome.

Women designing / CrossRef


Those of us in the library community are generally aware of our premier "designing woman," the so-called "Mother of MARC," Henriette Avram. Avram designed the MAchine-Readable Cataloging (MARC) record in the mid-1960s, a record format that is still being used today. MARC was way ahead of its time, using variable-length data fields and a unique character set that was sufficient for most European languages, all thanks to Avram's vision and skill. I'd like to introduce you here to some of the designing women of the University of California library automation project, the project that created one of the first online catalogs in the beginning of the 1980s, MELVYL. Briefly, MELVYL was a union catalog that combined data from the libraries of the nine (at that time) University of California campuses. It was first brought up as a test system in 1980 and went "live" to the campuses in 1982.

Work on the catalog began in or around 1980, and various designs were put forward and tested. Key designers were Linda Gallaher-Brown, who had one of the first master's degrees in computer science from UCLA, and Kathy Klemperer, who like many of us was a librarian turned systems designer.

We were struggling with how to create a functional relational database of bibliographic data (as defined by the MARC record) with computing resources that today would seem laughable but were "cutting edge" for that time. I remember Linda remarking that during one of her school terms she returned to her studies to learn that the newer generation of computers would have this thing called an "operating system" and she thought "why would you need one?" By the time of this photo she had come to appreciate what an operating system could do for you. The one we used at the time was IBM's OS 360/370.

Kathy Klemperer was the creator of the database design diagrams that were so distinctive we called them "Klemperer-grams." Here's one from 1985:
MELVYL database design Klemperer-gram, 1985
Drawn and lettered by hand, not only did these describe a workable database design, they were impressively beautiful. Note that this not only predates the proposed 2009 RDA "database scenario" for a relational bibliographic design by 24 years, it provides a more detailed and most likely a more accurate such design.
RDA "Scenario 1" data design, 2009
In the early days of the catalog we had a separate file and interface for the cataloged serials, based on a statewide project (including the California State Universities). Although it was possible to catalog serials in the MARC format, the detailed information about which issues the libraries held was stored in serials control databases that were separate from the library catalog, and many serials were represented by crusty cards that had been created decades before library automation. The group below developed and managed the CALLS (California Academic Library List of Serials). Four of those pictured were programmers, two were serials data specialists, and four had library degrees. Obviously, these are overlapping sets. The project heads were Barbara Radke (right) and Theresa Montgomery (front, second from right).

At one point while I was still working on the MELVYL project, probably around the very late 1990s or early 2000s, I gathered up some organization charts that had been issued over the years and quickly calculated that during its history the technical staff that had created this early marvel had varied from three-quarters to two-thirds female. I did some talks at various conferences in which I called MELVYL a system "created by women." At my retirement in 2003 I said the same thing in front of the entire current staff, and it was not well-received by all. In that audience was one well-known member of the profession who later declared that he felt women needed more mentoring in technology because he had always worked primarily with men, even though he had indeed worked in an organization with a predominantly female technical staff, and another colleague who was incredulous when I stated once that women are not a minority, but over 50% of the world's population. He just couldn't believe it.

While outright discrimination and harassment of women are issues that need to be addressed, the invisibility of women in the eyes of their colleagues and institutions is horribly damaging. There are many interesting projects, not least Wikipedia's Women in Red, that aim to show that there is no lack of accomplished women in the world; it's the acknowledgment of their accomplishments that falls short. In the library profession we have many women whose stories are worth telling. Please, let's make sure that future generations know that they have foremothers to look to for inspiration.

Blogging is dead… here are some tips to manage your online working environment / Mita Williams

Blogging is dead. Blogging as an ecosystem of blogrolls, blog rings, blog planets, RSS readers, and writers who link and respond to each other… it is long gone. Most people don’t even know that this network once existed, once thrived, and then was lost.

That being said, I still believe blogging is good. Blogging can be personally meaningful and professionally useful and blogging can still be powerful. Small communities of bloggers still exist in niches, like food blogs.

But in many ways, the once mighty blog post has been reduced to being a fall-back longer form entry that is meant to be carried and shared by social media. Most of my own traffic comes indirectly. Last month a post of mine received over 1000 reads in a day – with almost all traffic coming from Facebook. But as I can’t follow back the trail, I have no idea who shared the link to my blog or why.

I have also seen blog posts being shared from author to reader to reader-once-removed via newsletter. When a particular article resonates, you can sometimes see it appear in a new newsletter every week, each recommendation like a ripple in a pond — a little bit of text pushing the readership of a piece of writing just a bit wider than the original audience.

While I get a rush of serotonin every time something I write resonates with readers who share my writing, I still want to write work that decidedly isn’t meant to resonate with a wide audience. I still want to have a place where I can write and share posts that might be useful to some readers.

What I’m trying to say is, I want to share a boring bit of writing now and I know it’s boring and I want you to know that I’m aware that it’s boring.

I have two recommended practices that I would like to share with those who might find them useful, as many of us are now working in an always-online environment. These practices have worked for me and they might work for you. (Your mileage may vary. All advice is autobiographical.)

The first practice is one that I saw recommended by Dave Cormier, and I was so pleased to see his recommendation, because I do that thing and it felt very validating. That suggested practice is to always keep a window open – for you it might be a Word document, but for me, it’s a Google Document – that is available for any time you need to drop in a note or a link or an idea to return to later.

There are many people who have amazing systems to manage their online ‘to do’ lists, but I have found that creating a next action for every interest and facet of my person (as a librarian, as a mom, as a reader, as someone trying to eat healthier, as a gardener…) is too much for me. Instead, I have found sustained success in the much more low-key logbook. I have one for work and one for home.

On February 19, 2019, I created a Work Log google doc. I know this because I started with a H2 heading of February 19, 2019 and then added a series of bullet points of what I had done that day. Sometimes I drop links to matters that I need to read or follow up on. And when there’s something that I need to do and I don’t want to forget it, I add three asterisks *** so I can go back and Control-F my log into a Todo list. The next day, I add the new date at the top of the page and begin again. And that’s it. That’s my system. It’s like I’m perpetually stuck on step one of proper bullet journaling.
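If you ever want to pull the to-dos out of that log automatically, a few lines of scripting will do it. Here is a minimal sketch in Python, assuming the log has been exported as plain text; the file name is a hypothetical stand-in, and the *** marker is the one from the post:

    # Pull open to-do items out of a plain-text export of the work log.
    # Assumes to-do lines are tagged with "***" as described above; the
    # file name "work-log.txt" is a hypothetical stand-in.
    from pathlib import Path

    def todos(log_path):
        """Return every line of the log that carries the *** marker."""
        lines = Path(log_path).read_text(encoding="utf-8").splitlines()
        return [line.strip() for line in lines if "***" in line]

    if __name__ == "__main__":
        for item in todos("work-log.txt"):
            print(item)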

The second suggestion is a practice that I’m setting up right now, which is why I was inspired to write this blog post in the first place.

On July 1st, my workplace transitions to the next working year. For the last ten years now, I have used the year’s rollover as an opportunity to create a new folder in my Inbox for the upcoming year’s work. This year the folder reads .2020-2021

I learned this technique when I accidentally saw the screen of my colleague and saw how she organized her email. I have to admit, I was first sort of shocked by this approach. Why create nesting folders of email by year? Why not work on creating folders by subject? ARE WE NOT LIBRARIANS?

But this is the thing. Even librarians cannot know a priori what categories are going to be useful in the future. Rather than create a file system that works for you for a while but then slowly, slowly grows to become, over the years, a misshapen file tree of deep sub-folders and dead main branches… consider starting new. Consider starting a new inbox from scratch every calendar year. And don’t create a single sub-folder within that folder until you receive an email that needs to be put away; if it doesn’t already have a place that makes sense, create a place for that kind of email.

At the very least, for a few short months, everything will feel findable and understandable and it will feel wonderful. That is, if you live a life as boring as mine.

Maybe this is the real feature that separates blogging from social media: it’s the place where we can be boring.

Deanonymizing Ethereum Users / David Rosenthal

In last January's Bitcoin's Lightning Network I discussed A Cryptoeconomic Traffic Analysis of Bitcoin’s Lightning Network by the Hungarian team of Ferenc Béres, István A. Seres, and András A. Benczúr. They demolished the economics of the Lightning Network, writing:
Our findings on the estimated revenue from transaction fees are in line with the widespread opinion that participation is economically irrational for the majority of the large routing nodes who currently hold the network together. Either traffic or transaction fees must increase by orders of magnitude to make payment routing economically viable.
Below the fold I comment on their latest work.

It has been clear for some time that the privacy of Bitcoin's and similar blockchains is illusory. Companies such as Chainalysis exist to pierce their shields, despite the availability of privacy enhancements such as mixers. Now Blockchain is Watching You: Profiling and Deanonymizing Ethereum Users by the same team plus Mikerah Quintyne-Collins points out that the same applies even more strongly to Ethereum:
Ethereum is the largest public blockchain by usage. It applies an account-based model, which is inferior to Bitcoin’s unspent transaction output model from a privacy perspective. As the account-based models for blockchains force address reuse, we show how transaction graphs and other quasi-identifiers of users such as time-of-day activity, transaction fees, and transaction graph analysis can be used to reveal some account owners. To the best of our knowledge, we are the first to propose and implement Ethereum user profiling techniques based on user quasi-identifiers.
Mixers appeared in the Bitcoin ecosystem in an attempt to mitigate its inadequate privacy by obscuring the history of transactions in a herd of unrelated ones. The programmability of Ethereum allows not just for mixers, but also more complex ways to obscure history:
Due to the privacy shortcomings of the account based model, recently several privacy-enhancing overlays have been deployed on Ethereum, such as noncustodial, trustless coin mixers and confidential transactions. We assess the strengths and weaknesses of the existing privacy-enhancing solutions and quantitatively assess the privacy guarantees of the Ethereum blockchain and ENS. We identify several heuristics as well as profiling and deanonymization techniques against some popular and emerging privacy-enhancing tools.
Because "the account-based models for blockchains force address reuse", addresses in Ethereum are more persistent, so there is a need to name them:
Ethereum Name Service (ENS) is a distributed, open,and extensible naming system based on the Ethereum blockchain. ... ENS maps human-readable names like alice.eth to machine-readable identifiers such as Ethereum addresses. Therefore, ENS provides a more user-friendly way of transferring assets on Ethereum, where users can use ENS names (alice.eth) as recipient addresses instead of the error-prone hexadecimal Ethereum addresses.
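For the curious, the mapping from a human-readable name to its 32-byte identifier is specified by ENS's namehash algorithm (EIP-137). A minimal sketch in Python, assuming the pycryptodome package for keccak-256 and skipping the name normalization step the full spec requires:

    # EIP-137 namehash: how ENS turns "alice.eth" into a 32-byte node id.
    # Requires pycryptodome (pip install pycryptodome); real ENS resolution
    # also normalizes the name first, which this sketch skips.
    from Crypto.Hash import keccak

    def keccak256(data):
        return keccak.new(digest_bits=256, data=data).digest()

    def namehash(name):
        node = b"\x00" * 32  # the root node is 32 zero bytes
        if name:
            for label in reversed(name.split(".")):
                node = keccak256(node + keccak256(label.encode("utf-8")))
        return node

    print(namehash("alice.eth").hex())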
Béres et al Table 1
The research team collected addresses with which to experiment:
  • Twitter: By using the Twitter API, we were able to collect 890 ENS names included in Twitter profiles, and discover the connected Ethereum addresses.
  • Humanity DAO: human registry of Ethereum users, which can include a Twitter handle in addition to the Ethereum address.
  • TornadoCash mixer contracts: We collected all Ethereum addresses that issued or received transactions from TornadoCash mixers.
Béres et al Figure 2
And the transactions in which they were involved:
By using the Etherscan blockchain explorer API, we collected 1,155,188 transactions sent or received by the addresses in our collection. The final transaction graph contains 159,339 addresses. The transactions span from 2015-07-30 till 2020-04-04.
They used three quasi-identifiers to link multiple addresses in their collection to a single user:
  • Time-of-day transaction activity (Section 6.1):
    Ethereum blockchain transaction timestamps reveal the daily activity patterns of the account owner
    The idea being that the more similar the daily pattern of address usage, the more likely the addresses belong to the same user (a sketch of this profiling follows the list).
  • Gas price distribution (Section 6.2):
    Ethereum transactions also contain the gas price, which is usually automatically set by wallet softwares. Users rarely change this setting manually. Most wallet user interfaces offer three levels of gas prices, slow, average, and fast where the fast gas price guarantees almost immediate inclusion in the blockchain.
    The idea being that the more similar the pattern of gas price selection, the more likely the addresses belong to the same user.
  • Transaction graph analysis (Section 6.3):
    The set of addresses used in interactions characterize a user. Users with multiple accounts might interact with the same addresses or services from most of them. Furthermore, as users move funds between their personal addresses, they may unintentionally reveal their address clusters.
    As with Bitcoin, slight slips in operational security lead to deanonymization. In practice, few users can maintain adequate OpSec.
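To make the first of these concrete, here is a minimal sketch of time-of-day profiling, with hypothetical timestamps standing in for real chain data; cosine similarity is one plausible way to score a candidate pair, not necessarily the paper's exact measure:

    # Time-of-day quasi-identifier: bucket an address's transaction
    # timestamps into 24 hourly bins, then score address pairs by the
    # similarity of their normalized activity profiles.
    import math
    from datetime import datetime, timezone

    def hourly_profile(unix_timestamps):
        """Normalized 24-bin histogram of activity by hour of day (UTC)."""
        bins = [0.0] * 24
        for ts in unix_timestamps:
            bins[datetime.fromtimestamp(ts, tz=timezone.utc).hour] += 1.0
        total = sum(bins) or 1.0
        return [b / total for b in bins]

    def cosine(p, q):
        dot = sum(a * b for a, b in zip(p, q))
        norms = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
        return dot / norms if norms else 0.0

    # Owners active at similar hours score close to 1.
    a = hourly_profile([1585000000, 1585003600, 1585090000])
    b = hourly_profile([1585000300, 1585004000, 1585090900])
    print(round(cosine(a, b), 3))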
The authors don't expect these techniques to deliver complete deanonymization:
Exact identification is an overly ambitious goal in our experiments, which aim to use very limited public information to rank candidate pairs and quantify the leaked information as risk for a potential systematic deanonymization attack. For this reason, we quantify non-exact matches, since even though our deanonymizing tools might not exactly find a mixing address, they can radically reduce the anonymity set, which is still harmful to privacy.
Béres et al Figure 15
What their techniques deliver is a list of addresses ranked from most to least likely to belong to the same user. They compare time-of-day, gas price and two forms of graph analysis in Figure 15. It shows the fraction of their 129 address pairs from ENS names with exactly two addresses that are in the top X of the ranked list. Graph analysis is clearly better than the alternatives. Combining the two graph analysis techniques gets more than 75 of the top 100 pairs to be in their test set. This isn't great, but it is way more than enough to force anyone using Ethereum for nefarious purposes to resort to privacy-enhancing technology.

Thus in Section 7 the authors attack the most popular Ethereum mixer:
The Tornado Cash (TC) Mixers are sets of trustless Ethereum smart contracts allowing Ethereum users to enhance their anonymity. A TC mixer contract holds equal amounts of funds (ether or other ERC-20 tokens) from a set of depositors. One contract typically holds one type of asset. In case of the TC mixer, anonymity is achieved by applying zkSNARKs [22]. Each depositor inserts a hash value in a Merkle-tree. Later, at withdraw time, each legitimate withdrawer can prove unlinkably with a zero-knowledge proof that they know the pre-image of a previously inserted hash leaf in the Merkle-tree. Subsequently, users can withdraw their asset from the mixer whenever they consider that the size of the anonymity set is satisfactory.
As usual in cryptocurrencies, the technology depends upon impractically perfect OpSec by users. The authors base three address-linking heuristics on this observation (the first is sketched in code after the list):
  • A user uses the same address for both a deposit and the subsequent withdrawal.
  • A user manually sets the same unique gas value for both a deposit and the subsequent withdrawal.
  • A user uses addresses between which a transaction can be found for both a deposit and the subsequent withdrawal.
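A minimal sketch of the first heuristic, over a hypothetical record format; real input would be the deposit and withdrawal transactions scraped from a mixer contract:

    # Heuristic 1: flag any address that appears both as a depositor and
    # as a withdrawer of the same mixer contract. The MixerTx format is a
    # hypothetical stand-in for scraped contract transactions.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MixerTx:
        address: str    # account interacting with the mixer
        kind: str       # "deposit" or "withdraw"
        timestamp: int  # unix seconds

    def careless_reuse(txs):
        """Addresses that both deposited into and withdrew from the mixer."""
        deposits = {t.address for t in txs if t.kind == "deposit"}
        withdraws = {t.address for t in txs if t.kind == "withdraw"}
        return deposits & withdraws

    txs = [MixerTx("0xaaa", "deposit", 1), MixerTx("0xbbb", "withdraw", 2),
           MixerTx("0xaaa", "withdraw", 3)]
    print(careless_reuse(txs))  # {'0xaaa'}: deposit and withdrawal are linked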
Their Table 2 shows that these three heuristics linked nearly 18% of withdrawals to their deposits in the most popular 0.1 ETH mixer. This is not just bad for the depositors involved, but for all users; it means the anonymity set is at most 82% as big as they think it is.

They observe endemic OpSec failures:
In Figure 17, we observe that most users of the linked deposit-withdraw pairs leave their deposit for less than a day in the mixer contract. This user behavior can be exploited for deanonymization by assuming that the vast majority of the deposits are always withdrawn after one or two days.
This is really bad, as they point out:
For example, for the 0.1ETH mixer the original average anonymity set size of 400 could be reduced to almost 12 by assuming that the deposit occurred within one day of the withdraw.
But it isn't the worst:
Even worse, in Figure 19 we observe several addresses receiving more than one withdraws from the 0.1 ETH mixer contract. For instance, there are 85 addresses with two withdraws and 27 addresses with three withdraws. Withdraw clusters cause privacy risk not just for the owner but for all other mixer participants as well. Note that proper usage requires withdraw always to fresh addresses.
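A minimal sketch of the day-window heuristic behind those anonymity-set numbers, with hypothetical timestamps: for a given withdrawal, the effective anonymity set is only the deposits that landed in the preceding window, not every deposit the contract ever received.

    # Day-window heuristic: if most users withdraw within a day or two of
    # depositing, an observer can shrink a withdrawal's anonymity set to
    # the deposits made shortly before it. Timestamps are hypothetical.
    DAY = 86_400  # seconds

    def anonymity_set(deposit_times, withdraw_time, window=DAY):
        """Count deposits within `window` seconds before the withdrawal."""
        return sum(1 for t in deposit_times if 0 <= withdraw_time - t <= window)

    deposits = [100_000, 150_000, 170_000, 400_000]
    print(anonymity_set(deposits, withdraw_time=180_000))  # 3, not 4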
In Blockchain: What's Not To Like? I wrote:
In practice the security of a blockchain depends not merely on the security of the protocol itself, but on the security of the core software and the wallets and exchanges used to store and trade its cryptocurrency. This ancillary software has bugs, such as the recently revealed major vulnerability in Bitcoin Core, the Parity Wallet fiasco, and the routine heists using vulnerabilities in exchange software.
But I missed an important point. Almost 21 years ago, in Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0, Alma Whitten and J.D. Tygar showed that PGP did not in practice deliver the excellent security it promised in theory because:
User errors cause or contribute to most computer security failures, yet user interfaces for security still tend to be clumsy, confusing, or near-nonexistent. Is this simply due to a failure to apply standard user interface design techniques to security?  We argue that, on the contrary, effective security requires a different usability standard, and that it will not be achieved through the user interface design techniques appropriate to other types of consumer software.

To test this hypothesis, we performed a case study of a security program which does have a good user interface by general standards:  PGP 5.0. ... The analysis found a number of user interface design flaws that may contribute to security failures, and the user test demonstrated that when our test participants were given 90 minutes in which to sign and encrypt a message using PGP 5.0, the majority of them were unable to do so successfully.

We conclude that PGP 5.0 is not usable enough to provide effective security for most computer users, despite its attractive graphical user interface, supporting our hypothesis that user interface design for effective security remains an open problem.
Fourteen years ago Steve Sheng et al revisited the issue in Why Johnny Still Can’t Encrypt: Evaluating the Usability of Email Encryption Software:
We ran a pilot of the study with six novice users using PGP 9 and Outlook Express 6.0. Even though we only performed a pilot study, several patterns emerged early to indicate major problems in PGP 9.
...
In summary, compared with Whitten’s study of PGP 5, PGP 9 made strides in automatically encrypting emails. The key certification process becomes the key to the issue in PGP 9 has not made any improvements. PGP 9’s presents multiple instances where the interface does not provide enough cues or feedback for the user.
Three years ago, in When the cookie meets the blockchain:Privacy risks of web payments via cryptocurrencies, Steven Goldfeder, Harry Kalodner, Dillon Reisman and Arvind Narayanan showed how difficult it was for users to make purchases on the Web using cryptocurrencies without sacrificing privacy:
We show how third-party web trackers can deanonymize users of cryptocurrencies. We present two distinct but complementary attacks. On most shopping websites, third-party trackers receive information about user purchases for purposes of advertising and analytics. We show that, if the user pays using a cryptocurrency, trackers typically possess enough information about the purchase to uniquely identify the transaction on the blockchain, link it to the user’s cookie, and further to the user’s real identity. Our second attack shows that if the tracker is able to link two purchases of the same user to the blockchain in this manner, it can identify the user’s entire cluster of addresses and transactions on the blockchain, even if the user employs blockchain anonymity techniques such as CoinJoin.
I'm sure both versions of PGP, the Bitcoin software in use for the Goldfeder et al study, and the Ethereum software in use during Béres et al's study had vulnerabilities. But none of the security lapses in these studies exploited any of them. The user interfaces of security-critical software must be designed so that the user cannot perform actions that impair the security or anonymity the infrastructure is designed to deliver.

For example, it is essential that a mixer such as Tornado Cash not preserve the connection between a deposit and the corresponding hash value in the Merkle tree. Thus it cannot know whether a user is withdrawing to the same address they used for the deposit. This check can only be performed by the user interface software, but Béres et al.’s study shows it isn’t. Similarly, they show that anonymity requires randomized intervals between deposit and withdrawal, which again can only be implemented by the user interface software.
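A minimal sketch of what such wallet-side safeguards might look like; the function names and delay constants are illustrative assumptions, not any real wallet's API:

    # Two client-side safeguards argued for above: refuse to withdraw to
    # an address that was used for a deposit, and randomize how long funds
    # sit in the mixer. Names and constants are hypothetical.
    import secrets

    MIN_DELAY = 86_400       # leave funds at least a day (illustrative)
    MAX_EXTRA = 6 * 86_400   # plus up to six more random days

    def schedule_withdrawal(deposit_time):
        """Pick, at deposit time, a randomized earliest-withdrawal timestamp."""
        return deposit_time + MIN_DELAY + secrets.randbelow(MAX_EXTRA)

    def check_withdrawal(withdraw_to, deposit_addresses, earliest, now):
        """Block the two OpSec mistakes the study found endemic."""
        if withdraw_to in deposit_addresses:
            raise ValueError("withdrawal address was used for a deposit; "
                             "use a fresh address instead")
        if now < earliest:
            raise ValueError("withdrawing this soon links the deposit by "
                             "timing; wait until the scheduled time")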

We're going to DrupalCon Global! / Islandora

We're going to DrupalCon Global! manez Wed, 06/24/2020 - 16:58
Body

On July 14, the Islandora Foundation will be presenting the session "Drupal as a Digital Asset Management System" at DrupalCon Global, and joining the Libraries Summit for a lightning talk and Q&A session. We had been planning to give this same session as a live presentation at DrupalCon Minneapolis in May, so we're thrilled to still have an opportunity to connect with the broader Drupal community, now on an even more global scale.

DrupalCon Global will unite an international community for three days of live events hosted daily July 14 - 16, 2020. If you'd like to join us, and expand your own Drupal network and knowledge base, tickets are $249 USD and include:

  • Full access to all conference events July 14 - 17
  • Live events including interactive sessions, mini summits, and professional development sessions
  • On-demand access to the video library
  • Special attendee-only offers from sponsors

Creating a Library Wide Culture and Environment to Support MLIS Students of Color: The Diversity Scholars Program at Oregon State University Libraries / In the Library, With the Lead Pipe

In Brief

The work of social justice, equity, and inclusion is not a short-term investment by a limited number of people; instead, it should be a part of every library’s and librarian’s work. At the Oregon State University Libraries (OSUL), we felt that in order to create a program dedicated to employing MLIS students of color, it was essential to understand the systems and histories of oppression, as well as the culture of Whiteness, within our state, our university, our library, and ourselves. While the bulk of this article is dedicated to an in-depth explanation of the development and implementation of our Diversity Scholars Program (DSP) to support MLIS students of color, we first share information about our local context, specifically the ongoing equity, diversity, and inclusion work within our library, as well as the professional literature that addresses these issues. The purpose of our case study is to provide a roadmap of our program, with lessons learned, for other academic libraries to consider creating a program like ours at their institution. We cover why and how the OSUL created the DSP, how the program functions, as well as current assessment practices used by the DSP Committee to surface the already visible impacts of the program while we work towards the long-term goals of culture and systems change. Within the article we have integrated the perspectives of the Diversity Scholars and the OSUL University Librarian to create a more robust and thorough accounting of the work required to create and launch such a program.

By Natalia Fernandez and Beth Filar Williams

Introduction 

Master of Library Science students, particularly those getting online degrees, need experiences in libraries to better prepare them for their post-MLIS careers. Offering a concurrent opportunity to gain experience working in a library setting while earning an online degree provides this needed experience not only to obtain a holistic understanding of libraries, but also to discover and focus on areas of interest. Within the library and archives profession there are various programs and initiatives dedicated to supporting new generations of racially and ethnically diverse librarians and archivists, programs such as the Association of Research Libraries (ARL) Kaleidoscope Program, the ARL/Society of American Archivists Mosaic Program, the American Library Association Spectrum Scholarship Program, and the University of Arizona Knowledge River Program. Each of these programs, and other programs like them, offers MLIS students of color scholarships or paid employment, mentoring, leadership and professional development opportunities, and career placement assistance. Additionally, the Association of College and Research Libraries (ACRL) Diversity Alliance is a group of institutions with post-MLIS residency programs also dedicated to supporting librarians and archivists of color to succeed and thrive within the profession. These programs exist as part of various equity, diversity, and inclusion initiatives within the library and archives profession on the national level, through both the American Library Association and the Society of American Archivists, to address the Whiteness of the profession, both in demographics and culture (Strand, 2019). For these initiatives to fully develop and be impactful, institutions on the local level need to understand their culture and environment in order to consider implementing a program that supports MLIS students of color.

On the local level, at the Oregon State University Libraries (OSUL), we have hosted numerous MLIS students over the years in paid positions, for-credit internships, and practicums to offer a variety of work experiences, learning opportunities, and mentorship as they enter the profession. In the mid-2010s, as part of the OSUL's 2012-2017 strategic plan, our institution recognized diversity as a core value with a goal to “Sustain an intentional and inclusive organization” and an action item to “increase the diversity of the OSU Libraries and Press workforce.” We recognized that our library had never proactively and systemically engaged in the recruitment, employment, and retention of MLIS students of color. However, we also recognized that this was not a box to be checked, and that we needed to think holistically about our libraries’ environment and culture and how it would impact MLIS students of color working with us, especially considering that our own library is a reflection of a majority White profession (ALA Diversity Counts). We knew that we could not ignore the implicit and explicit systemic racism, the White supremacy narrative, and Whiteness as a culture that exists within our library and is a reflection of our profession and our society.

The work of social justice, equity, and inclusion is not a short-term investment by a limited number of people; instead, it should be a part of every library and librarian’s work. Though “it is clear that the information professions are now in the midst of a conversation about Whiteness…not everyone is participating, and many remain unaware that the conversation is happening” (Espinal et al., p. 149). We each need to do our part. Hence, it was essential for the OSU Libraries, especially those of us involved in the process to create a program dedicated to employing MLIS students of color, to understand the systems and histories of oppression, as well as the culture of Whiteness, within our state, our university, our library, and ourselves. While the bulk of this article is dedicated to an in-depth explanation of the development and implementation of our Diversity Scholars Program to support MLIS students of color, we will first share information about our local context, specifically the ongoing equity, diversity, and inclusion work within our library, as well as the professional literature that addresses these issues.    

Who We Are

Both the state of Oregon and Oregon State University have a dark history in their treatment of people of color as well as LGBTQIA communities. Past state and local laws excluded people of color from land ownership, prevented marriage between Whites and those of other races and ethnic backgrounds, and discouraged immigration and permanent settlement by non-Whites (Millner and Thompson, 2019). However, in resistance to the societal and governmental racism endured, Indigenous peoples and people of color in Oregon formed community and organizational networks to retain and share their cultural heritage. Within Oregon, there are community archives, such as the Portland Chinatown Museum and the Gay and Lesbian Archives of the Pacific Northwest, as well as community-led groups to research and share history, such as the Oregon Black Pioneers. There are a number of advocacy groups including, but by no means limited to, the Native American Youth and Family Center, the Urban League of Portland, and Pineros y Campesinos Unidos del Noroeste. For us, it is essential to understand the history of our state, as well as the current community initiatives occurring in our state, because this is the environment in which our institution and our library exist. This history of systemic injustice and White supremacy on the state level remains deeply embedded and active on the local level. Within the state's context, OSU is a PWI (a Predominantly White Institution is an institution of higher learning in which people who identify as White account for 50% or greater of the student enrollment, or the institution is understood as historically White) within a predominantly White state. In the 2019 academic year, students of color accounted for just over 26% of the population of just over 31,000 students. This number mirrors Oregon's 2019 population estimate of 26% people of color living in the state. However, the proportion of faculty and staff from underrepresented groups is very low in comparison. As examples, of the tenure track, instructor, and research faculty, individuals from underrepresented groups range between 4.8% and 6.7%, and of the professional faculty and classified staff, the range is only slightly higher at about 9% (Oregon State University Strategic Plan 4.0 Metrics 2018-2019). The OSU Libraries on the main campus in Corvallis, with two branch libraries (one on the coast and one in central Oregon), employs about 90 faculty and staff, along with about 140 student employees. Our library matches the university's demographics in being a predominantly White-identifying library staff, though we often have a majority of underrepresented groups in our student employees.

Both the OSU campus and the library have been, and continue to be, engaged in actions to change the Whiteness culture. One of the actions Brook, Ellenwood & Lazzaro suggest libraries can take is to “provide library staff with ongoing opportunities to participate in trainings and other professional development activities that build knowledge of their own cultural backgrounds and assumptions, the racial and ethnic diversity of the campus community, and the history of oppression, power, and privilege experienced by various groups” (p. 276). In recent years, OSU has been engaging with and revealing its history through educational initiatives, such as a building names evaluation and renaming process that renamed buildings originally named after individuals who were White supremacists, as well as the university’s Social Justice Education Initiative (SJEI), whose workshops examine the existing systemic and institutionalized racism in Oregon and at OSU and ask participants to understand “how did we get here, how do you locate yourself in this story, and why does social justice matter?” OSU also has a Search Advocate Program that trains individuals to participate in search committees to promote equity, validity, and diversity on OSU searches. “The goals for diversity and inclusion in librarianship must be expanded to include recruitment, retention, and promotion” (Espinal et al., p. 155), which is why this Search Advocate Program is critical to making changes holistically on our campus. In connection to the OSU Libraries, the Special Collections and Archives Research Center was deeply involved in the building names evaluation and renaming process, our library director strongly encourages all library staff and faculty to participate in the SJEI workshops as a part of their work, and, though not required by the university, the OSU Libraries strives for all of its searches to have a search advocate, and many librarians are search advocates. The continual offerings of social justice trainings, invited library speakers such as Dr. Safiya Umoja Noble, and the search advocate community of practice all support continual growth and learning, and the majority of our library has participated. We often follow up with discussions at library meetings on how to apply what we learned, helping us to “work collectively to understand racial microaggressions and to mitigate their impact” (Brook, et al., p. 276). We have also hosted numerous book groups to discuss and grow, as Espinal et al. state that we must educate our (White) selves through readings (p. 159). Titles discussed have included Waking Up White by Debby Irving and White Fragility by Robin DiAngelo. Both book clubs have allowed us to discuss and be self-reflective on our own Whiteness and the changes we could make in our institutional and personal work. OSU also knows that it has a lot more work to do and recently launched a campaign – We Have Work To Do – pushing this messaging throughout campus, acknowledging there is not one solution or checkbox, but a need for constant reflective practice and concrete actions. Additionally, the OSU Difference, Power, and Discrimination (DPD) Program works with faculty across all fields and disciplines at OSU to develop inclusive curricula that address institutionalized systems of power, privilege, and inequity in the United States. Several OSU librarians have completed this program, and several work collaboratively with professors who teach DPD courses.
Within the library, librarians have been observant and intentional in making systemic changes to our library classification, adding local headings to replace controversial, outdated, and often racist subject headings. Librarians have also been collaborating with community groups to host events such as Wikipedia Editathon: Writing Pacific Northwest African American history into Wikipedia – another way the library is attempting to make systemic changes to the inherent Whiteness of libraries. 

While these initiatives show ways the OSU Libraries is growing and working toward combating its Whiteness, it is essential for the members of any group thinking of beginning a program to support MLIS students of color not only to participate in such initiatives, but also to be deeply self-reflective about their own identities and privilege. Before engaging in a process to research, develop, and implement our program, we had to make sure that we did the work to educate ourselves.

Beth: As a young child growing up in the Baltimore area, I recall my mom saying she was embarrassed to be White and how terrible it was to be Black in this country, reflecting on the injustices people of color face daily and taking me to marches and protests. She was getting her degree while teaching in a Head Start program in Baltimore city schools, learning and being mentored there as one of only two White people in the school. I don’t remember ever not thinking about racism as a problem in America, but I was hopeful others were like me and my family: accepting people for who they are, not thinking about skin color, helping your community and those in need, and trusting that change was slowly happening. As I got older, I began to realize that racism was deeply embedded in ALL systems, including librarianship. I learned that it is unhelpful to be colorblind, ignoring the hidden systems of Whiteness and racism; instead, action is needed to speak up, call people out, and continually grow myself. As a White cisgender female tenured administrator and head of the Library Experience and Access Department at Oregon State University Libraries since 2015, I have more power and influence to actually make inroads toward changing our systems. In my over 20 years in libraries I have worked in various places and positions, but mentoring students, especially MLIS students, has been part of every job I have held and is my passion. Based on many experiences throughout my career, and especially my time with the diversity resident program at the University of North Carolina at Greensboro, I was able to help create our Diversity Scholars Program, and I continue to serve on its committee to mentor the scholars, grow the program, and advocate for both. 

Natalia: As a Latinx cisgender woman interested in pursuing a career in librarianship, specifically within special collections and archives, I was overjoyed to learn that in my home area of southern Arizona, the University of Arizona Knowledge River Program specialized in educating information professionals about the needs of Latinx and Native American communities. My experiences as an MLIS student in the Knowledge River Program, including the mentorship I received from both librarians of color and White allies, the paid job opportunities offered through the program, the professional development funds to attend conferences, and the overall experience of being in a cohort of supportive peers, all effectively prepared and empowered me to begin my post-MLIS career. My primary job as the curator of the Oregon Multicultural Archives and OSU Queer Archives, a position I have held since late 2010, is to collaborate with LGBTQIA communities and communities of color to empower them to preserve, share, and celebrate their stories. Within my position, I have supervised numerous graduate students on various archival projects. In 2015, I co-founded the Diversity Scholars Program Committee, and I am the supervisor of the Diversity Scholars. In order to create an environment in which MLIS graduate students can thrive, I use both the lessons learned from others within the profession via conference presentations and publications and reflection upon my own experiences as a Knowledge River Scholar to inform the ways in which I shape the Diversity Scholars Program. Over the course of my life I have both been othered and experienced privilege; I have experienced microaggressions and have made mistakes myself. I actively engage in social justice trainings and conversations, and I recognize that fully understanding my identities is a process and a life-long journey.

Due to our previous professional experiences and personal passions, a significant role for both of us is to ensure that the next generation of librarians includes more people of color who are well supported as they start their careers. As there will always be more MLIS students, we also see our role as ensuring that the Diversity Scholars Program is holistically integrated into our library so that even if we moved on to other positions in our careers, the program would remain.  

Literature Review

There is a great deal of literature on programs similar to the DSP, as well as on the need for the profession to recruit and support more librarians of color. While decades’ worth of literature exists, for the purposes of our review we will focus on the publications that most inspired and helped shape our program, and we will specifically highlight a few key publications from within the last five years that we feel are must-reads for those considering implementing a similar program. 

In order to have a foundation of knowledge for ourselves and to effectively advocate for the need for the DSP, we read publications that addressed the profession’s overwhelming Whiteness, not just in staffing demographics, but in the profession’s culture of Whiteness and the various systems of oppression working in tandem that continue to perpetuate Whiteness. As April Hathcock aptly states, “It is no secret that librarianship has traditionally been and continues to be a profession dominated by Whiteness” (Hathcock, 2015). To learn more and see statistics on this, see any of the following: Galvan, 2015; Bourg, 2014; Beilin, 2017; Roy, et al., 2006; Boyd, et al., 2017; Pho & Masland, 2017; McElroy & Diaz, 2015; Chang, 2013. Whiteness permeates numerous aspects of our profession. Scholars such as Angela Galvan (2015) and April Hathcock (2015) bring to light the myriad ways Whiteness is embedded more implicitly within our profession through our recruitment and job application processes, and they offer excellent methods to interrogate and interrupt Whiteness within those processes. Jennifer Vinopal (2016) builds upon their work by offering various methods for the profession to go “from awareness to action,” as her article title notes. She advocates for libraries, specifically library leaders, to take on action items such as, but not limited to, creating opportunities for meaningful conversations about equity, diversity, and inclusion; including diversity initiatives in strategic plans and ensuring time and support for staff to accomplish them; and proactively recruiting job candidates and then following through with mentoring and professional development opportunities. All of these scholars note how the race and ethnicity demographics of the profession do not match many of the communities we serve, and they point to the profession’s continued failure to address the institutional cultures that maintain this dynamic. In response to this ongoing imbalance in our professional culture, Boyd, et al. (2017) state, “Deliberate and strategic action must be taken to recruit, mentor, and retain new librarians from diverse backgrounds to further increase these numbers in the profession” (p. 474). 

There are various publications detailing the “how tos” of designing residency programs and positions dedicated to recruiting, supporting, and retaining people of color as part of diversity initiatives to change the demographics of the profession (Boyd, Blue, & Im; McElroy & Diaz; Brewer; Chang; Pho & Masland; Dewey & Keally; Cogell & Gruwell; and many more), so we highlight only a few key pieces. While Beilin notes that even with the many diversity initiatives of the past and present “the demographics of librarianship have hardly shifted over the last generation,” he follows that statement by saying, “though their absence would presumably make things much worse” (p. 78). However, it is not just about doing it; it is about doing it right. When we recruit and hire individuals for positions intended to specifically support people of color, we must ensure their work environments are such that they can thrive and choose to remain within the profession. If you are going to read one book, the 2019 book Developing a Residency Program (Practical Guides for Librarians) is a go-to guide for practical advice on how to develop and manage a library residency program. The book covers the processes to successfully develop, build support for, and structure a program; recruitment, hiring, and onboarding; and program assessment, as well as ideas for post-program support for individuals who continue on in their library careers (Rutledge, Colbert, Chiu, and Alston, 2019).

Additionally, there are two must-read research studies that analyze the experiences of diversity residents using both qualitative and quantitative methods to arrive at overarching recommendations for developing programs like the DSP. In the first piece, “Evaluation of Academic Library Residency Programs in the United States for Librarians of Color,” the authors, Boyd, Blue, and Im, administered two nationwide surveys, one for residents and the other for coordinators, to determine what aspects of their positions and programs were most helpful. The survey respondents included individuals who were currently residents as well as those who had participated in a residency program in decades past and were able to reflect on how their experiences shaped their careers. Based on the data gathered and analyzed, the authors state that institutional buy-in, a structured and formal mentoring program, the use of cohorts to transfer knowledge, and facilitated socialization for residents, especially to create a sense of belonging and value, are all essential program components. The authors state that it is “[t]hese components [that] benefit the residents in priming them for a career in academic libraries and all of the impending challenges librarians of color face” (Boyd, et al., 2017, p. 497). The second must-read publication is Jason Alston’s 2017 “Causes of Satisfaction and Dissatisfaction for Diversity Resident Librarians: A Mixed Methods Study Using Herzberg’s Motivation-Hygiene Theory.” Alston’s dissertation is a deep dive into what works and what doesn’t for a post-MLIS residency program. Alston poses eleven research questions about the quality of the residency experience, with the purpose of the study and its results being the improvement of current and future residency programs. His results were similar to those of the previous study, stressing the need for buy-in from the institution, ensured by making known who the residents are as well as what the program is and why it was established; appropriate guidance, support, and mentorship from coordinators, supervisors, and administrators; opportunities for individuals to perform meaningful, challenging, and innovative work that enables them to grow professionally, especially in preparation for future positions; and assessment of the position and program. Even though the DSP is not a post-MLIS program, the results of both of these studies are still very much applicable to our program.

A recurring theme in the literature is the need to create a professional culture and environment in which people of color can thrive through mentorship and strong professional networks of support (Hankins & Juarez, 2015; Boyd, et al., 2017; Vinopal, 2016; Pho & Masland, 2014; McElroy & Diaz, 2015; Dewey & Keally, 2008; Black & Leysen, 2020; Brewer, 1997). Mentoring can help with the “culture shock” (Cogell & Gruwell, 2001) and “otherness” (Boyd, et al., p. 475) MLIS students of color often feel, and it helps them build bridges and connections (Dewey & Keally, 2008). The chapters in the book Where Are All the Librarians of Color? The Experiences of People of Color in Academia (2015) provide an amazing compilation of the shared experiences of academic librarians of color, and two chapters in particular, chapters 2 and 3, address this need. In both chapters the authors stress the need for mentorship and continued support from professional networks so the profession can retain librarians of color who grow and succeed throughout their careers. Since the DSP focuses on MLIS students of color, we were especially moved by the words of Loriene Roy (2015) in the book’s preface when she states, “…little attention is given to the experiences of librarians of color as they transition from student to information professional” (p. vii) and notes that “[m]entorships are often offered as the best answer for facilitating a smooth adjustment into the workplace and further advancement within the field” (p. viii). While Roy shares that “[t]here is no single route to changing the characteristics of the workforce” (p. vii), a program like the OSUL Diversity Scholars Program is one of many routes that academic libraries can pursue as part of their various initiatives to change our professional culture of Whiteness so it is more diverse and inclusive.

Overview of the Diversity Scholars Program (DSP)

After much research and conversation, the Oregon State University Libraries (OSUL) decided to create a program to support a cohort of MLIS students of color who were enrolled in an online degree program. The reasons for making this decision were context-dependent and informed through conversations within the larger academic librarian community, consulting the literature, and determining what was fiscally feasible. After nearly three years of research, committee meetings, and planning, the OSUL Diversity Scholars Program started with its first scholar in January 2018, hosted its second scholar beginning in October of that same year, and is currently hosting its third scholar who began in October 2019.   

Established in 2015 and implemented in 2018, the Diversity Scholars Program provides its Diversity Scholars with experiences in the areas of librarianship of their choosing, along with opportunities for professional development, scholarship, and service within an academic library setting. The DSP at our academic library aims to contribute to creating a more diverse and inclusive library sciences field by providing MLIS students of color with career opportunities in academic and research libraries and archives. The DSP committee works to provide extensive support and mentorship for scholars who are pursuing their Master of Library and Information Science degree online, while additionally providing paid, hands-on experience within the profession to broaden their professional opportunities after completion of their graduate degree. The Diversity Scholars are expected to engage in the primary assignment duties of an academic librarian. Scholars are given the opportunity to experience the full scope of an academic library, working in all of our departments, from technology and public services to archives, and meeting with administrators, and then to determine their area(s) of focus. 

Our scholars have engaged in a variety of experiences. They have worked with students in the library’s undergraduate research and writing studio, taught library information sessions and workshops, tabled at events such as student welcome activities and OER faculty initiatives, worked the reference desk and online chat, compiled and analyzed library data, and participated in library-wide as well as relevant departmental and project meetings. As part of developing their scholarship, the scholars have attended and presented at local Oregon conferences, national ones like ALA, and even an international conference. They have also served on a variety of library committees, such as the library awards committee, search committees, and the library employee association. We make sure the scholars know that their MLIS studies come first, and they are strongly encouraged to use their work experiences for class projects. The flexibility in their schedules allows for support when and how they need it. As a conclusion to their position appointments, we mentor the scholars through the job search process. Additionally, each scholar experiences the annual review process, which includes self-reflection and goal setting, and they are asked to assess their experience of the program itself. 

We have strived to be mindful of Isabel Espinal’s statement that, in our case, the Diversity Scholars “should not have to choose between technological focus [or any area of interest to them] and a diversity focus: both are future oriented and work well together. Open access projects are a good example, as are digital/data curation roles and media/digital literacy efforts” (p. 158). While the scholars are encouraged, like all faculty and staff in our library, to participate in equity, diversity, and inclusivity projects, trainings, and initiatives, it is always their specific interests that determine which projects they choose. There are cases in which their interests and this work overlap. For example, one scholar interested in the work of archivists asked to participate in the Wikipedia edit-a-thons, and the other scholars, interested in teaching and engagement, were excited for the opportunity to participate in the university’s Mi Familia Day for the Latinx community. If an opportunity aligns with a scholar’s interests and project capacity, we support it; otherwise, the scholar does not participate and is not asked to participate. It is essential for this to be communicated and emphasized by the supervisor. Natalia, as their supervisor, shares her own personal experiences with the scholars to express that because of her job, she is often invited to participate in numerous initiatives, and though she appreciates being asked, she will sometimes choose to decline involvement – and that’s okay. However, it is important to recognize the vulnerable position an MLIS student employee may be in, feeling that an invitation is a directive or wanting to get as much experience as possible, even when it is overwhelming. Therefore, consistent and regular conversations with scholars about their interests are key, especially as those interests change or sharpen over time, and it is imperative for the scholars to know that their supervisor is their advocate and can say “no” on their behalf if that is helpful.  

A part of our program that is still in development, in part because the program is still relatively new, is creating a robust cohort, one in which the scholars have opportunities to work together and act as peer mentors. In our particular experience so far, with only two scholars hired at one time, non-overlapping schedules and differing areas of professional interest have meant that an active cohort has not yet come to fruition. Additionally, in a recent remodel of the library, we decided – with input from the scholars – that instead of creating a shared workspace for the scholars to work together, they should receive individual cubicle spaces as do our other library faculty and staff. While we want the scholars to have flexibility in their schedules and agency in their own professional development, based on their feedback, we are considering ways to create a more formal structure, such as set regular group meetings and shared readings for discussion, in which collaborations and relationships can develop. Notably, we do know that each new scholar contacted the previous scholar to chat about the program prior to applying. 

The purpose of our case study is to provide a roadmap of our program, with lessons learned, for other academic libraries considering a program like ours at their institutions. Our case study describes the research, program development, implementation, and future plans for the DSP. We will cover why and how the OSUL created the DSP and how the program functions, as well as the current assessment practices used by the DSP Committee to surface the already visible impacts of the program while we work toward the long-term goals of culture and systems change. Within the article we have integrated the perspectives of the Diversity Scholars and the OSUL University Librarian to create a more robust and thorough accounting of the work required to create and launch such a program.  

Charge & Research, 2015

It is important to note that our program stemmed from the top down; getting administration buy-in is one critical piece, and we had an advocate in our leadership. In February 2020, we met with our library director, Faye Chadwell, Donald and Delpha Campbell University Librarian, and asked her to reflect upon her reasons for championing a program like the DSP five years ago. Reflecting on the start of her own career in the late 1980s, when her first position as a reference librarian included managing an MLIS graduate fellowship for underrepresented groups at the University of South Carolina, she noted that the same issues persist today. Over the years, Chadwell continued to see the positive impacts of the USC Fellows Program and other programs like it. When she became library director of the OSU Libraries in 2011, she finally had the power to implement a program to support students of color within the library profession, and she sought to do so. In the spring of 2015, our University Librarian charged a team of three librarians with investigating the options the library had to create a diversity resident librarian position. We sought to create a position that would promote diversity within the profession, reflect the changing demographics among our students, and increase opportunities for diverse candidates to explore academic librarianship. Beth, a newly hired department head at OSUL, had come from an institution with an established diversity resident program and had worked with three different residents while there. Her experience and connections at the University of North Carolina at Greensboro helped get the team going with researching the concept. 

The team began with an environmental scan of diversity residency programs within academic libraries. Luckily, through the gracious sharing of the ACRL Residency Interest Group, which had already compiled a spreadsheet of academic library residencies, the team quickly got started. Using the spreadsheet, we each dove into a section to gather the additional information we needed about the listed schools and programs, both looking online and contacting librarians at those institutions directly. We noticed that most residencies are post-MLIS, with a few exceptions, such as the University of Arizona Knowledge River Program, which focuses on current MLIS students. We also discovered two interesting initiatives we could glean ideas from: NUFP and Kaleidoscope. The nationwide student affairs program NUFP (NASPA’s Undergraduate Fellows Program) states that “by mentoring students from traditionally underrepresented and historically disenfranchised populations, this semi-structured program diversifies and broadens the pipeline of our profession.” Established in 2000 as the ARL Initiative to Recruit a Diverse Workforce and renamed ARL Kaleidoscope in 2019, Kaleidoscope’s goal is “diversifying the library profession by providing generous funding for MLIS education and a suite of related benefits, including mentoring, leadership and professional development, and career placement assistance.” Discovering Librarianship, a short-term IMLS-funded project that ALA ran from 2010 to 2013, selected early-career librarians as field recruiters to recruit ethnically diverse high school and college students to careers in libraries. We realized that recruitment must begin with bringing underrepresented groups into LIS programs (McElroy & Diaz, 2015, p. 645; Pho & Masland, 2014, p. 272). These findings and programs guided us to think beyond a post-MLIS position. 

From our research, we realized that talking to current and former residents themselves about their experiences was crucial. Having personal connections with former residents from UNCG, Beth reached out and set up a few virtual conversations. The team also contacted other residents, as well as some residency coordinators. These conversations offered a variety of perspectives on barriers potential programs might face, and also helped illuminate ways the residents and institutions benefited from the programs. Many residency programs, alliances, and interest groups were examined to inform the team about the typical structure and components of such programs. We also read blog posts, book chapters, and articles written by former diversity residents to gain insight into the varied experiences of individuals who have participated in programs like these.  

After our six months of research, and as part of our initial charge, the team wrote a short report for the University Librarian and Library Administration Management and Planning group to share its findings and offer recommendations about what might work best for our library. Although we offered two options, a post-MLIS Diversity Resident Program and a concurrent MLIS-student Diversity Resident Program, we recommended the latter based upon feedback from current and former resident scholars, along with the makeup of already existing opportunities within librarianship. The recommendation would work both to encourage OSU undergraduates to consider an MLIS degree and to find and support local MLIS students of color, not post-graduates, as applicants. Because Oregon has no in-state library master’s programs, we could offer a praxis opportunity for those locally pursuing an online master’s degree and focus recruitment on our local community, especially our own undergraduate library student employees. As Roy said in the summary of the Spectrum Scholars experience, “The single most predictive indicator for choosing to enter a LIS program was prior experience working in a library” (Roy, et al., 2006). Additionally, because the literature states, “Solo library residents can find their residencies to be overwhelming and isolating experiences, especially in the case of diversity library residents” (Boyd, et al., 2017, p. 478), and other scholars mention the need for cohorts rather than solo experiences as well (Alston; Hankins & Juarez; Cogell & Gruwell; Dewey & Keally), we strongly recommended that the program be cohort based; and, if more than one person could not be hired at a time, the hires’ appointments would at least overlap to offer opportunities for peer mentorship and collaboration. Our library administration agreed, and a call went out to recruit volunteers for the next phase of the DSP creation process. By November 2015, a DSP Committee had been formed; it consisted of two members from the original team that wrote the report, as well as three new members, including Natalia. 

As part of our recent interview with the University Librarian, we asked her the following two questions: What advice would you offer administrators who are unsure about starting a program like the DSP? What advice would you offer librarians so they can advocate for a program like the DSP to their administrators? Based on our conversation, as well as our own experiences in the research phase, below are some lessons learned: 

  • Determine the library’s priorities regarding Equity, Diversity, and Inclusion (EDI) work: 
    • A commitment to EDI initiatives cannot be a box that is checked off or a one-off program or workshop; the work needs to be integrated into all departments with a systematic and cultural shift.
    • With EDI initiatives as a priority, then the entire library administration and staff need to dedicate resources and time to concrete action items to move those initiatives forward. Administrators can charge and support a group to conduct research and offer options for what would work best in their institutional context to support MLIS students of color.  
    • If there is pushback from some within the library who ask why the entire library is spending so much time and energy on a few people who are not permanent, there needs to be administrative support and an overall library culture that understands and advocates for these positions because they are for the greater good of the institution and the profession. 
  • Do your research:
    • Seek out literature specifically written by scholars of color; and, beyond reading the literature, try reaching out to people who have been in residencies for advice. Attend webinars or panels of residents/scholars and talk with library program coordinators. Review the ACRL Diversity Standards for cultural competencies for academic libraries.
    • Ask yourselves: What is happening in your campus community? What resources, partners, funding already exist?
    • Consider all possible options and potentially a phased approach if funding or buy-in is not completely there yet. Don’t be afraid to pilot it or experiment.  
  • Seek administrative support as well as advocates within your library staff:
    • Whether you are library staff or an administrator, informally chat with colleagues about your research to gauge their interest and capacity, as well as to plant the seeds for them to support future scholars. It is not a glorified internship; a scholar is to be treated as a colleague. Getting advocates and buy-in from all departments is critical, since not only is administration involved in the decision making, but library staff will be working with the scholars. 
    • Determine what motivates your administrator – is it data? Is it values? What does it mean for the library, campus, community? Administrators tend to be competitive; one approach can be to frame the creation of a program at your institution as the opportunity for them to be the “first” or a “model” for other institutions.    
    • Ask your administrator to talk to other library administrators about their approaches, what worked and what did not, for creating and funding these positions. 

Development, November 2015 – December 2017

During the research phase, we were especially inspired by April Hathcock’s 2015 article “White Librarianship in Blackface: Diversity Initiatives in LIS,” in which she explains how diversity programs, especially their application processes, are coded to promote Whiteness, and the need to mentor early-career librarians in both navigating and dismantling Whiteness within the profession. The full cycle of our program was critically important: a recruitment and application process that encourages people of color to pursue a career in librarianship, a program experience that includes a strong mentorship component, support in the job search for program participants, and continued support in the post-MLIS experience. With this insight, the DSP committee officially launched in November 2015, with weekly meetings beginning in January 2016. The committee’s task was to pick up where the previous group’s work left off and develop a plan to make the proposed program a reality. The main “to dos” included brainstorming the program logistics, creating a position description, and planning recruitment strategies. Committee members reviewed the previous group’s report, read key pieces of literature on residency programs, and reviewed a variety of existing residency program position descriptions. We also spoke with our university’s Office of Equity and Inclusion and Human Resources department about the creation of this type of position, especially for someone who would be enrolled in an out-of-state graduate program while employed by OSU. We created a space on our library’s wiki to document the committee’s work. Beyond the administrative aspects of the program, we also used time in meetings to allow for discussion, growth, understanding, and sometimes emotional release as we supported each other in unpacking the systemic Whiteness embedded in so much of what we do. 

Together, we brainstormed the ways in which we could best frame and implement our program to address the issues Hathcock raises, in both the short- and long-term vision of the program. We asked ourselves, “What would success look like for this program, in both the short- and the long-term?” We knew that we wanted the program to still exist 10-15 years from now, for the program participants to be connected, and for the program to be so embedded in our library that it would outlive us in our positions. 

In order to more fully develop our program ideas, the committee decided to offer a one-time paid 10-week undergraduate student internship during the summer of 2016. The internship served as a pilot for our proposed program, and based on the questions raised and the discussions we had, the DSP committee further developed the program structure and recruitment ideas. Some initial insights included: 

  • We learned that it would be ideal to have more than one scholar at a time. However, we knew that we would have to balance this desire with our budget and attempt at least some overlap in the position time periods. 
  • We determined that if we wanted to hire MLIS students, we could realistically only hire them to work 20 hours per week so they could also attend school full time if they chose to do so. 
  • Additionally, knowing that graduate students often want to take an internship, catch up on classes, or vacation in the summer, we did not want to lock them into a 12-month position, so we considered a shorter time frame with the ability to come back for a second year. 

We settled on a 9-month appointment, comprising only 30 weeks of work during that period, that could be renewed for a second 9-month appointment for a total program length of 18 months, with the option of an extension. This year and a half could include a 3-month break in between if scholars chose to do a summer internship elsewhere or a special project internship in our library. We aimed for flexible schedules to meet the varying needs of our scholars, and spoke with our University Librarian about being able to add an extra 3 months if needed to assist scholars until graduation. In addition to their salary and full health care benefits, scholars receive $2,500 in professional development funds to attend conferences or other relevant activities. 

The DSP Committee also had a lengthy discussion about offering benefits with a half-time position. Our University Librarian gave us a set amount of funds for the positions using soft money that could be spent at her discretion. We had to consider that since benefits through the university would cost an additional 33% of the salary, the take-home pay we could offer the scholars would be lower than for a position without full benefits. It was disappointing to lower the salary, but offering benefits seemed the socially just thing to do; and because our scholars would be enrolled in online degree programs outside our state, they generally would not be offered health care benefits by their schools. The scholars are part time; even if they were full time, they would make less than an entry-level position within the OSUL. We hoped the added benefits of our program would outweigh the lower salary, even though the cost of living in Corvallis, Oregon is fairly high. We also hoped that once the program was running, we could secure more permanent funds and offer a higher salary. 

To develop the Diversity Scholar position description, we used the template for library faculty positions. The DS position description is formatted as it is for our tenure-track librarians; the scholars would have a “primary assignment” but also service and scholarship components, divided at 75%, 10%, and 15% respectively. The expectation was for them to attend library-wide and relevant departmental meetings, serve on library committees and searches, attend and present at conferences, and participate in other relevant professional development activities. We would offer the scholars adequate funds toward these professional development activities such as traveling to conferences or workshops. As Diversity Scholars, they would each have their own cubicle space and be treated as colleagues.

The next portion of our program development, which was the most time consuming, was working with the university human resources team to determine what classification our scholars would have. Over the course of the spring and summer of 2017, we researched classification options and spoke with various HR staff to ensure the classification we selected included health care coverage options, was paid via a stipend to offer scheduling flexibility, had a streamlined hiring and reappointment process, and could include additional money for professional development activities via the library. 

We created an internal report with our new information and began sharing our idea of the program with library administration and other colleagues, to grow an understanding of the goals for the program, and to seek advice and ideas to strengthen the program. The goal was to have the majority of the departments in the library represented by members of the committee, who serve as advocates for the program, as well as mentors and personal contacts for the scholars. The committee would assist in recruiting potential scholars and send weekly updates to the library’s administrative group to keep them excited and updated about the program. We began attending library-wide administrative meetings and library management team meetings during the late fall. We especially sought the support of department heads to ensure communication to their departments and hear any concerns. With a finalized budget, we received approval from the University Librarian in fall 2017 to move forward with the recruitment and hiring process for our first scholar.   

We developed an application process that focused on relationship building with potential applicants and presented as few barriers as possible. Rather than a competitive process, we wanted to cultivate mutual interest. We developed a pre-application requirement to have an in-person or video call meeting with a member of the Diversity Scholars Program Committee to share information about the program, answer any questions the potential applicant may have, offer our assistance with applications for MLIS programs, and importantly, give the potential applicant an opportunity to get to know us. The application process requires a resume and cover letter with reference contact information, but no letters of recommendation since obtaining letters can be prohibitive for potential applicants and the committee preferred to have the opportunity to speak directly with references. References can be professors, employers, and/or community mentors, broadly defined.       

All libraries conduct their budgeting differently; in our case we did not have a set budget for the program (other than the salary and professional development funds). Because we devoted the time and energy to speaking with department heads one-on-one, presenting at faculty and staff gatherings, and updating the library management team to share information about the DSP before the program began, a significant amount of buy-in existed to support the program. Therefore, when we made particular asks to use existing departmental budgets that aligned with what we needed, departments were willing and eager to be supportive. Our Emerging Technologies and Services department bought the scholars’ laptops and other equipment; our Teaching and Engagement Department provided office supplies and cubicle space; our Library Administration covered the costs of printing promotional brochures; and our Library Experience and Access Department covered nametags and business cards. Budgeting in this way adds to the buy-in from all departments; the DSP is now integrated into all of them. 

Promotion and recruitment were the next steps, and for us, that meant going local. We started simply and inexpensively, using word-of-mouth marketing to recruit through the library staff, library student employees, and campus partners who work with students of color, and by reaching out to OSU library alums, such as former student workers. We reached out to the Emporia State University MLIS hybrid program in Portland to ask if there were any students coming into the program who would be a good match for the DSP and lived within commuting distance of Corvallis. Using an easily editable LibGuide from Springshare as our DSP website, along with our existing internal wiki space for the committee’s communication and documentation, we began our recruitment and promotion. We also began creating a brochure in-house with student designers. Because we do not have the funds to assist with relocation costs, the committee felt it would be a disservice to ask someone to move to Corvallis with no promise of such assistance. At least for the start of the DSP, we purposely refrained from advertising the program too broadly, and instead focused on geographically local promotion and recruitment. Therefore, our recruits have been students already living in the Corvallis commuter area. We wanted to start small and develop effective strategies and models for the first few years, with the plan to expand our recruitment as the program grows and to promote the program more broadly through networks such as the Oregon Library Association and the REFORMA Oregon chapter. Another challenge to recruitment is that because there is no in-state MLIS program in Oregon, the students we are recruiting into the profession pay out-of-state tuition costs. Therefore, it is essential for us as a committee not only to let students know of scholarship opportunities, but to actively help them in the application process, which we have done with some success. So far, the first two Diversity Scholars have been selected as ALA Spectrum Scholars, and the third scholar has received several scholarships. 

Lessons Learned 

  • Be prepared to have conversations with HR. The HR process on campus takes a long time: plan for it, including talking to multiple people in HR, doing your own research around campus for position types, and being creative! Though the role of HR will vary at different institutions, this is as critical a piece as any other phase; for a truly socially just position you must make sure you get the right category in your institution’s structure. Stick to your values and push back when you need to and can. 
  • Connect with in-state library school master’s programs for a potential collaborative partnership, and ask them to help advertise your program when people apply to theirs; also learn how they recruit. If your state does not have an in-state library school master’s program, connect with online programs; determine if any of their students are local to your geographic region or if they can pass the word to their students directly. 
  • Consider your existing campus partnerships, especially those who work with undergraduate students of color, who can serve as advocates and recruiters for your program. Your current and former library student employees are perfect for these conversations too. 
  • Benefits and professional development funding matter. Be consistent with the EDI values of the program so it does not become exploitative; for us that meant not creating a part-time position with no benefits and no professional development funds. Even if your administration is on board with the position, you might still have to push for these specifics.     

Implementation, January 2018 – present

As we shifted into the implementation phase of the program in January 2018, we recruited our first scholar via word of mouth: she was a local, former OSU student, and she was already accepted into an online library master’s degree program. We heard about our first scholar, Marisol Moreno Ortiz, through a contact in the university’s Educational Opportunities Program. We reached out to invite her to meet and talk about the new program we were growing. Knowing it was a program we were still developing and that it might need iterations, we were looking for a first scholar willing to take the plunge with us. The relationships and trust already established through Marisol’s use of the library as an OSU alum made it an easy transition for us all. She knew and loved our libraries and was excited for the opportunity to work with us as she learned and grew through her online program. 

An essential part of the program implementation was to identify the point person for the program. It made sense for Natalia, as the committee co-chair who was already tenured, to serve in the role. In preparation, she attended manager and supervisor trainings offered by the university and had numerous conversations with colleagues who are supervisors to learn from them as well. As a tenure-track faculty member, she had participated in the library’s formal mentoring program as a mentee and, after being tenured, served as a mentor. She received a pay raise for the supervisory work, and now facilitates the day-to-day details of the program, such as working with HR, facilitating committee meetings, and supervising the scholars. As program coordinator she also leads the mentorship, meeting weekly with the scholar and helping guide them, pulling in the committee as needed. This mentorship takes time, with many informal conversations to help the scholar navigate the systems of a large library. Since the overall goal of the program is to allow flexibility for the scholars while they sample the library as a whole, seeing all its parts and pieces to determine the areas they are most interested in learning more about, developing departmental buy-in has been key to the success of this program. The program coordinator is also the key communicator and advocate. Natalia keeps the library’s administration, including department heads, updated regularly on the program, and meets both formally and informally with them to ensure the projects and activities of the scholars in other departments are going well. She also meets directly with the University Librarian, which sometimes includes an “ask” for special funding or other changes.  

The program is set up so that the first quarter is a rotation through about six departments (instruction, public services, emerging technologies, acquisitions and cataloging, special collections and archives, and administration). As we are on a 10-week quarter system, we divide the first term so that the scholar spends the first week or two onboarding and then rotates through each department for one or two weeks. The goal of these weeks is to soak in what each department does and how individual staff or units play a role, to observe and shadow, and to reflect and ask questions. As the scholars get to know the departments and the staff, they inherently learn about projects, processes, tasks, and activities of interest to them. Then, throughout the rest of their appointment, the scholars have the autonomy to determine which projects, and in which departments, they would like to pursue. A scholar is not tied to one department or project for the rest of their time at OSU, so while the initial rotation period may seem relatively short, they have adequate time to dive deep into various areas over their time at OSU Libraries. Until their official email and calendar are set up, we use a Google Doc to create a schedule in which the department heads choose a week and staff invite the scholar to meetings, appointments, visits, shadowing, Q&A sessions, observations, or events. We rely on the DSP committee to help advocate within our individual departments, with support from the library leadership team. Getting all department heads on board is critical. The scheduling begins before the scholar starts, so we have many learning opportunities set up in advance. Scholars typically meet one-on-one with staff and faculty within a department to learn more about what they do, as well as attend unit and departmental meetings. 

After this first term of rotation, the scholars begin picking projects or areas they want to immerse themselves in more heavily for future terms. The DSP supervisor chats with the scholar about their project preferences, and with colleagues and department heads to determine capacity, and then facilitates conversations to ensure a mutually beneficial experience. For example, if the scholar wants instruction and outreach experience, we have conversations with the Teaching and Engagement department about opportunities that could match the scholar’s interest. Because the scholar is on a 9-month appointment with the option for reappointment, we discuss the timing of opportunities not only for projects, but for service and professional development as well. 

While the program is structured to treat the scholars as colleagues of our academic librarians, the reality is that they are not being paid at that level. So while we want them to have the same experiences as academic librarians, it is essential for us not to use them to cover the duties of someone at a much higher pay scale. We try to strike this balance by making sure that the activities and projects the scholars take on are of their choosing and help them build the resume they want, one that will benefit them in their future career. We discuss what types of positions they would like to have, look at job postings to determine what qualifications are required and preferred, and set out to develop opportunities to create relevant experiences for them. Additionally, one of the main priorities of the DSP committee is to be their advocate while also empowering them to advocate for themselves. We have conversations with them about the politics of not only the inner workings of our library, but of the profession as a whole. 

Something that occurred with our first Diversity Scholar that we have begun to replicate, and intend to continue with future scholars, is assisting with the job search process. Our first scholar graduated in May, and her appointment with the DSP was set to end in June. Together, we determined that the best use of her time during her last 10 weeks in the program was to search and apply for jobs. Essentially, her job became to find a job. We discussed what types of jobs she desired, sent her postings, reviewed her resume and cover letters, prepped her for phone and on-campus interviews, and debriefed interview experiences. As her supervisor, Natalia wrote letters of recommendation and served as a reference. She is currently employed at a community college library. Our second scholar’s appointment ended several months prior to her graduation, but the same process applied. Even after she moved out of state, the DSP has kept in communication with her to support her job search as she completes her MLIS program later this year. The current Diversity Scholar will graduate in 2021. While there is the possibility of our scholars’ positions turning into permanent positions, the DSP Committee has discussed how this could be accomplished in a more proactive manner. To date, we have had to balance the OSUL positions available at the time of a Diversity Scholar’s appointment end date against that scholar’s interest in those positions. 

Assessing the DSP and Measuring its Success

There are many ways to measure success. When we spoke with our University Librarian about her view of success, she expressed that since our program is so new, we need time to truly assess its value and its effect on the multiple generations within our library setting; we need to ask ourselves whether our library culture is shifting and growing along with the scholars. Additionally, she posed questions such as: Is success just a good experience in the program? Is it a high number of interviews for a job? Is it about quick job placement? Is it whether or not they find employment in an area of their choosing? Is it long-term retention in the profession? What about how the program impacts each individual scholar: how do they measure success for themselves? Moreover, how does the program, specifically the scholars’ projects and accomplishments, add value to the library? Is it all of these elements combined? Because the systemic Whiteness of our profession has been ongoing for so long, assessing the program’s impact on the field of librarianship will simply take time (Alston, 2017, p. 212). 

In order to document the many measures of success of our program, we are continuously working on developing and implementing meaningful assessment. As of now, we ask the scholars to maintain reflective journals and write self-evaluations of their work, and as their supervisor, Natalia seeks input from their peers. We survey the scholars’ project supervisors and the department heads who observed or worked with the scholar in their units, both about the program and about the scholar. The scholars also give a presentation at the end of their appointment to the entire library staff about their experiences in the program. We use all the feedback gathered to evolve and improve the program experience for our next scholars. 

The DSP Scholars and Their Perspectives on the Program

Our first Diversity Scholar completed her 18-month appointment in the program in June of 2019, our second scholar wrapped up her appointment in March of 2020, and our third scholar started in October of 2019. At least two scholars overlap in their appointments. All three of the Diversity Scholars – Marisol Moreno Ortiz, Bridgette Flamenco (née Garcia), and Valeria Dávila Gronros – are Latinx women in their mid-to-late 20s, and two of the three scholars were OSU undergraduates and library student employees. A section of the DSP website titled “Meet the OSUL Diversity Scholars” includes short biographies of each scholar. The first two scholars chose to focus on teaching and engagement, as well as public services activities, and our third scholar has an interest in archives, specifically audio/visual materials. 

In mid-March of 2020, we conducted a focus group with the three scholars to assess the DSP from their collective perspective. It was the first time all three were together to provide feedback about the DSP. While our third scholar was only six months into her appointment, the first scholar had already completed the program and graduated, and the second was ending her time with us in two weeks to relocate and wrap up her online degree. Even though we had already asked them to reflect on the DSP as part of their individual self-reflections, we wanted an opportunity for the three of them to connect and have ideas flow between them while we listened first and then conversed together about their experiences. We explained that their collective responses would be used as part of this article. We asked them to share their thoughts on the positive aspects of the program, what could be improved, and what “success” looks like for the DSP. We took notes and compiled their collective responses. 

It is essential for us to acknowledge that there was a power differential between us and the scholars that more than likely hindered their responses, especially any negative feedback they may have had but did not feel comfortable sharing. Because of our roles, we are in a position to act as references and write letters of recommendation for them. While it may have worked better to have someone else conduct the focus group, the scholars would still know that what they expressed would be shared with us, and due to their unique experiences within the DSP, their responses could still have been identifiable. While we wanted to include their perspectives as a part of this article and the focus group was the method we used, moving forward we will work on different approaches to gathering feedback. This is also why it is so important for anyone who coordinates a program like the DSP, or would like to start one, to read the existing literature as well as qualitative and quantitative studies on larger samples of scholars that do not identify them. By reading perspectives from outside your institution, you can gain a better understanding of the issues that may be impacting the people within programs like the DSP who, for many reasons, may not be able to fully share their experiences and thoughts with their colleagues and supervisors.    

For the focus group discussion, we asked three questions: What were some of your positive experiences about the program? What do you wish would have been different about the DSP and should be changed? What do you consider “success” for the DSP? Below are their collective responses: 

What were some of your positive experiences about the program?

One scholar expressed her appreciation that the program is structured so that each department is willing and ready to support the scholars and the program: she recognized the buy-in from all of the departments and how willing people were to work with her and train her. She also appreciated the opportunity to meet with our University Librarian and receive career advice from someone in a high-level administrative position. Two of the scholars agreed that the autonomy and scheduling flexibility offered by the program, which enabled them to choose and develop their own projects and allowed colleagues to offer them projects, were positives for them. To expand on this idea, one scholar noted how helpful it was to be able to connect her DSP work to her MLIS courses and vice versa; both experiences were enriched. An unexpected positive was access to OSUL resources, interlibrary loan for example, that they were not able to obtain from the libraries connected to their online MLIS programs. All of the scholars noted how invaluable the professional development opportunities were to them, especially the opportunity to travel to regional and national conferences, and in one case, an international conference. They indicated that they would not have had the resources to attend conferences without the funds provided by the DSP. They expressed how much they learned in terms of navigating professional conferences, networking, and experiencing new cities.     

What do you wish would have been different about the DSP and should be changed? 

All three scholars noted that the monthly stipend is low, but did state that a paid position helped them cover the costs of their graduate programs. Additionally, all three scholars had recommendations for improving the structure of the program, including: a recommendation that the program be extended, perhaps to a 21-month appointment or even a full two years, to coincide with the time it takes to complete an MLIS degree; a request to be paired with an official mentor within a department of their choosing, to receive more dedicated support in their areas of interest; and the idea of creating a visual timeline of a scholar’s appointment, with expectations, goals, and outcomes broken down to show the program as a whole. 

It was pleasing to hear that some of the recommendations offered were already in place. For example, our first scholar noted that her first ten weeks were very overwhelming—something she expressed during her time in the program. For our next two scholars, we took great care to ensure their onboarding period was much more manageable. Our most recent scholar requested that we offer the scholars more opportunities not only to attend conferences, but to present at them. Our first two scholars indicated that the program does encourage this, but more so in the second year, and that this was beneficial since by then they had more experience and confidence.      

Before moving on to the final question, Natalia stated that she and Beth always envisioned the DSP being a cohort program, but that the focus group was the first time all three scholars were together. She stated that now that there are three DSP scholars who have completed or are currently in the program, and as the program continues to expand, we can create more of a cohort environment. She asked how they would like to see that accomplished. They offered a number of great suggestions, including: developing more structured meeting opportunities, especially as part of the onboarding process; offering opportunities to connect with past scholars, via conference calls if in-person gatherings are not an option; and creating a mentorship program within the DSP itself so that each scholar mentors the scholar hired after them. 

What do you consider “success” for the DSP?

Perhaps not surprisingly, all three scholars described success in relation to their employment: this includes mentorship for navigating the job search process, securing employment in their areas of interest, and long-term retention in the profession. One of the scholars expressed that part of the program’s success is how, through experience, it gives the scholars an understanding of an academic work environment. Additionally, she noted that the scholars enter the profession with an extensive network of individuals they can call upon when needed. And lastly, and perhaps most touching to us, one of the scholars shared that the program helped her build her professional library identity and helped her see herself as a librarian. 

Plans for the Future 

Even in just a few years, more opportunities exist than when we started, for us as program coordinators and for our scholars to build community. The ARL Diversity Alliance is in full swing and, as members of that group, we are slowly learning the benefits (e.g. our scholars are now part of a Slack channel just for current residents), and we have seen the Residency Interest Group of ACRL grow. The opportunity to connect with other resident coordinators was a big plus in August 2019, when Natalia attended the first ever Library Diversity and Residency Studies (LDRS) Conference in Greensboro, North Carolina. The conference focused on Diversity, Equity, and Inclusion in libraries, including but not restricted to Library Diversity Residency programs. The conference was hosted by UNC Greensboro in collaboration with the ACRL Diversity Alliance and the Association of Southeastern Research Libraries (ASERL). The LDRS brought together individuals from academic and public libraries, LIS programs, and other interested groups. Natalia gave a presentation on the DSP as part of the panel “Best Practices in Establishing Library Diversity Residency Programs.” In the spring of 2020, the group that organized the conference published the first issue of The Library Diversity and Residency Studies Journal, which will no doubt be an excellent resource now and in the years to come.  

Our plan is to continue to support and mentor our past and current Diversity Scholars, and we look forward to seeing what comes next for them and are excited to begin recruitment for our fourth scholar. As more people participate in the program, we hope to build a strong network among our Diversity Scholars. Notably, we—the two of us and the three Diversity Scholars—were accepted to write a chapter about the OSUL DSP for the upcoming book Learning in Action: Designing Successful Graduate Student Work Experiences in Academic Libraries. Additionally, we are in conversation with our University Librarian to secure permanent funding for the positions and raise the salary. We plan to work on ways to re-envision and expand the assessment of the program’s impact both for the library and for the scholars themselves. We also need to continue to practice as well as expand strategic and proactive recruitment; we have plans this year to connect with various groups on campus to speak directly with undergraduate students about the possibility of working in libraries, archives, and other cultural heritage institutions as a potential career path. In order to ensure the program’s sustainability, we will not take our existing buy-in from colleagues for granted and will continue to advocate for the program. A long-term vision is to grow our program as a model that can be replicated in other academic libraries in Oregon and the PNW, perhaps through the Orbis Cascade Alliance, to form a much larger cohort. Through poster presentations by our scholars and committee members at Oregon and Pacific Northwest library conferences we are slowly increasing awareness. 

Conclusion

As Angela Galvan powerfully states, “While recruiting initiatives and fellowships are reasonable starting points, they become meaningless gestures for institutions which screen on performing Whiteness. These actions are further undermined by framing diversity as a problem to be solved rather than engaging in reflective work to dismantle institutional bias” (Galvan, 2015). On its own, the DSP cannot solve the larger problem of a culture of Whiteness in the field—but it is a contribution as part of our library and university’s various equity, diversity, and inclusion initiatives that tie into the broader profession’s work. If your library is considering a program like this, you must look at the cultural environment of your institution and consider where your institution is with changing this culture of Whiteness. The environment has to be one in which equity, diversity, and inclusion work is encouraged and celebrated, and continuous. It is vital to remember that social justice, equity, and inclusion should be everyone’s work. It is not a one-time endeavor or a box to be checked, but a process of continual growth and reflection for the library and its campus community. As the DSP committee flows from inception to new iterations, with new scholars and new committee members, we reflect on what we did and why, rethinking, learning, and growing as individuals, as a committee, and hopefully as an institution. The questioning and enthusiasm of new members and new scholars help us grow a better program and make shifts while checking our own perceptions. Most important to our DSP is that our scholars get the experiences they desire, in an environment where they can be themselves, and a culture that supports them.


Acknowledgements

We would like to thank our two peer reviewers Denisse Solis and Dr. LaTesha Velez for their incredibly thoughtful suggestions, insights, and additions to reframe and strengthen our article. Special thanks to our colleagues Kelly McElroy and Anne-Marie Deitering for offering their feedback, to Lindsay Marlow who helped us get started with the article, and to our publishing editor Ian Beilin. And, a big thank you to the OSU Libraries Diversity Scholars so far – Marisol Moreno Ortiz, Bridgette Flamenco, and Valeria Dávila – this program is what it is because of you! 


Works Cited

Alston, J. K. (2017). Causes of Satisfaction and Dissatisfaction for Diversity Resident Librarians – A Mixed Methods Study Using Herzberg’s Motivation-Hygiene Theory (Doctoral dissertation). Retrieved from https://scholarcommons.sc.edu/etd/4080

Beilin, I. (2017). The Academic Research Library’s White Past and Present. In G. Schlesselman-Tarango (Ed.), Topographies of Whiteness: Mapping Whiteness in Library and Information Science.

Black, W. K., & Leysen, J. M. (2002). Fostering Success: The Socialization of Entry-Level Librarians in ARL Libraries. Journal of Library Administration, 36(4), 3–27. https://doi.org/10.1300/J111v36n04_02

Bourg, C. (2014, March 3). The Unbearable Whiteness of Librarianship. https://chrisbourg.wordpress.com/2014/03/03/the-unbearable-whiteness-of-librarianship/

Boyd, A., Blue, Y., & Im, S. (2017). Evaluation of Academic Library Residency Programs in the United States for Librarians of Color. College & Research Libraries, 78(4), 472. https://crl.acrl.org/index.php/crl/article/view/16642/18088

Brewer, J. (1997). Post-Master’s Residency Programs: Enhancing the Development of New Professionals and Minority Recruitment in Academic and Research Libraries. College & Research Libraries, 58(6), 528–537. http://crl.acrl.org/index.php/crl/article/download/15247/16693

Bridges, L. M., Park, D., & Edmunson-Morton, T. K. (2019). Writing African American History Into Wikipedia. Oregon Library Association Quarterly, 25(2), 16–21. https://doi.org/10.7710/1093-7374.1987

Brook, F., Ellenwood, D., & Lazzaro, A. E. (2015). In Pursuit of Antiracist Social Justice: Denaturalizing Whiteness in the Academic Library. Library Trends, 64(2), 246–284. https://doi.org/10.1353/lib.2015.0048

Chang, H. F. (2013). Racial and Ethnic Librarianship in Academic Libraries: Past, Present and Future. ACRL 2013 Conference Proceedings.

Cogell, R. V., & Gruwell, C. A. (Eds.). (2001). Diversity in Libraries: Academic Residency Programs. Westport, CT: Greenwood Press.

Dewey, B., & Keally, J. (2008). Recruiting for Diversity: Strategies for Twenty-First Century Research Librarianship. Library Hi Tech, 26(4), 622–629. https://trace.tennessee.edu/cgi/viewcontent.cgi?article=1000&context=utk_libpub

Espinal, I., Sutherland, T., & Roh, C. (2018). A Holistic Approach for Inclusive Librarianship: Decentering Whiteness in Our Profession. Library Trends, 67(1), 147–162. https://doi.org/10.1353/lib.2018.0030

Galvan, A. (2015). Soliciting Performance, Hiding Bias: Whiteness and Librarianship. In the Library with the Lead Pipe. Retrieved from http://www.inthelibrarywiththeleadpipe.org/2015/soliciting-performance-hiding-bias-whiteness-and-librarianship/

Hankins, R., & Juárez, M. (Eds.). (2015). Where Are All the Librarians of Color?: The Experiences of People of Color in Academia. Library Juice Press.

Hathcock, A. (2015). White Librarianship in Blackface: Diversity Initiatives in LIS. In the Library with the Lead Pipe. Retrieved from http://www.inthelibrarywiththeleadpipe.org/2015/lis-diversity/

McElroy, K., & Diaz, C. (2015). Residency Programs and Demonstrating Commitment to Diversity. Faculty Publications, 46. https://digitalcommons.nl.edu/faculty_publications/46

Millner, D., & Thompson, C. (Eds.). (2019). White Supremacy & Resistance [Special issue]. Oregon Historical Quarterly, 120(4).

Perez, M. Z., & Gruwell, C. A. (2011). The New Graduate Experience: Post-MLS Residency Programs and Early Career Librarianship. Santa Barbara, CA: Libraries Unlimited.

Pho, A., & Masland, T. (2014). The Revolution Will Not Be Stereotyped: Changing Perceptions through Diversity. In N. Pagowsky & M. Rigby (Eds.), The Librarian Stereotype: Deconstructing Perceptions and Presentations of Information Work (pp. 257–282). Chicago, IL: Association of College & Research Libraries.

Roy, L. (2015). Preface. In R. Hankins & M. Juárez (Eds.), Where Are All the Librarians of Color?: The Experiences of People of Color in Academia (pp. vi–vii). Library Juice Press.

Roy, L., et al. (2006). Bridging Boundaries to Create a New Workforce: A Survey of Spectrum Scholarship Recipients, 1998-2003. http://www.ala.org/advocacy/sites/ala.org.advocacy/files/content/diversity/Spectrum/BridgingBoundaries.pdf

Rutledge, L., Colbert, J. L., Chiu, A., & Alston, J. K. (2019). Developing a Residency Program: A Practical Guide for Librarians. Rowman & Littlefield.

Strand, K. J. (2019). Disrupting Whiteness in Libraries and Librarianship: A Reading List. Bibliographies in Gender and Women’s Studies, 89. https://www.library.wisc.edu/gwslibrarian/bibliographies/disrupting-whiteness-in-libraries/

Vinopal, J. (2016). The Quest for Diversity in Library Staffing: From Awareness to Action. In the Library with the Lead Pipe. http://www.inthelibrarywiththeleadpipe.org/2016/quest-for-diversity

Islandora Online: First event now open for registration! / Islandora


We are very pleased to announce that the first of our four Islandora Online events has a full schedule and is ready for your registration:

https://islandora.ca/events/islandora-online-2020

Islandora Online is a series of four online events, each around five hours long (including breaks), focused on a specific topic of interest to the Islandora community. Each event contains a mix of presentations, panel discussions, and small group discussions, with optional social events during breaks. Some sessions will be recorded and you are welcome to determine your own level of participation and join/leave the event as needed.

Registration is free, but we are asking that you consider a small donation if you can. Donations will be used to offset the cost of the platform we're using for the events, with any profit above that going to fund the Islandora Foundation and its mission to steward the Islandora platform and community. The Islandora Foundation does not rely on events for our operational funding, but we operate on very small margins and our usual calendar of face-to-face events provides a small safety net that we'll be doing without for a while. 

No donation? No problem! We made registration free because we want to throw open the doors and welcome as much of our community as possible.

 

Could you make history? / Mita Williams

It started out with a dab. My son let me know that he dabs on the haters. I retorted that the dab is old news. It’s sooooo old… wait, how old is it now?

I looked up the origins of the dab. And then I made a version of Timeline of dance moves using index cards for my kids to play.

The game didn’t take long to make and it didn’t take long to play. My kiddos now know that the Macarena is very old but not nearly as old as The YMCA.

Timeline (Diversity) – from Board Game Geek

Timeline is a great game that I recommend to pretty much anyone looking for a simple card game that can be played by a group of people. Unlike many trivia games, Timeline allows players to guess, and as most of us are not historians, there is a lot of guessing involved. I have had much success playing Timeline as a casual and fun game with university students. There is some risk that a player might tease another for a particular gap in their knowledge, but all games based on shared knowledge come with this risk.

The rules are very straightforward. Each card in a Timeline deck has a description of an event on one side and the same description plus a date on the other. To start the game, players are dealt four cards with their date sides hidden. Then a card from the deck is played on the table with the date side revealed.

The youngest player begins the game and their task is to select a card from their four and then to place that card either ‘before’ or ‘after’ the card on the table. After their decision is made, their card is turned over so that the date will show whether the player was correct. If they are correct, the card remains and the next player starts their turn. If the player is incorrect, the card is sent to a discard pile and the player draws a new card from the deck. As the game progresses, the timeline of cards on the table gets longer and correctly placing cards becomes more difficult. The first player who successfully plays all their cards wins the game.
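The placement check at the heart of each turn is simple enough to sketch in a few lines of Python (a toy illustration of the mechanic, not anything from the published game):

    def placement_correct(timeline, year, index):
        """Check a Timeline play: insert the card's year at the guessed
        position and test whether the row of cards is still in date order."""
        trial = timeline[:index] + [year] + timeline[index:]
        return all(a <= b for a, b in zip(trial, trial[1:]))

    # The table shows cards from 1888 and 1971; where does the Macarena (1993) go?
    print(placement_correct([1888, 1971], 1993, 2))  # after both: True
    print(placement_correct([1888, 1971], 1993, 1))  # between them: False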

Even if you don’t own the game, you can play Timeline for free: a short demo version of Timeline Classic is available from this collection of Print and Play games made freely available for these unprecedented times.

You can make your own version with pen and paper. Or you can get fancy and use card-making software such as nanDeck, which allows you to create PDFs of printable cards using a spreadsheet of data and some code to format the cards.

screenshot of my nanDeck generated deck of Windsor-Timeline
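If you would rather stay in a general-purpose language than learn nanDeck’s scripting syntax, the same spreadsheet-to-cards idea can be sketched in Python with the reportlab library. The CSV columns (event, year) and card dimensions here are my own assumptions for illustration, not the format nanDeck itself uses:

    import csv
    from reportlab.lib.units import mm
    from reportlab.pdfgen import canvas  # pip install reportlab

    # events.csv is assumed to have two columns: event,year
    CARD_W, CARD_H = 60 * mm, 90 * mm

    c = canvas.Canvas("timeline-cards.pdf", pagesize=(CARD_W, CARD_H))
    with open("events.csv", newline="") as f:
        for row in csv.DictReader(f):
            # Front of the card: the event only.
            c.rect(2 * mm, 2 * mm, CARD_W - 4 * mm, CARD_H - 4 * mm)
            c.drawCentredString(CARD_W / 2, CARD_H / 2, row["event"])
            c.showPage()
            # Back of the card: the event plus its date.
            c.rect(2 * mm, 2 * mm, CARD_W - 4 * mm, CARD_H - 4 * mm)
            c.drawCentredString(CARD_W / 2, CARD_H * 0.60, row["event"])
            c.drawCentredString(CARD_W / 2, CARD_H * 0.35, row["year"])
            c.showPage()
    c.save()

One page per card side, so a duplex print (flipped on the short edge) should line up each event with its date on the back.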

I think asking students to make their own version of a game using the Timeline mechanic would make for a good history assignment. I think this for two reasons. First, like many educational games, the person who often learns the most from the experience is the game designer.

The family that playtests together…

And secondly, I think combining all the students’ different decks from their various history projects would make for a remarkable game of Timeline. That’s because what a good game of Timeline does is help us integrate our various understandings of history and surprise us when it brings disparate events together into the same moment in time…

When pilgrims were landing on Plymouth Rock, you could already visit what is now Santa Fe, New Mexico to stay at a hotel, eat at a restaurant and buy Native American silver.

Prisoners began to arrive to Auschwitz a few days after McDonald’s was founded.

The first wagon train of the Oregon Trail heads out the same year the fax machine is invented.

Nintendo was founded in 1888. Jack the Ripper was on the loose in 1888.

1912 saw the maiden voyage of the Titanic as well as the birth of vitamins, x-ray crystallography, and MDMA.

1971: The year in which America drove a lunar buggy on the moon and Switzerland gave women the vote.

from Unlikely Simultaneous Historical Events, kottke.org

Timeline knows this, which is why their packaging asks these questions: Could Darwin drink champagne? Could Queen Victoria take the London Underground? Did Einstein wear jeans? And perhaps, most importantly, Did Cleopatra play cards?

I feel I could create an entire Timeline deck of what happened in 2020 and I still think I would get most of the cards misplaced.

No more tracking / William Denton

Today I upgraded to the latest version of Matomo (moving up from an older version from when it was called Piwik): that’s the open, non-proprietary, self-controlled, more private equivalent of Google Analytics. The upgrade had been on my to-do list for over a year. It didn’t take long, even with the renaming, which meant I needed to change some URLs in the JavaScript footers that put a tracker on every page.

I got it all working and looked at the fresh Matomo interface. It tells me: not many people look at my web site; the three most popular pages are an out-of-date post from 2012 (Counting and aggregating in R), Twists, Slugs and Roscoes: A Glossary of Hardboiled Slang and this list of definitions and principles from Ranganathan’s Prolegomena to Library Classification; and Freedom of information request for York University eresource costs completed has had over 400 views since being posted two weeks ago, which is very nice to see.

Screenshot of Matomo report on this site

I hadn’t looked at the stats in over a year. I don’t use them. I don’t need them. Why am I tracking users on my site anyway? There is no reason. Becky Yoose and other experts would ask me: Why are you recording personal information you’re not using?

So I turned it off. I went even further: I disabled logging on the web server.

I added a privacy statement to the sidebar: “Zero logging: As of 23 June 2020, no tracking is done on this web site and no logs are kept. I know absolutely nothing about how the site is used.” I also turned off logging on Listening to Art (which I didn’t even know I’d set up: I thought it was like GHG.EARTH and STAPLR, where there’s no tracking).

Matomo is an excellent application! It’s under the GPL, the code is on GitHub, it’s easy to install and use … I like everything about it. I just don’t need it. (And now I don’t have to ever upgrade it again.)

Zero logging is punk.

Infrastructure for heritage institutions – change of course / Lukas Koster

Permalink: https://purl.org/cpl/3069


In July 2019 I published the first post about our plan to realise a “coherent and future proof digital infrastructure” for the Library of the University of Amsterdam. In February I reported on the first results. As frequently happens, the conditions have since changed, and naturally we had to adapt the direction we are following to achieve our goals. In other words: a change of course, of course.

 Projects 

I will leave aside the ongoing activities that I mentioned, and focus on the thirteen short term projects, which were originally planned like this:

  • Licensing
  • Object PIDs
  • Controlled Vocabularies
  • Metadata Set
  • ETL Hub
  • Object Platform
  • Digital Objects/IIIF
  • Digitisation Workflow
  • Access/Reuse
  • Data Enrichment
  • Linked Data
  • Alma
  • Georef

In my first results post these were already grouped together based on status and dependencies:

  • Object PIDs
  • Object Platform/Digital Objects/IIIF
  • Licensing
  • Metadata Set/Controlled Vocabularies
  • Data Enrichment/Georeference
  • Other projects (dependent on the results in the main projects):
    • ETL Hub
    • Digitisation Workflow
    • Access/Reuse
    • Linked Data

Investigating the options of Alma as a separate project was abandoned, because it became very clear that Alma fulfils a central role in almost all other aspects of the digital infrastructure.

 Developments 

In the meantime the exploratory study into the options for a digital object platform has resulted in a recommendation to procure a long-term digital preservation (DP) solution, in compliance with the OAIS reference model, which takes descriptive metadata from Alma and other systems and also serves as the source for publication of digital objects through various channels (Digital Asset Management – DAM). Given the expected procurement and implementation time for such a system, a working digital object platform will not be available until the end of 2021 at the earliest. Since the digital object focused projects are all closely interlinked with the availability of a digital object platform, and also because of a number of experiences in the other projects, we have decided to restructure the original planning completely.

 Adapted planning 

Firstly we have defined two separate main project clusters, a data cluster and a digital object cluster. This involved joining and splitting some of the existing project ideas. Secondly, we have separated both clusters in time. We will implement the data cluster first, as far as possible in 2020, and after that the digital objects cluster starting in 2021.

Two projects have a bit of both; they have been grouped together and will be assessed separately. Finally, a new project was defined, focusing on streamlining the full digital infrastructure system and database landscape, with the objective of eliminating redundancies in both systems and data.

  • Data Cluster (2020)
    • Data Licences
    • Data Quality sub cluster
      • Object PIDs
      • Controlled Vocabularies
      • Metadata Set
    • Data Publication sub cluster
      • Data Access and Reuse
      • ETL
      • Linked Data
  • Digital Objects Cluster (2021-2022)
    • Object Licences
    • Digital Objects Platform
    • Digital Object Representations
    • Digitisation Workflow
    • Digital Objects Access and Reuse
  • Data + Digital Objects (2020-2022?)
    • Data Enrichment
    • Georeferencing
  • Digital Infrastructure Streamlining (2020-2022)

 Dependencies 

In the Data Cluster, results of the Data Licences and Data Quality projects must be available for implementing Data Publication options. Linked Data can only be implemented if there is already a data publication facility available, including ETL procedures.

In the Digital Objects Cluster the Digital Objects Platform (DP/DAM) must be available in order to implement a full blown Digitisation Workflow. Access and reuse of digital objects depend on the availability of the platform with relevant object representations and licences.

The Data Enrichment and Georeferencing projects are both aimed at generating additional metadata for digitised maps based on the digital objects themselves. For a full and serious implementation, high quality digital object representations in relevant formats should be available on a fully functioning digital object platform, and this will not be available before the end of 2021. In the meantime a pilot could be executed with currently available offline digital maps. Planning this will be considered independently of the main project clusters.

Streamlining the digital infrastructure is obviously targeted at existing and future systems and data, and dependent on developments in the digital infrastructure program. The project will start as soon as possible nonetheless, with an exploratory and definition phase.

 Current status 

In the Data Cluster we are ready to start implementing persistent identifiers for collection objects in the broadest sense. This PID project will be the subject of another, more detailed post. In brief: we will adopt a pragmatic approach and maintain a hybrid environment, keeping our existing handles and DOIs and implementing ARK as the new default PID system, using rule-based PID assignment based on identifiers available in the target systems. This entails copying the identifiers used to new systems in case of future migrations, in order to keep the identifiers persistent.
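To make “rule-based PID assignment” concrete, here is a minimal sketch in Python; the NAAN, shoulder, and source identifier are placeholders for illustration, not our actual values:

    # Minimal sketch of rule-based ARK assignment (illustrative only).
    NAAN = "12345"    # placeholder Name Assigning Authority Number
    SHOULDER = "x1"   # placeholder shoulder for a given collection

    def mint_ark(source_id: str) -> str:
        """Derive an ARK from an identifier already present in the
        target system (e.g. an Alma MMS ID), so that the same rule
        always yields the same ARK for the same object."""
        return f"ark:/{NAAN}/{SHOULDER}{source_id}"

    print(mint_ark("990001234560205131"))
    # -> ark:/12345/x1990001234560205131

Because the ARK is derived deterministically from the source identifier, the rule can be re-run after a future migration and will reproduce the same PIDs, which is exactly what keeps them persistent.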

For Data Licences we are inclined to use a public domain ODC PDDL licence as the default licence for data. An exception will have to be made for data originating from OCLC WorldCat, which applies to the bulk of our data in Alma and derivatives thereof. For WorldCat data an ODC-BY licence must be used, acknowledging the OCLC WorldCat origin. It will be a bit of a challenge to work out how to use both licences simultaneously for our Alma instance, since part of the Alma data does not derive from WorldCat.

The results of both the Data Licences and the Data Quality projects (Object PIDs, Controlled Vocabularies, Metadata Set) will go into the new Data Publication project, which will be undertaken in the second half of 2020. This project is aimed at publishing our collection data as open and linked data in various formats via various channels. A more detailed post will be published separately.

As mentioned before, the Digital Objects Platform and related projects will take some time. In the meantime an IIIF pilot has already been completed successfully, and IIIF is available for the current online image repository. Last but not least, the exploratory phase of the Infrastructure Streamlining project will start in the second half of 2020.
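For readers who have not met IIIF yet: its Image API addresses every image, region, size, and rotation through a single URL template, which is what makes repository images reusable across viewers. A small sketch (the server and identifier below are hypothetical; the path segments follow the Image API template):

    # Build IIIF Image API URLs. BASE and the identifier are made up;
    # the {region}/{size}/{rotation}/{quality}.{format} path follows the spec.
    BASE = "https://images.example.org/iiif"

    def iiif_url(identifier, region="full", size="600,", rotation="0",
                 quality="default", fmt="jpg"):
        return f"{BASE}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

    print(iiif_url("map-001"))
    # -> https://images.example.org/iiif/map-001/full/600,/0/default.jpg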

Core Virtual Happy Hour Social ~ June 26 / LITA

Our Joint Happy Hour social at Midwinter was such a success that next week we’re bringing Happy Hour to you online—and registration is free!

We invite members of ALCTS, LITA, and LLAMA to join us on Friday, June 26, 5:00-7:00 pm Central Time for Virtual Happy Hour networking and/or to play a game of Scattergories with your peers.
 
Wear your favorite pop culture T-shirt, bring your best Zoom background, grab a beverage, and meet us online for a great time! Attendees will automatically be entered to win free registration to attend the Core Virtual Forum.
 
Winner must be present to redeem prize.
 
Registration is required.

Register now at: bit.ly/2NeNprH

Open Data Day 2020: it’s a wrap! / Open Knowledge Foundation

 

On Saturday 7th March 2020, the tenth Open Data Day took place with people around the world organising over 300 events to celebrate, promote and spread the use of open data.

Thanks to the generous support of this year’s funders – Datopian, the Foreign & Commonwealth Office, Hivos, the Latin American Open Data Initiative (ILDA), Mapbox, the Open Contracting Partnership and Resource Watch – the Open Knowledge Foundation was able to give out more than 60 mini-grants this year.

Sadly several events had to be cancelled or delayed as the COVID-19 pandemic affected countries around the world but some of our grantees were able to swiftly adapt their plans in order to deliver engaging virtual Open Data Day celebrations.

The community registered a total of 307 events on the Open Data Day map with events taking place in every timezone and the Open Knowledge Foundation team captured some of the great conversations across Asia/Oceania, Africa/Europe and the Americas by using Twitter Moments.

Mini-grant scheme

This year’s tracks for the Open Data Day 2020 mini-grant support scheme were:

  • Environmental data: Using open data to illustrate the urgency of the climate emergency and spurring people to take a stand or make changes in their lives to help the world become more environmentally sustainable.
  • Tracking public money flows: Expanding budget transparency, diving into public procurement, examining tax data or raising issues around public finance management by submitting Freedom of Information requests.
  • Open mapping: Learning about the power of maps to develop better communities.
  • Data for equal development: How can open data be used by communities to highlight pressing issues on a local, national or global level? Can open data be used to track progress towards the Sustainable Development Goals or SDGs?

Below you can read reports from all of the events which took place thanks to these mini-grants:

Environmental data

Tracking public money flows

Open mapping

Data for equal development

Thanks to everyone who organised or took part in these celebrations and see you next year for Open Data Day 2021!

Breaking: Peer Review Is Broken! / David Rosenthal

The subhead of The Pandemic Claims New Victims: Prestigious Medical Journals by Roni Caryn Rabin reads:
Two major study retractions in one month have left researchers wondering if the peer review process is broken.
Below the fold I explain that the researchers who are only now "wondering if the peer review process is broken" must have been asleep for more than the last decade.

Retraction Watch has a detailed account of the two retractions. First The Lancet:
The Lancet paper, “Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis,” which relied on data from a private company called Surgisphere and had concluded that hydroxychloroquine was linked to a higher risk of death among some COVID-19 patients, has been dogged by questions since its publication in late May. ... now three of the four authors of the article have decided to pull it entirely. The abstaining author, Sapan Desai, is the founder of Surgisphere,
Second, NEJM:
The New England Journal of Medicine retraction followed a little more than an hour later, with Desai agreeing to the move
Rabin writes:
The reputation of these journals rests in large part on vigorous peer review. But the process is opaque and fallible: Journals generally do not disclose who reviewed a study, what they found, how long it took or even when a manuscript was submitted. Dr. Horton and Dr. Rubin declined to provide those details regarding the retracted studies, as well.

Critics have long worried that the safeguards are cracking, and have called on medical journals to operate with greater transparency.
"Long" is an understatement, and this isn't just about medical journals. In the very first post to this blog, more than 13 years ago, I summarized part of a fascinating paper by Harley et al. entitled The Influence of Academic Values on Scholarly Publication and Communication Practices thus:
I'd like to focus on two aspects of the Harley et al paper:
  • They describe a split between "in-process" communication which is rapid, flexible, innovative and informal, and "archival" communication. The former is more important in establishing standing in a field, where the latter is more important in establishing standing in an institution.
  • They suggest that "the quality of peer review may be declining" with "a growing tendency to rely on secondary measures", "difficult[y] for reviewers in standard fields to judge submissions from compound disciplines", "difficulty in finding reviewers who are qualified, neutral and objective in a fairly closed academic community", "increasing reliance ... placed on the prestige of publication rather than ... actual content", and that "the proliferation of journals has resulted in the possibility of getting almost anything published somewhere" thus diluting "peer-reviewed" as a brand.
Since then, the oligopoly publishers have continued the brand-stretching process, and I've continued to observe it in, for example, 2011's What's Wrong With Research Communication, 2015's Stretching the "peer reviewed" brand until it snaps and 2016's More Is Not Better.

Despite its deleterious effects, brand-stretching isn't the fundamental problem. In 2013's Journals Considered Harmful I pointed to the conclusions of Deep Impact: Unintended consequences of journal rank by Björn Brembs and Marcus Munafò:
The current empirical literature on the effects of journal rank provides evidence supporting the following four conclusions: 1) Journal rank is a weak to moderate predictor of scientific impact; 2) Journal rank is a moderate to strong predictor of both intentional and unintentional scientific unreliability; 3) Journal rank is expensive, delays science and frustrates researchers; and, 4) Journal rank as established by [Impact Factor] violates even the most basic scientific standards, but predicts subjective judgments of journal quality.
The idea that journals can be ranked in terms of "quality", that higher quality journals perform more rigorous peer review, and thus that the papers they publish are of higher quality is just wrong. For example, Rabin quotes the editor of The Lancet:
Dr. Horton called the paper retracted by his journal a “fabrication” and “a monumental fraud.” But peer review was never intended to detect outright deceit, he said, and anyone who thinks otherwise has “a fundamental misunderstanding of what peer review is.”

“If you have an author who deliberately tries to mislead, it’s surprisingly easy for them to do so,” he said.
The higher the perceived quality of the journal, the greater the incentives for hype and fraud. The evidence that pre-publication peer review rarely detects fraud is overwhelming. But post-publication peer review, as in these cases, is better:
The retracted paper in The Lancet should have raised immediate concerns, [Dr. Peter Jüni] added. It purported to rely on detailed medical records from 96,000 patients with Covid-19, the illness caused by the coronavirus, at nearly 700 hospitals on six continents. It was an enormous international registry, yet scientists had not heard of it.

The data were immaculate, he noted. There were few missing variables: Race appeared to have been recorded for nearly everyone. So was weight. Smoking rates didn’t vary much between continents, nor did rates of hypertension.

“I got goose bumps reading it,” said Dr. Jüni, who is involved in clinical trials of hydroxychloroquine. “Nobody has complete data on all these variables. It’s impossible. You can’t.”
Probably no-one suspected fraud because Harvard:
Both retracted studies were led by Dr. Mandeep R. Mehra, a widely published and highly regarded professor of medicine at Harvard, and the medical director of the Heart and Vascular Center at Brigham and Women’s Hospital.
It is difficult for reviewers to be appropriately critical of studies led by prominent researchers, let alone accuse them of fraud. Especially since they're likely to be on the editorial board of journals in which the reviewer aspires to publish. Choosing the best reviewers is popularly supposed to be part of the value that elite journals add. But:
“This got as much, if not more, review and editing than a standard regular track manuscript,” Dr. Rubin, the editor in chief of the N.E.J.M., said of the heart study appearing in the N.E.J.M., which was based on a smaller set of Surgisphere data. “We didn’t cut corners. We just didn’t ask the right people.”
Rabin sums up with a quote that, except for the pandemic mention, could have come anytime in the last decade:
“We are in the midst of a pandemic, and science is moving really fast, so there are extenuating circumstances here,” said Dr. Ivan Oransky, co-founder of Retraction Watch, which tracks discredited research.

“But peer review fails more often than anyone admits,” he said. “We should be surprised it catches anything at all, the way it’s set up.”

Experimentations with Wikidata/Wikibase / HangingTogether

That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Daniel Lovins of Yale, John Riemer of University of California, Los Angeles, Melanie Wacker of Columbia, and Stephen Hearn of University of Minnesota. Many libraries are looking toward Wikidata and Wikibase to solve some of the long-standing issues faced by technical services departments, archival units, and others. There is no shortage of interesting ideas about how these tools could be employed: linked open data, bridging silos, multilingual support, an alternative to traditional authority control, and highlighting special collections are just some examples. Wikibase, the software platform underlying Wikidata, provides a robust infrastructure for knowledge graphs, triple stores, and SPARQL queries, which would be very difficult for libraries to build on their own. One can contribute to or draw from Wikidata itself as a knowledge graph or use the Wikibase software to develop library-specific data models and entity relationships.

On the network level, examples of exploratory work include: Project Passage, a linked data Wikibase sandbox put together by OCLC Research which allowed 16 institutions to experiment in 2017-2018; OCLC’s CONTENTdm Linked Data Pilot (2019-2020); and the Mellon-funded Shared Entity Management Infrastructure (2020-2021). All three use a separate instance of Wikibase. The PCC Task Group on Identity Management in NACO is investigating how Wikidata could fit into libraries’ regular workflow and allow them to take advantage of pre-existing work. In Europe, the Bibliothèque nationale de France is collaborating with ABES (Agence bibliographique de l’enseignement supérieur) to create a French entities file, and the Deutsche Nationalbibliothek is working with Wikimedia Deutschland to create a local Wikibase instance to support its authority file, the Gemeinsame Normdatei. Two Wikidata projects mentioned focus specifically on archival use cases: the WikiProject Archives Linked Data Interest Group and Repository Data (RepoData) for United States Archives.

A surprising number of individual libraries are experimenting with Wikidata, among them: Stanford University Library creating descriptions of persons affiliated with Stanford University; Harvard’s Guido Adler Collection project; the proof of concept combining Library of Congress Prints & Photographs collection records with Wikidata; Lori Robare’s project exploring Wikidata use for identity management, thereby raising the profile of people and organizations important to Oregon (see her presentation at Midwinter ALA 2020, Exploring Wikidata and Its Potential Use for Library Data); York University’s project to create metadata on Indigenous communities and collections using Wikidata (Surfacing Knowledge, Building Relationships: Indigenous Communities, ARL, and Canadian Libraries); the Bibcard project at University of Wisconsin-Madison Library; and Yale’s Black Bibliography Project. The Koninklijke Bibliotheek in the Netherlands has a demonstration, Using Wikidata for entity search in historical newspapers, which illustrates applying Wikidata to enrich its archive of digitized newspaper articles by linking persons, corporate bodies, and geographic names.

Individual institutions’ experimentations with Wikidata are often focused on using Wikidata identifiers for one of two use cases: for names, corporate bodies, and geographic names in digital collections; or for researchers and documents in their institutional repositories. In both cases, goals often focus on raising the profile of persons important locally or in under-represented groups. Several institutions highlight their special collections by adding the “archives at” property to Wikidata entries for persons or organizations. The University of Nevada at Las Vegas has found Wikidata helpful to express aspects of their archival collections that are not in subject headings or other controlled vocabularies, such as unique roles for people in the entertainment industry. The University of Toronto is using Wikidata as a tool to describe the Law Library’s Indigenous Perspectives Collection with alternative subject schemas. Several institutions have recruited a Wikimedian-in-residence to support these efforts.
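As a flavour of what properties like “archives at” make possible, here is a minimal query sketch in Python against the public Wikidata SPARQL endpoint. P485 is Wikidata’s “archives at” property; the institution QID is left as a placeholder to swap for a real one:

    import requests

    # P485 = "archives at". Replace Q_INSTITUTION with a real QID;
    # as written, the placeholder is valid SPARQL but returns no rows.
    QUERY = """
    SELECT ?person ?personLabel WHERE {
      ?person wdt:P485 wd:Q_INSTITUTION .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 25
    """

    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "archives-at-demo/0.1"},
    )
    for row in resp.json()["results"]["bindings"]:
        print(row["person"]["value"], row["personLabel"]["value"])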

Some institutions are experimenting with their own local instances of Wikibase exploring use cases such as: creating authorities for local names to bridge internal organizational silos; pushing local data out to Wikidata to reach new audiences; and making use of multilingual discovery capabilities. The Smithsonian Institution, for example, has 19 museums and nine research centers with affiliates around the world, each with its own system and content standards. It hopes that a local Wikibase instance can improve discovery of all the resources held by the Smithsonian for both their internal and external audiences. Yale has received Mellon funding for a project looking to use Wikidata or a local Wikibase instance to reconcile data—linking named entities and concepts that are present, albeit with different labels—across the catalogs of its library and three museums.  The British Library is collaborating with Wikimedia UK on a Wikibase project for its Turkish manuscripts and Kurdish printed collections. The goal is to create a database of objects found within the metadata (authors, titles, dates of birth and death, publishing houses, scriptoria, place names, etc.), and then correlate them among various languages so that titles of works from various institutions could all be linked together, making the collection more discoverable in different languages. The ability to display labels in different languages and scripts fits in with institutions’ commitment to equity, diversity, and inclusion.

Given the increase of local instances of Wikibase in development, the work now underway to create a federated ecosystem of local Wikibase instances is critical; otherwise each Wikibase instance may end up being “marooned”.

Metadata managers noted that institutional buy-in is needed to support ongoing Wikidata/Wikibase work. Among the reasons for using Wikidata or Wikibase in the library environment:

  • Expose institutions’ resources to the larger web community
  • Support institutional outreach to local communities
  • Can immediately create an entity description with a stable, persistent identifier that can be re-used by others
  • Can create labels in multiple languages and scripts that are more respectful to marginalized communities
  • Infrastructure supports collaboration across communities and countries
  • Relatively low-barrier way to contribute to linked data and gain experience with “entifying”
  • Tools are available, such as the Reasonator, which displays Wikidata entries along with related data and generates timelines in ways that current library systems cannot

Among the barriers to using Wikidata or Wikibase in the library environment:

  • Steep learning curve
  • Uncontrolled metadata could result in inconsistent data quality
  • Modeling and entities differ from library standards and practices
  • The data you enter could be over-written by someone else
  • Duplicates or overlaps authority work
  • Concern about scalability and long-term sustainability
  • Installing a local Wikibase instance requires IT effort

Where to start learning about Wikidata? People referred to the ARL White Paper on Wikidata: Opportunities and Recommendations (2019). The recordings from the LD4P Wikidata Affinity Group calls and resources cited there have been helpful to many. Some have taken the Wikidata Professional Development Training Modules. Learn by doing!

The post Experimentations with Wikidata/Wikibase appeared first on Hanging Together.

Lessons learned from organising the first ever virtual csv,conf / Open Knowledge Foundation

This blogpost was collaboratively written by the csv,conf organising team which includes Lilly Winfree and Jo Barratt from the Open Knowledge Foundation. csv,conf is supported by the Sloan Foundation as part of our Frictionless Data for Reproducible Research grant. The original post can be found here: https://csvconf.com/going-online

A brief history

csv,conf is a community conference that brings diverse groups together to discuss data topics, and features stories about data sharing and data analysis from science, journalism, government, and open source. Over the years we have had over a hundred different talks from a huge range of speakers, most of which you can still watch back on our YouTube Channel.

csv,conf,v1 took place in Berlin in 2014 and we were there again for v2 in 2016, before we moved across the Atlantic for v3 and v4, which were held in Portland, Oregon in the United States in 2017 and 2019. For csv,conf,v5, we were looking forward to our first conference in Washington DC, but unfortunately, like many other in-person events, this was not going to be possible in 2020.

People have asked us about our experience moving from a planned in-person event to one online, in a very short space of time, so we are sharing our story with the hope that it will be helpful to others, as we move into a world where online events and conferences are going to be more prevalent than ever.

The decision to take the conference online was not an easy one. Until quite late on, the question csv,conf organisers kept asking each other was not “how will we run the conference virtually?” but “will we need to cancel?“. As the pandemic intensified, this decision was taken out of our hands and it became quickly clear that cancelling our event in Washington D.C. was not only the responsible thing to do, but the only thing we could do.

Weighing the decision to hold csv,conf,v5 online

Once it was clear that we would not hold an in-person event, we deliberated on whether we would hold an online event, postpone, or cancel.

Moving online – The challenge

One of our main concerns was whether we would be able to encapsulate everything good about csv,conf in a virtual setting – the warmth you feel when you walk into the room, the interesting side conversations, and the feeling of being reunited with old friends, and naturally meeting new ones were things that we didn’t know whether we could pull off. And if we couldn’t, did we want to do this at all?

We were worried about keeping a commitment to speakers who had made a commitment themselves. But at the same time we were worried speakers may not be interested in delivering something virtually, or that it would not have the same appeal. It was important to us that there was value to the speakers, and at the start of this process we were committed to making this happen.

Many of us have experience running events both in person and online, but this was bigger. We had some great advice and drew heavily on the experience of others in similar positions to us. But it still felt like this was different. We were starting from scratch and for all of our preparation, right up to the moment we pressed ‘go live’ inside Crowdcast, we simply didn’t know whether it was going to work.

But what we found was that hard work, lots of planning and support of the community made it work. There were so many great things about the format that surprised and delighted us. We now find ourselves asking whether an online format is in fact a better fit for our community, and exploring what a hybrid conference might look like in the future.

Moving online – The opportunity

There were a great many reasons to embrace a virtual conference. Once we made the decision and started to plan, this became ever clearer. Not least was the fact that an online conference would give many more people the opportunity to attend. We work hard every year to reduce the barriers to attendance where possible and we’re grateful to our supporters here, but our ability to support conference speakers is limited and it is also probably the biggest cost year-on-year. We are conscious that barriers to entry still apply to a virtual conference, but they are different and it is clear that for csv,conf,v5 more people who wanted to join could be part of it. Csv,conf is normally attended by around 250 people. The in-person conferences usually fill up with just a few attendees under capacity. It feels the right size for our community. But this year we had over 1,000 registrations. More new people could attend and there were also more returning faces.


Attendees joined csv,conf,v5’s opening session from around the world

Planning an online conference

Despite the obvious differences, much about organising a conference remains the same whether virtual or not. Indeed, by the time we made the shift to an online conference, much of this work had been done.

Organising team

From about September 2019, the organising team met up regularly every few weeks on a virtual call. We reviewed our list of things and assigned actions. We used a private channel on Slack for core organisers to keep updated during the week.

We had a good mix of skills and interests on the organising team from community wranglers to writers and social media aces.

We would like to give a shout out to the team of local volunteers we had on board to help with DC-specific things. In the end this knowledge just wasn’t needed for the virtual conf.

We recruited a group of people from the organising team to act as the programme committee. This group would be responsible for running the call for proposals (CFP) and selecting the talks.

We relied on our committed team of organisers for the conference and we found it helpful to have very clear roles/responsibilities to help manage the different aspects of the ‘live’ conference. We had a host who introduced speakers, a Q&A/chat monitor, a technical helper and a Safety Officer/Code of Conduct enforcer at all times. It was also helpful to have “floaters” who were unassigned to a specific task, but could help with urgent needs.

Selecting talks

We were keen on making it easy for people to complete the call for proposals. We set up a Google form and asked just a few simple questions.

All talks were independently reviewed and scored by members of the committee and we had a final meeting to review our scores and come up with a final list. We were true to the scoring system, but there were other things to consider. Some speakers had submitted several talks and we had decided that even if several talks by the same person scored highly, only one could go into the final schedule. We value diversity of speakers, and reached out to diverse communities to advertise the call for proposals; we also considered diversity when selecting talks. And where talks were scoring equally, we wanted to ensure we were giving priority to speakers who were new to the conference.

We asked all speakers to post their slides onto the csv,conf Zenodo repository. This was really nice to have because attendees asked multiple times for links to slides, so we could simply send them to the Zenodo collection.

Though it proved not to be relevant for the 2020 virtual event, it’s worth mentioning that the process of granting travel or accommodation support to speakers was entirely separate from the selection criteria. Although we asked people to flag a request for support, this did not factor into the decision-making process.

Creating a schedule

Before we could decide on a schedule, we needed to decide on the hours and timezones we would hold the conference. csv,conf is usually a two-day event with three concurrently run sessions, and we eventually decided to have the virtual event remain two days, but have one main talk session with limited concurrent talks.

Since the in-person conference was supposed to occur in Washington, D.C., many of our speakers were people in US timezones so we focused on timezones that would work best for those speakers. We also wanted to ensure that our conference organisers would be awake during the conference. We started at 10am Eastern, which was very early for West Coast (7am) and late afternoon for non-US attendees (3pm UK; 5pm Eastern Europe). We decided on seven hours of programming each day, meaning the conference ended in late afternoon for US attendees and late evening for Europe. Unfortunately, these timezones did not work for everyone (notably the Asia-Pacific region) and we recommend that you pick timezones that work for your speakers and your conference organisers whilst stretching things as far as possible if equal accessibility is important to you. We also found it was important to clearly list the conference times in multiple timezones on our schedule so that it was easier for attendees to know what time the talks were happening.
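A tiny script can generate such multi-timezone listings and avoid arithmetic slips. A sketch using Python’s standard zoneinfo module, with a 10am Eastern start (the exact date below is illustrative):

    from datetime import datetime
    from zoneinfo import ZoneInfo  # Python 3.9+

    # Day-one start: 10:00 Eastern; the date here is an example.
    start = datetime(2020, 5, 13, 10, 0, tzinfo=ZoneInfo("America/New_York"))

    for tz in ("America/Los_Angeles", "Europe/London", "Europe/Kiev"):
        local = start.astimezone(ZoneInfo(tz))
        print(f"{tz:20} {local:%H:%M %Z}")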

Tickets and registration

Although most of what makes csv,conf successful is human passion and attention (and time!), we also found that the costs involved in running a virtual conference are minimal. Except for some extra costs for upgrading our communication platforms, and making funds available to support speakers in getting online, running the conference remotely saved us several thousand dollars.

We have always used an honour system for ticket pricing. We ask people to pay what they can afford, with some suggested amounts depending on the attendee’s situation. But we needed to make some subtle changes for the online event, as it was a different proposition. We first made it clear that tickets were free, and refunded those who had already purchased tickets.

Eventbrite is the platform we have always used for registering attendees for the conference, and it does the job: it’s easy to use and straightforward. We kept it running this year for consistency and to keep our data organised, even though that meant importing the data into another platform.

We were able to make the conference donation-based thanks to the support of the Sloan Foundation and individual contributors. Perhaps because overall registrations went up, donations went up too. In future, and with more planning and promotion, it would be feasible to consider a virtual event of the scale of csv,conf funded entirely by contributions from the community it serves.

Code of Conduct

We spent significant time enhancing our Code of Conduct for the virtual conference. We took in feedback from last year’s conference and reviewed other organisations’ Codes of Conduct. The main changes were to consider how a Code of Conduct needed to relate to the specifics of things happening online. We also wanted to create more transparency in the enforcement and decision-making processes.

One new aspect was the ability to report incidents via Slack. We designated two event organisers as “Safety Officers”, and they were responsible for responding to any incident reports and were available for direct messaging via Slack (see the Code of Conduct for full details). We also provided a neutral party to receive incident reports if there were any conflicts of interest.

Communication via Slack

We used Slack for communication during the conference, and received positive feedback about this choice. We added everyone that registered to the Slack channel to ensure that everyone would receive important messages.

We had a Slack session bot that announced the beginning of each session with the link to watch it, and we received a lot of positive feedback about the session bot. For people not on Slack, we also had the schedule in a Google spreadsheet and on the website, and everyone that registered with an email address received the talk links via email too. For the session bot, we used the Google Calendar for Team Events app on Slack.
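We used an off-the-shelf app rather than writing our own bot, but for organisers who would rather roll their own, a minimal sketch using Slack’s Python SDK (the slack_sdk package) could look like the following. The token, channel name and schedule are hypothetical, and this illustrates the idea rather than the setup we actually ran:

    # A minimal session-announcement bot: waits for each session's start
    # time, then posts the watch link to a Slack channel.
    import time
    from datetime import datetime, timezone

    from slack_sdk import WebClient

    client = WebClient(token="xoxb-your-bot-token")  # hypothetical bot token

    # Hypothetical schedule: (UTC start time, title, watch link).
    sessions = [
        (datetime(2020, 5, 13, 14, 0, tzinfo=timezone.utc),
         "Opening keynote", "https://example.com/session-1"),
    ]

    for start, title, link in sessions:
        # Sleep until the session begins, then announce it.
        wait = (start - datetime.now(timezone.utc)).total_seconds()
        if wait > 0:
            time.sleep(wait)
        client.chat_postMessage(
            channel="#sessions",
            text=f"Starting now: {title}. Watch here: {link}",
        )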

Another popular Slack channel that was created for this conference was a dedicated Q&A channel allowing speakers to interact with session attendees, providing more context around their talks, linking to resources, and chatting about possible collaborations. At the end of each talk, one organiser would copy all of the questions and post them into this Q&A channel so that the conversations could continue. We received a lot of positive feedback about this and it was pleasing to see the conversations continue.

We also had a dedicated speakers channel, where speakers could ask questions and offer mutual support and encouragement both before and during the event.

Another important channel was a backchannel for organisers, which we used mainly to coordinate and cheer each other on during the conf. We also used this to ask for technical help behind the scenes to ensure everything ran as smoothly as possible.

After talks, one organiser would use Slack private messaging to collate and send positive feedback for speakers, as articulated by attendees during the session. This was absolutely worth it and we were really pleased to see the effort was appreciated.

Slack is of course free to use, but its premium service does offer upgrades for charities, and we were lucky enough to make use of this. The application process is very easy and takes less than 10 minutes, so this is worth considering.

We made good use of Twitter throughout the conference, and there were active #commallama and #csvconf hashtags going throughout the event. The organisers had joint responsibility for this, and it seemed to work: we simply announced the hashtags at the beginning of the day and people picked them up easily. We had a philosophy of ‘over-communicating’, offering updates as soon as we had them, and candidly. We used Twitter to share updates and calls-to-action, and to amplify people’s thoughts, questions and feedback.

Picking a video conference platform

Zoom concerns

One of the biggest decisions we had to make was picking a video conferencing platform for the conference. We originally considered using Zoom, but were concerned about a few things. The first was reports of rampant “zoombombing”, where trolls join Zoom meetings with the intent to disrupt them. The second concern was that we are a small team of organisers, and there would be great overhead in moderating a Zoom room with hundreds of attendees: muting, unmuting, etc. We also worried that a giant Zoom room would feel very impersonal. Many of us now spend what is probably an unnecessary amount of our daily lives on Zoom, and we felt that stepping away from it would help mark the occasion as something special. So we made the decision to move away from Zoom and looked at options that were more of a broadcast tool than a meeting tool.

Crowdcast benefits

We saw another virtual conference that used Crowdcast and were impressed with how it felt to participate, so we investigated it as a platform and then committed to it enthusiastically, albeit with some reservations.

The best parts of Crowdcast for us were the friendly user interface, which includes a speaker video screen, a dedicated chat section with a prompt bar reading “say something nice”, and a separate box for questions. It felt really intuitive, the features were considered and useful, and we incorporated most of them.

From the speaker, participant and host sides, the experience felt good and appropriate. The consideration given to the different user types was clear in the design, and appreciated. One great function was the green room, which is akin to a speakers’ couch backstage at an in-person conference, helping to calm speakers’ nerves and letting them check their audio and video settings, discuss cues, etc. before stepping out onto the stage.

Another benefit of Crowdcast is that the talks are immediately available for viewing, complete with chat messages, for people to revisit after the conference. This was great as it allowed people who missed something on the day to catch up in almost real time and feel part of the conference discussions as they developed. We also released all talk videos on YouTube and tweeted the links to each talk.

Crowdcast challenges

But Crowdcast was not without its limitations. Everything went very well, and the following issues were not deal breakers, but acknowledging them can help future organisers plan and manage expectations.

Top of the list of concerns was our complete inexperience with it and the likely inexperience of our speakers. To ensure that our speakers were comfortable using Crowdcast, we held many practice sessions with speakers before the conference, and also had an attendee AMA before the conference to get attendees acquainted with the platform. These sessions were vital for us to practice all together and this time and effort absolutely paid off! If there is one piece of advice you should take away from reading this guide it is this: practice practice practice, and give others the opportunity and space to practice as well.

One challenge we faced was hosting: only one account has host privileges, but we learned that many people can log into that account at the same time to share host duties. Hosts can allow other people to share their screen and unmute, and they can elevate questions from the chat to the questions box. They can also kick people out if they are being disruptive (which didn’t happen for us, but we wanted to be prepared). This felt a bit weird, honestly, and we had to be aware of the power we had when in the host’s position. Weird, but also incredibly useful, and a key control feature that was essential for an event run by a group rather than an individual.

With Crowdcast, you can only share four video feeds at a time (for instance, two people on camera each sharing a screen). Our usual setup was a host, with one speaker sharing their screen at a time. We could add a second presenter for the talks that had one, but any more than this and we would have had problems.

It was easy enough for the host to chop and change who was on screen at any time, and there’s no limit on the total number of speakers in a session. So there is some flexibility, and ultimately we were OK. But this should be a big consideration if you are running an event with different forms of presentation.

Crowdcast was also not without its technical hiccups and frustrations. Speakers sometimes fell off the call or had mysterious problems sharing their screens. We received multiple comments/questions on the day about the video lagging/buffering. We often had to resort to the ol’ refresh refresh refresh approach which, to be fair, mostly worked. And on the few occasions we were stumped, there’s quite a lot of support available online and directly from Crowdcast. But honestly, there were very few technical issues for a two-day online conference.

Some attendees wanted information on the speakers (e.g. name, Twitter handle) during the presentation, and we agree it would have been a nice touch to have a button or link for this in Crowdcast. There is the “call to action” feature, but we were using that to link to the Code of Conduct.

Crowdcast was new to us, and new to many people in the conference community. Alongside the practice sessions, we found it helpful to set up an FAQ page with content about how to use Crowdcast and what to expect from an online conference in general. Overall, it was a good decision and a platform we would recommend for consideration.

#Commallama

Finally, it would not be csv,conf if it had not been for the #commallama. The comma llama first joined us for csv,conf,v3 in Portland and joined us again for csv,conf,v4. The experience of being around a llama is both relaxing and energising at the same time, and a good way to get people mixing.

Taking the llama online was something we had to do, and we were very pleased with how it worked. It was amazing to see how much joy people got out of the experience, and interesting to notice how well people naturally adapted to the online environment: they organised themselves into a virtual queue and took turns coming on screen to grab a selfie screenshot. Thanks to our friends at Mtn Peaks Therapy Llamas & Alpacas for being so accommodating and helping us to make this possible.

A big thank you to our community and supporters

As we reflect on the experience this year, one thing is very clear to us: the conference was only possible because of the community that spoke, attended and supported us. It was a success because the community showed up, was kind and welcoming, and was extremely generous with their knowledge, ideas and time. The local people in D.C. who stepped up to offer knowledge and support on the ground were a great example of this, and we are incredibly grateful for that support, even though it turned out not to be needed.

We were lucky to have a community of developers, journalists, scientists and civic activists who intrinsically know how to interact with and support one another online, and who adapted well to the realities of an online conference. From the moment speakers attended our practice sessions on the platform and started to support one another, we knew that things were going to work out. We knew things would not all run to plan, but we trusted that the community would be understanding and would actively support us in solving problems. It’s something we are grateful for.

We are also thankful to the Alfred P. Sloan Foundation and our 100+ individual supporters for making the decision to support us financially. It is worth noting that none of this would have been possible without the venue, hotel and catering providers we had contracted being very understanding in letting us void our contracts without any penalties.

Looking ahead – the future of csv,conf

Many people have been asking us about the future of csv,conf. Firstly, it’s clear that csv,conf,v5 has given us renewed love for the conference and made abundantly clear to us the need for a conference like this in the world. It’s also probably the case that the momentum generated by running the conference this year will secure enthusiasm amongst organisers for putting something together next year.

So the question will be: “what should a future csv,conf look like?”. We will certainly be considering our experience of running this year’s event online. It was such a success that there is an argument for keeping it online going forward, or putting together something of a hybrid. Time will tell.

We hope that this has been useful for others. If you are organising an event and have suggestions or further questions that could improve this resource, please let us know. Our Slack remains open and is the best place to get in touch with us.

• The original version of this blogpost was published on csvconf.com and republished here with kind permission.

a pledge: self-examination and concrete action in the JMU Libraries / Bethany Nowviskie

“The beauty of anti-racism is that you don’t have to pretend to be free of racism to be an anti-racist. Anti-racism is the commitment to fight racism wherever you find it, including in yourself. And it’s the only way forward.” — Ijeoma Oluo, author of So You Want to Talk About Race.

Black lives matter. Too long have we allowed acts of racism and deeply ingrained, institutionalized forces of white supremacy to devalue, endanger, and grievously harm Black people and members of other minoritized and marginalized groups. State-sanctioned violence and racial terror exist alongside slower and more deep-seated forces of inequality, anti-Blackness, colonization, militarization, class warfare, and oppression.

As members of the JMU Libraries Dean’s Council and Council on Diversity, Equity, and Inclusion, we acknowledge these forces to be both national and local, shaping the daily lived experiences of our students, faculty, staff, and community members. As a blended library and educational technology organization operating within a PWI, the JMU Libraries both participates in and is damaged by the whiteness and privilege of our institutions and fields. Supporting the James Madison University community through a global pandemic has helped us see imbalances, biases, and fault lines of inequality more clearly.

We pledge self-examination and concrete action. Libraries and educational technology organizations hold power, and can share or even cede it. As we strive to create welcoming spaces and services for all members of our community, we assert the fundamental non-neutrality of libraries and the necessity of taking visible and real action against the forces of racism and oppression that affect BIPOC students, faculty, staff, and community members.

Specifically, and in order to “fight racism wherever [we] find it, including in [ourselves],” we commit to:

  • Listen to BIPOC and student voices, recognizing that they have long spoken on these issues and have too often gone unheard.
  • Educate ourselves and ask questions of all the work we do. (“To what end? To whose benefit? Whose comfort is centered? Who has most agency and voice? Who is silenced, ignored, or harmed? Who is elevated, honored, and made to feel safe? Who can experience and express joy?”) 
  • Set public and increasingly measurable goals related to diversity, equity, inclusion, and anti-racism, so that we may be held accountable.
  • Continue to examine, revise, and augment our collections, services, policies, spending patterns, and commitments, in order to institutionalize better practices and create offerings with enduring impact.
  • Learn from, and do better by, our own colleagues.

We are a predominantly white organization and it is likely that we will make mistakes as we try to live up to this pledge. When that happens, we will do the work to learn and rectify. We will apologize, examine our actions and embedded power structures, attempt to mitigate any harm caused by our actions, and we will do better.

Signatories 

Dr. Bethany Nowviskie
Dean of Libraries, Professor of English, & Senior Academic Technology Officer, JMU

Dr. Brian Flota
Associate Professor, Humanities Librarian, Library Faculty Assembly Representative, JMU Libraries

Kristen Shuyler
Director of Communications and Outreach, Associate Professor, JMU Libraries

Dr. Aaron Noland
Assistant Dean of Libraries, Assistant Professor, JMU

Zach Sensabaugh
Music Library Assistant, Outgoing Staff Advisory Council Representative, JMU Libraries

Mark Lane
Digital Preservation Librarian, Assistant Professor, Libraries Leadership Group Representative, JMU Libraries

Stefanie Warlick
Interim Associate Dean of Libraries, Professor, JMU

Kelly Miller-Martin
Director of Facilities Operations, JMU Libraries

Andrea Adams
Interim Associate Dean of Libraries, Associate Professor, JMU

Liana Bayne
Libraries Administrative Assistant, JMU

Bill Hartman
Director of Technology, JMU Libraries

Kevin Hegg
Director of Digital Projects, Council on Diversity, Equity, and Inclusion Member, JMU Libraries

Jess Garmer
Educational Technology Instructor, Council on Diversity, Equity, and Inclusion Member, JMU Libraries

Karen Snively
JMU Music Library Services Manager, Council on Diversity, Equity, and Inclusion Member

April Beckler
Reserves Coordinator & Interlibrary Loan Borrowing, Council on Diversity, Equity, and Inclusion Member, JMU Libraries

Hillary Ostermiller
Communication & Media Studies Librarian, Council on Diversity, Equity, and Inclusion Vice Chair, JMU Libraries

Alyssa Valcourt
Science & Math Librarian, Council on Diversity, Equity, and Inclusion Chair, JMU Libraries

Note: This post is being shared from the JMU Libraries web page where it first appeared on 9 June 2020. Internal Libraries discussions, programming, and action related to dismantling white supremacy is ongoing. I’m replicating this on my own blog as a self-reminder that these are personal, as well as organizational commitments.

What I’m telling family about COVID-19 / Coral Sheldon-Hess

I’ll preface this whole thing with a reminder that I’m an engineer and librarian, not a biologist. I have spent months reading articles, posts, and tweet threads written by doctors, epidemiologists, and public health experts, but I am not, myself, an expert. I’ll post a version of this online and let you know if someone drops in to correct anything I’ve said here. And I’ll update you if our understanding changes again–you know how science is, especially when dealing with something like a novel virus: we keep learning new things that change our approach.

If this is all too much to read, a very short summary: the things that increase risk are proximity to other people, total amount of time with other people, lack of masks, and lack of ventilation. Being outside with a mask on and six feet of distance between you and other people who are all wearing masks: kind of best-case, especially if it’s only a short time. Being in a small space that isn’t well-ventilated for several hours with other people: real bad. (A tweet thread on this, with articles, is here.) Also, more people are sick than our governments are acknowledging or admitting.

Now, here’s a more complete summary:

OK, so. How bad a case of COVID-19 a person gets seems to depend at least partially on the amount of virus they are exposed to. A person with a properly functioning immune system can be exposed to a very small amount of the virus and not get sick at all; if it’s a small enough amount, their body just washes it away, and I don’t believe they even test positive on an antibody test later. Let’s say it takes something like 1000 viral particles (that’s a great article!), for most people, to induce illness—the jury is out about autoimmune diseases’ effects on this number, though my educated guess says “it is probably smaller,” so we have to be more careful than the average person, and of course 1000 is just an estimate. Also, it seems to be generally accepted that how sick someone gets varies greatly with their exposure, so someone exposed to 1001 virus particles is likely to get a lot less sick than someone exposed to 100,000.

The virus is spread by droplets which come out of the nose and mouth during breathing, talking, shouting, sneezing, coughing, etc. Some of those activities spread droplets further and higher into the air than others, as you’d imagine. The article I linked above suggests that coughing might release 3000 droplets, a sneeze might be 10 times that, and a single breath could vary between 50 and 5000 droplets, but they would fall more quickly and not spread as far as a cough or a sneeze. Masks will keep a lot of that contained! Maybe not all of it, especially with a homemade mask instead of an N95, but both serve to significantly decrease the number of droplets leaving a person’s airways. That’s why people who decide to go into public without masks on are assholes, with the exception of the relatively few people who literally cannot wear them. (Note: a droplet coming from someone who has a virus is likely to have more than a single virus particle in it. It seems, from the article, that a single droplet from someone with COVID-19 might contain between 5000 and 70,000 virus particles, and some of that variation has to do with where they are in the course of their illness.)

Now, the droplets with virus particles can get into your body in a number of ways. You could breathe them in–and your mask helps prevent that, of course, so it’s worth wearing your mask even if you’re pretty sure you’re not a carrier of COVID-19. The droplets could fall onto a surface, and you could touch that surface and then your own nose, mouth, eyes, or something you’re about to eat or drink. The droplets could fall on one surface, be moved by your or someone else’s hands onto another surface, and then get on your hands and into your body. And so on. Washing your hands for 20 full seconds helps keep us safe, because COVID-19 has a lipid outer shell: soap destroys the virus. Refusing to touch your face except immediately after washing your hands: also helpful! (The mask also helps with this; you can’t touch your nose or mouth if they’re covered.)

It does mean you need to wash your mask with soap between wearings and treat it as if the outside of it is covered in virus particles when you get it home.

This particular virus stays alive on most surfaces for an incredibly long time, which is part of why I emphasized that they can be transmitted between surfaces before they get onto your hands. There was a study that showed it can live for 4 hours on copper, 24 hours on cardboard, and 72 hours on plastic, although I don’t know that that study has been replicated. Humidity and temperature have some effect, too. Speaking practically and for myself: since nobody who delivers my mail or packages (aside from the CSA box) seems to wear masks now, I just assume nothing’s safe to touch for at least 48 hours, and I wash my hands after bringing things in and putting them in our quarantine containers.

The thing that a lot of places are doing, where they do temperature checks at the door, is helpful, but not foolproof. Something like 95% of people who will end up sick with COVID-19 will have a fever by day 11 after exposure (don’t quote me on that statistic, it’s a vague memory from March and may have changed by a few percentage points since then). However, with some coronaviruses, you’re actually giving off more viral particles per hour immediately before your symptoms kick in than you are a couple of days afterward. So, when the World Health Organization announced that the bulk of COVID-19 transmission is not, in fact, due to asymptomatic carriers, they had to put out an almost immediate clarification that they meant people who tested positive for antibodies but never developed symptoms, as opposed to people who were pre-symptomatic (no symptoms yet, but they will be developing them). I’m still pretty frustrated with them for that, because your average person (including me, the day before their announcement) doesn’t know that “pre-symptomatic” is not a subset of “asymptomatic”; anyone who only saw the first announcement and not the clarification might think the temperature checks are foolproof, and that frightens me.

Anyway, back to limiting risk. You need to stay under 1000 total viral particles (probably fewer), which isn’t a lot, granted. Now, we’re assuming you’re washing your hands (or if soap and water aren’t available, using hand sanitizer) often enough to prevent that mode of infection, so we’re mostly just worried about particles in the air. In the course of your day, your goal needs to be limiting how many viral particles get in your nose, mouth, and eyes.

Obviously, the mask helps a lot–yours and, especially, everyone else’s. Good ventilation is useful since it helps disperse droplets more quickly. There are studies showing that COVID-19 dies quickly in sunlight, so being outdoors is beneficial, beyond the whole ventilation issue. Maintaining physical distance from other people helps as long as they’re breathing normally, not coughing and sneezing (which can project particles a lot further than six feet) or talking (which projects it further than just breathing but not nearly as far as coughs/sneezes)—honestly, I wouldn’t go within 20 feet of someone who isn’t masked, nowadays: they’re showing poor judgment, and I’d just assume they are a carrier. (There are legitimate conditions that prevent mask-wearing, and children under the age of 2 are not supposed to wear them. That’s valid. It is my strongly-held belief, though, that any adult who is unable to wear a mask needs to find all possible ways to avoid being indoors in public places right now, for their own safety and the safety of others.)

If you have to go into a room with another person–someone’s office, say–you should flatly refuse if they aren’t wearing a mask when you get there, and you’ll want to limit the amount of time you’re there as much as you can.

But if you’re outside on a walk and someone briefly enters your 6′ bubble? It’s not ideal, but it’s a very small risk. It’s OK to take walks! Don’t go following someone else down the trail, because then you’re in their wake for too long, but passing someone going in the opposite direction does not put you at much risk. Being passed by a bicyclist: also not a big deal.

You definitely don’t want to sing with groups. Or talk loudly with other people who are also talking loudly. There are theories (and I think they are just theories, not something we know for sure) that inhaling deeply, like you do when singing or otherwise projecting your voice, allows virus particles to get deeper into your lungs and possibly infect you … worse? more quickly? I don’t know precisely where the science stands on that, but I’ve seen it discussed in multiple places as a distinct possibility.

Someone on Twitter pointed out that people who work in grocery stores are in a lot more danger than shoppers: they’re there for their whole shift, right? Even assuming all shoppers wear a mask, that is a LOT of potential exposure, compared to the … what? half hour? hour? that a shopper is in the store. Also, shoppers move around and can control how close they get to other people, where store employees cannot, so much. To be clear: I’m not suggesting that you go into stores. Curbside is free at most stores, and it keeps everyone safer, including store employees! I shared that mostly so you understand why essential workers have been so afraid through all of this.

Anyway, having done a lot of reading, I feel very safe doing curbside, even though my car doesn’t have a separate trunk compartment. I wear a mask, which keeps the person doing the delivery safe. I tend to have the fan or air conditioner on, which pushes air past me and out the back (though, to date, everyone doing curbside has also had a mask on, so this is a minor detail—between my mask and theirs, combined with the short time involved, the direction of airflow is probably not a big deal). The car is roughly six feet long. And the duration of the encounter is very short. It’s not zero risk, for them or for me, but it’s very low risk. Less, certainly, than my wandering around a store and standing next to the cashier (even with plexiglass) for the significantly longer time it takes to ring up and bag groceries.

In contrast, classrooms are darn near the worst-case scenario. My school is switching to 3-hour blocks for in-person courses, to keep students from having to come to campus as often, which seems good on the surface. They’re shrinking sections so that people can stay six feet apart, which is a positive step, if you take “there must be in-person classes” as a given. Staff and faculty are required to wear masks; it’s unclear whether they are requiring or merely encouraging masks for students, but if it’s the latter, nobody should agree to teach in-person. Unless the school installs sound amplification equipment, the professor and anyone else who needs to talk will be doing so at a loud volume, which disperses droplets further than talking in a quiet voice; also, to project their voice, a professor will have to breathe deeply, possibly putting themself at more risk. Classrooms are notoriously under-ventilated, and people will be trapped together for three hours, now. As for the rest of campus, our faculty share small offices, and we (humanity at large, not just my college) don’t know how big a risk restrooms are—ours have very narrow entrance/exit areas, though, so even if flushing turns out not to spray live viral particles into the air, I’d rate ours on the higher end of the risk spectrum. The library on my campus has a narrow entrance/exit, too, and there’s no good way to sanitize books between users. Basically, reopening campus is going to be a nightmare, and I am worried about my colleagues who are agreeing to go back and about our students. Our Provost put in writing that nobody would be required to teach face-to-face in the fall, so at least they’re being cool about part of it. (I hope the librarians and other employees aren’t forced into the building either!)

I mentioned that more people are sick than we know about. That’s true, and it doesn’t seem to be something anyone in government is taking into account, at all. There are a number of different tests being administered in the US, with varying (largely unpublished) levels of sensitivity (meaning they give false negatives at varying and largely unknown rates); on top of that there have been delays in processing (which make the test less useful), combined with improper test procedure (not sticking the swab far enough into the nasal passage to get a proper sample), combined with the disease moving from the upper to the lower respiratory tract several days after onset of symptoms (so a nasal swab won’t catch it, anyway). There are definitely people out there who have tested negative despite having COVID-19, and the official statistics ignore their existence.

Since I recently ran these numbers for someone in a professional capacity, I’ll use Pennsylvania as an example of what this looks like at scale: It’s generally accepted that COVID-19 has an infection fatality rate of less than 2%. (I just found a paper that quoted 1.04% globally.) Using last week’s numbers, if there had really only been 76,000-some cases in Pennsylvania, and more than 5,900 had died, that would imply that PA’s infection fatality rate was something like 7.7%. There’s no way that’s true, when our hospitals never reached capacity; it’s much more likely that there have actually been at least 295,000 cases in Pennsylvania, using the 2% infection fatality rate number. (It would be even more if we assumed 1.04%.) That is, frankly, terrifying. My governor is making decisions based on the 76,000 number, not on the 295,000 number. I bet yours is doing something similar.
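To make the arithmetic explicit, here is the same back-of-the-envelope calculation as a few lines of Python. The counts are the rounded figures quoted above, and the fatality rates are the assumed values from the paragraph, not new data:

    # Implied true case counts from reported deaths and an assumed
    # infection fatality rate (IFR): cases = deaths / IFR.
    reported_cases = 76_000   # official PA case count, rounded
    deaths = 5_900            # official PA death count, rounded

    implied_ifr = deaths / reported_cases   # ~7.7-7.8%: implausibly high
    cases_at_2pct = deaths / 0.02           # 295,000
    cases_at_1_04pct = deaths / 0.0104      # ~567,000

    print(f"IFR implied by official counts: {implied_ifr:.1%}")
    print(f"Implied cases at a 2% IFR:    {cases_at_2pct:,.0f}")
    print(f"Implied cases at a 1.04% IFR: {cases_at_1_04pct:,.0f}")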

Now, I left the “actually, COVID-19 is terrifying” information out of this message. I’m willing to tell you all about how it might affect the brain, how children are coming down with a secondary infection which makes “it’s not dangerous for children” a very dangerous lie, about how the lungs are affected, about how the heart and kidneys and liver are apparently also affected in some cases, and so on. I can talk about previously healthy 20-somethings who suffered strokes after recovery. I can, as someone on Twitter suggested, Google “COVID lung transplant” and let you know what I find. I can go on to talk about how, even in mild cases where someone recovers fully, it takes over a month to get better. But if you’re prepared to just believe me that it’s a very dangerous disease, even for people without risk factors making it more so, I’m honestly thrilled not to have to summarize so much gory stuff. If you want to know it, though, I’m willing to write it up.

Supporting Open Source Software / David Rosenthal

In the Summer 2020 issue of Usenix's ;login:, Dan Geer and George P. Sieniawski have a column entitled Who Will Pay the Piper for Open Source Software Maintenance? (it will be freely available in a year). They make many good points, some of which are relevant to my critique in Informational Capitalism of Prof. Kapczynski's comment that:
open-source software is fully integrated into Google’s Android phones. The volunteer labor of thousands thus helps power Google’s surveillance-capitalist machine.
Below the fold, I discuss "the volunteer labor of thousands".

I pointed out that very few kernel developers are unpaid volunteers:
even if one assumes that all of the “unknown” contributors are working on their own time, well over 85 percent of all kernel development is demonstrably done by developers who are being paid for their work. ... kernel developers are in short supply, so anybody who demonstrates an ability to get code into the mainline tends not to have trouble finding job offers. Indeed, the bigger problem can be fending those offers off. As a result, volunteer developers tend not to stay that way for long.
This was relevant to Kapczynski's comment because the vast bulk of the open-source code in Android is the kernel plus user-level code developed and supported by Google. However, Android is a very small part of the universe of open source, so I pointed out that the bigger picture is much less rosy:
It is definitely the case that there are gaps in this support, important infrastructure components dependent on the labor of individual volunteers.
Catalin Cimpanu illustrates the scale of the inadequate support problem in Vulnerabilities in popular open source projects doubled in 2019:
A study that analyzed the top 54 open source projects found that security vulnerabilities in these tools doubled in 2019, going from 421 bugs reported in 2018 to 968 last year.

According to RiskSense's "The Dark Reality of Open Source" report, released today, the company found 2,694 bugs reported in popular open source projects between 2015 and March 2020.

The report didn't include projects like Linux, WordPress, Drupal, and other super-popular free tools, since these projects are often monitored, and security bugs make the news, ensuring most of these security issues get patched fairly quickly.

Instead, RiskSense looked at other popular open source projects that aren't as well known but broadly adopted by the tech and software community. This included tools like Jenkins, MongoDB, Elasticsearch, Chef, GitLab, Spark, Puppet, and others.

RiskSense says that one of the main problems they found during their study was that a large number of the security bugs they analyzed had been reported to the National Vulnerability Database (NVD) many weeks after they've been publicly disclosed.

The company said it usually took on average around 54 days for bugs found in these 54 projects to be reported to the NVD, with PostgreSQL seeing reporting delays that amounted to eight months.
Czech firm JetBrains surveyed nearly 20K developers for their annual Developer Ecosystem survey:
And when asked if they contributed to open-source projects:
  • 44% said "No, but I would like to."
  • 20% said "I have only contributed a few times."
  • 16% said "Yes, from time to time (several times a year)."
  • 11% said "Yes, regularly (at least once a month)."
  • 4% said "No, and I would not like to."
  • 3% said "I work full-time on open-source code and get paid for it."
  • 2% said "I work full-time on open-source code but do not get paid for it."
So only 5% of developers work full-time on open source, and only 16% devote any significant proportion of their time to it. For 36% of developers, contributing is something they do rarely, probably only to fix an annoying bug they encounter. A significant improvement would be if some way could be found to encourage half the "No, but I would like to" developers to contribute rarely, getting occasional contributors to 58% of the population.

Geer and Sieniawski address maintenance of open source software (OSS):
Although there is "a high correlation between being employed and being a top contributor to" OSS, sustaining it takes more than a regular income stream. Long-term commitment to open source stewardship is also essential, as is budgeting time for periodic upkeep. For perspective, consider that 36% of professional developers report never contributing to open source projects, with another 28% reporting less than one open source contribution per year. Thus, despite more direct enterprise engagement with open source, risk-averse attitudes towards licensing risk and potential loss of proprietary advantage endure by and large. Consider further Table 1, which shows how concentrated contribution patterns are, particularly in JavaScript, and thus where additional OSS maintenance support could have an outsized impact.
Here is their Table 1:

Top 50 Packages | Primary Language | Language Rank 2019 | Language Rank 2018 | Average Dependent Projects | Average Direct Contributors
npm             | JS               | 1                  | 1                  | 3,500,000                  | 35
Pip             | Python           | 2                  | 3                  | 78,000                     | 204
Maven           | Java             | 3                  | 2                  | 167,000                    | 99
NuGet           | .NET/C++         | 6                  | 5                  | 94,000                     | 109
RubyGems        | Ruby             | 10                 | 10                 | 737,000                    | 146

Thirty-five people maintaining code with 100,000 dependent projects each is surely a problem, especially when you consider how vulnerable the JavaScript supply chain is, and how tempting a phishing target each of those maintainers is for cryptojackers and other miscreants. To illustrate the problem, Backstabber’s Knife Collection: A Review of Open Source Software Supply Chain Attacks by Marc Ohm, Henrik Plate, Arnold Sykosch and Michael Meier:
presents a dataset of 174 malicious software packages that were used in real-world attacks on open source software supply chains, and which were distributed via the popular package repositories npm, PyPI, and RubyGems.
Nadia Eghbal's 143-page 2016 report for the Ford Foundation Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure is a comprehensive account of the need for support of essential open source infrastructure outside the kernel. She concludes:
In the last five years, open source infrastructure has become an essential layer of our social fabric. But much like startups or technology itself, what worked for the first 30 years of open source’s history won’t work moving forward. In order to maintain our pace of progress, we need to invest back into the tools that help us build bigger and better things.

Figuring out how to support digital infrastructure may seem daunting, but there are plenty of reasons to see the road ahead as an opportunity.

Firstly, the infrastructure is already there, with clearly demonstrated present value. This report does not propose to invest in an idea with unknown future value. The enormous social contributions of today’s digital infrastructure cannot be ignored or argued away, as has happened with other, equally important debates about data and privacy, net neutrality, or private versus public interests. This makes it easier to shift the conversation to solutions.

Secondly, there are already engaged, thriving open source communities to work with. Many developers identify with the programming language they use (such as Python or JavaScript), the function they provide (such as data science or devops), or a prominent project (such as Node.js or Rails). These are strong, vocal, and enthusiastic communities. The builders of our digital infrastructure are connected to each other, aware of their needs, and technically talented. They already built our city; we just need to help keep the lights on so they can continue doing what they do best.

Infrastructure, whether physical or digital, is not easy to understand, and its effects are not always visible, but this should compel us to look more, not less, closely. When a community has spoken so vocally and so often about its needs, all we need to do is listen.
Around the same time as Eghbal, Cameron Neylon wrote about the related problem of infrastructure for academic research in Squaring Circles: The economics and governance of scholarly infrastructures. I discussed it in Cameron Neylon's Squaring Circles, and he expanded on it in his 2017 paper Sustaining Scholarly Infrastructures through Collective Action: The Lessons that Olson can Teach us.

Neylon starts by identifying the three possible models for the sustainability of scholarly infrastructures:
Infrastructures for data, such as repositories, curation systems, aggregators, indexes and standards are public goods. This means that finding sustainable economic models to support them is a challenge. This is due to free-loading, where someone who does not contribute to the support of the infrastructure nonetheless gains the benefit of it. The work of Mancur Olson (1965) suggests there are only three ways to address this for large groups: compulsion (often as some form of taxation) to support the infrastructure; the provision of non-collective (club) goods to those who contribute; or mechanisms that change the effective number of participants in the negotiation.
In other words, the choices for sustainability are "taxation, byproduct, oligopoly". Applying them to open source support:
  • Taxation conflicts with the "free as in beer, free as in speech" ethos of open source.
  • Byproduct is, in effect, the "Red Hat" model of free software with paid support. Red Hat, the second-place contributor to the Linux kernel, was worth $34B when IBM acquired it last year. Others using this model may not have been quite as successful, but many have managed to survive (the LOCKSS program runs this way) and some to flourish (e.g. Canonical). 
  • Oligopoly is what happens in practice. Take, for example, the Linux Foundation, which is:
    supported by members such as AT&T, Cisco, Fujitsu, Google, Hitachi, Huawei, IBM, Intel, Microsoft, NEC, Oracle, Orange S.A., Qualcomm, Samsung, Tencent, and VMware, as well as developers from around the world
    It is pretty clear that the corporate members, and especially the big contributors like Intel, have more influence than the "developers from around the world".
In 2013 Jack Conte, together with Sam Yam, launched Patreon, a platform through which "users" of artists' products, such as the YouTube music videos Conte and his wife Nataly Dawn make as Pomplamoose, can support them with small monthly payments. Since then Patreon has transferred over a billion dollars from its now more than 5M "patrons" to its more than 150K members.

Linux Mint donations
Among the Patreon members I patronize is the Linux Mint distro; I use it on many of my computers. Mint raises $10-15K/month in donations, of which about $2.5K/month comes via Patreon. A fairly small proportion of a fairly small income stream, but unlike the rest of the donations it is dependable, regular income. Mint only joined Patreon quite recently, and hasn't been aggressive about marketing its membership. But I think many of their users would be willing to pay Mint's basic $5/month Patreon tier.

Although between the X Window System and the LOCKSS Program I have made contributions to open source software, when writing this post I realized that since my retirement my only contribution has been a fix to a minor but annoying bug in touchpad-indicator, which is an important part of my Linux environment on Acer C720 Chromebooks. I need to do better in future.

The Provenance of Facts / Mita Williams

Brian Feldman has a newsletter called BNet and on May 30th, he published an insightful and whimsical take on facts and Wikipedia called mysteries of the scatman.

The essay is an excellent reminder that if a fact without proper provenance makes its way into Wikipedia and is then published in a reputable source, it is nearly impossible to remove said fact from Wikipedia.

Both the Scatman John and “Maps” issues, however, point to a looming vulnerability in the system. What happens when facts added early on in Wikipedia’s life remain, and take on a life of their own? Neither of these supposed truths outlined above can be traced to any source outside of Wikipedia, and yet, because they initially appeared on Wikipedia and have been repeated elsewhere, they are now, for all intents and purposes, accepted as truth on Wikipedia. It’s twisty.

mysteries of the scatman

This is not a problem of Wikipedia alone. Last year I addressed a similar issue in an Information Literacy class for 4th-year Political Science students, when I encouraged students to follow the citation pathways of the data that they planned to cite. I warned them not to fall for academic urban legends:

Spinach is not an exceptional nutritional source of iron. The leafy green has iron, yes, but not much more than you’d find in other green vegetables. And the plant contains oxalic acid, which inhibits iron absorption.

Why, then, do so many people believe spinach boasts such high iron levels? Scholars committed to unmasking spinach’s myths have long offered a story of academic sloppiness. German chemists in the 1930s misplaced a decimal point, the story goes. They thus overestimated the plant’s iron content tenfold.

But this story, it turns out, is apocryphal. It’s another myth, perpetuated by academic sloppiness of another kind. The German scientists never existed. Nor did the decimal point error occur. At least, we have no evidence of either. Because, you see, although academics often see themselves as debunkers, in skewering one myth they may fall victim to another.

In his article “Academic Urban Legends,” Ole Bjorn Rekdal, an associate professor of health and social sciences at Bergen University College in Norway, narrates the story of these twinned myths. His piece, published in the journal Social Studies of Science, argues that through chains of sloppy citations, “academic urban legends” are born. Following a line of lazily or fraudulently employed references, Rekdal shows how rumor can become acknowledged scientific truth, and how falsehood can become common knowledge.

“Academic Urban Legends”, Charlie Tyson, Inside Higher Ed, August 6, 2014

I’m in the process of working on an H5P learning object dedicated to how to calculate one’s H-Index (a sketch of the calculation follows the quotation below), and yet I’m conflicted about doing so. There are many reasons, far beyond the occasional academic urban legend, why using citations as a measure of an academic’s value is problematic:

To weed out academic urban legends, Rekdal says editors “should crack down violently on every kind of abuse of academic citations, such as ornamental but meaningless citations to the classics, or exchanges in citation clubs where the members pump up each other’s impact factors and h-indexes.”

Yet even Rekdal – who debunks the debunkers – says his citation record isn’t flawless.

“I have to admit that I published an article two decades ago where I included an academically completely meaningless reference (without page numbers of course) to a paper written by a woman I was extremely in love with,” he said. “I am still a little ashamed of what I did. But on the other hand, the author of that paper has now been my wife for more than 20 years.”

“Academic Urban Legends”, Charlie Tyson, Inside Higher Ed, August 6, 2014
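As for the sketch promised above: the calculation itself is simple to state. An author’s h-index is the largest number h such that they have at least h papers with at least h citations each. A minimal sketch in Python, with made-up citation counts:

    def h_index(citations):
        """Return the largest h such that at least h papers have >= h citations."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for one author's papers.
    print(h_index([25, 8, 5, 4, 3, 2, 0]))  # prints 4: four papers have >= 4 citations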

Talk Talk / Ed Summers

I recently reviewed two books about web archiving for American Archivist and thought it was worth mentioning here so I could 1) link to a nice free PDF version, and 2) situate it a bit more since I’m off social media (e.g. Twitter) at the moment.

The review is of Ian Milligan’s History in the Age of Abundance? and The Web as History edited by Niels Brügger and Ralph Schroeder. I was asked to review both of them together, and no arm twisting was involved because I wanted to read both of these books for my dissertation research. I should say that I’ve met the authors before, and was already a fan of their work, particularly because of how they have been instrumental in developing web archives as a practice. I kind of wish they had me review Brügger’s The Archived Web, but that’s for another time.

The review turned into a bit of a complicated little dance because I wanted readers to get the sense of web archives not only as useful for historians wanting to do their work in understanding the past, but also as technical constructions that create our present. I invoked Hugh Taylor’s still highly relevant criticism of archivy as not concerned enough with how archives shape our lived experience, online and offline. Consider for example the current debate around facial recognition technologies that are built upon archives of images collected from the web.

But most of all, the thing that emerged while writing this brief review was a recognition of how strange it is that the records of web archives (the WARC files) are almost entirely unavailable for study by the public. You need to know a secret handshake to get access to them, and sometimes they are simply not available. This is even the case for the most public of public web archives, the Internet Archive. The contents of WARC files are made available piecemeal via interfaces such as the Wayback Machine, which renders what a particular URL looked like at a given time. But most of the analyses that Milligan, Brügger and Schroeder discuss are related to understanding web archives as data, or Collections as Data if you will.
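To make concrete what treating a web archive as data might look like, here is a minimal sketch that iterates over the records of a WARC file using the open source warcio library; the filename is hypothetical:

    # List the capture date and target URL of every response record
    # in a (hypothetical) local WARC file.
    from warcio.archiveiterator import ArchiveIterator

    with open("example.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":
                print(record.rec_headers.get_header("WARC-Date"),
                      record.rec_headers.get_header("WARC-Target-URI"))

Of course, a sketch like this only works if you can get hold of the WARC files in the first place, which is exactly the access problem described above.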

Web archives, like our non-web archives, are technical constructions for achieving particular goals. The absence of WARC data from the service offerings of web archives is a curious phenomenon. Imagine going to visit an archive, reading a finding aid, and locating a box in the inventory, but not being able to request the folders within it. Perhaps that’s not the right analogy; perhaps it’s like requesting an entire record series, or group? That would not be workable in most archives. But when we situate distant reading practices in web archives we must consider who it is who can do that reading, and why.

I’m not sure I articulated this very well in the review, but I’d be interested to hear what you think, either here or at ehs@pobox.com

Harnessing open data for agricultural business opportunities in the DRC: Open Data Day 2020 report / Open Knowledge Foundation

On Saturday 7th March 2020, the tenth Open Data Day took place with people around the world organising over 300 events to celebrate, promote and spread the use of open data. Thanks to generous support from key funders, the Open Knowledge Foundation was able to support the running of more than 60 of these events via our mini-grants scheme.

This blogpost is a report from Young Professionals for Agricultural Development (YPARD) in the Democratic Republic of the Congo, which received funding from the Foreign and Commonwealth Office to help young and female agricultural entrepreneurs explore how they can use open data to create new businesses. This blogpost was originally published in French.

The Open Knowledge Foundation, as an institution that champions open data, selected around sixty organisations across the world to receive a mini-grant to organise Open Data Day on 7th March, and Young Professionals for Agricultural Development (YPARD/DRC) was among the 67 organisations to benefit from this fund. 

Open data: an opportunity for agribusiness in the DRC

It was under this theme that @ypard_rdc organised, on 7th March 2020, an information and awareness session for 25 young people gathered at the @HabariRDC office, with the support of #OpenKnowledgeFoundation.

Free access to data, especially agricultural data, is a topical subject in the digital era, where questions of global scope arise with urgency. Sharing new solutions is a necessity, all the more so because it opens up a wide range of information related to agricultural productivity, understanding weather conditions and sharing innovations, and offers better opportunities in agricultural entrepreneurship, according to Eden Mvuenga, one of the day’s speakers.

For Lisette Ntumba, a young entrepreneur and member of YPARD DRC: “Open data remains important for us young agricultural entrepreneurs who invest in fruit processing, because it equips us better with information about packaging, nutritional qualities and other useful data on fruit processing.”

It should be noted that there is always a link between agribusiness and free access to open agricultural data, and several portals and online platforms offer ways to access agricultural data, Eden Mvuenga added. Moreover, the internet remains essential as a means of accessing open data. 

To help the young people better understand the key concepts of open data, as well as the missions of the Open Knowledge Foundation and of Global Open Data for Agriculture and Nutrition (GODAN), the participants discussed the fundamental criteria that characterise open data: accessibility, availability, redistribution and reuse.

With supporting examples, the young people shared their experiences of accessing data, despite running into certain difficulties, such as the absence of open and/or liberalised agricultural data from state institutions via a governmental open data portal, and the lack of information about the field of open data. 

Thus, as Marlene Kabemba, a participant in the Open Data Day session, noted, access to open data is a good opportunity, especially as YPARD DRC has been working on it for several years, although open data still has a long way to go in the DRC. It is therefore imperative to promote it with a view to its integration into state bodies, especially those in charge of young people, so that the latter have the chance to see their projects succeed thanks to open and freely accessible information.  

Before closing the day, several young people proposed that YPARD DRC think about creating OpenDataRDC (a structure that would deal solely with promoting open, free and accessible data in the DRC).

Teaching students to tell stories with budget data in Guatemala: Open Data Day 2020 report / Open Knowledge Foundation

On Saturday 7th March 2020, the tenth Open Data Day took place with people around the world organising over 300 events to celebrate, promote and spread the use of open data. Thanks to generous support from key funders, the Open Knowledge Foundation was able to support the running of more than 60 of these events via our mini-grants scheme.

This blogpost is a report from Ojoconmipisto in Guatemala, which received funding from Hivos to teach students and journalists how to investigate and tell stories from public budget and contracting data. This blogpost was originally published in Spanish.

Ojoconmipisto participó el pasado 7 de marzo en la celebración mundial del Open Data Day. Organizó un taller dirigido a estudiantes de periodismo y periodistas en ejercicio, donde se habló de datos abiertos, compras públicas, fiscalización herramientas, acceso a la información y periodismo. Esta es la primera que el medio guatemalteco organiza la actividad, en las cuatro ediciones anteriores asistió como participante. 

On 5th March it listed the activity on the Opendataday.org events map, which registered 305 events. Ojoconmipisto was one of 19 participants in the “tracking public money flows” category. The call for registration, announcing room for 25 people, went out on its social media channels (Facebook and Twitter), and at least 168 people viewed the online form.

The gathering was held at Hakuna Matata 2, a venue in Zone 13 of Guatemala City. Of the 27 people who registered, 21 attended, among them students from the Universidad del Istmo, Universidad Regional, and Universidad Galileo, university lecturers, and journalists interested in open data.

The activity ran from 14:30 to 18:30 and consisted of four talks: three about data and one about the Access to Information Law, which is a working tool for Ojoconmipisto.

The first, given by Daniel Ambeliz, author of a study on antiretroviral prices, focused on digital resources such as Power BI, a program for analysing databases to create visualisations and make data easy to understand. Together with the participants he ran a practical exercise to identify possible angles and striking data points for an investigation.

The second session was led by Silvio Gramajo, a specialist in transparency issues, who spoke about the importance of the Access to Information Law, accountability, and the use of open data to build citizenship. It was streamed on Facebook Live, registering 343 views and reaching 1,673 people.

The third was given by Isaias Morales, an Ojoconmipisto reporter, who presented a guide to scrutinising public works through journalism. The document is part of the “Obras bajo la lupa” project, carried out with Open Contracting and Hivos, which monitors 40 municipal construction projects. The guide explains the processes involved and how to find stories using the Guatecompras portal, the system that records all purchases and contracts made with public funds.

To close the day, Francelia Solano, a reporter at Nómada, gave the talk “un dato, una historia” (“one data point, one story”), sharing her experience of using data to investigate the country’s mayors.

When the activity ended, students approached the two Ojoconmipisto reporters with questions and requests for guidance on the investigative reporting their universities require. At least four of them stayed in contact with the team.

The activity was tweeted from the Ojoconmipisto account with the hashtags #OpenDataDay and #ODD2020: 26 tweets in all, with interaction from attendees.

Celebrating open data initiatives in Paraguay: Open Data Day 2020 report / Open Knowledge Foundation

On Saturday 7th March 2020, the tenth Open Data Day took place with people around the world organising over 300 events to celebrate, promote and spread the use of open data. Thanks to generous support from key funders, the Open Knowledge Foundation was able to support the running of more than 60 of these events via our mini-grants scheme.

This blogpost is a report from Girolabs in Paraguay who received funding from Mapbox to showcase local initiatives using and producing open data. It was originally published in Spanish.

Once again Paraguay’s data community answered the call of the annual worldwide open data celebration. That makes eight years since its first edition, and we are proud and eager to tell you what happened!

Our ODD gathering took place on 7th March at Loffice Las Mercedes, a coworking space in our country’s capital that has become a symbolic venue for the topic. We welcomed 70 people (not bad for a Thursday night). The activity was organised by the social initiative CIVILAB with the collaboration of Girolabs and Fundación CIRD.

For this edition we combined the ODD event with the social initiative “Te Invito un Tere” (TIUT) as an organising and coordination strategy. But what is Te Invito un Tere?

It is a collective space for exchange and learning, where experiences of citizen participation in a wide range of settings are shared.

Nine editions have been held since 2017, exploring education, citizen participation, transparency, and the environment, among other topics. Every edition is relaxed and informal, fostering an atmosphere of dialogue and collaboration.

Why did we decide to bring these two platforms together?

Because of the social importance of tereré as a symbol that brings us together to talk about anything, and because ODD and TIUT share the same principles: learning, unlearning, and learning again through collaboration and participation.

What format did the presentations take, and what were this year’s standout topics?

As a special addition, this time we opted for “round-the-clock” presentations, meaning sessions ran simultaneously in the rooms and during the break we usually reserve for networking. Following the data-and-tereré theme, we named the three presentation rooms after traditional medicinal herbs: zarzaparrilla, kapi’i katî, and menta’i, which without doubt gave the open data topics a distinctly Paraguayan flavour and a sense of local ownership.

We presented Mapasocial. Marco Aponte, a founding partner of Civilab, presented the project, called Mapa Social: a digital platform that makes it possible to donate to civil society organisations and even lets people sign up as volunteers for their programmes.

The site currently lists 150 non-governmental organisations, and the aim is to reach some 300 organisations before the end of the year.

Joining the directory is simple: organisations just request access on the site and fill in their details. “The idea is to bring all the actors together in a single channel, making us truly a social map.”

The organisations are georeferenced, and the site includes an open data viewer as well as the option to download the database as CSV.

We are pleased to report that this edition drew topic proposals from all three branches of the state, from academia, and from the civil society sector. This delights us, as it reflects an increasingly well-rounded view of the many angles from which open data can be approached. Below is a brief rundown of the topics presented:

Zarzaparrilla Room

  • Data Protection Law in Paraguay (Martín Oxilia Aponte)
  • Educational Data in Paraguay: Perspectives and Challenges (Marcos Miranda – Juntos por la Educación)
  • Defensores, a platform for systematising torture cases in Paraguay, built with tools from the Open Knowledge Foundation’s Frictionless Data programme

Menta’i Room

  • Citizen participation, public hearings, and access to public information
  • Transparency as a condition of public ethics (Academia – Viviana Romero)
  • “Open data for global solutions”: the experience of taking part in NASA’s “Space Apps” hackathon (Giselle Ramírez Rojas)

Kapi’i katî Room

  • The new open data portal of the Ministry of Information and Communication Technologies (MITIC) and the Ministry of Agriculture (MAG)
  • Citizen Budget (Centro de Estudios Ambientales y Sociales)
  • Open data cartography (Katrina Lisnichuk, Diego Bernal and Tomás López – MapPyOSM)
  • Open data from the Judiciary

Metadata management in times of uncertainty / HangingTogether

Coronavirus image from WebMD

That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Erin Grant of the University of Washington, Jennifer Baxmeyer of Princeton, Roxanne Missingham of Australian National University, and Suzanne Pilsk of the Smithsonian Institution. The COVID-19 crisis has caused a dramatic change in how libraries deliver services to patrons. Many libraries have increased the number of e-book acquisitions to meet the continuing research, teaching, and learning demands of their institutions. Discovery of resources through catalogs and federated search services has become more important than ever before. In addition, working from home has become the “norm,” but not for everyone: staff who process physical material are unable to do so from home, so they are either not working or their work assignments have changed. Metadata managers have been learning many lessons from this period about how they should think about future crises and about how they might operate differently once staff return to their physical workplaces.

Most libraries closed abruptly, and few staff had previous experience with working remotely. The issues shared among the metadata managers in three virtual discussions and 58 pages of commentary from nine countries are summarized below.

Existing or new metadata work that could be done remotely: Online resource and digital collections work translated very well to working from home. Libraries could request and process invoices digitally. Some catalogers were able to take physical materials home to catalog, but others couldn’t. Libraries had to experiment with new workflows to accommodate copy cataloging physical materials without the item in hand using online accession lists, spreadsheets, or scanning specific pages of materials serving as digital surrogates. The University of Sydney produced this short video of scanning rare books with mobile phones so that other staff could catalog them from home.

The general shift “from p to e” (print materials to electronic versions) that had started pre-pandemic accelerated as libraries swiftly had to support online instruction when their campuses closed. Staff who had previously focused on processing print collections had to quickly learn to catalog electronic materials instead, and this period saw a surge in cross-team training and re-training to carry out essential tasks supporting key services. Shifting to online thus required a much more holistic approach across the library, leading to much discussion, juggling, and ad-hoc training sessions.

Cataloging print materials was generally deferred to when staff could return to the libraries. With all professional conferences cancelled, more time could be directed to research, writing, and participating in webinars. Staff had more opportunities for professional development of their skills. Administrative work such as budgeting, writing reports, and performance reviews continued, with staff meetings moved to video conferences. More time could be devoted to what had previously been long-deferred “rainy day” tasks such as authority work, processing backlogs, database maintenance, record mediation and enhancement, correcting holdings, writing and reviewing documentation, troubleshooting metadata issues surfaced in the discovery systems, automation projects (such as automatically generating MARC records from spreadsheets), and writing new code for in-house applications. This experience highlighted how much of current metadata workflows are normally driven by physical collections at some institutions.
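
One of those automation projects, generating MARC records from spreadsheets, is straightforward to script. As a minimal illustrative sketch (assuming Python with the pymarc library and its pymarc 5 Subfield API; the file names and column headings are invented for the example), a batch of brief bibliographic records could be built like this:

    import csv
    from pymarc import Record, Field, Subfield, MARCWriter

    # Hypothetical input: a spreadsheet export with 'isbn', 'author',
    # and 'title' columns, one row per item to be cataloged.
    with open('titles.csv', newline='', encoding='utf-8') as src, \
         open('brief_records.mrc', 'wb') as dst:
        writer = MARCWriter(dst)
        for row in csv.DictReader(src):
            record = Record()
            record.add_field(
                Field(tag='020', indicators=[' ', ' '],
                      subfields=[Subfield('a', row['isbn'])]),
                Field(tag='100', indicators=['1', ' '],
                      subfields=[Subfield('a', row['author'])]),
                Field(tag='245', indicators=['1', '0'],
                      subfields=[Subfield('a', row['title'])]),
            )
            writer.write(record)  # serialize each record as binary MARC21
        writer.close()

Records produced this way are deliberately brief; the point of such a workflow is to get discoverable placeholder metadata into the catalog quickly, with enrichment deferred to a later pass.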

The impact of shifting services to online during this crisis on metadata work:

  • Focused greater attention on the value of consistent, accurate, and complete metadata.
  • Increased the importance of batch editing metadata rather than editing records one at a time (see the sketch after this list).
  • Highlighted the suitability of carrying out added-value enhancement activities remotely and offline.
  • Gave managers time to think through the barriers that held staff back from tackling rainy day projects, and to think of better ways to tackle new and existing work.
  • Made libraries aware that they need to be more flexible in staff’s work arrangements: laptops instead of desktop computers; rethinking remote work arrangements with less time spent on-site; recognizing that more people can do multiple tasks.
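
To make the batch-editing point concrete, here is a minimal sketch (a generic illustration in Python with pymarc, not any particular library’s workflow; the file names and the obsolete tag are invented) of applying one change across an entire file of exported records:

    from pymarc import MARCReader, MARCWriter

    # Stream through a file of exported MARC records, apply the same
    # edit to each one, and write the cleaned records to a new file.
    with open('export.mrc', 'rb') as src, open('cleaned.mrc', 'wb') as dst:
        writer = MARCWriter(dst)
        for record in MARCReader(src):
            record.remove_fields('956')  # '956' stands in for an obsolete local tag
            writer.write(record)
        writer.close()

The same loop extends naturally to heading cleanup or URL corrections, which is why one scripted pass is so much faster than record-at-a-time editing.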

Successful managerial support of metadata staff in their transition to working from home:  All library staff, regardless of function, had managerial support to make sure that they had the equipment to work from home. Metadata managers all praised their IT staff who made sure everyone could connect to a virtual environment in record time.  They stressed the importance of regular, frequent check-in virtual meetings, individual and team support, clear email communications from library administrators, and acknowledging the stress of balancing or juggling work, home, and family responsibilities. Metadata staff take pride in their productivity, and managers had to develop new metrics or performance indicators replacing “number of records created.” Distributing tasks equally so that all staff felt valued and fulfilled has been a common challenge.

Addressing the technical challenges of switching to an all-virtual environment: Library staff switched to an all-virtual environment amazingly quickly! Only a few had worked from home previously—moving everyone to working from home was a radical change, and managers had insufficient time to review systems for remote access for the full range of work activities. Some libraries were able to provide staff with laptops or Chromebooks and set up hot spots for those with no Internet access or insufficient bandwidth. Many staff were unfamiliar with the technologies needed for remote work, and managers soon became aware of the “digital divide” among their staff. Some library systems did not offer full functionality for off-site cataloging work, and some libraries installed a “remote work space” on Amazon Web Services to restore basic functionality of their local library system. Managers and IT staff had to spend a lot of time helping staff set up VPNs and other remote access mechanisms. Not all staff were able to fully work from home because of these issues.

Zoom and Microsoft Teams kept the work—and workplace—going and teams connected. Some thought these online meetings more efficient than their previous face-to-face meetings, while others commented on “Zoom fatigue.” Virtual “fireside chats” kept teams connected, purposeful, and valued, and offered a venue where staff could help each other work in difficult circumstances. Morale has been declining as colleagues have been furloughed and institutions forecast drastic budget cuts.

Dealing with skill gaps and training: Staff training and development have been core to the success of metadata operations in this period, which has highlighted gaps in staff skills to be addressed. Training courses had to be quickly put in place to update the technical skills of some metadata specialists, and whole teams had to be upskilled on e-book metadata processes. Staff who had no work without access to the physical collections had to be reassigned to new tasks that often required training and mentoring. HathiTrust members in the United States took advantage of HathiTrust’s Emergency Temporary Access Service (ETAS), which legally gave them online access to materials corresponding to their physical collections. The Internet Archive gave access to a National Emergency Library (NEL), a temporary collection of books supporting emergency remote teaching, research activities, and independent scholarship while universities, schools, and libraries were closed, until 16 June 2020. Managers had to take a project management approach to traditional workflows, prioritizing tasks with an emphasis on keeping up with the sudden surge in e-book purchases resulting from campuses offering only online instruction and services, and matching staff capabilities to a myriad of new projects. Metadata managers had long wanted to do more cross-training, and this crisis expedited it. The situation also increased collaboration with other departments, such as ILL, e-resources, research support, and IT services.

Rethinking workflows and collections: Workflows were rethought, with managers taking an agile, rapid approach. Time-consuming manual processes that now seemed impractical and archaic were replaced. For example, licenses that used to be sorted into a bright red folder hand-carried up and down the chain of command for review, comment, and signature became a “virtual red folder” that vastly sped up and streamlined the license vetting process. Tedious steps have been eliminated, and more editing of genre and subject headings and other metadata enhancements have been moved to batch processing. Working on these metadata enhancement projects remotely has increased metadata specialists’ understanding of the users’ view of the catalog through the discovery layer, rather than only the catalogers’ view. Without a physical collection to refer to, metadata specialists have a deeper appreciation for data quality control and the need for better metadata at the earliest stages of processing.

The most fundamental change in this period has been for libraries to move from thinking about their collections to expanding access to electronic resources beyond their institutions. Libraries soon learned that even with HathiTrust’s valued ETAS and the Internet Archive’s NEL, not all the content needed by academics was available digitally.

Changes to carry over into future metadata workflows post-pandemic: With three months’ experience of staff working solely from home, metadata managers have been thinking about the changes they had to make in the COVID-19 period that may become permanent when restrictions are lifted. First among them is more flexibility in work arrangements. This period has demonstrated that much metadata work can be done remotely. Staff with long commutes would welcome the chance to work at least a few days a week from home. Metadata managers anticipate that staff working in acquisitions and metadata creation will continue to have expanded teleworking opportunities. “Teleworking is here to stay.”

Reflections on positive experiences from this crisis that metadata managers expect will continue included:

  • Increased technology skills of staff and more staff willingness to learn new skills.
  • Increased communication with peers and other departments, and more metadata involvement with other divisions’ projects such as research support.
  • On-site work devoted only to tasks that must be done in person.
  • Increased compassion for others’ circumstances.
  • Fewer silos of format-specific specialties, replaced by staff with skills to handle multiple formats.
  • More reliance on vendor metadata and batch processes, and less reliance on manual metadata record creation and maintenance, with reduced handling of material.
  • More acceptance of cataloging from digital surrogates.
  • More staff flexibility in accepting new tasks such as research management support, research data, and researcher identity management.
  • Continued video conferencing to facilitate teamwork among dispersed staff, replacing or supplementing physical meetings.
  • More institutional awareness of the importance of metadata, link maintenance, digitization activities, and database maintenance for users’ discovery of resources.

The complexities involved indicate a future with a hybrid working model as not all metadata work can be done from home. Effective teams will likely require a mixture of face-to-face and online interactions.

The increased reliance on electronic and digital resources during this period will also likely accelerate institutions’ desire to digitize more of the archival and distinctive collections that have been available only in physical form. The importance of online access to collections has never been demonstrated so compellingly as during the COVID-19 crisis. As academic courses will likely continue to be offered online (and as of this writing it’s still unknown when face-to-face classes will resume), more staff will need to be shifted to digital resources—those acquired by the library, those digitized from the library’s physical collections, and those available from other collections. The shared global pandemic experience has revitalized thinking about metadata as a critical activity that supports not only access for research but also education and remote learning.

The post Metadata management in times of uncertainty appeared first on Hanging Together.

Connect 2020 on-line / Samvera

Save the dates!  Replacing our COVID-cancelled fall conference, we are pleased to announce that Samvera Connect 2020 on-line will take place as follows:

Thursday 22nd October will be an on-line workshops day.

Friday 23rd October will be our ‘plenary’ sessions.

Monday 26th – Wednesday 28th October will feature our wide range of presentations.

The plenary and presentation sessions will each be spread over a three-hour block, timed to allow maximum participation from members of our Community across many time zones – exact times to be confirmed.

We are working hard to bring you an on-line, but still interactive, ‘poster session’ and to capture the important social aspects that a face-to-face conference would have offered.

Watch this space…

The post Connect 2020 on-line appeared first on Samvera.

Collaboration and Generosity Provide the Missing Issue of The American Jewess / Library Tech Talk (U of Michigan)

Image of The American Jewess periodical heading from the issue provided by Princeton

What started with a bit of wondering and conversation within our unit of the Library led to my reaching out to Princeton University with a request but no expectations of having that request fulfilled. Individuals at Princeton, however, considered the request and agreed to provide us with the single issue of The American Jewess that we needed to complete the full run of the periodical within our digital collection. Especially in these stressful times, we are delighted to bring you a positive story, one of collaboration and generosity across institutions, while also sharing the now-complete digital collection itself.

Near-field Communication (NFC) / Information Technology and Libraries

Libraries are the central agencies for the dissemination of knowledge. Every library aspires to provide maximum opportunities to its users and ensure optimum utilization of available resources; hence, libraries have been seeking technological aids to improve their services. Near-field communication (NFC) is a type of radio-frequency technology that allows electronic devices—such as computers, mobile phones, and tags—to exchange information wirelessly across a small distance. The aim of this paper is to explore NFC technology and its applications in the modern era. The paper discusses the potential use of NFC in the advancement of traditional library management systems.
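
As a concrete illustration of the kind of interaction such systems involve (a hypothetical sketch, not code from the paper: it assumes Python with the nfcpy library, a USB-attached contactless reader, and that a tag’s NDEF payload serves as an item or patron identifier), reading an NFC tag can be as simple as:

    import nfc

    def on_connect(tag):
        # If the tag carries NDEF data, print each record; a library
        # system might instead look the payload up as an item or
        # patron identifier to drive check-out or authentication.
        if tag.ndef:
            for record in tag.ndef.records:
                print(record)
        return True  # keep the connection until the tag is removed

    # 'usb' asks nfcpy to locate a contactless reader attached via USB.
    with nfc.ContactlessFrontend('usb') as clf:
        clf.connect(rdwr={'on-connect': on_connect})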

Michael Carroll Awarded 2020 LITA/Christian Larew Memorial Scholarship / LITA

Michael Carroll has been selected to receive the 2020 LITA/Christian Larew Memorial Scholarship ($3,000) sponsored by the Library and Information Technology Association (LITA) and Baker & Taylor.

This scholarship is for master’s level study, with an emphasis on library technology and/or automation, at a library school program accredited by the American Library Association. Criteria for the Scholarship include previous academic excellence, evidence of leadership potential, and a commitment to a career in library automation and information technology.

The Larew Scholarship Committee was impressed by what Michael has already accomplished and looks forward to seeing what he will achieve after graduation in 2021. Michael has shown a strong interest in digitization projects: he currently manages a team of students working on digitization, has scanned and cataloged many collections, and has assisted the Presbyterian Historical Society in creating sustainable processes for digitization. He has also shown his willingness and ability to work on a wide variety of projects and technologies, both technical and non-technical, from content management systems to mold remediation.

When notified he had won, Carroll said, “I graciously accept this honor and look forward to the opportunities it will make possible in my developing career.”

Members of the 2020 LITA/Christian Larew Memorial Scholarship Committee are: Dale Poulter (Chair), Christopher Lawton (Past Chair), Julia Bauder, Faye Mazzia, and Harriet Wintermute.

Are Ivy League Libraries’ Websites ADA Compliant? / Information Technology and Libraries

As a doorway for users seeking information, library websites should be accessible to all, including those who are visually or physically impaired and those with reading or learning disabilities. In conjunction with an earlier study, this paper presents a comparative evaluation of Ivy League university library homepages against the Americans with Disabilities Act (ADA) mandates. Results from WAVE and AChecker evaluations indicate that although the Missing Form Labels error still occurs on these websites, other known accessibility errors and issues have improved significantly compared with five years ago.

Virtual Reality as a Tool for Student Orientation in Distance Education Programs / Information Technology and Libraries

Virtual reality (VR) has emerged as a popular technology for gaming and learning, with its uses for teaching presently being investigated in a variety of educational settings. However, one area where the effect of this technology on students has not been examined in detail is as a tool for new student orientation in colleges and universities. This study investigates this effect using an experimental methodology and the population of new master of library science (MLS) students entering a library and information science (LIS) program. The results indicate that students who received a VR orientation expressed more optimistic views about the technology, saw greater improvement in scores on an assessment of knowledge about their program and chosen profession, and saw a small decrease in program anxiety compared to those who received the same information as standard text-and-links. The majority of students also indicated a willingness to use VR technology for learning for long periods of time (25 minutes or more). The researchers concluded that VR may be a useful tool for increasing student engagement, as described by Game Engagement Theory.

Measuring the Impact of Digital Heritage Collections Using Google Scholar / Information Technology and Libraries

This study aimed to measure the impact of digital heritage collections by analysing the citations received in scholarly outputs. Google Scholar was used to retrieve the scholarly outputs citing Memòria Digital de Catalunya (MDC), a cooperative, open-access repository containing digitized collections related to Catalonia and its heritage. The number of documents citing MDC has grown steadily since the creation of the repository in 2006. Most citing documents are scholarly outputs in the form of articles, proceedings and monographs, and academic theses and dissertations. Citing documents mainly pertain to the humanities and the social sciences and are in local languages. The most cited MDC collection contains digitized ancient Catalan periodicals. The study shows that Google Scholar is a suitable tool for providing evidence of the scholarly impact of digital heritage collections. Google Scholar indexes the full-text of documents, facilitating the retrieval of citations inserted in the text or in sections that are not the final list of references. It also indexes document types, such as theses and dissertations, which contain a significant share of the citations to digital heritage collections.

At the Click of a Button / Information Technology and Libraries

A number of browser extension tools have emerged in the past decade aimed at helping information seekers find open versions of scholarly articles when they hit a paywall, including Open Access Button, Lazy Scholar, Kopernio, and Unpaywall. While librarians have written numerous reviews of these products, no one has yet conducted a usability study on these tools. This article details a usability study involving six undergraduate students and six faculty at a large public research university in the United States. Participants were tasked with installing each of the four tools as well as trying them out on three test articles. Both students and faculty tended to favor simple, clean design elements and straightforward functionality that enabled them to use the tools with limited instruction. Participants familiar with other browser extensions gravitated towards tools like Open Access Button, whereas those less experienced with other extensions preferred tools that load automatically, such as Unpaywall.

Collaboration and Integration / Information Technology and Libraries

The University of North Florida (UNF) transitioned to Canvas as its Learning Management System (LMS) in summer 2017. This implementation brought opportunities for a more user-friendly learning environment for students. Working with students in courses that were in-person, hybrid, or online made clear that the library needed a place in the Canvas LMS: otherwise, students had to remember how to access and locate library resources and services outside of Canvas. During this time, the Thomas G. Carpenter Library’s online presence was enhanced, yet it was still not visible in Canvas. It became apparent that the library needed to be integrated into Canvas courses, enabling students to transition easily between their coursework and the resources and services supporting their studies. In addition, librarians who worked with students looked for ways for students to easily find library resources and services online. After much discussion, it became clear to the Online Learning Librarian (OLL) and the Director of Technical Services and Library Systems (Library Director) that the library needed to explore ways to integrate more fully with Canvas.