At the beginning of this month, Alex Bayley suggested the idea of writing an Annotated bibliography of the inside of my head:
You know those books that you can’t stop thinking about, won’t shut up about, and wish everyone around you would read? The ones that, if taken in aggregate, would tell people more about you than your resume? I decided I wanted to write a list of those. Then I told some friends, and they wanted to write their own lists too. So we’re going to do a little blog carnival, and I’d like to invite you (yes, you) to join us.
I'm a librarian with a blog, and my fellow library blogger Alissa egged me on: how could I possibly refuse? So here it is, my annotated bibliography of the inside of my head (for now). I hope it's not too alarming to my new bosses.
The Empire Trilogy
The Empire trilogy was co-authored by Raymond E Feist and Janny Wurts after Feist completed his Riftwar Saga, but I read the Empire Trilogy first. I quickly grew bored with Feist's increasingly derivative works, but the Empire Trilogy was a revelation to a teenage boy who loved Lord of the Rings but found the fantasy fiction range on offer at Kingston Public Library in the 1990s frustratingly beige. The books feature a strong woman as the lead character, political intrigue almost on par with Game of Thrones / A Song of Ice and Fire, and a series of political and moral questions that Mara of the Acoma solves more by cleverness than force. I suspect I'd find the books disappointing if I re-read them now, but they helped me to expect more from fantasy fiction at a formative time.
The lies of Locke Lamora
Alas, having ascertained that my preferences in fiction ran primarily to 'alternative reality political intrigue with minimal-to-nil magic', I struggled to find much of it around (though that doesn't necessarily mean there's not a lot out there). Eventually I discovered the alternative-reality-Oliver-Twist-meets-swashbuckling-adventure The lies of Locke Lamora and its follow-up Red seas under red skies. I don't read much fiction these days, for various reasons, but I'm constantly on the hunt for something like these Scott Lynch masterpieces.
The name of the wind
I actually can't remember whether I read The lies of Locke Lamora or The name of the wind first, but I found them both gripping. I was distraught when I got to the end of Patrick Rothfuss's The name of the wind to discover that the second book in the series (The wise man's fear) hadn't yet been published. I'm still waiting, impatiently, for the third book. The name of the wind is an extraordinary book: almost nothing actually happens, yet I was completely gripped from start to end.
The Cranks bible
Nadine Abensur is a vegetarian chef who lives in the UK, and was born in Morocco to Jewish-French parents. When I was vegetarian for a couple of years I bought her The Cranks bible (Cranks being a restaurant group). This book completely changed how I thought about cooking. Abensur's background naturally led her to produce recipes that are based on traditional foods without being overly concerned about 'authenticity'. But more than that, she feels no compulsion to justify their vegetarian basis. The Cranks bible is a cookbook for delicious food that just happens to be vegetarian. It taught me to cook without apologies.
Deep time
I have a Bachelor's degree in History, but it wasn't until well after I graduated that I really found the stuff I was interested in. Part of the reason I read less fiction than I used to is that it turns out there's enough speculative non-fiction to last several lifetimes. The further back into human existence we go, the more the questions outnumber the answers, and those questions shift from "what might have happened if X didn't occur?" to "did X even happen at all?" All of the books in this section completely blew my mind.
Through the language glass
Perhaps some people would be utterly unsurprised by the revelations in Guy Deutscher's Through the language glass: why the world looks different in other languages. I was not, when I read it, one of those people. Deutscher's central thesis is that our reality is shaped by our languages as much as the other way around, but his book also revealed to me the complications of understanding what ancient peoples meant and thought even when they have left written records. Until reading this book it had never occurred to me that different languages group colours differently: that a language might have a single word for 'grellow' or 'grue', rather than separate words for what I think of as yellow, green, and blue. Deutscher's book was also the first time I became aware of the Guugu Yimithirr language from what's now known as the Endeavour River on Cape York Peninsula: specifically its use of absolute geographic directions (north, south, etc.) instead of egocentric directions like 'left' and 'right'.
The great divide: history and human nature in the Old World and the New
Despite receiving a university degree in history, I was left deeply cynical about Australian history books by my formal education and the History Wars that were raging during and to some extent after that time. It was only this year that I finally felt ready to re-engage: partially because there's some amazing stuff coming out. Billy Griffiths' Deep time dreaming is essentially a history of formal Australian archaeology. Reading it left me feeling amazed, humbled, and quietly angry. I was never taught any of it during my school years, despite most of what Griffiths writes about occurring well before I was born. Every Australian should know this history, including what's still unclear and controversial.
I read Bruce Pascoe's Dark Emu straight after Deep time dreaming, on something of an Australian history bender. I hesitated to place Dark Emu in this "Deep Time" section, both because nearly all the evidence Pascoe cites is from the early colonial period, and because of the problematic nature of seeing Aboriginal culture as "in the past". What Pascoe has identified, however, is compelling evidence of many thousands of years of intensive and sophisticated human management of plants, animals, and landscapes. I'd always been unsatisfied with the scant information and implausible explanations mainstream Australian education provides about the pre-invasion lives and diets of Aboriginal people, and Pascoe's book is the first thing I've read that really makes sense.
States and power
1835: the founding of Melbourne and the conquest of Australia
The third book in my Australian history binge was James Boyce's 1835, a book that had been languishing on my shelf for quite a while. Once I started reading it I couldn't believe I'd left it there for so long, nor that so much of what it said was new to me. Boyce clearly outlines not only the brutal realities and astonishing speed of the genocide within what became the colony of Victoria (and New South Wales), but also the links with what was happening in the British Isles around the same time, with the 'settlement' of Australia in some ways simply an even more brutal example of the 'Enclosures' happening in England and Scotland.
Thirst: water & power in the ancient world
Thirst: water & power in the ancient world does what it says on the tin: it's a book about ancient states (though since reading Billy Griffiths' book I will always mentally put "ancient" in air quotes when it refers to anything less than 10,000 years old) and the nexus between controlling water and controlling people. The reason I keep thinking about this is partially that I live on the driest continent on the planet, in a time when dry climates are becoming even drier, and partially because every single state in the book continued to expand until it collapsed, due in part or in full to simply running out of water.
Seeing like a state: how certain schemes to improve the human condition have failed
I discovered James C Scott's Seeing like a state through the bibliographies of other books and articles. It's somewhat academic, but much more readable than one might imagine from the title and the layout. Even career bureaucrats like me generally acknowledge that central governments are often indifferent to local circumstances and antagonistic to particular exceptions or differences, but Scott's analysis shows why states are like this. In contrast to the common charge that governments are 'uncaring', Scott makes the opposite claim: states care very much about understanding the people and places within their borders - it's just that they care about different things to those people. States must make their subjects or citizens legible in order to function as states at all. This is what leads to weird outcomes like being able to determine the geographic area a Filipino person's family comes from based on the first letter of their surname (a legacy of the process the Spanish colonial government used to enforce Hispanic surnames): these names were not particularly useful for the Filipinos, but did allow the government to more effectively track them.
Command and control
The extent to which centralised states are deluded about their ability to see and control even what's happening in their own military forces is laid out in Eric Schlosser's terrifying Command and control, which recounts the many, many times the United States has almost accidentally nuked itself. It is horrifying, fascinating, and difficult to put down. This book left a profound impression on me because it drove home the point that even when dealing with something as extraordinarily dangerous as thermonuclear weapons, safety and control are as much an illusion as in any other field.
Against the grain: a deep history of the earliest states
James C Scott appears again here, because this year I read his latest book, Against the grain, and it had a similar effect on me to reading Dark emu (🤯). Scott's argument here is, in essence, that the earliest known settlements in what is referred to as the 'fertile crescent' of Mesopotamia appeared there because - despite it now being a dustbowl - it was at the time a junction between coastal wetlands and freshwater alluvial floodplains. Instead of moving between ecosystems, people here could simply stay put and let the ecosystems come to them. Scott makes other startling claims that upend the 'traditional' story of how the first states came about, and it's a compelling and fascinating book.
Venice: a new history
Thomas Madden's Venice: a new history is pretty interesting, but doesn't necessarily mount an argument in the same way as a book like Against the grain or Dark emu. The reason it appears in my annotated bibliography, however, is that it includes a detailed explanation of the Venetian system for choosing the Doge - the head of the Venetian republic:
A boy plucked randomly off the Venetian streets pulled wax balls out of an urn until thirty members of the Great Council from different families had been randomly selected. Those thirty were then reduced by another random lottery to nine. The nine elected a committee of forty, who were promptly reduced by lot to twelve. The twelve voted in twenty-five new electors, who were then randomly reduced to nine. These nine elected a new committee of forty-five, who were then reduced by lot to eleven. These eleven then selected the final committee of forty-one councillors who would finally elect the Doge.

I thought this was a pretty great system for wringing out as much partisanship and power-gaming as possible. And then, later, I read Against elections.
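The sheer convolutedness of the procedure is easier to appreciate if you trace it step by step. Here's a toy sketch in Python, purely for illustration: every 'election' stage is modelled as a random draw (ignoring the actual voting rules, the different-families constraint, and the other historical details), so only the alternating lottery/election structure is taken from the description above.

```python
import random

def reduce_by_lot(members, n):
    """The lottery stages: randomly reduce a committee to n members
    (the role of the boy pulling wax balls out of the urn)."""
    return random.sample(members, n)

def elect(electors, pool, n):
    """Stand-in for an election stage: the current committee chooses n
    new electors. Here modelled as a random draw from the rest of the
    Great Council, since the point is the alternation of election and
    lottery, not the voting rule itself (an assumption, not history)."""
    candidates = [m for m in pool if m not in electors]
    return random.sample(candidates, n)

def choose_doge(great_council):
    c30 = reduce_by_lot(great_council, 30)  # 30 chosen by lot
    c9  = reduce_by_lot(c30, 9)             # reduced by lot to 9
    c40 = elect(c9, great_council, 40)      # the 9 elect 40
    c12 = reduce_by_lot(c40, 12)            # reduced by lot to 12
    c25 = elect(c12, great_council, 25)     # the 12 vote in 25
    c9b = reduce_by_lot(c25, 9)             # reduced by lot to 9
    c45 = elect(c9b, great_council, 45)     # the 9 elect 45
    c11 = reduce_by_lot(c45, 11)            # reduced by lot to 11
    c41 = elect(c11, great_council, 41)     # the 11 select the final 41
    return random.choice(c41)               # the 41 elect the Doge
                                            # (again modelled as random)

# Hypothetical Great Council of 480 members, just for the demo
council = [f"member_{i}" for i in range(480)]
doge = choose_doge(council)
```

Ten stages, five of them pure chance: anyone hoping to buy the office would have had to bribe their way through a gauntlet where half the steps couldn't be bribed at all.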
Against elections: the case for democracy
This Venetian electoral system was immediately attractive to me, but it was several years later (this year, in fact) that I discovered Liz Waters' translation of David Van Reybrouck's Against elections. I'd been thinking more and more about sortition as a viable and attractive alternative or at least addition to elected parliaments, but randomly appointing people to serve multi-year parliamentary terms was still unsatisfactory to me. Van Reybrouck's short book outlines firstly what problems sortition solves, secondly its effective use in ancient Greece, and thirdly how sortition could realistically work in practice in a modern state. The key is to forget about large bodies with multiple governmental roles and long terms, and instead create a system of interlocking bodies with very specific roles. It's compelling, and provides a clear rebuttal to those who scoff at the idea that we ordinary people can govern ourselves.
Agitation and propaganda
Start with why
Simon Sinek turned an eight-minute TED talk into a 231-page book, and whilst I generally consider TED talks akin to junk food, I do keep going back to the central point of Sinek's talk/book: "People don't buy what you do, they buy why you do it". It's all very Business School, but Sinek is right.
Two hundred Pharaohs, five billion slaves
Writing in a call centre, Adrian Peacock knew his 'why'. Two hundred Pharaohs, five billion slaves was the first anarchist polemic I ever read, and I remember being shocked, exhilarated and horrified all at the same time. I'm far too much of a natural-born bureaucrat to be a real anarchist, but Peacock's argument that the history of capitalism is simply a series of Ponzi schemes certainly rings true - and he was writing in 1999, before the Enron collapse, let alone the global financial crisis of 2008.
Assembly
In Assembly, Michael Hardt and Antonio Negri argue that for horizontalist, 'leaderless' Left movements to succeed, they need to be organised in such a way that temporary leaders determine tactics, whilst the mass movement - the assembly - determines strategy: exactly the opposite of a traditional organisational hierarchy. I found some of this book hard to follow, to be honest, and I'm not sure I even agree with or quite understand their argument. But that's exactly the point: I keep thinking about it because I'm not sure what I think about it. If you're interested in reading this, you should probably do the opposite of what I did, and read Zeynep Tufekci's Twitter and teargas: the power and fragility of networked protest first. Tufekci gives a pretty clear explanation of how the 'horizontalist' protest movements of the last decade have actually worked in practice, whilst Hardt and Negri are talking about how they might optimally work in future.
Trigger warnings: political correctness and the rise of the Right
I really wasn't sure what to expect from this book, but it was written by Jeff Sparrow and endorsed by Roz Ward, so despite my reservations about the title I bought a copy. Sparrow's book is a tour of Left politics in the US and Australia since the 1990s. Reading Trigger warnings whilst the reverberations of the Victorian Greens' self-induced implosion were still fresh, I found that Sparrow's phrase summing up what has happened to the mainstream Left - "Smug Politics" - felt all too real. Trigger warnings is worth reading just for Sparrow's analysis of what has happened to Left politics in the last couple of decades. But it's also a clear argument that what is needed is a bit of old fashioned solidarity. It's a theme that Jodi Dean has also written on in her new book Comrade: an essay on political belonging (I haven't actually read Dean's book yet, just the article in the Chronicle). With 2019's national election results in Australia and the UK, it might be time to reflect on what this really means, and what needs to be done.
Hope in the dark: untold history of people power
Rebecca Solnit wrote Hope in the dark in response to the 're-election' of US President George W Bush. I can't do justice to this collection of essays, other than to say Solnit is an extraordinarily gifted writer, and if you are feeling listless and despairing about the politics of our time, this is the book to read. It's neither saccharine nor unrealistic, but is indeed hopeful.
Work and money
Capital in the twenty-first century
It seems a long time ago now, but for about a year Thomas Piketty's Capital in the twenty-first century was the most talked-about book in politics and economics. Its political salience was not so much in what it says as in the form it takes: 580 pages of patiently argued text, 85 pages of notes and references, and plenty of charts. Piketty lays out the intuitively obvious point that progressive taxation of income is insufficient for creating more equal societies without redistribution of overall wealth, i.e. capital. Obviously his argument is rather more sophisticated than that, and I don't agree with what I consider to be his rather centrist policy proposals, but I still think about his explanation of how European capital stocks were built up through colonialism, and the effects of inter-generational wealth accumulation.
Scarcity: why having too little means so much
Piketty outlined how and why some have far too much, but Scarcity is about what happens when you have too little. Sendhil Mullainathan (an economist) and Eldar Shafir (a psychologist) essentially argue that the right-wing trope that poor people make bad decisions is supported by research, but that the causation is the complete opposite of what conservative commentators claim: living in poverty makes people more likely to make bad choices, rather than people who make bad choices being more likely to end up in poverty (though that sometimes happens too, and then it becomes a self-perpetuating cycle). This book convinced me that something like a universal basic income (UBI) - a social safety net applying unconditionally to all - is a good idea. I keep thinking about it because I'm still not completely sold on UBI, but also because the studies described in this book help to move past arguments over what 'rational' behaviour looks like for those who are poor and/or living in poverty (which, as Jane Gilmore notes, are not quite the same thing). The Sam Vimes Boots Theory of Economic Justice is still true, but it's also an insufficient explanation for why poverty is so hard to escape.
How to be idle
Tom Hodgkinson's How to be idle was published in 2004, the same year as Carl Honoré's In praise of slow, and something of a meditation on the same subject. I can't remember which I read first, but whereas Honoré's book is a journalistic report on the various 'slow movements', Hodgkinson's is a celebration of the good life. I think of it whenever I'm starting to feel guilty about not being busy enough.
Work: the last 1000 years
I ended up (not coincidentally) reading a few books this year that all referenced each other, and Andrea Komlosy's Work was one of them (another being Sven Beckert's Empire of Cotton, of which more in a moment). Refreshingly, although Komlosy's history is very much based on the European experience, it's Central Europe (specifically, the German-speaking parts) on which Komlosy concentrates, rather than the view from Western Europe that we mostly get in English language books. This is essentially a description of how 'work' has been defined, organised, recognised, and rewarded over the last thousand years, and what caused those things to change. I found it fascinating to read about models and conceptions of work that were very different to our own - often in surprising ways. It was here that I learned that early factory work was often treated as seasonal, and why it was so often children and women who worked in them. I continue to think about this book because it provides concrete evidence of alternative ways to think about work, power relationships, and living arrangements.
Empire of cotton: a new history of global capitalism
This book is not for the faint of heart, sprawling over more than 400 pages. On occasion I did wonder whether Sven Beckert needed to go into quite so much detail, but it really is a fascinating explanation of many of the big moments and issues of world history from the early-modern period onward, and how they are all related: from the North American slave states to the Indian famines of the 1890s, the 'Scramble for Africa', how Britain ended up controlling Egypt, and the initial drivers of the Japanese occupation of Korea. This is an extraordinary book and well worth investing the time in.
Spam: a shadow history of the Internet
"Every time we go online," reads the blurb, "we participate in the system of spam, with choices, refusals and purchases, the consequences of which we may not understand." Finn Brunton's book on the history and meaning of spam is one of the briefest books in this bibliography, but I keep being reminded of it. It was only later that I read of Paul Virilio's concept of "the integral accident", but that is essentially what Brunton means by his subtitle, "a shadow history of the Internet". For every step towards convenience, openness and increased power for 'legitimate' use of the Internet, the 'integral accident' of spam has similarly gained sophistication and power. I leave it as an exercise for the reader to decide what now constitutes the legitimate Internet, and what is merely spam.
Debt: the first 5000 years
Debt: the first 5000 years was the first David Graeber book I ever read, and I've left it until last because it is probably the most influential book I've read in the last decade. Graeber is an anthropologist rather than an economist, which is probably why his ideas about the meaning of money, debt, credit and economies seem so fresh and interesting. Graeber opens with a vignette of a cocktail party where an attendee states that "surely one has to pay one's debts", and he immediately declares for the contrary position. Indeed, says Graeber, the global financial system would collapse if everyone was forced to repay their financial debts regardless of the circumstances. What follows is a revisionist examination of the global history of morals, money, exchange, trust, and reciprocity. If you were to read only one book from this list, make it David Graeber's Debt.
LITA, ACRL, ALCTS, and LLAMA invite nominations for the 2020 Hugh C. Atkinson Memorial Award. Please submit your nominations by January 9, 2020.
The award honors the life and accomplishments of Hugh C. Atkinson by recognizing the outstanding accomplishments of an academic librarian who has worked in the areas of library automation or library management and has made contributions (including risk taking) toward the improvement of library services or to library development or research.
Winners receive a cash award and a plaque. This award is funded by an endowment created by divisional, individual, and vendor contributions given in memory of Hugh C. Atkinson.
The nominee must be a librarian employed in one of the following during the year prior to application for this award:
University, college, or community college library
Non-profit consortium, or a consortium comprised of non-profits that provides resources/services/support to academic libraries
The nominee must have a minimum of five years of professional experience in an academic library or in a non-profit consortium or a consortium comprised of non-profits.
“Be a realist. The world is made up of two classes–the hunters and the huntees. Luckily, you and I are hunters.”
Sanger Rainsford speaks these words at the start of “The Most Dangerous Game”, one of the most famous short stories of all time. First published in Collier’s magazine in 1924, it’s been reprinted in numerous anthologies, been adapted for radio, TV, and multiple movies, and assigned in countless middle and high school English classes. The tropes established in the story, in which a hunter finds himself a “huntee”, are so thoroughly entrenched in present-day American culture that there are lengthy TV Tropes pages not just for the story itself, but for the trope named by its title.
Up until now, the story’s been under copyright in the US, as well as in Europe and other countries that have “life plus 70 years” copyright terms. (The author, Richard Connell, died just over 70 years ago in 1949, so as of January 1, it will be public domain nearly everywhere in the world.) Anyone reprinting the story, or explicitly adapting it for drama or art has had to get permission or pay a royalty. On the other hand, many creators have reused its basic idea– humans being hunted for sport or entertainment– without getting such permission.
That’s because ideas themselves are not copyrightable, but rather the expression of those ideas. And the basic idea long predates this particular story: consider, for instance, gladiators in Roman arenas, or tributes being hunted down in the Labyrinth by the Minotaur of Greek mythology. But the particular formulation in Connell’s short story, in which General Zaroff, a former nobleman bored with hunting animals, lures humans to his private island to hunt and kill them for sport, is both distinctively memorable and copyrightable. Stray too close to it, or quote too much from the story, and you may find yourself the target of lawyers. (But perhaps not if you yourself are dangerous enough game. I don’t know if the makers of “The Incredibles”, which also featured a rich recluse using his wits and inventions to hunt humans on a private island, paid royalties to Connell’s estate, or relied on fair use or arguments about uncopyrightable ideas. But in any case, Disney is better equipped to either negotiate or defend themselves against infringement lawsuits than others would be.)
Rereading the story recently, I’m struck both by how it reflects its time and by how surprisingly economical its action is. In 1924, we were still living in the shadow of the First World War, in which multiple empires and noble houses fell, while others continued but began to teeter. The deadly spectacles of public executions and lynchings were still not uncommon in the United States. And the dividing of people into two classes– those who are inherently privileged and those who are left in the cold or even considered fair game– was particularly salient that year, as the second incarnation of the Ku Klux Klan neared its peak in popularity, and as immigration law was changed to explicitly keep out people of the “wrong” national origin or race. Those sorts of division haunt our society to this day.
Rainsford objects to Zaroff’s dehumanizing game in what we now tend to think of as the story’s setup, which actually takes up most of the story’s telling. (The description of the hunt itself is relatively brief, and no words at all are used to describe the final showdown, which implicitly takes place in the gap between the story’s last two sentences.) In the end, though, Rainsford prevails by beating his opponent at his own game. He doesn’t want to kill another human being, but when pressed to the extreme, he adopts his opponent’s rules (at the end giving Zaroff the sporting warning “I am still a beast at bay… Get ready”) and proves to be the better killer.
With the story entering the public domain in less than three weeks, we’ll have the chance to reuse, adapt, and critique the story in quotation more freely than ever before. I hope we use the opportunity not just to recapitulate the story, but to go beyond it in new ways. That’s what happens in the best reuses of tropes. Consider, for instance, how in the Hunger Games books, the main character Katniss repeatedly finds ways to subvert the trope of killing others for entertainment. Instead of prevailing by beating opponents at the deadly human-hunting game the enemy has created, she and her allies find ways to reject the game’s premise, cut it short, or prevent its recurrence.
When, in 19 days, we get another year’s worth of public domain works, I hope we too find ways not just to revisit what’s come before, but to make new and better works out of them. That’s something that the public domain allows everyone, and not just members of some privileged class, to do.
Recently I read an article on CBC about a project by Nicole Hill from Six Nations of the Grand River to create modern stock photos of Indigenous people because they couldn’t find representations of people like them to promote development projects.
There’s been a bunch of awesome photo projects where people have created their own visual representations of their communities.
“Our ask? That you use these photos to show a different representation of all women in tech. That you use these images in pieces about entrepreneurs, software engineers, infosec professionals, IT analysts, marketers, and other people who make up the tech ecosystem. Just as white women have been the default “woman” in technology and American society as a whole, we believe the underlying belief of what it means to be — and who can be — a tech worker in the 21st century can benefit from this form of “disruption”.”
“Disabled And Here is a reclaiming of our depiction, featuring disabled BIPOC with different diagnoses (or lack thereof), body sizes/types, sexual orientations, and gender identities who reside in the Pacific Northwest. This is disability representation from our own community.”
I love that these also have alt text descriptions too.
“These photos are available for all uses and feature plus-size people at home. From looking at their phones in bed to having a glass of wine with friends, this collection is powerful because the emphasis is on what the models are doing, not how big they are while they’re doing it.”
The Gender Spectrum Collection
“The Gender Spectrum Collection is a stock photo library featuring images of trans and non-binary models that go beyond the clichés. This collection aims to help media better represent members of these communities as people not necessarily defined by their gender identities—people with careers, relationships, talents, passions, and home lives.”
Javier is currently serving as the Dance Preservation and Digital Projects Librarian at the University of Southern California in Los Angeles, California. He received his M.L.I.S. at the University of California, Los Angeles in 2015. Javier completed his undergraduate work in Politics and Latin American and Latino Studies at the University of California, Santa Cruz. He was selected as an ALA Emerging Leader in 2018 and was selected for the New Professional Programme by the International Council on Archives in 2017. Within librarianship, his interests include digital libraries, archives, and outreach.
The Digital Library Federation Forum was the first conference I ever attended that allowed me to feel the pulse of what is happening in the field of Digital Librarianship. This conference gave me the opportunity to meet new colleagues, share my personal experiences, and hear innovative strategies and ideas that are being implemented around the country. Although panels varied greatly in scope, one common theme that I identified in several of the presentations I attended was the effort to incorporate ethics and humanity into various aspects of the development of digital collections. In the last four years that I have been a librarian, this is one trend that I have seen gradually being discussed and presented at conferences more often.
I particularly appreciated presentations such as “Ethical Digital Libraries & Prison Labor,” as well as the “Documenting Detention” presentation in the “ethics + community archives” panel. I highlight these two presentations in particular as ones that critically observed not only the invisible labor involved in the archival process but also the context of the material itself. Weeks after the conference, the question of safety for migrants while building a collection in real-time, raised in the “Documenting Detention” presentation, is something that I still ponder. Further, as if it weren’t surprising enough to discover that libraries use prison labor for digitization, the ethical dilemmas that arise from such practices rattle the core values of access and social responsibility that ALA promotes. While it is always interesting to hear about completed projects or those in development, it is when librarians call into question their processes and the substance of their collections that the thoughtfulness they are putting into their work really shows.
As a Forum Fellow at this year’s conference, I was fortunate to have a mentor with whom I had previously worked, and who was willing to provide guidance and listen to the challenges I have faced during my time in librarianship. This bolstered my experience at the conference: I shared with my mentor some of my interests with regards to digital librarianship and got suggestions for which panels to attend. Beyond the conference, I received advice from my mentor about getting involved in committees at my home institution, both in and outside of the library. I found this aspect of my time at DLF especially helpful, particularly because my mentor has been in the field longer than I have, and was able to speak from personal experience about what has helped her advance in the field. I plan to continue this relationship, turning to my mentor for further guidance as I progress through librarianship.
All in all, an experience like the DLF Forum was refreshing. While we often prioritize meeting deadlines and quantitative assessment of our work, this conference was a reminder that we also need to think insightfully and critically about our own practices and what they mean outside the context of our workflows. I will continue thinking of my own work in similar ways, always applying an analytical lens to push the boundaries of my labor.
Greetings again from the Steering Committee of Core: Leadership, Infrastructure, Futures, a proposed division of ALA.
Coming up this Friday, December 13 is the last of four town halls we are holding this fall to share information and elicit your input. Please join us! Register for Town Hall 4 today. ALCTS, LITA, and LLAMA division staff will lead this town hall with a focus on Core’s mission, vision, and values; benefits organizationally; benefits to members; and opportunities in the future. Our speakers will be Jenny Levine (LITA Executive Director), Julie Reese (ALCTS Deputy Executive Director), and Kerry Ward (LLAMA Executive Director and interim ALCTS Executive Director).
We’re excited to share an updated Core proposal document for ALA member feedback and review, strengthened by your input. We invite further comments on this updated proposal through Sunday, December 15.
Meanwhile, division staff will incorporate your comments and finalize this proposal document for submission to the ALCTS, LITA, and LLAMA boards by Wednesday, December 18. Then at the 2020 ALA Midwinter Meeting in January, each board will vote on whether to place the question of forming a new division on the spring ballot for a vote by division members.
This week is by no means the last opportunity to share your perspectives on the proposed formation of Core. At Midwinter, following the vote on the Core question by each division board, ALCTS, LITA, and LLAMA will host a joint board meeting open to all, which will be an opportunity for shared reflection upon the boards’ momentous decisions. Stay tuned for announcements about further events in the spring.
You can always stay up to date on the latest Core news by visiting core.ala.org, but we also need to hear from you. Members like you will shape the future of this proposed division — please participate in these conversations to help develop and refine Core’s division identity.
We are pleased to announce joint stewardship of Frictionless Data between the Open Knowledge Foundation and Datopian. While this collaboration already exists informally, we are solidifying how we are leading together on future Frictionless Data projects and goals.
What does this mean for users of Frictionless Data software and specifications?
First, you will continue to see a consistent level of activity and support from Open Knowledge Foundation, with a particular focus on the application of Frictionless Data for reproducible research, as part of our three-year project funded by the Sloan Foundation. This also includes specific contributions in the development of the Frictionless Data specifications under the leadership of Rufus Pollock, Datopian President and Frictionless Data creator, and Paul Walsh, Datopian CEO and long-time contributor to the specifications and software.
Our first joint project is redesigning the Frictionless Data website. Our goal is to make the project more understandable, usable, and user-focused. At this point, we are actively seeking user input, and are requesting interviews to help inform the new design. Have you used our website and are interested in having your opinion heard? Please get in touch to give us your ideas and feedback on the site. Focusing on user needs is a top goal for this project.
Ultimately, we are focused on leading the project openly and transparently, and are excited by the opportunities that clarifying the leadership of the project will provide. We want to emphasize that the Frictionless Data project is community focused, meaning that we really value the input and participation of our community of users. We encourage you to reach out to us on Discuss, in Gitter, or open issues in GitHub with your ideas or problems.
Islandora 8 was released last June without built-in support for paged content. Our community placed a very high priority on correcting that omission and getting books and newspapers ready for migration from collections in Islandora 7. Thanks to an incredibly successful community sprint back in September, paged content is in! You can find it now in the latest code on our GitHub (or the latest build with our Ansible playbook) and in our documentation, and it will be a big part of our next Islandora 8 release, coming in early 2020.
Our February webinar will explain and demonstrate how paged content and complex objects are supported in Islandora 8, including integration with IIIF, easy drag & drop interfaces for page order, and multiple viewer options. We will also review versioning support, which will be coming out in the same release.
CLIR’s annual DLF Forum welcomes digital library, archives, and museum practitioners from member institutions and beyond—for whom it serves as a meeting place, marketplace, and congress. Planning committee members help make the Forum a special event every year. All are invited and welcome to participate; you don’t have to be part of a DLF member institution to volunteer, nor do you have to be sure you can attend the events in person (though we hope you can!).
We have opportunities available on the following subcommittees:
Program – Members of this committee will help review the DLF Forum CFP before it is released, create the Forum program based on submissions and peer reviews, and assist with moderator responsibilities leading up to and at the Forum. Please note: If you can’t attend the Forum or otherwise can’t be a moderator, you may still join this committee to help with the other tasks!
Sponsorship – Members of this committee will be responsible for suggesting and contacting potential sponsors for the DLF Forum and Digital Preservation.
Scholarship – The primary task of this group is to help to select Forum Fellows. Each member will read and rank up to 15 applications and will participate in committee calls to discuss feedback.
Community – This committee will focus its efforts on brainstorming, planning, and leading social and wellness events during the DLF Forum and creating a local guide for attendees on the DLF Forum website, with a mind toward experiences that are as inclusive and welcoming as possible.
Reviewers – We’ll be issuing a separate call for session proposal reviewers later this winter, but if you want to express interest now, you certainly can.
Library and Information Studies (LIS) has traditionally taken a conservative and uncritical approach to security and policing in libraries. The available literature usually adopts one of three frameworks: the liability framework emphasizing risk and its management, the security consultant framework featuring authors with private security or policing backgrounds, and the First Amendment framework seeking to balance the rights of the individual with the rights of the majority as seen in Kreimer v. Morristown. Despite offering some helpful recommendations, these contributions tend to encourage library staff to develop close relationships with local police and security guards without considering the negative effects this closeness can have on patrons who are Black, Indigenous or people of colour (BIPOC), people experiencing mental illness, and people from other marginalized communities. Research from outside of LIS has documented the negative psychological effects of police presence on BIPOC and has also established connections between the increased presence of police in libraries and the broader increase of police and security guards in public spaces. If libraries are to be safe places for patrons of all backgrounds, authors in LIS and library workers in general must incorporate insights from other disciplines into their practice and begin to meaningfully address the complicated roles of police and security guards in the public library.
While issues relating to library security tend to be the concern of a small group of academics within the field of Library and Information Studies (LIS), some of the more disturbing accounts of violence in North American public libraries have recently been covered in the general news. In 2018 alone, two news stories about violence in Canadian libraries were circulated widely, including one in which a patron kicked an elderly librarian in the chest at Richmond Public Library in British Columbia as well as another in which a librarian at a Christian Science Reading Room in Ottawa was sexually assaulted and then murdered in the middle of the day (Yogaretnam, 2018; Ferreras, 2018).
Because of these well-documented incidents, there has also been greater news coverage of the increasing presence of police and security guards in North American public libraries. In February 2019, the main branch of the Winnipeg Public Library (WPL) increased security measures by requiring patrons to go through bag checks and metal detection. Ed Cuddy, Manager of Library Services at WPL, said the changes were made because of “violent incidents, incidents involving people that are intoxicated or using other substances, where there has been significant threats to staff and security” (Caruk, 2019). Similarly, in January 2019, Yellowknife’s public library introduced security guards after seven fights broke out in 2018. The library also announced it would close earlier on those nights “when many municipal enforcement officers are in court, meaning they can’t respond to calls for assistance from library staff” (Panza-Beltrandi, 2019).
Accounts of violence against library workers and patrons have been accompanied by several stories of security and police overreach in libraries. In 2017 in Lakewood, Ohio, an off-duty police officer working a shift at the Lakewood Public Library broke the jaw of a seventeen-year-old patron after he placed her in “a full-nelson-type hold” when she refused to leave the premises (Mosby, 2017). At a branch of the District of Columbia Public Library, a security guard demanded a patron remove her hijab if she wanted to remain in the building. The incident led to “protests and a widespread lack of trust on the part of patrons,” resulting in the officer being placed on night duty to avoid further interaction with the public (Dixon, 2016).
These stories highlight the complex power dynamics at play in interactions between library patrons, library staff, security guards, and police officers. Though both staff and patrons were injured in the scenarios above, it is worth noting that while staff may be able to call on police or security guards if they feel unsafe, this is not always possible or even desirable for patrons. Many library workers, particularly if they are white, also enjoy the protection that professionalism affords: being seen not just as individuals but as part of a large government organization, with the added legitimacy that provides (sometimes the same organization employing the police officers or security guards tasked with settling disputes). Patrons, on the other hand, may not have anyone to vouch for them or legitimize their claims, leaving them on their own in these scenarios. Further, “because libraries and their staff represent a particularly middle class and white worldview”, BIPOC patrons do not have the luxury of starting from a neutral position when interacting with library staff but, from the outset, are more likely to be subject to discrimination (Selman et al., 2019, p. 13). Conversely, BIPOC staff may also be susceptible to similar discrimination from patrons, co-workers and/or security staff. Discussions surrounding these incidents of violence and the dynamics at play within them are all the more relevant in the context of the broader discussions about power, policing, and public space currently being led by groups like Black Lives Matter and need to be given greater consideration by library workers moving forward.
In writing this article, I want to acknowledge that my positionality as a white man affects my relationship to this topic. While I have witnessed incidents of violence, seen exclusion methods in action and spoken to patrons who have been discriminated against by security guards and police officers in libraries, I do not have personal experience and will never be able to fully understand the experiences of BIPOC with regard to policing and security. Though I can only make recommendations from my narrow understanding of the topic, the lack of existing literature on this topic motivated me to pursue it despite my limited perspective. I hope this article can serve as a basis for further study in this area by authors better situated to comment on how marginalized communities experience policing and security in libraries.
In the following sections, I will analyze the scholarly literature since the turn of the 21st century to assess the range of responses to policing and security in North American public libraries. I will begin by exploring the more conservative perspectives in LIS, starting with what I term “the liability framework,” which approaches patron and staff safety, network security and building security in a similar manner. Then I will analyze “the security consultant framework,” paying particular attention to two books, The Black Belt Librarian: Real-World Safety & Security by Warren Graham, and Library Security: Better Communication, Safer Facilities by Steve Albrecht (2015; 2012). I will then examine “the First Amendment framework” within the LIS literature which focuses on balancing collective and individual rights, before concluding with a survey of relevant perspectives from outside LIS.
Typically, LIS scholars writing on security and policing in libraries have taken a conservative and uncritical approach. They tend to overemphasize the positive effects of police presence without giving much consideration to how increasing securitization adversely affects BIPOC (including both staff and patrons), people experiencing homelessness, people dealing with mental illness, and other marginalized groups. Although disciplines such as psychology and justice studies have documented the disproportionately negative effects of police presence on marginalized communities, this perspective is notably absent from even the most progressive LIS authors writing on policing and security. I will explore and critique the current LIS literature on policing and security in North American public libraries by supplementing it with research from other disciplines relevant to the current discourse in LIS, ultimately asking the question, “who gets the right to feel and be safe” in public libraries, and who does not (Barry, 2015)?
Fire, Flood and Fist Fights: The Liability Framework
The liability framework tends to view safety and security as a holistic endeavour and often aims to address fire and flood prevention, theft, online privacy, and violent behaviour simultaneously (McGinty, 2008). The need for security infrastructure is stressed throughout and the articles often provide long lists of devices and alarms that can be installed to improve security (Forrest, 2005; McGinty, 2008).
In addition to technological solutions, Forrest describes the “security ethos” which encourages library and security staff to monitor certain patron types who are deemed most likely to exhibit “suspicious activity” (2005, p. 91, 95). The focus on patron types is best captured in McGinty’s statement that “unfortunately, libraries also attract aberrant individuals, the homeless, and the mentally ill by having comfortable public space and tolerant staff” (2008, p. 117). Rather than celebrating this comfort and tolerance, McGinty suggests this is a liability. The implicit logic is that if comfortable spaces and tolerant staff attract ‘aberrant’ individuals, then less comfortable spaces and less tolerant staff are needed to repel them. This model, followed to its logical conclusion, would exclude “aberrant individuals” in the hopes of making the library environment as controlled as possible (p. 117). Further, this reliance on patron types and ambiguous terminology like “aberrant individuals” is a thinly veiled application of stereotypes about what kinds of patrons staff believe are likely to cause problems in the library (p. 117). It is critical here to point out “that those most likely to have…been treated repeatedly as suspects are Black, Indigenous, poor, and gender non-conforming people” (Selman et al., 2019, p. 31).
The liability framework sometimes blurs the distinction between library staff and security staff, as at Western Kentucky University (WKU) where student patrollers were hired to assist campus police (Forrest, 2005). While their priority was monitoring patrons, the student patrollers were also free to answer reference questions if they were otherwise unengaged. Forrest states that “other institutions have reported similar benefits from the addition of student patrollers to the library’s security force,” citing a program at Southern Illinois University at Carbondale where “during the 18 months prior to the patrol’s assignment to the library, there was one criminal arrest, but there were ten arrests during the patrol’s 18-month presence in the library” (Forrest, 2005, p. 92). Again, we see exclusion methods being celebrated, and the implication that an increase in arrests in the library is desirable. The possibility that any of the arrests might have been unnecessary is not considered.
This kind of “pragmatic” approach common to the liability framework ignores the effects that certain security measures have on BIPOC, people experiencing mental illness, and other marginalized groups. In reality, the reliance on patron types within this framework serves to reinforce harmful stereotypes about marginalized communities. The primary flaw of the liability framework seems to be the authors’ unwillingness or at least failure to address the negative effects of the security measures they propose.
Countering the Black Belt Librarian: The Security Consultant Framework
Closely related to the liability framework is the security consultant framework, which relies on the expertise of external advisers. Key texts which promote this approach are Library Security by Steve Albrecht, a former San Diego Police reserve sergeant, and The Black Belt Librarian by Warren Graham, a former private-sector security director, both published by the American Library Association (ALA) (2015; 2012). This literature tends toward practical recommendations including conflict management training for library and security staff, establishing clear codes of conduct, and designing library spaces with security in mind. The importance of establishing a good working relationship with other community organizations such as advocacy groups for people experiencing homelessness and community mental health services is also emphasized (Albrecht, 2015, p. 121-123).
In addition to partnering with community organizations, both authors support police involvement in libraries. Albrecht’s vision of this partnership is almost laughable as he suggests libraries should have “a place in the back office where [police officers] can sit and drink a cup of coffee” or “sit in your employees-only area just long enough to eat their lunch and finish a report before they have to go back out to face another barrage of radio calls” (2015, p. 119). While a cordial relationship with other municipal employees is certainly desirable, Albrecht’s vision drifts into the realm of fantasy. Though it is clear what police officers stand to gain from such a relationship, it is unclear how this would benefit library workers, not to mention the effect this increased closeness could have on patrons’ perceptions of intellectual freedom and privacy in libraries.
Albrecht’s only mention of police violence comes during a discussion of the different ways a patron might react to the police having been called, where he states “if the person is significantly mentally ill, he or she might believe that the cops will hurt or kill him or her when they arrive and take out their handcuffs” (2015, p. 74). Despite examples of police and security guards harming library patrons such as the ones highlighted earlier, Albrecht’s sole engagement with police violence frames it as a delusion of people with mental illnesses.
By publishing authors like Albrecht and Graham, the ALA, which is responsible for upholding and developing the professional values of librarians throughout the United States and beyond, has welcomed the ideologies of the police force and the private security firm into LIS. Albrecht and Graham are unapologetic about how their values differ from their vision of librarianship, and though they do provide some helpful insights into library service, their “quasi-military approach” and desire for “customer closeness” with police are incompatible with ensuring that library spaces are safe and welcoming for all patrons (2015, p. 71, 119).
In fact, both authors explicitly acknowledge they are not librarians, as Albrecht states:
…I’m perhaps less forgiving of the rude, angry, eccentric, entitled or threatening patron than you might be. What you are willing to tolerate, because of librarianship’s principles of access or simply because you see these same people day after day, may be different. (2015, p. xi-xii).
Statements like this in the introduction to Library Security should have been a warning sign to the editors that Albrecht was not the ideal author to write a book on a topic of such importance. Given that the available literature on security and policing in public libraries is quite limited, it is disconcerting that these two titles occupy such a prominent place. Library leaders and organizations like the ALA have not been confident enough in their own expertise, opting instead to bring in “experts” from outside the profession, as we have seen with the introduction of CEOs, professional managers, and professional marketing staff.
Kreimer v. Morristown: The First Amendment Framework
Perhaps the most infamous incident involving police in a public library is outlined in Richard R. Kreimer v. Bureau of Police for the Town of Morristown. Mr. Richard Kreimer, a patron of the Morristown Public Library in New Jersey, was often the subject of patron complaints due to his body odour and tendency to stare. When these complaints occurred, Mr. Kreimer was asked to leave the library by staff and “if he refused, the police were called” (Barber, 2012, p. 90). In 1991, after being removed from the premises by police on multiple occasions, Mr. Kreimer sued the Morristown Public Library in a case which eventually ended up before the United States Court of Appeals (USCA), resulting in a landmark decision cited throughout the literature (Barber, 2012; Wong, 2009). The USCA ruled that the First Amendment protects an individual’s right to receive information in an institution like the public library; however, the decision also stated libraries were limited public spaces and, as such, the library administration had the right to remove patrons from the library if they were violating a rule outlined in the code of conduct (Barber, 2012).
This First Amendment framework seeks to balance the rights of the individual against the rights of the majority and is supported by a great deal of the LIS literature on security and policing (Dixon, 2016; Trapskin, 2008; Wong, 2009). Wong frames the discussion as a balance between the needs of “majority users” and “special groups like the homeless” (2009). Though labelling certain patron groups “special” is reductive, this language of the unnamed majority and various minority groups captures the way a great deal of the LIS scholarship addresses these issues. Trapskin suggests that the recent security issues in libraries are the result of a lack of public space in cities more generally and a shift in how library space is used from a quiet study space to a more social space (2008). This struggle to balance the needs of various patron groups is captured well by DeFaveri who explains that “For every person who finds the library safe and pleasant there is another person who feels uncomfortable and unwelcome” (2005, p. 1). Similarly, just as there are patrons who would not enter a library if there were no security guards or police, there are patrons less likely to enter a library because there are security guards or police and both of these concerns need to be addressed.
In addition to providing a theoretical framework for a First Amendment approach to policing and security, these authors also offer practical recommendations including more traditional methods of library enforcement like banning mechanisms, library design, and the familiar suggestion that “library managers should work closely with their police departments” (Dixon, 2016; Trapskin, 2008, p. 76). Beyond these suggestions, the authors offer some progressive responses including partnerships with public health nurses to address mental illness and addictions, de-escalation training for library staff and security guards, and library programming which allows staff to develop relationships with patrons to increase mutual understanding (Dixon, 2016; Trapskin, 2008). While many of the recommendations in this section provide excellent alternatives to involving security guards and police, the persistence of an uncritical approach to policing and security is notable.
The most progressive vision of the relationship between library workers and police within the LIS literature comes from Chancellor’s 2017 article exploring two instances when American libraries opened their doors to the public amidst the unrest following the acquittal of police officers in the deaths of Freddie Gray and Michael Brown. The relationship between police violence and libraries in Chancellor’s examples is somewhat different from the ones considered here, in that the violence was taking place outside of the library and the library functioned as a space of refuge from violence; however, I believe the examples are still instructive.
Though Chancellor’s article does not address the presence of police in libraries explicitly, it does argue that libraries must continue to “serve as safe havens in times of crisis” (2017, p. 2). Because widespread “racial profiling…mass incarceration, and shootings by overzealous police officers of unarmed African Americans are pervasive in today’s society,” ensuring libraries continue to be a safe space for all will require library workers to reassess their relationship with the police (Chancellor, 2017, p. 6). While this does not mean vilifying all police and security and banning them from libraries, to ensure libraries are a safe space for all, library staff will need to consider the effect of police presence on all patrons as well as on their own staff. In the following section, I will highlight some perspectives from outside of LIS which extend Chancellor’s arguments and are relevant to this discussion.
Beyond the Bibliosphere: Perspectives from Outside of LIS
While Albrecht and Graham bring their perspectives from outside LIS, the fact that they published with ALA Editions suggests the audience for their books is still largely within the realm of LIS. Because of this, it is helpful to consider the work of a wide-ranging group of researchers from outside LIS entirely who have been studying issues surrounding policing and security in public spaces. One study of note, very much in line with Albrecht and Graham, found that “police presence can have a strong impact on public fear reduction” (Zhao, Schneider & Thurman, 2002, p. 295). This is relatively unsurprising given that if police presence did not affect public fear whatsoever there would be no reason to have a police force in the first place. At issue here is not that police presence does not have an impact on fear reduction for a portion of the public, but rather that police presence does not reduce fear for the entire public. It is interesting to note that the same study suggested “police presence may not have an influence on making citizens satisfied with police services,” indicating the public wants the police to do more than simply be present (Zhao et al., 2002, p. 295).
In contrast, Warner and Swisher conducted a study documenting the effect of police presence on the health of people of colour (2015). The study explored the variations in self-assessed life expectancy for youth from different ethnic backgrounds and found that black, as well as foreign-born and second-generation Mexican youth, were least likely to believe they would live past the age of thirty-five. The authors theorized that “the lower survival expectations of black youth…may also reflect unmeasured stressors associated with discrimination and concerns about increasing police surveillance, harassment, and violence” (Warner & Swisher, 2015, p. 13). Thus, while police presence may have a calming effect on the public generally, their presence can also have negative health effects depending on a person’s race.
In the last decade, several studies have been published in the United States documenting the increased presence of police in elementary and high schools and the effect of that presence on students. A 2016 report from the American Civil Liberties Union investigating school-related arrests found that Black, American Indian, Hawaiian/Pacific Islander, and Latinx students were much more likely to be arrested at school than white students. Notably, students with disabilities were also three times more likely to be subject to school-related arrest. Students at schools where 80% of the students came from low-income families were also seven times more likely to be arrested than students at schools where 20% came from low-income families (Nelson, Leung & Cobb, 2016, p. 3).
Similarly, Weisburst’s study of police presence in Texas schools found that the rate of suspensions and expulsions increased by 200% and disproportionately affected Black and Hispanic students (2019, p. 338). This was coupled with findings that schools receiving federal grants for police programs saw a 2.5% decrease in high school graduation and a 4% decrease in college enrolment, both disproportionately affecting low-income students. While public schools and public libraries are certainly not perfect analogues, their shared educational mandates and tendency to host diverse groups of people suggest LIS researchers should consider these findings.
Alongside these findings about public perception of police presence are a host of studies detailing what is known as “the weapons effect”. These studies use a variety of methods to document the effect on an individual of simply seeing a firearm. In the original study, participants were placed in a room with a shotgun and a handgun on the table, which the researcher explained were left over from a past study (Bushman, 2013). The control group in this scenario had a badminton racquet and a birdie on their table. The participants were then asked to decide how strong an electric shock to apply to the research assistant in the next room; the group exposed to firearms opted for stronger shocks than the control group. The original study has been replicated more than fifty times, with the most surprising variation finding that even just hearing the name of a weapon can make participants more aggressive (Bushman, 2013). These findings have significant implications for police presence in libraries as they run counter to Albrecht’s belief that police “presence…simply calms things down” (2015, p. 73). While it may be true that police presence decreases public fear for a portion of society, paradoxically, the presence of weapons on these officers can also increase general aggression which in turn can escalate encounters with police.
The changing nature of public space has also been studied extensively by scholars outside of LIS. Tilley suggests public spaces are becoming increasingly commercialized as their purpose shifts toward private consumption (2014). This emphasis on private consumption has resulted in the increased use of private security companies to manage public spaces like parks and plazas. Of particular relevance to the discussion around police presence and ethnicity, Tilley suggests policing practices in these spaces have shifted to deal not only with violent crimes but also with perceived threats. In a move that recalls McGinty’s statement about “aberrant individuals, the homeless and the mentally ill,” maintaining a sense of safety in public spaces has become “largely dependent on the exclusion of racialized bodies, the poor, and those who are deemed undesirable” (2008, p. 117; Tilley, 2014).
These types of exclusion mechanisms and their effects have been documented extensively by journalists and scholars alike, including Desmond Cole, who reported on the disproportionate effects of police street checks, or carding programs, on young black men in Toronto (2015). A 2017 report prepared for the Toronto Police Services Board by faculty from the University of Toronto’s Centre for Criminology and Sociolegal Studies compiled the results of recent studies on carding and police street checks from around the world and concluded “between 19 and 24 of the 27 studies show effects supporting the conclusion that minorities are more likely to be arrested than whites” (Doob & Gartner, 2017, p. A13). Even when youth involved in violent crime in the last year were removed from the equation, there was still a marked difference in involvement with police between white youth and youth of colour, at 10.1% and 28.5% respectively (Doob & Gartner, 2017, p. A11). The critical point of application for library workers is that people “become less engaged with their communities if they are subject to what might be considered ‘unproductive’ police stops” (Doob & Gartner, 2017, p. A13).
In light of this research, the increasing presence of police and security guards in libraries is a part of the broader trend toward the privatization of public space which adversely affects BIPOC. If libraries are to remain safe and welcoming places for all people, library workers must be aware of and combat this shift in the way public space is used and managed.
In order to ensure public libraries are safe for all patrons, library workers must move beyond reacting to the effects of systemic issues and begin to directly address the root causes through strategic partnerships. At least “24 public libraries in the United States currently incorporat[e] social services and social workers”, as do several Canadian public libraries in “Edmonton, Winnipeg, Kitchener, Thunder Bay, Brantford, Hamilton, and Mississauga” (Fraga, 2016; Schweizer, 2018, p. 34). While social workers are becoming more common in library settings, there is still remarkably little data about their effectiveness. This is an area in need of further study.
Halifax Public Libraries (HPL) also recently hired a social worker who has the unique role of overseeing the security staff at HPL (Selman et al., 2019, p. 14). This kind of creative restructuring of traditional staff hierarchies is critical for libraries looking to move “from a culture of suspicion to one of empathy and welcome” (p. 19). As well as introducing social workers, HPL received funding to be able “to offer a free hot beverage and healthy snack to customers twice/week,” an important step in beginning to address the inequality in their community (p. 15). While certainly not part of the traditional function of libraries, this HPL program is an excellent example of library workers taking concrete action to address systemic issues affecting their patrons and incorporates Trapskin’s recommendation to develop “new programs and services that promote even more positive staff and user interaction” (2008, p. 74).
If library workers are going to continue to have relationships with police and security professionals then the two parties must have equal say in decision-making regarding library security. To close the gap that Albrecht describes between what library workers and police are “willing to tolerate,” library workers must have the means to provide meaningful oversight of library security and be free to uphold “librarianship’s principles of access” (2015, p. xi-xii). It is critical that library workers involve as many stakeholders as possible in these conversations, including other professionals partnered with libraries as well as patrons from a variety of diverse communities, especially those patrons most negatively impacted by policing and security methods. Such systems are already in place at the Thunder Bay Public Library where, in response to patron concerns about not feeling safe in their libraries, they developed “a Community Action Panel, Youth Advisory Council and Indigenous Advisory Council who give…guidance on safety matters” (Selman et al., 2019, p. 19).
Though librarians and library workers may not be trained as social workers and at times may struggle with the increasingly social role of the profession, collaboration with mental health counsellors, public health nurses, social workers, and other resources in the community can help ease some of this tension. Further research should identify and assess existing programs and examples of interprofessional collaboration which provide alternatives to security and policing. The increased use of video surveillance in libraries and its effect on patron privacy and safety, particularly for BIPOC patrons, is also relevant here and worthy of further consideration.
For too long, the negative effects of police and security presence in libraries have been ignored or, at the very least, neglected. Police officers and security guards should be used judiciously just as one would use any other security tool available to library workers. If libraries are to be “safe havens” for all patrons as Chancellor describes, then the role of police and security guards must be reconsidered by library workers themselves (2017, p. 2). If we are to truly uphold the value of universal access to public libraries then we must continue to ask ourselves Barry’s excellent question “who gets the right to feel and be safe” and who does not (2015)?
I would like to thank my wonderful peer reviewers Sunny Kim and Ian G. Beilin as well as my wonderful ITLWLP editor Sofia Leung; they were thoughtful, kind and deeply intelligent presences throughout the process. I would also like to thank Dr. Ajit Pyati for guiding the Individual Study where this article began to take shape and Mark Standish for wise counsel during that process.
Albrecht, S. (2015). Library Security: Better communication, safer facilities. Chicago: ALA Editions.
Barber, G. (2012). The Legacy: Kreimer v. Bureau of Police, Twenty Years Later. Library & Archival Security 25(1), 89-94. Retrieved from https://www-tandfonline-com.proxy1.lib.uwo.ca/doi/full/10.1080/01960075.2012.657948
Barry, D. (21 Nov 2015). Police don’t make everyone feel safe – not when you’re seen as the enemy. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2015/nov/21/police-surveillance-safety-seen-as-the-enemy
Caruk, H. (26 Feb 2019). Metal detectors, bag searches greet library patrons as new security measures start. CBC News. Retrieved from https://www.cbc.ca/news/canada/manitoba/security-measures-implemented-library-1.5033830
Chancellor, R. L. (2017). Libraries as Pivotal Community Spaces in Times of Crisis. Urban Library Journal, 23(1). Retrieved from http://academicworks.cuny.edu/ulj/vol23/iss1/2
Clark, I. (24 Aug 2016). Public libraries, police and the normalisation of surveillance. Retrieved from http://infoism.co.uk/2016/08/police-libraries/
Cole, D. (21 Apr 2015). The Skin I’m In: I’ve been interrogated by police more than 50 times—all because I’m black. Toronto Life. Retrieved from https://torontolife.com/city/life/skin-im-ive-interrogated-police-50-times-im-black/
DeFaveri, A. (2005). Breaking Barriers: Libraries and Socially Excluded Communities. Information for Social Change. 21. Retrieved from http://libr.org/isc/articles/21/9.pdf
Dixon, J. A. (2016). Safety First. Library Journal, 141(9), 29-31. Retrieved from https://lj.libraryjournal.com/2016/05/managing-libraries/safety-first-library-security/
Doob, N. & Gartner, R. (2017). Understanding the Impact of Police Stops: A report prepared for the Toronto Police Services Board. Centre for Criminology and Sociolegal Studies: University of Toronto. Retrieved from http://www.tpsb.ca/items-of-interest/send/29-items-of-interest/552-understanding-police-stops
Ferreras, J. (1 Mar 2018). Shocking video shows man kicking librarian at Richmond community meeting. Global News. Retrieved from https://globalnews.ca/news/4057307/richmond-kick-librarian/
Forrest, D. (2005). Security at Western Kentucky University Libraries. Library & Archival Security, 20(1-2), 89-97. Retrieved from https://www-tandfonline-com.proxy1.lib.uwo.ca/doi/abs/10.1300/J114v20n01_05
Fraga, J. (29 Mar 2016). Humanizing homelessness at the San Francisco Public Library: A social worker connects at-risk library patrons with resources and a chance to give back. City Lab. Retrieved from http://www.citylab.com/navigator/2016/03/humanizing-homelessness-at-the-sanfrancisco-public-library/475740/
Graham, W. (2012). The Black Belt Librarian: Real-world safety & security. Chicago: ALA Editions.
McGinty, J. (2008). Enhancing Building Security: Design Considerations. Library & Archival Security 21(2), 115-127. Retrieved from https://www-tandfonline-com.proxy1.lib.uwo.ca/doi/full/10.1080/01960070802201474
Mosby, C. (8 Jun 2017). Lakewood Police Officer Sued For Breaking Girl’s Jaw. Lakewood Patch. Retrieved from https://patch.com/ohio/lakewood-oh/lakewood-police-officer-sued-breaking-girls-jaw
Nelson, L., Leung, V. & Cobb, J. (2016). The Right to Remain a Student: How California School Policies Fail to Protect and Serve. ACLU of California. Retrieved from www.aclunc.org/publications/right-remain-student-how-ca-school-policies-fail-protect-and-serve
Panza-Beltrandi, G. (23 Jan 2019). Yellowknife librarian details incident that led to increased security. CBC News. Retrieved from https://www.cbc.ca/news/canada/north/yellowknife-library-gets-security-guard-1.4989033
Schweizer, E. (2018). Social workers within Canadian public libraries: A multicase study (Unpublished master’s thesis). University of Calgary, Calgary, Canada. Retrieved from https://prism.ucalgary.ca/bitstream/handle/1880/106632/ucalgary_2018_schweizer_elizabeth.pdf?sequence=1&isAllowed=y
Selman, B., Curnow, J., Dobchuk-Land, B., Cooper, S., Samson, J. K., & Kohan, A. (9 September 2019). Millennium For All Alternative Report on Public Library Security. doi: https://doi.org/10.31229/osf.io/vfu6h
Tilley, J. (3 Mar 2014). Social Exclusion and Public Space. Imagining Justice. Retrieved from http://uprootingcriminology.org/blogs/social-exclusion-public-space/
Trapskin, B. (2008). A changing of the guard: Emerging trends in public library security. Library & Archival Security, 21(2), 69-76. Retrieved from https://www.tandfonline.com/doi/abs/10.1080/01960070802201359?src=recsys&journalCode=wlas20
Warner, T.D. & Swisher, R.R. (2015). Adolescent Survival Expectations: Variations by Race, Ethnicity, and Nativity. Journal of Health and Social Behavior, 1-17. doi: 10.1177/0022146515611730
Weisburst, K. (2019). Patrolling Public Schools: The Impact of Funding for School Police on Student Discipline and Long‐term Education Outcomes. Journal of Policy Analysis and Management 38(2), 338-365. doi: 10.1002/pam.22116
Wong, Y.L. (2009). Homelessness in Public Libraries. Journal of Access Services, 6(3), 396-410. Retrieved from https://www-tandfonline-com.proxy1.lib.uwo.ca/doi/full/10.1080/15367960902908599?src=recsys
Yogaretnam, H. (28 May 2018). Police ask for public’s help after librarian sexually assaulted in city’s 13th homicide of 2018. Ottawa Citizen. Retrieved from http://ottawacitizen.com/news/local-news/police-ask-for-publics-help-after-librarian-sexually-assaulted-in-citys-13th-homicide-of-2018
Zhao, J., Schneider, M., & Thurman, Q. (2002). The effect of police presence on public fear reduction and satisfaction: A review of the literature. The Justice Professional, 15(3), 273-299. Retrieved from https://journals-scholarsportal-info.proxy1.lib.uwo.ca/details/08884315/v15i0003/273_teopposarotl.xml
This post was written by Andreas Orphanides, who received a Focus Fellowship to attend the 2019 DLF Forum.
Andreas is Associate Head, User Experience at the NC State University Libraries. His work focuses on developing high-quality, thoughtfully designed solutions to support teaching, learning, and information discovery. He holds a Bachelor of Arts from Oberlin College, a Master of Science in Library Science from UNC-Chapel Hill, and a Master of Computer Science from NC State University. His professional interests include human factors, systems analysis, and design ethics.
Gone in 90 Seconds
Constrained writing is the practice of creative writing under artificial restrictions: you might challenge yourself not to use the letter e, or write a short story in under 100 words, or fit your thoughts into sonnet form. The idea is that writing under these constraints forces you to summon creative approaches to problem solving.
We don’t have a lot of equivalents to constrained writing in the academic realm. The closest we get is various sorts of structured presentations: 7×7 / Pecha Kucha, Battle Decks, lightning talks. The lightning talks at DLF are an unusually short 90 seconds — less than a third the length of the shortest talk I’d ever given previously.
How do you make a meaningful talk that fits into a minute and a half? The first step is to pick an idea. No, pick a smaller idea than that. OK, then you write it down. Now cut out half the words. OK, try saying all those words: can you do it in less than 90 seconds? Probably not. Cut some more words out. Is the idea still coherent? Barely. Good enough!
Then there’s the matter of the slides. It’s simply not possible to fit an idea that’s all that complex into 90 seconds. That said, for any complexity you do have, the visuals are going to have to do a lot of the heavy lifting. OK, so you’ve got some slides, with some words, and some lines, and a chart or two: they’re spartan, but functional. But if we’re going to get people’s attention over the span of our short talk (if they get distracted, they’ll miss it!) we’ve got to give a little life to the slides. The solution? Redraw them all using a magic marker. No problem.
Now it’s time to put it all together. You’ve got your slides: intended to be just kindergarten-looking enough to catch people’s attention. You’ve got your speaker notes: meticulously curated, parsimonious, perfectly timed to the slides. A few practice run-throughs. 87 seconds — that gives a 3% margin of error. It’ll have to do.
A few days later, you arrive at the venue. It’s your first DLF — it’s not your usual scene, and you’re not 100% sure how you ended up here, in fact. You feel like a lot is riding on this — you’ve got a special opportunity as a Forum Fellow to do this talk, and you do NOT want to screw it up. Anyway, it’ll be great. You’ve practiced. You’ve got presenter mode. What could go wrong?
One thing that could go wrong is that presenter mode isn’t available on the plenary podium. You’ll have to print notes. It’s an hour before your presentation. The organizers don’t have a printer available. Oh, and no one ever seems to be in the business center.
In a final act of creativity under constraint, you maybe trick the hotel’s airline check-in computer (“BOARDING PASSES ONLY,” scolds the sign) into printing out your slide notes.
It’s 5 pm. It’s the plenary session and you’re waiting for your turn to talk to your 500 newest friends about a postage-stamp sized topic near and dear to your heart. You’re, what, fifth in the lightning talk queue, and somehow the talks ahead of you are both taking forever and going by so fast that you can’t keep up with what the presenters are saying. And then it’s your turn.
At last, the podium. Time becomes an abstraction; reality a haze. It just happens: squint at the notes, say the words, click the slides at the right time. Pretend to look at the audience every now and then. Somehow avoid screwing up, for the most part. You wrap up just before the unceremonious interruption of the buzzer. In a heady daze, you stumble back to your seat and try (and fail) to absorb the rest of the lightning talks.
Afterwards, there’s some flattering tweets, some compliments about your amazing marker-scribbling skills, some heavy hors d’oeuvres. In the final assessment, you feel like you acquitted yourself well. And then you sleep for 14 hours.
Authors practice constrained writing to inspire creativity. DLF’s 90-second lightning talks, precisely because of their absurd brevity, allow (and require) participants to pour a lot of creativity into a tiny package. Writing and delivering a lightning talk — and seeing the other great talks in the lightning talk session (I swear, I kind of remember some of them!) — was a great enhancement to my first DLF experience. We don’t get a lot of opportunities to undertake constrained writing in our line of work. I encourage you to try it.
 This never happened, I wasn’t there, plausible deniability, etc.
This post was written by Michael B. Toth. Michael is President of R.B. Toth Associates. He has provided technical and management support for advanced digitization programs for over 20 years in libraries around the globe.
As I boarded my flight to Tampa at Washington Dulles Airport, I wondered what my first Digital Library Federation Forum meeting would be like. I didn’t have to wait until arrival in Tampa to find out: as I sat near the front of the aircraft, numerous fellow passengers greeted the person sitting next to me as they boarded. I finally turned to her and asked if they all worked together and were heading to a meeting. She said they were all heading to the same conference in Tampa: the DLF Forum. I noted I was headed there as well and that it would be my first, to which she replied, “You’ll love it – it’s a great group of people!” That brief prelude to the DLF Forum proved to be an accurate introduction. When I arrived at the Forum it was indeed great to see the informal interactions as well as the session presentations and discussions on important issues for digital libraries. I was also impressed with the leadership role of CLIR in making sure the Forum addressed the needs and goals of such a diverse group. CLIR did amazing work organizing the Forum in such a relaxing venue, while somehow avoiding the worst of hurricane season and Florida heat (waterview photo attached from my morning stand-up paddleboarding).
With so much going on, I obviously couldn’t be everywhere, nor reflect more than my own DLF novice perspective on the Forum. With my two decades of work on preserving technical data for future generations, I was pleased to see one area of focus reflected in several sessions and the interests of over five dozen participants at the DLF Forum and, of course, DigiPres: the need for sustainable data repositories and digital preservation. Many participants cited the importance of workflows, open access, and prioritization. One topic I did not hear addressed was the importance of standards to ensure collections could be preserved for future generations in a form and format that is both machine-readable and comprehensible for humans. All the best workflows, collection plans, and prioritization will only yield effective results when the data are accessible to users and their technology for years to come.
Giao Luong Baker of Duke, Uwe Bergmann of the Stanford National Accelerator Laboratory, and I attended this year’s Forum to highlight how advanced digitization systems can offer new insights into library collections when used effectively with integrated data, program management, and cross-disciplinary skills. Yet, equally important was for us to understand the DLF institutions’ and users’ resources and needs for technical solutions. Many institutions simply do not have the resources to capitalize on advanced digitization technology, despite the need. This highlighted the potential for some institutions (perhaps on a regional basis) to serve as technology centers where their imaging systems could be available to support under-resourced institutions, perhaps on a cost sharing basis or as research grant partners. This could allow smaller institutions to tap the infrastructure and technology available in better-resourced institutions to support digital research into collection items.
One topic that I was disappointed not to hear discussed at the Forum was a key ongoing legal case that could determine public accessibility to government documents, with potentially far-reaching impact on libraries and government records transparency. Supreme Court of the United States Case 18-1150, Georgia v. Public.Resource.Org, Inc., notes: “The question presented is: Whether the government edicts doctrine extends to—and thus renders uncopyrightable—works that lack the force of law, such as the annotations in the Official Code of Georgia Annotated.” On the final day of the DLF Forum, the American Library Association filed an amici curiae brief supporting free access to the official code of a state. They note:
“Citizens patronize libraries to access and learn about the law and their government. Citizens also rely on libraries to preserve our cultural heritage, including our nation’s laws. By reaffirming the government edicts doctrine, the Eleventh Circuit’s decision assists libraries in fulfilling these roles. The “force of law” standard pressed by the State of Georgia, on the other hand, would implausibly exclude important portions of the law from the public domain, and would do so in a confusing, unadministrable manner. The ensuing uncertainty would undermine libraries’ ability to connect citizens with the law. Amici thus respectfully request that the Court affirm the Eleventh Circuit’s decision that the Official Code of Georgia Annotated (“O.C.G.A.”) falls under the government edicts doctrine, and thus is, in its entirety, not copyrightable.”
On 2 December I attended the Supreme Court oral arguments in this case. The Justices’ questions did not fall along the frequently cited conservative and liberal positions, but ones of authorship and online access. During these arguments in the beautiful Sienna marble courtroom, I reflected upon the sincere concerns, ideals, and goals of the DLF members and Forum participants that are at stake.
For about the past year or so, I’ve been interested in working on a tool that would allow a user to take records that include an LC call number and generate LCSH headings for those records. In theory, I’ve thought it was generally possible. Library of Congress call numbers roughly correspond to parts of LCSH – at least for the primary subject, location, or subject/location pair. The problem is that this correspondence isn’t at a granular level, and doesn’t expose any subjects or topics that might be related to a specific item. And then, of course, there is the pesky issue that, by and large, most of the data I would need to poll doesn’t exist in a place that is easy to automate against. Sure, LC makes its data available, but often as PDFs. The Linked Data tools don’t cover my need, and the resources that would be really, really helpful are locked up in tools like Cataloger’s Desktop. So I let this one go for a little while so I could ruminate on the problem.
While letting this go, I spent this year doing some work within MarcEdit to enable the creation, loading, and management of knowledge graphs (fairly small – 1 million items or less) within the application. These are built in memory, and are performant. The work is built upon a couple of .NET components (like dotnetrdf) and some glue written into MarcEdit’s linked data platform, to let me think about how I might address the import, creation, and editing of records using application profiles (so I could think about Bibframe or whatever comes after). In completing this work, however, I had an idea about how I might address the LCSH generation. Since it would be almost impossible to actually generate subjects out of thin air, the next best thing would be to take information about a record, develop a knowledge graph of information related to that record, and then extract the common subjects that are most likely applicable to the record at hand.
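MarcEdit’s graph layer lives in .NET components like dotnetrdf, so the following is purely an illustration of the underlying idea rather than the actual implementation: an in-memory triple store is, at its simplest, a collection of (subject, predicate, object) tuples plus pattern matching with wildcards. A minimal Python sketch:

```python
class TripleStore:
    """Toy in-memory triple store: add (subject, predicate, object)
    triples and match them against patterns where None is a wildcard."""

    def __init__(self):
        self.triples = []

    def add(self, s, p, o):
        self.triples.append((s, p, o))

    def match(self, s=None, p=None, o=None):
        # Return every triple consistent with the (possibly wildcarded) pattern.
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("record:1", "dc:subject", "Libraries")
store.add("record:1", "dc:subject", "Open source software")
store.add("record:2", "dc:subject", "Libraries")

# All subjects asserted for record:1
subjects = [o for (_, _, o) in store.match(s="record:1", p="dc:subject")]
```

Real RDF libraries add indexing, SPARQL, and serialization on top of this, but the pattern-matching core is what makes a small in-memory graph fast enough to query interactively.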
I started writing workflows on my whiteboards at home, and came up with something that roughly looks like the following:
Essentially, to “generate subjects”, the tool starts with a query derived from breaking down the LC call number found within the record. Using that as a starting point, the tool queries either WorldCat (using the Search API) or the U.S. Library of Congress (currently with Z39.50, though I’d love to transition to SRU if the call number index could be enabled) to get an initial set of records. From there, the tool breaks down those records and starts to build a graph, looking for common threads. As threads are discovered, new queries are spawned. This happens quickly, across asynchronous threads. Once a corpus is set, the tool evaluates the available subjects and selects those that meet a specific threshold and share commonalities. This means the generation process isn’t static: generating subjects across the same set of records can produce minor differences between runs, as a thread ignored in building one graph may be promoted in a second run (I don’t recommend multiple runs – I just found this interesting), creating near but not always exactly the same suggestions.
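The final step, extracting the common subjects once the corpus is set, can be sketched as a frequency threshold over the headings harvested from related records. This is a toy illustration only: the `threshold` knob and the simple once-per-record counting are my own assumptions, not MarcEdit’s internal weighting.

```python
from collections import Counter

def suggest_subjects(records, threshold=0.5):
    """Given subject-heading lists harvested from records that share a
    call number range, suggest headings appearing in at least
    `threshold` (a fraction) of the records."""
    counts = Counter()
    for headings in records:
        counts.update(set(headings))   # count each heading once per record
    cutoff = threshold * len(records)
    return [h for h, n in counts.most_common() if n >= cutoff]

# Headings pulled from four hypothetical records in the same call number range
related = [
    ["Cataloging", "Metadata"],
    ["Cataloging", "Classification"],
    ["Cataloging", "Metadata", "MARC formats"],
    ["Information organization"],
]
print(suggest_subjects(related))   # -> ['Cataloging', 'Metadata']
```

The threshold is what makes the output stable-ish rather than deterministic: if a second run harvests a slightly different corpus, headings near the cutoff can move in or out of the suggestion list.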
But does it work?
Well, if you can live with the caveat that these subjects are generated based on similarities to other records, rather than out of thin air, I’ve found that this approach works pretty well. It works best if you have a WorldCat API key (or can get one), as the corpus of records to query is much larger and the response time is a lot better than Z39.50 (maybe SRU would solve that issue). The process isn’t super fast, but I think the trade-off is worth it.
And if I don’t have Call Numbers?
Well, MarcEdit and OCLC have you covered. MarcEdit leverages OCLC’s Classify API, allowing you to build call numbers:
OCLC makes this tool public under very permissive usage terms. So you can use it to generate call numbers, and then use the Generate LCSH function to generate subjects. To make sure this works, I pulled a sample of 1,200 records from Harvard’s open MARC records set. Within the set, there were only around 400 LC call numbers. Using the call number tool, that number rises to roughly 1,100. I then deleted all subjects and asked the tool to generate new ones. It created suggestions for 90% of the records with call numbers, taking ~4 minutes.
When can I try this?
I’m currently allowing users in the MarcEdit community to test the work. I made a call and have emailed those interested with links to the beta software for testing. I’m hoping that within two weeks or so, I’ll hear from folks that this is working as expected, and I’ll move the tool into a production release of MarcEdit.
Can I learn more?
I posted about this work on Twitter over the weekend; you can find more detail in that thread.
Librarians are among the strongest proponents of open source software. Paradoxically, libraries are also among the least likely to actively contribute their code to open source projects. This article identifies and discusses six main reasons this dichotomy exists and offers ways to get around them.
Libraries share a number of core values with the Open Source Software (OSS) movement, suggesting there should be a natural tendency toward library participation in OSS projects. However, Dale Askey’s 2008 Code4Lib column entitled “We Love Open Source Software. No, You Can’t Have Our Code,” claims that while libraries are strong proponents of OSS, they are unlikely to actually contribute to OSS projects. He identifies, but does not empirically substantiate, six barriers that he believes contribute to this apparent inconsistency. In this study we empirically investigate not only Askey’s central claim but also the six barriers he proposes. In contrast to Askey’s assertion, we find that initiation of and contribution to OSS projects are, in fact, common practices in libraries. However, we also find that these practices are far from ubiquitous; as Askey suggests, many libraries do have opportunities to initiate OSS projects, but choose not to do so. Further, we find support for only four of Askey’s six OSS barriers. Thus, our results confirm many, but not all, of Askey’s assertions.
This blog is part of a series showcasing projects developed during the 2019 Frictionless Data Tool Fund.
The 2019 Frictionless Data Tool Fund provided four mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research built using the Frictionless Data specifications and software. This fund is part of the Frictionless Data for Reproducible Research project, which is funded by the Sloan Foundation. This project applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate reproducible data workflows in research contexts.
Frictionless DarwinCore, developed by André Heughebaert
André Heughebaert is an open biodiversity data advocate in his work and his free time. He is an IT Software Engineer at the Belgian Biodiversity Platform and is also the Belgian GBIF (Global Biodiversity Information Facility) Node manager. In these roles, he has worked with the Darwin Core standards and open biodiversity data on a daily basis. This work inspired him to apply for the Tool Fund, where he has developed a tool to convert DarwinCore Archives into Frictionless Data Packages.
The DarwinCore Archive (DwCA) is a standardised container for biodiversity data and metadata largely used amongst the GBIF community, which consists of more than 1,500 institutions around the world. The DwCA is used to publish biodiversity data about observations, collection specimens, species checklists, and sampling events. However, this domain-specific standard has some limitations, mainly the star schema (core table + extensions), rules that are sometimes too permissive, and a lack of controlled vocabularies for certain terms. These limitations encouraged André to investigate emerging open data standards. In 2016, he discovered Frictionless Data and published his first data package, on historical data from the 1815 Napoleonic Campaign of Belgium. He was then encouraged to create a tool that would, in part, build a bridge between these two open data ecosystems.
As a result, the Frictionless DarwinCore tool converts a DwCA into a Frictionless Data Package, and also gives access to the vast Frictionless Data software ecosystem, enabling constraint validation and support for a fully relational data schema. Technically speaking, the tool is implemented as a Python library and is exposed as a command line interface. The tool automatically converts:
* the DwCA data schema into datapackage.json
* EML metadata into a human-readable markdown README file
* data files, where necessary (that is, when default values are described)
The resulting zip file complies with both the DarwinCore and Frictionless specifications.
André hopes that bridging the two standards will give the GBIF community an excellent opportunity to provide open biodiversity data to a wider audience. He says this is also a good opportunity to discover the Frictionless Data specifications and assess their applicability to the biodiversity domain. In fact, on 9th October 2019, André presented the tool at a GBIF Global Nodes meeting, where the node managers community received it as exploratory and pioneering work. While the command line interface offers a simple user interface for non-programmers, others might prefer the more flexible and sophisticated Python API. André encourages anyone working with DarwinCore data, including all data publishers and data users of the GBIF network, to try out the new tool.
“I’m quite optimistic that the project will feed the necessary reflection on the evolution of our biodiversity standards and data flows.”
To get started, the tool is installed with a single pip install command (full directions can be found in the project README). Central to the tool is a table of DarwinCore terms linking a Data Package type, format and constraints to every DwC term. The tool can be used as a CLI directly from your terminal window, or as a Python library for developers, and it works with either a locally stored or an online DwCA. Once converted to a Tabular Data Package, the DwC data can be ingested and further processed by software such as Goodtables, OpenRefine or any other Frictionless Data software.
André has aspirations to take the Frictionless DarwinCore tool further by encapsulating it in a web service that will deliver Goodtables reports directly from a DwCA, making it even more user friendly. Another idea for improvement is an import pathway for DarwinCore data into OpenRefine, which is a popular tool in the GBIF community. André's long-term hope is that the Data Package will become an optional format for data download on GBIF.org.
We are proud to announce that Jessica Gilbert Redman will be the new editor of the LITA Blog.
Gilbert Redman has been the web services librarian at the University of North Dakota for the past three years. She coordinates and writes for the library blog and maintains the library website. She has completed a post-graduate certificate in user experience and always seeks to ensure that end users are able to easily find the information they need to complete their research. Additionally, she realizes communication is the key component in any relationship, be it between libraries and their users or between colleagues, and she always strives to make communication easier for all involved.
“I am excited to become more involved in LITA, and I think the position of LITA Blog Editor is an excellent way to meet more people within LITA and ALA, and to maintain a finger on the pulse of new and emerging library technologies. My goal for the LITA blog is to give a voice to more people involved with technology in libraries and to keep everyone up-to-date with how libraries are using technology in new and innovative ways. I want to be sure we’re telling the stories that we don’t always get to hear in order better learn about making libraries more helpful for our users,” said Gilbert Redman.
Watch for calls from Jessica for regular and guest contributors coming soon.
Thanks again to everyone involved for another successful community sprint. During those two weeks, we reviewed all of our existing documentation and added quite a few new pages to support the new features we've picked up since 1.0.0 was released. You can see it all at islandora.github.io/documentation.
Make an Image
POST / PUT
How to Contribute
Installing Features in Karaf
Creating Resource Nodes
Manual Install Docs
One particularly awesome contribution that we received during the sprint was manual install docs from Daniel Aitken at discoverygarden. It is the last remaining unmerged piece of the sprint, which is unfortunate because we'd really love to have these docs. If anyone out there has the time to follow all the steps and build a box by hand, we'd really love to have your feedback so we can publish them as soon as possible.
We've integrated our documentation workflow with TravisCI and are now deploying the documentation to https://islandora.github.io/documentation every time markdown is merged into master. This means we'll always have up-to-date documentation available for the latest features!
We at the Islandora Foundation would like to sincerely thank everyone who participated during the sprint. Because of continuing contributions like yours, we can continue to provide high quality free and open source software.
Daniel Aitken - discoverygarden
Janice Banser - Simon Fraser University Library
Caleb Derven - University of Limerick
Mark Jordan - Simon Fraser University Library
Rosie Le Faive - University of Prince Edward Island
This post was written by Weiwei Shi, who received an ARL+DLF Fellowship to attend this year’s DLF Forum.
Weiwei is the Digital Initiatives Applications Librarian at the University of Alberta. She leads the DI development team and manages its development activities to support DI applications and services that enable openness, diversity, and accessibility. She and her team support a wide range of applications, such as discovery services, the digital asset management system, various repository services, digital preservation, and the research data management workflow. She has a strong interest in project management, user-centred design, and web accessibility for library applications. She has a BSc in management information systems from China and an MLIS from the University of Alberta.
As I started to write this reflection on DLF 2019 by going through the 34 pages of notes I took over three days, I relived those instructive and transformative moments at the conference. The journey started with the opening keynote speaker, Marisa Duarte. And that is what this reflection will be focusing on. However, I want to note that it is truly the underlying theme of Equity, Diversity, and Inclusion (EDI) for the entire conference that provoked my deep thoughts and uplifted me emotionally.
Duarte’s session, titled “Beautiful Data: Justice, Code, and Architectures of the Sublime”, left me with a wealth of food for thought and very profound feelings. As a first-generation Chinese immigrant, the notion of “algorithmic domination” is something that often crosses paths with me in life. But I found myself deliberately separating my professional identity from my social self, adjusting my professional image to fit into the “white-dominated norm,” to follow the system instructions that I learned in western society to help me progress professionally. As a regular Chinese user of various digital library applications, I feel frustrated with the imperial system designs that center white-dominated perspectives and neglect the needs of marginalized groups. However, I suppress that frustration in my professional self. For example, it is nearly impossible to search for materials in library discovery applications and digital collections in my native language. Many aspects contribute to this broken system: from the awkward PinYin-based or little-known English translation records to the neglect of CJK language support in search algorithms, from the often non-existent CJK OCR to biased collection development practices. But I have never challenged the system, even though I have worked in the field of library technology for many years. I accepted the norm because I didn’t want to, in Duarte’s words, “cause too much trouble,” especially because of my race. It is through her powerful statements, “The seeds of our liberation are embedded in the code we create” and “With your degree, you are literally entitled, with a certain authority. And it’s your responsibility to do something with that,” that I found the strength to examine the internalized oppression that has nearly permeated my professional identity.
Her presentation repeatedly stressed that social justice, diversity, and equity are directly embedded in the data we collect and present (or discard), the technology we use, the algorithm we create, and the platform we develop. And she painted an inspiring image for the profession with her powerful language:
“If conquerors write the history, then librarians and archivists should keep and expose the records of their crimes.”
“Think of yourselves as compassionate time travellers – not only caring for the documents of the past, but also as stewards of documents of the future.”
Looking around the ballroom, I could sense the energy inspired by her talk. I feel both privileged and obligated to be part of this force that shapes the future: a future in which my community can interact with the collections of our own records, and discover and access them in a way that is sensible and relevant to them. Just a few weeks before the conference, a fellow Chinese librarian and I discussed the possibility of establishing a Canadian-Chinese digital library that enables the dissemination of community collections across the country. Now, more than ever, after Duarte’s inspiring keynote, I feel an urgency to take on that responsibility. To paraphrase Duarte, we need to break the library rules and norms, and not be stuck in the tyranny of bureaucracy. We need to come together and negotiate boundaries for social justice and EDI. And that “we” includes me and many others who, like me, have tried so hard to “blend in”.
On my flight to DLF, I imagined the conference to be very practical, a great blend of content curators, metadata librarians, and library information technology practitioners such as myself, talking through operational challenges and trends for our digital library applications. And yes, in a sense, the conference was like that. But Duarte’s keynote provoked profound conversations, reinvigorated our collective passion, and allowed me to reflect on personal values and the places where I can, and should, more proactively contribute my voice and knowledge.
I want to express my sincere gratitude both to the ARL+DLF fellowship that gave me this opportunity and to Marisa Duarte for such an enlightening and motivating speech.
I had my eye on attending DLF well before being chosen as a GLAM Cross-Pollinator from ARLIS/NA. I am currently the Arts Digital Projects Librarian at the Robert B. Haas Family Arts Library at Yale University, and primarily work on a crowdsourced transcription project around historical Yale theater programs called Ensemble@Yale. This project has a lot of components; many are digital, of course, but our emphasis has been fostering and serving the project’s community. My experiences with this digital project are a perfect parallel to those at DLF. I expected takeaways revolving around digital issues and left with the strong community attached to them.
Going into DLF for the first time, I anticipated revelations about the digital components of my work (data, project management, accessibility, digital preservation, etc.), matching my “of course” sentiment around Ensemble@Yale. Of course, attending this conference expanded my knowledge in these areas. Of course, sessions I attended were on workflows and processes. Of course, many conversations revolved around digital topics. What I hadn’t anticipated was the emphasis on the people around and the community behind all of those technical layers to our work.
Small practices to create an inclusive, comfortable, and person-centered environment impressed me. Using the mic, sharing slides and notes, scheduling substantial breaks, providing gender neutral bathrooms, and asking for pronouns were all noted in my observations of this inclusive approach. Moreover, many of these practices provided me the opportunity to feel like part of the #DLFvillage and comfortably join conversations like the Growing with the crowd – crowdsourcing and digital collections working session facilitated by Samantha Blickman and Ben W. Brumfield. Gathering with others who were either working on or considering crowdsourced projects transformed into discussions of the core tenet of serving our volunteers. What began as talks of logistics and data considerations quickly shifted to the people-centered nature of the work. Everyone working on an existing project emphasized this as the most important factor; making it fun, inclusive, and easy for volunteers were all points of advice from those working on projects already.
Two presentations that were not as directly related to my work opened up impactful discussions that were new to me. The first was Privacy matters: Incorporating surveillance pedagogy into library instruction by Andy Boyles Petersen, and the second was Ethical Digital Scholarship and Prison Labor? by Alexis Logsdon. Incorporating privacy into information literacy instruction was something I hadn’t seen explored in this way, and I had never heard of libraries using prison labor. Both shed light on how we as information professionals can strive to protect people through our practices.
Every presentation I attended tracked back to the people-centered nature of digital work. Of course, I walked away from DLF invigorated and with a list of ideas to improve my project workflows through practices I learned. I also left with the thread of people-centered work running through it all and hope to return to the village and see some of those people again soon.
I mentioned in my previous post that I was looking forward to works entering the public domain in the US as a routine annual event. This coming January 1, we’ll have the second large expiration of copyrights in the US since 1998 (the first being the most recent January 1). I’ve sometimes heard cynicism about whether this would ever happen. Some public domain fans will tell you that Disney has been responsible for keeping valuable characters like Mickey Mouse out of the public domain for decades, and that they’ll force more copyright extensions through before his copyright is scheduled to expire in a few years.
Personally, I think that’s a myth that makes despair too easy. While Disney has indeed been one of the companies that has lobbied for longer copyrights, they’re far from the only group that has done so, and their role is often exaggerated relative to other entertainment and publishing industry groups. Moreover, copyrights to some of their most profitable characters are already starting to expire, and so far they have neither pushed hard to extend them, nor, to my knowledge, seen a loss in their profitability.
I’m referring here to the A. A. Milne characters that Disney now owns (after some earlier legal battles were resolved): Winnie-the-Pooh, and his friends Christopher Robin, Piglet, Owl, and the other inhabitants of the Hundred Acre Wood. By some accounts they are as profitable as Mickey, possibly more so. And they probably will remain profitable even as their copyrights end.
Christopher Robin’s first published appearance as a character, for instance, was in the poem “Vespers” (whose most famous line is “Christopher Robin is saying his prayers”). It appeared in the January 1923 issue of Vanity Fair, and is already in the public domain in the US, as of the start of this year. Beginning in January 1924, Milne published more children’s poems featuring Christopher Robin and others in Punch, which were republished later in the year in the book When We Were Very Young. That book was an international best-seller, and along with the later book Winnie-the-Pooh, it launched Milne, his son, and his stuffed toys to worldwide fame and fortune (which, as I noted in last year’s post on Milne’s Success, was at best a mixed blessing for them.)
I find When We Were Very Young a delightful book. Like its successors, it isn’t straight-up nostalgic whimsy, but has a gently wry sensibility that parents may notice more readily than their children do. Along with “Vespers”, some of the other verses (like the ones about changing guards at Buckingham Palace, and the king who likes “a little bit of butter in my bread”) are still well-known. But the book is significant not just for the text, which is already in the public domain in some other countries with terms less than “life plus 70 years”, but for Ernest Shepard’s illustrations for the book, which will be joining the public domain in the US along with Milne’s poems. Those include recognizable likenesses not just of Christopher Robin, but of a certain bear that appears a few times, including in the upper left of the book’s cover:
The bear doesn’t yet have the name “Winnie-the-Pooh”. In this book he’s just called “Teddy Bear”, or more formally, Mr. “Edward Bear” (a name also used in his later book). His appearance, as established in this book, joins the public domain next month. The year after that, it will be joined by his “Pooh” name and his first prose story (“The Wrong Sort of Bees”, published in London’s Evening News at Christmastime 1925). The following year, most of the rest of Pooh’s Hundred Acre Wood friends will join the public domain, along with the book Winnie-the-Pooh (which includes Pooh’s bee story as its first chapter). Tigger, who bounced into print two years later in The House at Pooh Corner, will be the last of the major Milne characters to join the public domain, the same year as we can expect Mickey Mouse’s first copyrights to expire in the US.
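The timeline above follows from simple arithmetic: under current US law, works published in this era (with copyright properly secured and renewed) get a 95-year term and join the public domain on January 1 of the following year. A quick sketch:

```python
# Back-of-envelope check of the public-domain dates discussed above,
# for US works published 1923-1977 whose copyrights were renewed:
# a 95-year term means public domain on January 1 of pub_year + 96.
def us_pd_year(pub_year: int) -> int:
    return pub_year + 95 + 1

assert us_pd_year(1923) == 2019  # "Vespers" in Vanity Fair
assert us_pd_year(1924) == 2020  # When We Were Very Young
assert us_pd_year(1926) == 2022  # Winnie-the-Pooh (the book)
assert us_pd_year(1928) == 2024  # The House at Pooh Corner; Mickey's debut
```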
But Disney will still be able to profit substantially from its rights to Pooh (and Mickey). After all, the Winnie-the-Pooh cartoons and movies they made came later, and their copyrights still have decades left on them. Disney’s likenesses of Pooh and his friends also differ substantially from Shepard’s, and will therefore also be under copyright for many more years.
Moreover, much of the revenue Disney gets from these characters is not from their stories or cartoons, but from the merchandise associated with them: clothing, housewares, toys, and the like. Those can be protected by trademark, and unlike copyrights, trademarks for various kinds of goods and services do not expire as long as their owners keep using them along similar lines. (For example, while the character Peter Pan is no longer copyrighted in most countries, trademarks restrict using him to promote things like peanut butter and bus transportation to his current licensees.)
Someone who wants to reuse Christopher Robin, Pooh, and Mickey Mouse in creative works after their copyrights expire might still need to be careful about how they promote that work. But courts have made it clear in cases like Dastar v. Twentieth Century Fox that trademark cannot be used to create a de-facto perpetual copyright. I expect that over the next few years we’ll see some legal skirmishing over where to draw the line between unrestricted creativity and restricted merchandising, for Disney’s characters now entering the public domain. (We’ve seen similar conflicts over Tarzan in the past, even as many of his stories have long been in the public domain and freely available online.)
Personally, I’m content if Winnie-the-Pooh-branded bedsheets and bubble bath remain Disney’s domain, as long as readers can freely enjoy Milne’s stories and Shepard’s drawings, and writers and artists can adapt them into new stories, scenes, and objects (and promote those new works within reasonable guidelines). I’m hoping we’ll keep getting new arrivals to the public domain every year, from 1924 in January, then 1925 next year, and so on. And I’m hopeful that we will, as long as so many of us appreciate and make clear the value of a growing public domain, that those who might otherwise try to extend copyright further can’t ignore us.
YUL moved into a new organizational structure in the summer of 2018. There are three divisions, each overseen by an associate dean (AD). The Restructuring Progress Update – Sep 26 explains a bit about Research and Open Scholarship, which is where this AD is needed. (The person who had held that role for a long time left to become a chief librarian at another university; someone internal filled the role for two or three years but then stepped down and the role is now vacant.)
(Now, I should say we began to move into a new structure in 2018, because it’s not all done yet. The librarians and archivists have moved and by and large are settled into new roles, though there are a number of unresolved issues. The restructuring is still an item on the agenda of the regular meetings between YUFA (our union) and the Employer. For more about library reorgs, see my post Navigating the Reorganization about an October conference on the subject.)
York University Libraries (YUL) is seeking an experienced leader for the Associate Dean, Research & Open Scholarship position. The position will be attractive to individuals who understand the evolving role of the research library, have a strong understanding of research culture, scholarly communications, content and unique collections, and are adept at championing the Libraries.
A successful record of leadership, planning, developing and managing library programs and services and leading staff through change gained through at least five years of experience in library management positions.
There’s no closing date in the ad because the search will be open until it’s filled, but it looks like they’ll start reviewing applications in the second week of January.
No associate dean job is easy. Whoever takes this job will face many of the same problems as at other academic libraries. On the other hand, York University is (aside from the strikes) a fine place to work. I really like being there: the students are smart and engaged, the faculty are doing interesting research, the salary and benefits are good, and it’s coming up on two years since we got a subway station. On the third hand, there are (as you’d expect) some things about the job that are unique to York University Libraries.
I’m not on the search committee. If anyone is considering applying, or gets asked for an interview, I’m happy to take a phone call and answer questions. See also Interviewing at York University Libraries for a general idea of how the day will probably go; however, this is not a regular position so various things during the day will be different.
We need an associate dean, and I hope we get a really good one. Spread the word.
Today, your identity on the Internet is essentially owned by the big email providers and social networks. Google, Yahoo, Facebook, Twitter - chances are you use one of these services to conveniently log into other services as YOU. You don't need to remember a new password for each service, and the service providers don't have to verify your "identity". What you gain in convenience, you lose in privacy, and that's turned out really well, hasn't it?
The "flow" you use to take advantage of this single sign-in is a "dance" that takes you from website to website and back to the site you're logging into. A similar dance occurs to secure access to resources licensed on your behalf by libraries, institutions, corporations, etc. I wrote a bunch of articles about "RA21" (now rebranded as the vaguely NSFW "SeamlessAccess"), an effort spearheaded by STM publishers to improve the user experience of that dance. (It can be complicated and confusing because there are lots of potential dance partners!)
Henri Matisse, La danse (first version) 1909
These dance partners style themselves as "identity providers". That label makes me uncomfortable. Identity can't be something that can be stripped from you on the whim of a megacorporation. Instead, internet identity should be woven from a web of relationships. These can be formed digitally or face-to-face, global or local, business or personal.
You'd have thunk that the whole identity-on-the-internet thing would have improved in the 13 years since that login dance was first rolled out. And you'd be almost right, because a new architecture for internet identity is now on the horizon. Made possible by many of the same technologies that are securing the internet and inflating the blockchain bubble, massively distributed and even "self-sovereign" identity is becoming real-ish.
These technologies will inevitably be applied to the access authorization problem. Access via distributed identity replaces the website-to-website dance with the presentation of some sort of signed credential. A service provider verifies the signature against the signer's public key. It's like showing a passport that can't be forged. A tricky bit is that the credential also needs to be checked against a list of revoked credentials. This would have been cumbersome even ten years ago, but distributed databases are now a mature technology, versions of which underpin the internet itself.
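As a toy illustration of that verify-then-check-revocation flow, here is a sketch using a shared HMAC key as a stand-in for real public-key signatures (a real system would use something like Ed25519, and the revocation list would live in a distributed database); all names here are made up:

```python
import hashlib
import hmac

ISSUER_KEY = b"issuer-secret"   # stand-in for the issuer's private signing key
revoked = {"cred-0042"}         # revocation list the verifier consults

def sign(credential_id: str, subject: str) -> str:
    """Issuer signs a credential (HMAC here; public-key in reality)."""
    msg = f"{credential_id}|{subject}".encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()

def verify(credential_id: str, subject: str, signature: str) -> bool:
    """Service provider checks the signature, then the revocation list."""
    expected = sign(credential_id, subject)
    if not hmac.compare_digest(expected, signature):
        return False  # forged or tampered credential
    return credential_id not in revoked  # genuine AND not revoked

sig = sign("cred-0001", "alice@example.org")
print(verify("cred-0001", "alice@example.org", sig))   # True
print(verify("cred-0042", "alice@example.org",
             sign("cred-0042", "alice@example.org")))  # False: revoked
```

The two-step check is the point: a valid signature alone is the unforgeable passport, but the verifier still has to consult the revocation list before trusting it.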
Interlinked with the concept of distributed identity is the notion that users of the web should be able to securely control their data, and that decisions about what a web site gets to know about you should not be delegated to advertising networks.
Unfortunately, we're not quite ready for distributed identity, in the sense that implementation for today's web would require users to install plugin software, which has its own set of usability, privacy and security issues. The ideal situation would be for some sort of standardized distributed identity and secure data management capability to be installed in browser software - Chrome, Firefox, Safari, etc.
There's a lot of work going on to make this happen.
ID2020 has put out an identity manifesto that starts with the declaration that "The ability to prove one’s identity is a fundamental and universal human right."
Tim Berners-Lee is leading the Solid Project, which lets you "move freely between services, reuse data across apps, connect with anyone, and select what you share precisely".
The W3C Verifiable Claims Working Group has published Technical Recommendations for "Verifiable Credential Use Cases" and a "Verifiable Credential Data Model". They observe that "from educational records to payment account access, the next generation of web applications will authorize entities to perform actions based on rich sets of credentials issued by trusted parties."
The Sovrin Network is a "new standard for digital identity – designed to bring the trust, personal control, and ease-of-use of analog IDs – like driver’s licenses and ID cards – to the Internet."
The common thread here is that users, not unaccountable third parties, should be able to manage their identity on the internet, while at the same time creating a global chain of trust.
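For a flavour of what such a credential looks like, here is a simplified sketch loosely following the W3C Verifiable Credential data model (a real credential also carries a cryptographic proof section); the issuer URL, DID, and memberOf claim are invented for illustration:

```python
import json

# Simplified, illustrative Verifiable Credential shape; consult the
# W3C data model for the normative structure and required proof.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "https://library.example/issuers/42",
    "issuanceDate": "2019-12-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:alice",
        "memberOf": "Example Public Library",
    },
}

print(json.dumps(credential, indent=2))
```

Note the shape of the claim: a library asserting a relationship ("memberOf") rather than handing over an account to a third party.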
It seems to me that there's a last-mile problem with all these schemes. If identity is really a universal human right, how do we create a chain of trust that can include every human? That problem becomes a lot easier to solve if there were some sort of organization with a physical presence in communities all over, trusted by the community and by other organizations. A sort of institution experienced in managing information access and privacy, and devoted to the needs of all sorts of users.
In other words, what if "libraries" existed?
The federated authentication systems used by libraries today - Shibboleth, Athens, and related systems - use a dance similar to the one you do with Google or Facebook. It's a big step that moves your internet identity away from "surveillance capitalists" towards community institutions. But you still don't have control over what data your institution gives away, as you will in the next-generation internet identity systems I describe here. (RA21 is no different from Shib or Athens in this respect.)
What might libraries do to prepare for the age of distributed identity? The first step is not about technology, it's about mission. I believe libraries should start to think of themselves as internet relationship providers for their communities. When I get access to a resource through my library, I won't be "logging in"; I'll be asserting a relationship with a library community, and the library will be standing behind me. Joining an identity federation is a good next step for libraries. But the library community needs to advocate for user identity as a basic human right and prepare their systems to support a future where no dancing is required.
Update 12/5/2019: revised last two paragraphs to be less mystifying.
Early on, and up until earlier this year, the Archives Unleashed Toolkit had functionality built in for reading Twitter data into RDDs and processing it with a number of small methods that overlapped a bit with the utilities that twarc ships with. In an effort to simplify the Toolkit’s codebase and dependencies, we removed all of that functionality. Though it was not to be forgotten: it lives on (for now!) as a stale branch. When the tweet analysis functionality was implemented, it was still pre-Apache Spark 2.0; pre-DataFrame ease. This made working with the data, and expanding the tweet analysis section of the codebase, a bit tedious.
A lot of Twitter data analysis is going to involve trends over time. This means the datasets that are being worked with are going to get larger and larger over time. Which means working with and processing data from Twitter is going to get more and more difficult over time. Sounds like a perfect use case for Apache Spark, eh?
A few of the datasets that I’ve collected from Twitter have tweet counts well over 10 million. In the past, analysis basically consisted of firing up screen or tmux and just waiting as I processed the datasets. This is a completely practical solution. You just have to be patient, and have stable and reliable computing resources. Though, at a certain point, as datasets scale or patience wanes, you really start thinking that there has to be a better way to do this! 😄
Working with newer versions of Spark, I took notice of a really convenient Data Sources API that makes loading something like line-oriented JSON from Twitter into Spark, and transforming it into a DataFrame with an inferred schema, REALLY easy.
val tweets = "/path/to/tweets.jsonl"
val tweetsDF = spark.read.json(tweets)
That’s it! You can see the full inferred schema and example output printed here.
Similar to the Archives Unleashed Toolkit, twut, or the Tweet Archives Unleashed Toolkit, is a library (or package) for Apache Spark. It currently provides helper methods for a few processes that are used in basic analysis and statistics gathering on line-oriented JSON collected from a variety of Twitter APIs.
What does it do?
Dehydration (tweet id extraction)
val tweets = "src/test/resources/10-sample.jsonl"
val tweetsDF = spark.read.json(tweets)
twut Tweet ID extraction
Extract user info such as favourites_count, followers_count, friends_count, id_str, location, name, screen_name, statuses_count, and verified
val tweets = "src/test/resources/10-sample.jsonl"
val tweetsDF = spark.read.json(tweets)
twut Tweet User Info
Extract tweet text
val tweets = "src/test/resources/10-sample.jsonl"
val tweetsDF = spark.read.json(tweets)
twut tweet text
Extract tweet times
val tweets = "src/test/resources/10-sample.jsonl"
val tweetsDF = spark.read.json(tweets)
twut tweet times
There are a lot of different options to work with data from Twitter since it is just line-oriented JSON. Pick your language. Pick your tool. Pick your platform. There’s a wealth of them out there to get the job done. In this case, I’m picking a few that I am most familiar with in order to illustrate the use case for creating an Apache Spark library for working with large Twitter datasets. Let’s look at a simple benchmark between three tools that extract the ids of tweets, so those datasets — one small, and one large — can be shared and rehydrated.
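For reference, the “dehydration” step that all three tools perform amounts to pulling id_str out of each line of JSON. In plain Python (with made-up sample tweets) it is just:

```python
import json

# Two fake line-oriented JSON tweets, standing in for a .jsonl file;
# the ids and text here are invented sample data.
sample = "\n".join([
    '{"id_str": "1199988922400276480", "full_text": "hello"}',
    '{"id_str": "1199988922400276481", "full_text": "world"}',
])

# Dehydration: keep only the tweet ids, one per line, which is what
# jq '.id_str' and twarc's dehydrate produce.
ids = [json.loads(line)["id_str"]
       for line in sample.splitlines() if line.strip()]
print(ids)  # ['1199988922400276480', '1199988922400276481']
```

The work per line is trivial; the benchmarks below are really measuring how each tool scales that trivial step across millions of lines.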
As is apparent in the figure above, on a really small dataset, jq and twarc dehydrate are going to be leaps and bounds faster than twut.ids. jq and twarc dehydrate are both able to extract the ids in under a second (0.422s and 0.108s respectively), whereas twut.ids is almost 100x slower! That’s not good, but this time also takes into account the start-up costs of firing up Apache Spark, reading in the tweets as an Apache Spark Data Source, converting the line-oriented JSON tweets into a DataFrame, and finally running the id extraction.
Benchmark over a large collection
The second test I ran uses a relatively large dataset of 2,420,164 tweets (9.3G) directed at @realDonaldTrump (NSFL!). Tweets were collected using the search command in twarc. The full dataset is available here.
Looking at the figure above, we can begin to see the advantages of using Apache Spark to process a dataset like this. twut.ids is able to process the dataset in just over two minutes (129.17s) on average, and twarc dehydrate is slightly slower at just under three minutes (167.138s). That time difference between the two should scale even more in the favour of twut and Spark as the size of the dataset grows. Lastly, and surprisingly, jq is the slowest at just under six minutes (353.73s). Though, in fairness to jq, it can be used in conjunction with xargs or GNU parallel to really speed things up.
If you look closer at the output of time, you can really get a sense of how much computing resource each tool is able to use by looking at the “Percent of CPU this job got.”
jq — 4%
twarc dehydrate — 99%
twut.ids — 1012.2%
Gotta love Spark taking advantage of all the resources available!
I’ve just started working on the project in the past week, so it’s really rough around the edges, and could use a lot of improvement. I’m thinking a fair bit about adding some filter methods, like extracting all the tweets from verified users, or removing all the retweets. Or, putting on my archivist hat, looking at twut as a finding aid utility. I also need to get the PySpark side of things sorted out, so you can just fire up a Jupyter Notebook (with PySpark) and hack away.
There’s definitely a whole lot more that can be done here, and I am really curious if folks feel this is a useful project, and worth putting more time into. Let me know what you think in Slack or on the GitHub repo. It’d be nice to hear if this is useful 😃
First, some disclosure might be in order.
My background has me thinking of this in the context of how it impacts libraries and library consortia.
For the past four years, I’ve been co-chair of the NISO Information Discovery and Interchange topic committee (and its predecessor, the “Discovery to Delivery” topic committee), so this is squarely in what I’ve been thinking about in the broader library-publisher professional space.
I also traced the early development of RA21 and more recently am volunteering on the SeamlessAccess Entity Category and Attribute Bundles Working Group; that’ll become more important a little further down this post.
I was nodding along with Roger’s narrative until I stopped short here:
The five major publishing houses that are the driving forces behind GetFTR are not pursuing this initiative through one of the major industry collaborative bodies. All five are leading members of the STM Association, NISO, ORCID, Crossref, and CHORUS, to name several major industry groups. But rather than working through one of these existing groups, the houses plan instead to launch a new legal entity.
While [Vice President of Product Strategy & Partnerships for Wiley Todd] Toler and [Senior Director, Technology Strategy & Partnerships for the American Chemical Society Ralph] Youngen were too politic to go deeply into the details of why this might be, it is clear that the leadership of the large houses have felt a major sense of mismatch between their business priorities on the one hand and the capabilities of these existing industry bodies. At recent industry events, publishing house CEOs have voiced extensive concerns about the lack of cooperation-driven innovation in the sector. For example, Judy Verses from Wiley spoke to this issue in spring 2018, and several executives did so at Frankfurt this fall. In both cases, long standing members of the scholarly publishing sector questioned if these executives perhaps did not realize the extensive collaborations driven through Crossref and ORCID, among others. It is now clear to me that the issue is not a lack of knowledge but rather a concern at the executive level about the perceived inability of existing collaborative vehicles to enable the new strategic directions that publishers feel they must pursue.
This is the publishers going it alone.
To see Roger describe it, they are going to create this web service that allows publishers to determine the appropriate copy for a patron and do it without input from the libraries.
Librarians will just be expected to put this web service widget into their discovery services to get “colored buttons indicating that the link will take [patrons] to the version of record, an alternative pathway, or (presumably in rare cases) no access at all.”
(Let’s set aside for the moment the privacy implications of having a fourth-party web service recording all of the individual articles that come up in a patron’s search results.)
Librarians will not get to decide the “alternative pathway” that is appropriate for the patron: “Some publishers might choose to provide access to a preprint or a read-only version, perhaps in some cases on some kind of metered basis.”
(Roger goes on to say that he “expect[s] publishers will typically enable some alternative version for their content, in which case the vast majority of scholarly content will be freely available through publishers even if it is not open access in terms of licensing.” I’m not so confident.)
No, thank you.
If publishers want to engage in technical work to enable libraries and others to build web services that determine the direct link to an article based on a DOI, then great.
Libraries can build a tool that consumes that information as well as takes into account information about preprint services, open access versions, interlibrary loan and other methods of access.
But to ask libraries to accept this publisher-controlled access button in their discovery layers, their learning management systems, their scholarly profile services, and their other tools?
That sounds destined for disappointment.
I am only somewhat encouraged by the fact that RA21 started out as a small, isolated collaboration of publishers before they brought in NISO and invited libraries to join the discussion.
Did it mean that it slowed down deployment of RA21? Undoubtedly yes.
Did persnickety librarians demand transparent discussions and decisions about privacy-related concerns like what attributes the publisher would get about the patron in the Shibboleth-powered backchannel? Yes, but only because the patrons weren’t there to advocate for themselves.
Will it likely mean wider adoption? I’d like to think so.
Have publishers learned that forcing these kinds of technologies onto users without consultation is a bad idea? At the moment it would appear not.
Some of what publishers are seeking with GetFTR can be implemented with straight-up OpenURL or—at the very least—limited-scope additions to OpenURL (the Z39.88 open standard!).
That they didn’t start with OpenURL, a robust existing standard, is both concerning and annoying.
I’ll be watching and listening for points of engagement, so I remain hopeful.
A few words about Jeff Pooley’s five-step “laughably creaky and friction-filled effort” that is SeamlessAccess.
Many of the steps Jeff describes are invisible and well-established technical protocols.
What Jeff fails to take into account is the very visible and friction-filled effect of patrons accessing content beyond the boundaries of campus-recognized internet network addresses.
Those patrons get stopped at step two with a “pay $35 please” message.
I’m all for removing that barrier entirely by making all published content “open access”.
It is folly to think, though, that researchers and readers can enforce an open access business model on all publishers, so solutions like SeamlessAccess will have a place.
(Which is to say nothing of the benefit of inter-institutional resource collaboration opened up by a more widely deployed Shibboleth infrastructure powered by SeamlessAccess.)
This post was written by Raquel Vazquez Llorente, who received an International Fellowship to attend the DLF Forum 2019.
Raquel is an international criminal lawyer specialised in technology and human rights investigations. As a Senior Legal Advisor to eyeWitness, she helps bridge the gap between human rights defenders on the frontlines and investigators or prosecutors, and provides strategic advice on how technology can be leveraged to facilitate justice. Raquel has worked in conflict and post-conflict environments in the Middle East, North Africa and South Asia on a range of international criminal law, security and human rights issues—mostly focusing on state abuse of power and large scale violations. Raquel holds an MSc (with Distinction) in International Strategy and Diplomacy from the London School of Economics and Political Science (LSE), and an Advanced Degree in Law and Business Administration from Universidad Carlos III de Madrid. She is a Visiting Scholar at the Human Rights Center at UC Berkeley School of Law and at the Bonavero Institute at the University of Oxford.
Is archiving an act of resistance?
We are generating records at an unprecedented rate. In technologically advanced societies, we are oftentimes unaware of the bytes of data our connected selves can produce without much meaningful input. We go through our lives recording steps, heartbeats, water consumption, menstrual cycles, and our thoughts and fears disguised as questions we feed to Google. However, in societies rising up against oppression and tyranny, creating certain types of records—and most importantly, preserving them for the future—is an act of resistance. Welcome to the story of a revolution, the social feed version.
My coming of age as an activist happened during the Arab Spring, and it caught me living in the Middle East. The streets were on fire, and YouTube brought the flames to our laptops. Over the last ten years, the spread of smartphones and internet connectivity has enabled not only the capturing and broadcasting of photo and video, but it has also facilitated the rise of civil society groups that are preserving digital information. Most of them hope this data will be used to seek accountability for human rights violations. Welcome to the archive of a revolution, the 21st century version.
In my work at eyeWitness I have been fighting hard with a community of pioneers to help local NGOs capture and preserve photographic and video material that can be used to bring perpetrators of human rights violations to account. While our technology has often got most of the attention, it is only the combination of tech innovation with our workflows and data sharing protocols that has allowed us to blaze a trail in cooperation with international bodies in search of justice.
It was precisely the need to better understand how we could refine some of our workflows that drew me to the DLF this year. I wanted to explore the world of the digital libraries and archives community, and apply its insights to the international justice field. At the DLF I found a crowd who embraced the complexities of processes for data ingestion, and tinkered with machine learning applications for tagging and cataloguing large amounts of information. We are bringing sexy back to workflows, I thought. Or maybe they were always a thing in the DLF community, and I have been missing out on all the fun.
If Facebook were a country, the number of their monthly active users would surpass by 400 million the total population of the five permanent members of the United Nations Security Council. Most of the content across social media platforms is a rather dull archive of birthday parties and funny animals. Snippets of ordinary lives. But a small part of it is an extraordinary attempt at recording history. Information that is collected on the frontlines, uploaded to the internet, and preserved by civil society groups around the world must be accessible if we want it to contribute to future accountability processes. Given the sheer volume of data, there is a need to increase our capacity to structure and analyse all these records so they can be catalogued, indexed, searched and combined or integrated into other databases.
Data-silos will harm any future efforts at seeking the truth or bringing justice. How can we strengthen the impact of digital archives in transnational justice, and build the evidence locker of the 21st century? What lessons can be drawn from other digital libraries and archives, and how can technologies like machine learning and artificial intelligence help with the preservation, indexing, cataloguing and cross-referencing of information? I was excited to see at the DLF how interdisciplinary collaboration can answer some of these questions. If journalists write the first draft of history, archivists are the beholders of memory. Welcome to the resistance, the nerd version.
In higher education, open textbooks have created new ways to learn, share, and adapt knowledge - and save students money in the process. For casebooks that can cost law students hundreds of dollars each, this gives law schools the opportunity to create casebooks to serve their communities.
What do open casebooks look like? From Contracts (Prof. Charles Fried), Criminal Law (Prof. Jeannie Suk-Gersen), Civil Procedure (Prof. I. Glenn Cohen), Torts (Prof. Jonathan Zittrain) and more, open casebooks are one way to create course content to support the future of legal education.
How can you create a new casebook with 6.7 million unique cases from the Caselaw Access Project? Here’s how!
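As a concrete starting point, case text can be pulled programmatically from the Caselaw Access Project’s REST API. A minimal sketch in Python (the helper function is mine, for illustration; the endpoint and parameter names follow the public v1 API at api.case.law and may change over time):

```python
from urllib.parse import urlencode

# Public v1 endpoint of the Caselaw Access Project API (assumption:
# current as of this writing; check api.case.law for changes).
CAP_BASE = "https://api.case.law/v1/cases/"

def cap_search_url(search, jurisdiction=None, full_case=False):
    """Build a query URL for the CAP v1 /cases/ endpoint."""
    params = {"search": search}
    if jurisdiction:
        params["jurisdiction"] = jurisdiction
    if full_case:
        # Full case text generally requires a (free) API key.
        params["full_case"] = "true"
    return CAP_BASE + "?" + urlencode(params)

# Find Massachusetts cases mentioning "consideration":
print(cap_search_url("consideration", jurisdiction="mass"))
```

Fetching that URL returns paginated JSON, so the results can be filtered and assembled into exactly the set of cases a casebook chapter needs.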
Several Canadian research libraries are providing funding to ensure that the vast majority of DSpace repositories around the world can be discovered through OpenAIRE, an international database of open scholarly content.
Quality and comprehensive metadata are a critical requirement for building discovery and other services on top of distributed repository content. To that end, several regional repository networks, including those in Europe, Latin America, and Canada, have agreed to adopt the OpenAIRE metadata guidelines – which define a common approach to assigning metadata elements.
In order to support the new OpenAIRE guidelines, repository platforms must change the way they expose their metadata. While new versions of repository platforms will support the guidelines, many institutions still use previous versions of the software, and are faced with non-compliance or undertaking challenging local development of their software.
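For a sense of what exposing v4-compliant metadata involves, the guidelines define an `oai_openaire` format that mixes DataCite-style elements with OpenAIRE-specific ones drawing on COAR vocabularies. The abbreviated record below is an illustrative sketch only, not copied from the guidelines; consult the published schema for the authoritative element names and structure:

```xml
<oaire:resource xmlns:oaire="http://namespace.openaire.eu/schema/oaire/"
                xmlns:datacite="http://datacite.org/schema/kernel-4">
  <datacite:titles>
    <datacite:title>Example article title</datacite:title>
  </datacite:titles>
  <datacite:creators>
    <datacite:creator>
      <datacite:creatorName>Doe, Jane</datacite:creatorName>
    </datacite:creator>
  </datacite:creators>
  <!-- Resource type and access rights use COAR vocabulary URIs -->
  <oaire:resourceType resourceTypeGeneral="literature"
      uri="http://purl.org/coar/resource_type/c_6501">journal article</oaire:resourceType>
  <datacite:rights rightsURI="http://purl.org/coar/access_right/c_abf2">open access</datacite:rights>
</oaire:resource>
```

Older DSpace versions expose only simple Dublin Core over OAI-PMH, which is why an extension is needed to emit records shaped like this.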
Queen’s University Library, along with several other Canadian university libraries, is pleased to provide funding for the development of an extension to DSpace 5 & 6 that will support compliance with the OpenAIRE Guidelines for Literature Repository Managers v4. Given the widespread use of the DSpace platform, this will enable hundreds of repositories around the world to participate in network services such as OpenAIRE and LA Referencia.
This work is being undertaken as part of the collaboration between the Canadian Association of Research Libraries, COAR and OpenAIRE, with financial support for this development from the following Canadian libraries: Queen’s University, Université de Montréal, Université Laval, University of British Columbia, University of Saskatchewan, Vancouver Island University, and York University.
The technological implementation will be undertaken by 4Science, which is a Certified Partner of DSpace, contributor to COAR Next Generation Repository Expert Group, collaborator of OpenAIRE and ORCID, and longtime supporter of open source technologies, open standards and interoperability.
The release of the implementation and user documentation is expected by the end of February 2020.
Dr. Joan E. Beaudoin (@joanebeaudoin) is an Associate Professor in the School of Information Sciences at Wayne State University in Detroit, Michigan. She teaches and performs research on metadata, information organization, digital libraries, digital preservation, museum informatics, and the access to and use of visual information. Prior to her appointment at Wayne State University she performed archaeological fieldwork, taught art history, and had a lengthy career in academic visual resources collections. Her research has been published in a number of scholarly journals, including the Journal of the Association for Information Science and Technology, Journal of Documentation, Journal of Academic Librarianship, Knowledge Organization, and Art Documentation, and she has presented her research at regional, national and international conferences.
Attending an MCN conference has been on my to-do list for a very long time and thanks to the DLF GLAM Cross-Pollinator Fellowship I was finally able to achieve this goal. The extensive conference program offered many options, and so I focused on attending sessions discussing collection data, collection systems, and descriptive practices in museums. I began the conference by attending a museum datathon session. The presenters gave overviews of the various data projects they had been involved in, and these provided a foundation for group work later accomplished by the session’s attendees. A surprisingly expansive view of museum data resulted from the interests expressed by the attendees. These included data literacy, data practices and standards, data ethics, data portability, cross-walking data sets, data visualization, human vs. machine created data, JSON to CSV conversions, and Wikidata. Clearly there is much to be explored when it comes to data among the museum community and these topics are creative fodder ripe for further treatment!
Sessions discussing collection systems and the records describing collection items brought to light several topics of current interest among the cultural heritage sector. These included the need to acknowledge the role that legacy systems and practices have played in the creation of collection data. For example, limitations in the fields of information included in a data record may be the result of the restrictive nature of the data model used to design the original system, while culturally insensitive language found in the collection data may be the result of terminology used at the time of the records’ creation which is now profoundly outdated. Several sessions discussed the impact of the approach taken in the descriptive process, and the limitations inherent in all acts of description. Museums take very seriously their charge to effectively manage their collections, and so descriptive item records are created and used by institutional staff, with the collection data focused primarily on supporting that management work. These same data, however, are also expected to be useful to audiences with diverse interests, needs, and terminology. When these competing forces upon collection data are combined with evolving systems and language, a much fuller picture emerges of the complexity behind the current state-of-the-art in museums. An additional topic of interest within the purview of collection data was a session on the exploratory use of AI by the Metropolitan Museum of Art to generate keywords for collection items. While lingering issues with the AI methods were noted by the presenters, their research results suggest future promise.
Other notable topics highlighted at the MCN 2019 conference (open access publishing, storytelling, data privacy, data hubs, user experience, and human-centered design) all made me feel as if I were among my information science colleagues. Beyond the familiarity of these topics, and the rich programming the conference offered, I also must acknowledge the many attendees who gave me such a warm welcome. I cannot think of a better way to round out my sabbatical this fall.
In just thirty days from when this post appears, a new crop of works will join the public domain. Exactly what will come out of copyright will vary by country. In Europe and other places with “life+70 years” copyright terms, works by authors who died in 1949 will join the public domain on January 1, 2020. In countries that still have “life+50 years” terms, works by authors who died in 1969 will. And in the United States, copyrights that were secured in 1924 that are still in force will expire.
As an American, I’m especially excited about the works in that last set. For most of the 21st century to date, almost nothing entered the public domain in the US, after a 1998 law extended copyright terms by 20 years. Then last year, all copyrights still active from 1923 expired, and we finally had a Public Domain Day here with lots of new published works that many people noted and celebrated. And it looks like we’re going to get another big set of works from 1924 in the public domain next month.
Last year, I was so excited about the coming of the first substantial Public Domain Day here in a long time that I wrote advent calendar posts every day in December, discussing 31 works from 1923 that would (and did!) join the public domain in 2019. It was a lot of fun, but also a lot of work. I thought it worth the effort, though, to note such a big change in the copyright environment we’d grown accustomed to. But I wasn’t planning to do all that work again this year.
A few things, though, have made me reconsider, at least in part. One of them was an article I saw today about a new collection of stories by Zora Neale Hurston being published in early 2020, Hitting a Straight Lick with a Crooked Stick. Now recognized as a major 20th century American writer, Hurston published works in a variety of genres and forums from the 1920s through the 1950s. However, she was not well known outside of African American and literary scholarship circles until the 1970s, when Alice Walker wrote an article in Ms. Magazine in appreciation of her work, and her novel Their Eyes Were Watching God was reprinted and became a best-seller.
Hurston’s new collection brings back into print a number of her early short stories, which the publisher’s blurb describes as “lost” and “in forgotten periodicals and archives”. My first thought on reading the blurb was to be annoyed about the erasure of the librarians and archivists who collected, cataloged, and preserved those publications and thereby ensured that they were not, in fact, lost or forgotten. But then, on further reflection, I realized that for much of the general public, they might as well have been lost, since many people do not have easy access to the libraries and archives that hold them.
One of Hurston’s early stories, “Drenched in Light”, appeared in the December 1924 issue of Opportunity: Journal of Negro Life, which published a variety of articles, stories, poems, studies, and art by African Americans. The journal began in 1923, and HathiTrust opened access to its first volume on Public Domain Day at the start of this year. My listing for the journal also includes some later volumes of the magazine, since as it turns out, the publishers did not renew copyrights for issues prior to the 1940s. (Most of its authors didn’t renew their contributions either, as you can see in the full set of renewals we’ve found for Opportunity.) My listings do not yet, however, include the 1924 volume. HathiTrust has a scan of it, but neither they nor anyone else has yet opened access to it, presumably because no one with a scan feels confident enough about its rights status to do so yet. I expect it to become visible in 30 days, when 1924’s remaining copyrights expire in the US and HathiTrust opens its volumes from 1924.
Those without access to Opportunity in print might be able to read “Drenched in Light” before then in Hurston’s previously published Complete Stories collection. But they won’t be able to view the rich context in which it first appeared, from all the other writers and artists who had work published in Opportunity in 1924, even though as I noted in one of last year’s advent calendar entries, many early African-American publications, including many of Hurston’s stories, did not get renewed copyrights.
Between now and Public Domain Day 2020, I’ll be posting on works published in 1924, both the famous and the obscure, that I look forward to coming into clearer view in the new year. Some will be joining the public domain on January 1. Some, like the 1924 Opportunity issues, are already in the public domain, but are not as widely accessible as they could be. (Though many of them can be found in my library, and perhaps in yours.) I won’t write a post every day, but I hope to publish a fair number on a variety of works by the new year. You’re welcome to participate, either directly, such as by suggesting works or contributing comments, or indirectly, such as by contributing further information about what’s in the public domain or soon will be. (Our copyright information for Opportunity, for instance, is part of Penn’s serials copyright knowledge base that you can add to.)
I hope Public Domain Day will be an annual cause for celebration in the United States and elsewhere. I want new arrivals to the public domain to become routine, but not taken for granted, lest the public domain be frozen again as it was for far too many years. I hope this series of posts, and other work being done by libraries, readers, and fans of the public domain worldwide, help us recognize the treasures of the public domain and bring more of them to light.
We will be kicking off 2020 with another Islandora 8 webinar showcasing a pilot site. One of the earliest adopters of Islandora 8, and an integral part of its development so far, the University of Nevada Las Vegas will join us on January 21, 2020 to showcase the results of their migration. They will discuss why they chose to go with Islandora 8 back in early 2018, what they have learned so far, and what's coming next for UNLV.
We're taking another look back at Islandoracon this week, to highlight another one of the amazing projects that came from our Islandora 8 Use-a-Thon. We've seen how to build exhibits and how to generate audio thumbnails; now, let's dive into the deep end of Drupal contributed modules with team Blue Lobster's recipe to integrate Islandora with Amazon Alexa. You can see the pitch on these clever slides, but the basic premise is that since Islandora 8 plays nicely with pretty much any Drupal contributed module, even boundary-pushing tools like Alexa integration are on the table. The use case for the team was to use Alexa to create interactive exhibits that pull information from similar collections across multiple institutions (it turned out that both team members' home institutions have collections centered around the experiences of African American nurses), but the true applications of this recipe are as varied as its components. Potentially, you could use this recipe to build an Islandora integration that can:
Send citations, metadata, whatever we want to the user if they have set up their email
Create a collaborative exhibit
Play audio and video objects and read transcripts
Respond to user search queries (like how many objects match the subject in the repositories)
Answer specific questions about the object (“Invocation Name, when was this recorded?”)
Interact with other applications or modules (got a print ordering system? Want to add event calendar items to your exhibit?)
Be accessed via web page, Alexa device, or phone app
Many thanks to Brad Spry (UNCC) and Mariee Vibbert (Case Western) for this innovative idea.
A researcher in the economics department wants to find a colleague on campus with expertise on how different cultures form trust networks, to join a grant proposal for a project exploring the role of trust in market-based exchange. How can the researcher find an appropriate collaborator?
A staff member in the Research Office is collecting information on the research activity of the university, such as publication counts for a variety of research outputs, total external funding awards, and recent academic awards and recognitions. This information will be submitted to an important global rankings exercise – a key component of the university’s reputation and brand. Can this information be easily located?
As these questions suggest, research analytics – creating and delivering intelligence on a university’s research enterprise – is a matter of practical concern on many campuses. The data needed to support research analytics may be scattered across many sources, both internal and external to the university: for example, the institutional repository, personnel records, faculty CVs, and bibliometric and research impact databases like Web of Science or Google Scholar. Some of this information may be consolidated in a research information management system (RIMS) like Pure or Symplectic Elements, which in turn is populated by a combination of automated data feeds and manual entry.
As more and more data on research activity is collected, the range of questions that can be answered with it grows commensurately. A new service area is emerging around research analytics, but what is the role of the academic library? The OCLC Research Library Partnership (RLP)’s Research Support Interest Group recently welcomed Brian Mathews, Associate Dean for Innovation, Preservation, and Access, and David Scherer, Scholarly Communications Librarian and Research Curation Consultant, to lead a virtual discussion (actually two discussions, to accommodate RLP participants across many global time zones) on research analytics at Carnegie Mellon University, where the University Libraries recently deployed a new RIMS and is developing a research analytics service. Brian and David shared their experiences at Carnegie Mellon, which led to rich conversations and sharing of insights among participants from a number of RLP Partner institutions. Here are a few of the themes that resonated in the discussions:
Research analytics is a developing area, with lots of uncertainty – including what a service might look like. It was clear from the discussion that there is as yet no established path toward operationalizing research analytics as a service. Uncertainty touches even the most fundamental issues, such as how to define research analytics and what a research analytics service would look like. Brian and David from Carnegie Mellon observed that effective engagement with potential stakeholders required the ability to produce customized reports and visualizations; “stock” analytics would not be enough. Another participant suggested focusing on standardized services of value to many users, while at the same time training people to use analytics tools themselves for more specialized purposes.
Is research analytics a new service area for academic libraries? Several participants pointed out that libraries bring to bear a great deal of relevant expertise, such as bibliometrics, data management, and even experience in dealing with vendors and licensing (useful for purchasing systems like RIMS or securing access to data sources). At Carnegie Mellon University Libraries, where data is considered part of their collections, the move to research analytics is seen as a natural progression. Brian and David also emphasized that at their institution, the role of the Library is to provide lots of “carrots”, or incentives, for researchers and staff to contribute data. As David put it, they see themselves more like H&R Block (a US tax preparation service) helping researchers recognize and meet data requirements, rather than the Internal Revenue Service (the US tax collection authority). Encouraging the contribution of complete, accurate data is vital: research analytics will only be as good as the underlying data.
Interoperability is a big obstacle. The data needed for research analytics often resides on multiple university systems. To gather and synthesize it for analytics purposes, it often has to be re-entered or migrated into yet another system, such as the RIMS. A number of participants in the discussions raised the point that data interoperability across campus systems – and between campus systems and external systems – needs to be improved. The ideal, as one participant put it, is “one touch – enter and re-use”. Requiring researchers to duplicate effort by entering the same data into multiple systems is a clear disincentive for data contribution, and an important obstacle to overcome in seeking researcher buy-in.
What will research analytics be used for? The ability to generate information about a university’s – or an individual’s – research activity presents both opportunities and concerns. Research analytics has many practical applications, ranging from bringing together researchers with mutual interests, to helping the university manage and promote its scholarly reputation. Participants underlined the importance of having compelling use cases in hand when talking about research analytics with other campus units. A good starting place might be surfacing campus expertise and identifying networks of collaboration, which several participants indicated was a priority on their campuses. But be prepared to address concerns as well: in particular, those surrounding the use of metrics which might be construed as evaluating an individual researcher’s performance or impact.
Libraries need to team up and staff up.
Developing a research analytics service can require a significant investment in
staff resources. Several participants noted that staff scarcity is a key
limitation they face in deploying RIMS and utilizing them for research
analytics. A library building a research analytics service may initially assign
this responsibility to existing staff, but for many libraries, this is not sustainable
– as one participant pointed out, their institution currently employs only one
bibliometrics librarian. Several participants emphasized the importance of
leveraging limited staff by teaming up across campus units – for example, in
cross-unit working groups – regardless of where the central administration of
the RIMS (or research analytics service) is located. For example, the library
might work with individual academic departments or the university press to ensure
researchers sign up for ORCID IDs.
Teaching, research, and service are the three great missions of the
modern university. Research analytics is a data-driven method for cultivating a
better understanding of the research mission, including the types and scope of
the intellectual capital produced by the university and its impact in the
scholarly community and beyond. Our discussions, sparked by Brian and David’s
experiences at Carnegie Mellon, highlighted both the promise and uncertainty of
this new service area, as well as the possibilities it presents for academic libraries.
Position: Member Support Technician
Location: Telecommute/Travel statewide, Pennsylvania
Organization: Pennsylvania Integrated Library System
Type of Position: Part-time / Hourly
Education Requirement: Technology certifications or degree
Experience: >1 year
Pennsylvania Integrated Library System (PaILS) is a non-profit membership organization that serves libraries and their patrons through a collaborative community, providing a hosted installation of high-quality, cost-effective open source integrated library system software that promotes resource sharing among libraries statewide.
We are seeking people who are highly skilled, productive, and self-motivated, and who embrace change and automation through technology.
Successful candidates will work well independently and on teams using remote communication technology tools and in-person meetings to solve problems and implement ideas. PaILS staff keep up with current developments in technology and excel at customer service and support to help libraries thrive.
Position Description: Serve as the first line of contact on a remote support desk to receive, process, and evaluate requests for assistance with the use of the software.
Complete tasks in the hosted open source Evergreen ILS to support SPARK Libraries and growth of new PaILS members. Work independently and with vendor support to assist libraries with settings changes and updates. Create answers, training, and documentation for frequently asked questions to serve PaILS members.
1. Receive, sort, and prioritize support desk tickets by category and urgency. Answer support questions, collect needed data, and refer complex questions to vendor or team members.
2. Communicate with members. Answer questions and resolve issues by phone, email, or online meeting room, as well as in person.
3. Complete work and research for support tickets related to cover art, policy updates, account settings, notifications, and report templates. Create workflows and form responses for common topic support requests.
4. Offer online support and training events both regularly and on request.
5. Create and present documentation and training content.
6. Update patron and item barcode standards quarterly.
7. Attend assigned committee, project, and migration meetings to take notes and offer assistance. Assist in on-boarding and migration of new libraries with attendance at Go Live Day.
8. Support migrations and special projects.
9. Attend professional development, training, conferences, and events.
1. Customer service attitude.
2. Strong written and verbal communication skills.
3. Working knowledge of library catalogs and library patron experience.
4. Quick learner, able to acquire knowledge of workflows and technologies.
5. Possess a valid driver’s license and be able to visit member library facilities using one’s own vehicle.
Compensation: Hourly, $20-25; paid professional development.
How to Apply: To apply send cover letter and resume as a single .pdf via email with subject line Member Support Technician Application to email@example.com.
Applications will be accepted until position is filled; review will begin immediately.
Position: Integrated Library System Application Specialist
Location: Telecommute/Travel statewide, Pennsylvania
Organization: Pennsylvania Integrated Library System
Type of Position: Full-Time / Exempt
Education Requirement: Bachelors
Experience: >2 years
Pennsylvania Integrated Library System (PaILS) is a non-profit membership organization that serves libraries and their patrons through a collaborative community, providing a hosted installation of high-quality, cost-effective open source integrated library system software that promotes resource sharing among libraries statewide.
We are seeking people who are highly skilled, productive, and self-motivated, and who embrace change and automation through technology.
Successful candidates will work well independently and on teams using remote communication technology tools and in-person meetings to solve problems and implement ideas. PaILS staff keep up with current developments in technology and excel at customer service and support to help libraries thrive.
Position Description: The Integrated Library System Application Specialist will provide software application expertise as the intermediary between the open source Evergreen Integrated Library System (ILS) software and the library staff who use it to serve patrons. Under the direction of the Executive Director, this full-time exempt position is responsible for managing software performance and membership support. Work independently, with the staff team, with vendors, and with the SPARK user group and its teams to ensure member libraries and users are satisfied with software performance, support, and training. Provide input into strategic direction for software development and technology planning to improve library resource sharing statewide.
1. Oversee systemwide settings in the integrated library system to ensure optimal performance of application software for library staff and patrons. Ensure that automation system settings align with well-considered member library workflows and policies. Regularly review software settings and implement plans to ensure best use of features.
2. Communicate with library staff, partners, and stakeholders, to build relationships and increase access to resource sharing through the software. Actively participate in advancing library technology use and the open source software community.
3. Contact member libraries by phone, email, and online tools to provide assistance in using the software application. Ensure that libraries are aware of system maintenance and changes.
4. Keep membership contact information current and oversee mailing list membership.
5. Manage the support desk software application and ticket queue. Ensure that requests and calls are answered in a timely fashion. Answer workflow-, procedure-, and policy-related ILS questions at a high level.
6. Plan and lead integrated library system software and data projects.
7. Oversee the on-boarding and data migration process for new member libraries. Attend meetings and help locations make decisions pertaining to preferred settings. Assist with data review and testing. Review collection mappings, circulation and hold policies, and notices. Attend Go Live Day.
8. Coordinate testing and implementation of software developments and upgrades. Understand and participate in the software development lifecycle in the Evergreen open source community.
9. Research, test, and recommend software development projects, bug fixes, and wish list features.
10. Prepare data reports for the PaILS Board and membership. Analyze and report on community needs based on data and insights from member libraries.
11. Create and present training and documentation, including audio/video and written content to support best practice uses of the software on the SPARK shared installation. Organize documentation/knowledge books.
12. Organize and update support, training, and events calendars.
13. Update library holdings quarterly by uploading and testing full-file replacement exports of holdings.
14. Attend professional development, training, conferences, and events.
1. Understanding of library staff and patron experience with a positive customer service attitude.
2. Excellent written and verbal communication. Experience writing and updating policies, documenting procedures, and creating and delivering training using software and online tools.
3. Experience in software application management. Working knowledge of integrated library systems or other software applications.
4. Project management skills. Experience creating a project plan, facilitating meetings, developing task lists, estimating work effort, assigning timelines, tracking issues, and reporting outcomes.
5. Problem solving skills and the ability to translate problems affecting one location into solutions helping a wider audience.
6. Ability to manage multiple projects and priorities and work in conditions where regular interruptions occur.
7. Quick learner, able to easily acquire knowledge of workflows and connected technologies.
8. General knowledge of bibliographic record standards (e.g., MARC, RDA).
9. Basic knowledge of SQL, HTML, CSS, XML, and SSL.
10. Possess a valid driver’s license and be able to visit member library facilities using one’s own vehicle.
Compensation: Annual salary range: $50,000-65,000 commensurate with experience. Benefits package and paid professional development.
How to Apply: To apply send cover letter and resume as a single .pdf via email with subject line Integrated Library System Application Specialist to firstname.lastname@example.org.
Applications will be accepted until position is filled; review will begin immediately.
This post was written by Jess Farrell, who received a Focus Fellowship to attend this year’s DLF Forum.
Jess is the Project Manager for BitCuratorEdu and Community Coordinator for the Software Preservation Network. Previously, she was Curator of Digital Collections at Harvard Law School Library, Assistant Archivist at McDonald’s Corporation and Armstrong-Johnston Archival Services, and Project Archivist at the Avery Research Center for African American History and Culture. Jess received her MLIS from the University of South Carolina (2011) and BA from the College of Charleston. She coordinates the Digital Library Federation’s Born-Digital Access Working Group and is the current Chair of the Electronic Records Section of the Society of American Archivists.
The Comfort of an Echo at DLF 2019
When I attended my first DLF Forum in 2016, it stood out as a space I wanted to keep engaging in because presenters weren’t afraid to ask big, critical questions and discuss social justice issues, and the space felt like it was genuinely trying to be accessible to as many different types of people as possible.
But as a digital archivist, it didn’t stand out as a space where I could engage deeply with issues related to born-digital preservation. I decided that DLF was where I would come to better understand labor practices and how to approach my work with care, but not for improving my skills as a digital archivist or for models of digital archiving projects. And that was fine – we are fortunate to have other professional development options in this field that cover those topics!
Fast forward to 2019, and this year I had a very different experience with the program. I’m no longer in a digital archivist position, but I was thrilled to hear some of the things that always kept me up at night as a practicing archivist echoed back to me across many sessions. I never liked that library administrators often fetishized digitization as improved access, treated collection growth as inherently necessary and good, and saw digitization as an ideal way to grow collections. The labor required to actually make digitized collections discoverable doesn’t align with modern archival processing practices (MPLP): it requires re-processing, review, and additional description. This work is often invisible, undervalued, and usually just not done, creating large amounts of content that are inaccessible to anyone who doesn’t already know they exist. Because this labor was rarely accounted for adequately, if at all, by administrators, digitization budgets – the most poorly planned of which are really just reformatting budgets – look fairly efficient. As a digital archivist biased toward privileging born-digital collecting, preservation, and access, I was frequently frustrated at how it was often possible to get support for a digitization project to generate new digital collections, but usually impossible to get support to maintain existing born-digital collections.
In addition to my bias toward supporting born-digital collection management processes above many other library services at our present moment in time, earlier this year I became further disenchanted with the great digitization project when I started to examine our practices at a Climate Teach-In. De-growth is the only thing that can make preservation at GLAM institutions sustainable at this point – full stop. We absolutely cannot continue to collect and preserve unappraised or poorly-appraised digital material, both because our planet depends on it and because we are never going to have the staff to manage the content; we can’t be good stewards of it. We are likely to lose more of what we have now if we continue reactive digital collecting practices and building digital backlogs.
These are all thoughts from the past few years that I felt were validated at this year’s DLF Forum. I heard implications, not just in my own head, that perhaps we should slow down on creating new digital collections and consider diverting attention to born-digital preservation, curation, and appraisal. Over and over I heard people grappling with the digital backlogs we’ve gotten ourselves into and what that could mean for long-term access. I really enjoyed Digital Double Bind: Exploring the impact of More Product, Less Process (MPLP) on digital collections, where more than one represented institution was halting digitization to figure out how to improve processes and ultimately access and usability. I also greatly enjoyed The Story Disrupted: Memory Institutions and Born Digital Collecting. Before I came to the session, I mustered up the courage to ask some questions about testing our assumption that growing our digital collections is a goal…and to my delight, other people articulated my thoughts much better than I could have. It was one of the best open discussions that I’ve attended at a conference.
In each of these sessions I heard a phrase that I haven’t been able to get out of my head. The first is “stable material,” used in questioning what it might look like to prioritize work on “unstable” material (born-digital, AV, software, etc.) over “stable” material (material for which preservation is well understood and well executed). The second is “acceptable loss,” used in acknowledging that we will lose some of what we’ve collected due to climate change.
I was glad to see a much richer conversation on preservation this year, whether it’s due to a broader membership that includes more archivists, digital librarians having a stronger understanding of archives than ever before, or just a general trend in thought that I’m happy to finally see myself reflected in. I’ll give myself a little credit for building some community around these topics. For the past two years I have been co-coordinating the Born-Digital Access Group – thanks to Ashley Taylor for being my partner in crime from 2017 to 2019, and to Alison Clements since August – which focuses on providing access to born-digital material, recognizing preservation as a necessary part of the process. And this year our group released our first two deliverables for comment through December 15: a set of levels-of-access guidelines to help library, archive, and museum workers make decisions about modes of access, and a set of Access Values that will guide our work and our research interests moving forward.
I truly feel like I have been growing into and with the DLF community, and I’m so glad the DLF Forum Focus Fellows program supported that growth this year!
This Thanksgiving, I’ve been spending my holiday wrapping up an update and planning the next large MarcEdit update – and as these plans start to take hold, I wanted to start giving folks an idea of where this is going.
MarcEdit 7.5 will represent the next big release of the MarcEdit 7.x branch. I expect that this release will happen around Spring (May) 2020. Starting Jan. 2020, I expect that there will be a preview-1 of the software available for users to track development. Very likely, 7.5 will not be an in-place update, as it will shift from using the .NET Framework to targeting .NET Core. This is a significant difference: .NET Core represents Microsoft’s standards-based, open source version of the .NET Framework, with core components developed to work the same across Windows, Linux, and Mac.
.NET Core 3.x introduces an updated UI engine. This is exciting, as MarcEdit has been making use of the traditional Windows presentation layer, which uses GDI+. This shift will let the tool take advantage of current GPU processing, which should enable better font scaling and allow me to make use of OpenType fonts. That matters, because most new font development happens in OpenType.
Easier installation – presently, MarcEdit requires .NET to be installed. .NET Core allows for single-executable apps. While MarcEdit won’t be compiled as a single exe, I will be able to package all necessary .NET components with the application, so I can target the most current version of .NET going forward, regardless of what the user has installed.
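As a rough sketch of what this kind of packaging looks like in practice – these are the standard .NET Core 3.x publishing options, not MarcEdit’s actual build configuration:

```shell
# Publish with the .NET Core runtime bundled alongside the app,
# so end users don't need .NET installed ("self-contained" deployment).
dotnet publish -c Release -r win-x64 --self-contained true

# From .NET Core 3.0 onward, the output can optionally be collapsed
# into a single executable:
dotnet publish -c Release -r win-x64 --self-contained true /p:PublishSingleFile=true
```

The runtime identifier (`win-x64` here) selects the target platform; Linux and macOS builds would use `linux-x64` and `osx-x64` respectively.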
Native Z39.50 code – I’m dropping the C++ YAZ libraries and moving to a C# library. This will give me more access to the underlying process.
Updating MarcEditor – .NET introduces something called a FlowDocument. This will enable the kind of paging the MarcEditor does now, as well as introduce some new reading/output types. It will give me the option to let users customize the MarcEditor output so that it renders in a way that supports their workflows. I’m excited about this change.
Updated Accessibility Engine – MarcEdit 7.x introduced better accessibility support. 7.5 will allow the tool to tap into the native OS system functions, as well as enhance them to provide more accessibility options. I’ll flesh these out as I get working on this code.
Updated UI – I continue to look at older windows and forms for ways to clean them up, and that work will continue in 7.5.
This is a start of what to expect. The first preview version of MarcEdit 7.5 will be the first code running on .NET Core 3.x; there will be very little else different. Porting to the new framework will take me about a month to complete. Once done, I’ll start the teardown and rebuild of various parts of the application.
Code4libBC Day 2 lightning talk notes! Code club for adults/seniors – Dethe Elza, Digital Services Technician at Richmond Public Library, started code clubs about two years ago; they used to be called Code and Coffee but got little attendance. The library had code clubs for kids and teens, so he started one for adults and seniors, for people who have done …