Planet Code4Lib

Working with SPARQL in MarcEdit / Terry Reese

Over the past couple of weeks, I’ve been working on expanding the linking services that MarcEdit can work with in order to create identifiers for controlled terms and headings.  One of the services I’ve been experimenting with is NLM’s beta SPARQL endpoint for MESH headings.  MESH has always been a bit foreign to me.  While I was a cataloger in my past, my primary areas of expertise were geographic materials (analog and digital) and traditional monographic data.  While MESH looks like LCSH, it’s quite different as well.  So I’ve been spending some time trying to learn a little more about it, while working on a process to consistently query the endpoint and retrieve the identifier for a preferred term. It’s been a process that’s been enlightening, but also one that has led me to think about how I might create a process that could be used beyond this simple use case, and potentially provide MarcEdit with an RDF engine that could be utilized down the road to make it easier to query, create, and update graphs.

Since MarcEdit is written in .NET, this meant looking to see what components currently exist that provide the type of RDF functionality that I may be needing down the road.  Fortunately, a number of components exist; the one I’m utilizing in MarcEdit is dotnetrdf.  The component provides a robust set of functionality that supports everything I want to do now, and should cover what I want to do later.

With a toolkit found, I spent some time integrating it into MarcEdit, which is never a small task.  However, the outcome will be a couple of new features to start testing out the toolkit and to start providing users with the ability to become more familiar with SPARQL in general.  The first new feature will be the integration of MESH as a known vocabulary that will now be queried and controlled when run through the linked data tool.  The second new feature is a SPARQL Browser.  The idea here is to give folks a tool to explore SPARQL endpoints and retrieve the data in different formats.  At this point, I’m supporting XML, RDFXML, HTML, CSV, Turtle, NTriple, and JSON as output formats.  This means that users can query any SPARQL endpoint and retrieve data back.  In the current proof of concept, I haven’t added the ability to save the output – but I likely will prior to releasing the Christmas MarcEdit update.

Proof of Concept

At this point, this is still somewhat conceptual.  However, the current SPARQL Browser looks like the following:


At present, the Browser assumes that data resides at a remote endpoint, but I’ll likely include the ability to load local RDF, JSON, or Turtle data and provide the ability to query that data as a local endpoint.  Anyway, right now the Browser takes a URL to the SPARQL endpoint, and then the query.  The user can then select the format in which the result set should be output.

Using NLM as an example, say a user wanted to query for the specific term: Congenital Abnormalities – utilizing the current proof of concept, the user would enter the following data:

SPARQL Endpoint:


PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX meshv: <http://id.nlm.nih.gov/mesh/vocab#>
PREFIX mesh: <http://id.nlm.nih.gov/mesh/>

SELECT DISTINCT ?d ?dLabel
WHERE {
  ?d meshv:preferredConcept ?q .
  ?q rdfs:label 'Congenital Abnormalities' .
  ?d rdfs:label ?dLabel .
}
ORDER BY ?dLabel

Running this query within the SPARQL Browser produces a result set that is formatted internally into a graph for output purposes.
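To make the query’s logic concrete, here is a toy Python sketch of the pattern-matching join it asks the endpoint to perform. (MarcEdit itself uses the .NET dotnetrdf library; the triples and identifiers below are invented for illustration.)

```python
# Toy illustration of the join the SELECT query performs: find ?d whose
# preferred concept ?q carries the label we want, then fetch ?d's own label.
# These identifiers are made up for the example.
triples = [
    ("mesh:D000013", "meshv:preferredConcept", "mesh:M0000023"),
    ("mesh:M0000023", "rdfs:label", "Congenital Abnormalities"),
    ("mesh:D000013", "rdfs:label", "Congenital Abnormalities"),
]

def match(pattern, bindings):
    """Yield extended variable bindings for one triple pattern.
    Variables are strings starting with '?'."""
    for triple in triples:
        new = dict(bindings)
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                if p in new and new[p] != t:
                    break          # conflicting binding
                new[p] = t
            elif p != t:
                break              # constant term does not match
        else:
            yield new

# Evaluate the three patterns in sequence, threading bindings through.
results = [{}]
for pattern in [("?d", "meshv:preferredConcept", "?q"),
                ("?q", "rdfs:label", "Congenital Abnormalities"),
                ("?d", "rdfs:label", "?dLabel")]:
    results = [b for bindings in results for b in match(pattern, bindings)]

for b in sorted(results, key=lambda b: b["?dLabel"]):
    print(b["?d"], b["?dLabel"])
```

A real endpoint does essentially this join over millions of triples, with indexes instead of a linear scan.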




The images snapshot a couple of the different output formats.  For example, the full JSON output is the following:

{
  "head": {
    "vars": [ "d", "dLabel" ]
  },
  "results": {
    "bindings": [
      {
        "d": {
          "type": "uri",
          "value": ""
        },
        "dLabel": {
          "type": "literal",
          "value": "Congenital Abnormalities"
        }
      }
    ]
  }
}

The idea behind creating this as a general-purpose tool is that, in theory, it should work for any SPARQL endpoint – for example, the Project Gutenberg Metadata endpoint.  The same type of exploration can be done utilizing the Browser.
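Since JSON is one of the Browser’s output formats, it is worth noting that the SPARQL 1.1 JSON results layout shown above is easy to consume with any JSON parser. A minimal Python sketch (the MeSH URI is filled in purely for illustration):

```python
import json

# A result document in the SPARQL 1.1 Query Results JSON Format;
# the URI value here is illustrative, not a verified MeSH identifier.
doc = json.loads("""
{
  "head": {"vars": ["d", "dLabel"]},
  "results": {"bindings": [
    {"d": {"type": "uri", "value": "http://id.nlm.nih.gov/mesh/D000013"},
     "dLabel": {"type": "literal", "value": "Congenital Abnormalities"}}
  ]}
}
""")

def rows(doc):
    """Flatten each binding into a simple dict of variable -> value."""
    for binding in doc["results"]["bindings"]:
        yield {var: binding[var]["value"]
               for var in doc["head"]["vars"] if var in binding}

for row in rows(doc):
    print(row["d"], row["dLabel"])
```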


Future Work

At this point, the SPARQL Browser represents a proof-of-concept tool, but one that I will make available as part of the MARCNext research toolset in the next update.  Going forward, I will likely refine the Browser based on feedback, but more importantly, start looking at how the new RDF toolkit might allow for the development of dynamic form generation for editing RDF/BibFrame data…at least somewhere down the road.



Intersecting circles / William Denton

A couple of months ago I was chatting about Venn diagrams with a nine-year-old (almost ten-year-old) friend named N. We learned something interesting about intersecting circles, and along the way I made some drawings and wrote a little code.

We started with two sets, but here let’s start with one. We’ll represent it as a circle on the plane. Call this circle c1.

Everything is either in the circle or outside it. It divides the plane into two regions. We’ll label the region inside the circle 1 and the region outside (the rest of the plane) x.

1 circle

Now let’s look at two sets, which is probably the default Venn diagram everyone thinks of. Here we have two intersecting circles, c1 and c2.

2 circles

We need to consider both circles when labelling the regions now. For everything inside c1 but not inside c2, use 1x; for the intersection use 12; for what’s in c2 but not c1 use x2; and for what’s outside both circles use xx.

We can put this in a table:

1 2
1 x
1 2
x 2
x x

This looks like part of a truth table, which of course is what it is. We can use true and false instead of the numbers:

1 2
T F
T T
F T
F F

It takes less space to just list it like this, though: 1x, 12, x2, xx.

It’s redundant to use the numbers, but it’s clearer, and in the elementary school math class they were using them, so I’ll keep with that.

Three circles gives eight regions: 1xx, 12x, 1x3, 123, x2x, xx3, x23, xxx.

3 circles

Four intersecting circles gets busier and gives 14 regions: 1xxx, 12xx, 123x, 12x4, 1234, 1xx4, 1x34, x2xx, x23x, x234, xx3x, xx34, xxx4, xxxx.

4 circles

Here N and I stopped and made a list of circles and regions:

Circles Regions
1 2
2 4
3 8
4 14

When N saw this he wondered how much it was growing by each time, because he wanted to know the pattern. He does a lot of that in school. We subtracted each row from the previous to find how much it grew:

Circles Regions Difference
1 2
2 4 2
3 8 4
4 14 6

Aha, that’s looking interesting. What’s the difference of the differences?

Circles Regions Difference DiffDiff
1 2
2 4 2
3 8 4 2
4 14 6 2

Nine-year-old (almost ten-year-old) N saw this was important. I forget how he put it, but he knew that if the second-level difference is constant then that’s the key to the pattern.

I don’t know what triggered the memory, but I was pretty sure it had something to do with squares. There must be a proper way to deduce the formula from the numbers above, but all I could do was fool around a little bit. We’re adding a new 2 each time, so what if we take it away and see what that gives us? Let’s take the number of circles as n and the result as f(n) for some unknown function f.

n f(n)
1 0
2 2
3 6
4 12

I think I saw that 3 x 2 = 6 and 4 x 3 = 12, so n x (n-1) seems to be the pattern, and indeed 2 x 1 = 2 and 1 x 0 = 0, so there we have it.

Adding the 2 back we have:

Given n intersecting circles, the number of regions formed = n x (n - 1) + 2

Therefore we can predict that for 5 circles there will be 5 x 4 + 2 = 22 regions.
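There is in fact a proper way to deduce this: a constant second difference means the formula is quadratic, and Newton’s forward-difference formula rebuilds it from the table. A quick Python check of the differences and the resulting formula:

```python
def diffs(seq):
    """First differences of a sequence."""
    return [b - a for a, b in zip(seq, seq[1:])]

regions = [2, 4, 8, 14]   # region counts for n = 1..4, from the table
d1 = diffs(regions)       # first differences: [2, 4, 6]
d2 = diffs(d1)            # second differences: [2, 2] -- constant, so quadratic

# Newton's forward-difference formula with f(1) = 2, delta f(1) = 2,
# delta^2 f(1) = 2:
#   f(n) = 2 + 2*(n - 1) + 2*(n - 1)*(n - 2)/2 = n*(n - 1) + 2
def f(n):
    return n * (n - 1) + 2

assert [f(n) for n in range(1, 5)] == regions
print(f(5))   # predicts 22 regions for five circles
```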

I think that here I drew five intersecting circles and counted up the regions and almost got 22, but there were some squidgy bits where the lines were too close together so we couldn’t quite see them all, but it seemed like we’d solved the problem for now. We were pretty chuffed.

When I got home I got to wondering about it more and wrote a bit of R.

I made three functions; the third uses the first two:

  • circle(x,y): draw a circle at (x,y), default radius 1.1
  • roots(n): return the n nth roots of unity (when using complex numbers, x^n = 1 has n solutions)
  • drawcircles(n): draw circles of radius 1.1 around each of those n roots
circle <- function(x, y, rad = 1.1, vertices = 500, ...) {
  rads <- seq(0, 2*pi, length.out = vertices)
  xcoords <- cos(rads) * rad + x
  ycoords <- sin(rads) * rad + y
  polygon(xcoords, ycoords, ...)
}

roots <- function(n) {
  # the n nth roots of unity, as (x, y) pairs
  lapply(
    seq(0, n - 1, 1),
    function (x) c(round(cos(2*x*pi/n), 4), round(sin(2*x*pi/n), 4))
  )
}

drawcircles <- function(n) {
  centres <- roots(n)
  plot(-2:2, type = "n", xlim = c(-2,2), ylim = c(-2,2), asp = 1, xlab = "", ylab = "", axes = FALSE)
  invisible(lapply(centres, function (c) circle(c[[1]], c[[2]])))
}

drawcircles(2) does what I did by hand above (without the annotations):

2 circles in R

drawcircles(5) shows clearly what I drew badly by hand:

5 circles in R

Pushing on, 12 intersecting circles:

12 circles in R

There are 12 x 11 + 2 = 134 regions there.

And 60! This has 60 x 59 + 2 = 3542 regions, though at this resolution most can’t be seen. Now we’re getting a bit op art.

60 circles in R

This is covered in Wolfram MathWorld as Plane Division by Circles, and (2, 4, 8, 14, 24, …) is A014206 in the On-Line Encyclopedia of Integer Sequences: “Draw n+1 circles in the plane; sequence gives maximal number of regions into which the plane is divided.”

Somewhere along the way while looking into all this I realized I’d missed something right in front of my eyes: the intersecting circles stopped being Venn diagrams after 3!

A Venn diagram represents “all possible logical relations between a finite collection of different sets” (says Venn diagram on Wikipedia today). With n sets there are 2^n possible relations. Three intersecting circles divide the plane into 3 x (3 - 1) + 2 = 8 = 2^3 regions, but with four circles we have 14 regions, not 16! 1x3x and x2x4 are missing: there is no region inside only c1 and c3, or only c2 and c4, without the other two circles. With five intersecting circles we have 22 regions, but logically there are 2^5 = 32 possible combinations. (What’s an easy way to calculate which are missing?)
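One easy (if brute-force) way to calculate which combinations are missing: sample the plane and record, for each point, which circles contain it. A Python sketch mirroring the R setup (circles of radius 1.1 centred on the nth roots of unity):

```python
import math
from itertools import product

def region_patterns(n, rad=1.1, step=0.02, extent=2.5):
    """Which in/out combinations actually occur for n circles centred
    on the nth roots of unity? Sample a grid and collect membership
    patterns; each pattern is a tuple of booleans, one per circle."""
    centres = [(math.cos(2 * k * math.pi / n), math.sin(2 * k * math.pi / n))
               for k in range(n)]
    seen = set()
    steps = int(2 * extent / step)
    for i in range(steps + 1):
        x = -extent + i * step
        for j in range(steps + 1):
            y = -extent + j * step
            seen.add(tuple((x - cx) ** 2 + (y - cy) ** 2 <= rad ** 2
                           for cx, cy in centres))
    return seen

def missing(n):
    """The logical combinations (out of 2^n) that no region realizes."""
    return set(product([False, True], repeat=n)) - region_patterns(n)

print(len(region_patterns(4)))   # 14
print(missing(4))                # the 1x3x and x2x4 combinations
```

The grid step just has to be finer than the thinnest region, so this is an approximation rather than a proof, but for small n it matches the counting by hand.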

It turns out there are various ways to draw four- (or more) set Venn diagrams on Wikipedia, like this two-dimensional oddity (which I can’t imagine librarians ever using when teaching search strategies):

4 set Venn diagram

You never know where a bit of conversation about Venn diagrams is going to lead!

Sony leak reveals efforts to revive SOPA / District Dispatch

Working in Washington, D.C., tends to make one a bit jaded: the revolving door, the bipartisan attacks, not enough funding for libraries — the list goes on. So, yes, I am D.C.-weary and growing more cynical. Now I have another reason to be fed up.

Sony Pictures studio.


The Sony Pictures Entertainment data breach has uncovered documents that show that the Motion Picture Association of America (MPAA) has been trying to pull a fast one — reviving the ill-conceived Stop Online Piracy Act (SOPA) legislation (which failed spectacularly in 2012) by apparently buying off state Attorneys General.  Documents show that MPAA has been devising a scheme to get the result it could not get with SOPA — shutting down web sites and, along with them, freedom of expression and access to information.

The details have been covered by a number of media outlets, including The New York Times. The MPAA seems to think that the best solution to shutting down piracy is “make invisible” the web sites of suspected culprits. You may think that libraries have little to worry about; after all we aren’t pirates. But the good guys will be yanked offline, as well as the alleged bad guys. Our provision of public access to the Internet would then be in jeopardy because a few library users allegedly posted protected content on, for example, Pinterest or YouTube. Our protection from liability for the activities of library patrons using public computers could be thrown out the window along with internet access. This makes no sense.

SOPA, touted initially by Congress as a solution to online piracy, also made no sense from the start because it was too broad. If passed, it would have required that libraries police the internet and block web sites whenever asked by law enforcement officials. Technical experts confirmed that the implementation of SOPA could threaten cybersecurity and undermine the Domain Name System (DNS), also known as the very “backbone of the Internet.”

After historically overwhelming public outcry the content community and internet supporters were encouraged to work together on a compromise, parties promised to collaborate, and some work was actually accomplished. Now it seems that, as far as MPAA was concerned, collaboration was just hype.   They were, the leaked documents show, planning all along to get SOPA one way or another.

The library community opposes piracy. But we also oppose throwing the baby out with the bath water.

Update: The Verge has reported that Mississippi Attorney General Hood did indeed launch his promised attack on behalf of the MPAA by serving Google with a 79-page subpoena charging that Google engaged in “deceptive” or “unfair” trade practices under the Mississippi Consumer Protection Act. Google has filed a brief asking the federal court to set aside the subpoena, noting that Mississippi (or any state, for that matter) has no jurisdiction over these matters.


The post Sony leak reveals efforts to revive SOPA appeared first on District Dispatch.

Dodging the Memory Hole: Collaborations to Save the News / Library of Congress: The Signal

The news is often called the “first draft of history” and preserved newspapers are some of the most used collections in libraries. The Internet and other digital technologies have altered the news landscape. There have been numerous stories about the demise of the newspaper and disruption at traditional media outlets. We’ve seen more than a few newspapers shutter their operations or move to strictly digital publishing. At the same time, niche news blogs, citizen-captured video, hyper-local news sites, news aggregators and social media have all emerged to provide a dynamic and constantly changing news environment that is sometimes confusing to consume and definitely complex to encapsulate.

With these issues in mind, and with the goal of creating a network to preserve born-digital journalism, the Reynolds Journalism Institute at the University of Missouri sponsored part one of the meeting Dodging the Memory Hole as part of the Journalism Digital News Archive 2014 forum, an initiative at the Reynolds Institute. Edward McCain (the focus of a recent Content Matters interview on The Signal) has a unique joint appointment at the Institute and the University of Missouri Library as the Digital Curator of Journalism. He and Katherine Skinner, Executive Director of the Educopia Institute (which will host part two of the meeting in May 2015 in Charlotte, N.C.), developed the two-day program, which attracted journalists, news librarians, technologists, academics and administrators.

Cliff Lynch, Director of the Coalition of Networked Information, opened the meeting with a thoughtful assessment of the state of digital news production and preservation. An in-depth case study followed recounting the history of the Rocky Mountain News, its connection to the Denver, CO community, its eventual demise as an actively published newspaper and, ultimately, the transfer of its assets to the Denver Public Library where the content and archives of the Rocky Mountain News remain accessible.

This is the first known arrangement of its kind, and DPL has made its donation agreement with E.W. Scripps Company openly accessible so it can serve as a model for other newspapers and libraries or archives. A roundtable discussion of news executives also revealed opportunities to engage in new types of relationships with the creators of news. Particularly, opening a dialog with the maintainers of content management systems that are used in newsrooms could make the transfer of content out of those systems more predictable and archivable.

Ben Welsh, a database producer at the Los Angeles Times, next debuted his tool Storytracker, which is based on PastPages, a tool he developed to capture screenshots of newspaper websites.  Storytracker allows for the capture of screenshots and the extraction of URLs and their associated text, so links and particular stories or other content elements from a news webpage can be tracked over time and analyzed. Storytracker is free and available for download, and Welsh is looking for feedback on how the tool could be more useful to the web archiving community. Tools like these have the potential to aid in the selection, capture and analysis of web-based content and further the goal of preserving born-digital news.
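The core idea (pull each link and its associated text out of a captured page so a story’s presence on a news homepage can be tracked across snapshots) can be sketched with Python’s standard library. This is an illustration of the concept, not Storytracker’s actual code, and the page markup is invented:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, link text) pairs from an HTML document so a
    story's position on a page can be compared across captures."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# A made-up captured homepage fragment:
homepage = ('<html><body><a href="/news/story-1">Big story</a>'
            '<a href="/news/story-2">Second story</a></body></html>')
parser = LinkExtractor()
parser.feed(homepage)
print(parser.links)
```

Diffing these (href, text) lists between two timestamped captures shows which stories appeared, moved, or vanished.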

Katherine Skinner closed the meeting with an assessment of the challenges ahead for the community, including: unclear definitions and language around preservation; the copyright status of contemporary news content; the technical complexity of capturing and preserving born-digital news; ignorance of emerging types of content; and the lack of relationships between new content creators and stewardship organizations.

In an attempt to meet some of these challenges, three action areas were defined: awareness, standards and practices and legal framework. Participants volunteered to work toward progress in advocacy messaging, exploring public-private partnerships, preserving pre-print newspaper PDFs, preserving web-based news content and exploring metadata and news content management systems. Groups will attempt to demonstrate some progress in these areas over the next six months and share results at the next Dodging the Memory Hole meeting in Charlotte. If you have ideas or want to participate in any of the action areas let us know in the comments below and we will be in touch.

Parable of the Polygons is the future of journalism / Casey Bisson

parable of the polygons

Okay, so I’m probably both taking that too far and ignoring the fact that interactive media have been a reality for a long time. So let me say what I really mean: media organizations that aren’t planning out how to tell stories with games and simulators will miss out.

Here’s my example: Vi Hart and Nicky Case’s Parable of the Polygons shows us how bias, even small bias, can affect diversity. It shows us this problem using interactive simulators, rather than telling us in text or video. We participate by moving shapes around and pulling the levers of change on bias.

This nuclear power plant simulator offers some insight into the complexity that contributed to Fukushima, and I can’t help thinking the whole net neutrality argument would be better explained with a simulator.

The Jane-athon Prototype in Hawaii / Manage Metadata (Diane Hillmann and Jon Phipps)

The planning for the Midwinter Jane-athon pre-conference has been taking up a lot of my attention lately. It’s a really cool idea (credit to Deborah Fritz) to address the desire we’ve been hearing for some time for a participatory, hands-on session on RDA. And let’s be clear: we’re not talking about the RDA instructions–this is about the RDA data model, vocabularies, and RDA’s availability for linked data. We’ll be using RIMMF (RDA in Many Metadata Formats) as our visualization and data creation tool, setting up small teams with leaders who’ve been prepared to support the teams and a wandering phalanx of coaches to give help on the fly.

Part of the planning has to do with building a set of RIMMF ‘records’ to start with, for participants to add on their own resources and explore the rich relationships in RDA. We’re calling these ‘r-balls’ (a cross between RIMMF and tarballs). These zipped-up r-balls will be available for others to use for their own homegrown sessions, along with instructions for using RIMMF and setting up a Jane-athon (or other themed -athon), and also how to contribute their own r-balls for the use of others. In case you’ve not picked it up, this is a radically different training model, and we’d like to make it possible for others to play, too.

That’s the plan for the morning. After lunch we’ll take a look at what we’ve done, and prise out the issues we’ve encountered, and others we know about. The hope is that the participants will walk out the door with both an understanding of what RDA is (more than the instructions) and how it fits into the emerging linked data world.

I recently returned from a trip to Honolulu, where I did a prototype Jane-athon workshop for the Hawaii Library Association. I have to admit that I didn’t give much thought to how difficult it would be to do solo, but I did have the presence of mind to give the organizer of the workshop some preliminary setup instructions (based on what we’ll be doing in Chicago) to ensure that there would be access to laptops with software and records pre-loaded, and a small cadre of folks who had been working with RIMMF to help out with data creation on the day.

The original plan included a day before the workshop with a general presentation on linked data and some smaller meetings with administrators and others in specialized areas. It’s a format I’ve used before and the smaller meetings after the presentation generally bring out questions that are unlikely to be asked in a larger group.

What I didn’t plan for was that I wouldn’t be able to get out of Ithaca on the appointed day (the day before the presentation) thanks not to bad weather, but instead to a non-functioning plane which couldn’t be repaired. So after a phone discussion with Hawaii, I tried again the next day, and everything went smoothly. On the receiving end there was lots of effort expended to make it all work in the time available, with some meetings dribbling into the next day. But we did it, thanks to organizer Nancy Sack’s prodigious skills and the flexibility of all concerned.

Nancy asked the Jane-athon participants to fill out an evaluation, and sent me the anonymized results. I really appreciated that the respondents added many useful (and frank) comments to the usual range of questions. Those comments in particular were very helpful to me, and were passed on to the other MW Jane-athon organizers. One of the goals of the workshop was to help participants visualize, using RIMMF, how familiar MARC records could be automatically mapped into the FRBR structure of RDA, and how that process might begin to address concerns about future workflow and reuse of MARC records. Another goal was to illustrate how RDA’s relationships enhanced the value of the data, particularly for users. For the most part, it looked as if most of the participants understood the goals of the workshop and felt they had gotten value from it.

But there were those who provided frank criticism of the workshop goals and organization (as well as the presenter, of course!). Part of these criticisms involved the limitations of the workshop, wanting more information on how they could put their new knowledge to work, right now. The clearest expression of this desire came in as follows:

“I sort of expected to be given the whole road map for how to take a set of data and use LOD to make it available to users via the web. In rereading the flyer I see that this was not something the presenter wanted to cover. But I think it was apparent in the afternoon discussion that we wanted more information in the big picture … I feel like I have an understanding of what LOD is, but I have no idea how to use it in a meaningful way.”

Aside from the time constraints–which everyone understood–there’s a problem inherent in the fact that very few active LOD projects have moved beyond publishing their data (a good thing, no doubt about it) to using the data published by others. So it wasn’t so much that I didn’t ‘want’ to present more about the ‘bigger picture’; there really wasn’t anything to say, aside from the fact that the answer to that question is still unclear (and I probably wasn’t all that clear about it either). If I had a ‘road map’ to talk about and point them to, I certainly would have shared it, but sadly I have nothing to share at this stage.

But I continue to believe that just as progress in this realm is iterative, it is hugely important that we not wait for the final answers before we talk about the issues. Our learning needs to be iterative too, to move along the path from the abstract to the concrete along with the technical developments. So for MidWinter, we’ll need to be crystal clear about what we’re doing (and why), as well as why there are blank areas in the road-map.

Thanks again to the Hawaii participants, and especially Nancy Sack, for their efforts to make the workshop happen, and the questions and comments that will improve the Jane-athon in Chicago!

For additional information, including a link to register, look here. Although I haven’t seen the latest registration figures, we’re expecting to fill up, so don’t delay!

[these are the workshop slides]

[these are the general presentation slides]

Configuring WordPress Multisite as a Content Management System / Chris Prom

In summer/fall 2012, I posted a series regarding the implementation of WordPress as a content management system.  Time prevented me from describing how we decided to configure WordPress for use in the University of Illinois Archives.  In my next two posts, I’d like to rectify that, first by describing our basic implementation, then by noting (in the second post) some WordPress configuration steps that proved particularly handy.  It’s an opportune time to do this because our Library is engaged in a project to examine options for a new CMS, and WordPress is one option.

When we went live with the main University Archives site in August 2012, one goal was to manage related sites (the American Library Association Archives, the Sousa Archives and Center for American Music, and the Student Life and Culture Archives) in one technology, but to allow a certain amount of local flexibility in the implementation.  Doing this, I felt, would minimize development and maintenance costs while making it easier for staff to add and edit content.  We had a strong desire to avoid staff training sessions and sought to help our many web writers and editors become self sufficient, without letting them wander too far afield from an overall design aesthetic (even if my own design sense was horrible, managing everything in one system would make it easier to apply a better design at a later date).

I began by setting up a WordPress multisite installation and by selecting the Thematic theme framework.  In retrospect, these decisions have proven to be good ones, allowing us to achieve the goals described above.

Child Theme Development

Thematic is a theme framework, and it is not suitable for those who don’t like editing CSS or delving into code (i.e. for people who want to set colors and do extensive styling in the admin interface).  That said, its layout and div organization are easy to understand, and it is well documented.  It includes a particularly strong set of widget areas, which is a huge plus.  It is developer friendly, since it is easy to do site customizations in the child theme without affecting the parent Thematic style or the WordPress core.

Its best feature: You can spin off child themes, while reusing the same content blocks and staying in sync with WordPress best practices.  Even those with limited CSS and/or php skills can quickly develop attractive designs simply by editing the styles and including a few hooks to load images (in the functions file).  In addition to me, two staff members (Denise Rayman and Angela Jordan) have done this for the ALA Archives and SLC Archives.
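For what it’s worth, a Thematic child theme needs little more than a style.css header naming Thematic as its template directory; something like the following sketch (the theme name and description here are hypothetical):

```css
/*
 Theme Name:  University Archives (Thematic child)
 Template:    thematic
 Description: Child theme reusing Thematic's markup, widget areas and hooks;
              only the styles below differ from the parent.
*/

/* site-specific overrides go here */
```

With that in place, WordPress loads Thematic’s templates and the child theme’s styles, which is what lets staff restyle a spin-off site without touching the parent framework.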

Another plus: The Automattic “Theme division” developed and supports Thematic, which means that it benefits from close alignment with WP’s core developer group. Our site has never broken on upgrade when using my thematic child themes; at most we have done a few minutes of work to correct minor problems.

In the end, the decision to use Thematic required more upfront work, but it forced me to learn about theme development and to begin grappling with the WordPress API (e.g. hooks and filters), while setting in place a method for other staff to develop spin-off sites.  More on that in my next post.

Plugin Selection

Once WordPress multisite was running, we spent time selecting and installing plug-ins that could be used on the main site and that would help us achieve desired effects.  The following proved to be particularly valuable and have proven to have good forward compatibility (i.e. not breaking the site when we upgraded WordPress):

  • WPTouch Mobile
  • WP Table Reloaded (adds table editor)
  • wp-jquery Lightbox (image modal windows)
  • WordPress SEO
  • Simple Section Navigation Widget (builds local navigation menus from page order)
  • Search and Replace (admin tool for bulk updating paths, etc.)
  • List Pages Shortcode
  • Jetpack by
  • Metaslider (image carousel)
  • Ensemble Video  Shortcodes (allows embedding AV access copies in campus streaming service)
  • Google Analytics by Yoast
  • Formidable (form builder)
  • CMS Page Order (drag and drop menu for arranging overall site structure)
  • Disqus Comment System

Again, I’ll write more about how we are using these, in my next post.


The best paper I read this year: Polster, Reconfiguring the Academic Dance / William Denton

The best paper I read this year is Reconfiguring the Academic Dance: A Critique of Faculty’s Responses to Administrative Practices in Canadian Universities by Claire Polster, a sociologist at the University of Regina, in Topia 28 (Fall 2012). It’s aimed at professors but public and academic librarians should read it.

Unfortunately, it’s not gold open access. There’s a two-year rolling wall and it’s not out of it yet (but I will ask—it should have expired by now). If you don’t have access to it, try asking a friend or following the usual channels. Or wait. Or pay six bucks. (Six bucks? What good does that do, I wonder.)

Here’s the abstract:

This article explores and critiques Canadian academics’ responses to new administrative practices in a variety of areas, including resource allocation, performance assessment and the regulation of academic work. The main argument is that, for the most part, faculty are responding to what administrative practices appear to be, rather than to what they do or accomplish institutionally. That is, academics are seeing and responding to these practices as isolated developments that interfere with or add to their work, rather than as reorganizers of social relations that fundamentally transform what academics do and are. As a result, their responses often serve to entrench and advance these practices’ harmful effects. This problem can be remedied by attending to how new administrative practices reconfigure institutional relations in ways that erode the academic mission, and by establishing new relations that better serve academics’—and the public’s—interests and needs. Drawing on the work of various academic and other activists, this article offers a broad range of possible strategies to achieve the latter goal. These include creating faculty-run “banks” to transform the allocation of institutional resources, producing new means and processes to assess—and support—academic performance, and establishing alternative policy-making bodies that operate outside of, and variously interrupt, traditional policy-making channels.

This is the dance metaphor:

To offer a simplified analogy, if we imagine the university as a dance floor, academics tend to view new administrative practices as burdensome weights or shackles that are placed upon them, impeding their ability to perform. In contrast, I propose we see these practices as obstacles that are placed on the dance floor and reconfigure the dance itself by reorganizing the patterns of activity in and through which it is constituted. I further argue that because most academics do not see how administrative practices reorganize the social relations within which they themselves are implicated, their reactions to these practices help to perpetuate and intensify these transformations and the difficulties they produce. Put differently, most faculty do not realize that they can and should resist how the academic dance is changing, but instead concentrate on ways and means to keep on dancing as best they can.

A Dance to the Music of Time, by Nicolas Poussin (from Wikipedia)

About the constant struggle for resources:

Instead of asking administrators for the resources they need and explaining why they need them, faculty are acting more as entrepreneurs, trying to convince administrators to invest resources in them and not others. One means to this end is by publicizing and promoting ways they comply with administrators’ desires in an ever growing number of newsletters, blogs, magazines and the like. Academics are also developing and trying to “sell” to administrators new ideas that meet their needs (or make them aware of needs they didn’t realize they had), often with the assistance of expensive external consultants. Ironically, these efforts to protect or acquire resources often consume substantial resources, intensifying the very shortages they are designed to alleviate. More importantly, these responses further transform institutional relations, fundamentally altering, not merely adding to, what academics do and what they are.

About performance assessment:

Another academic strategy is to respect one’s public-serving priorities but to translate accomplishments into terms that satisfy administrators. Accordingly, one might reframe work for a local organization as “research” rather than community service, or submit a private note of appreciation from a student as evidence of high-quality teaching. This approach extends and normalizes the adoption of a performative calculus. It also feeds the compulsion to prove one’s value to superiors, rather than to engage freely in activities one values.

Later, when she covers the many ways people try to deal with or work around the problems on their own:

There are few institutional inducements for faculty to think and act as compliant workers rather than autonomous professionals. However, the greater ease that comes from not struggling against a growing number of rules, and perhaps the additional time and resources that are freed up, may indirectly encourage compliance.

Back to the dance metaphor:

If we return to the analogy provided earlier, we may envision academics as dancers who are continually confronted with new obstacles on the floor where they move. As they come up to each obstacle, they react—dodging around it, leaping over it, moving under it—all the while trying to keep pace, appear graceful and avoid bumping into others doing the same. It would be more effective for them to collectively pause, step off the floor, observe the new terrain and decide how to resist changes in the dance, but their furtive engagement with each obstacle keeps them too distracted to contemplate this option. And so they keep on moving, employing their energies and creativity in ways that further entangle them in an increasingly difficult and frustrating dance, rather than trying to move in ways that better serve their own—and others’ —needs.

Dance II, by Henri Matisse (from Wikipedia)

She ends with a number of useful suggestions about how to change things, and introduces them by saying:

Because so many academic articles are long on critique but short on solutions, I present a wide range of options, based on the reflections and actions of many academic activists both in the past and in the present, which can challenge and transform university relations in positive ways.

Every paragraph hit home. At York University, where I work, we’re going through a prioritization process using the method set out by Robert Dickeson. It’s being used at many universities, and everything about it is covered by Polster’s article. Every reaction she lists, we’ve had. Also, the university is moving to activity-based costing, a sort of internal market system, where some units (faculties) bring in money (from tuition) and all the other units (including the libraries) don’t, and so are cost centres. Cost centres! This has got people in the libraries thinking about how we can generate revenue. Becoming a profit centre! A university library! If thinking like that gets set in us deep, the effects will be very damaging.

NDSR Applications Open, Projects Announced! / Library of Congress: The Signal

The Library of Congress, Office of Strategic Initiatives and the Institute of Museum and Library Services are pleased to announce the official open call for applications for the 2015 National Digital Stewardship Residency, to be held in the Washington, DC area. The application period runs from December 17, 2014 through January 30, 2015. To apply, go to the official USAJobs page.

Looking down Pennsylvania Avenue. Photo by Susan Manus

To qualify, applicants must have a master’s degree or higher, graduating between spring 2013 and spring 2015, with a strong interest in digital stewardship. Currently enrolled doctoral students are also encouraged to apply. Application requirements include a detailed resume and cover letter, undergraduate and graduate transcripts, two letters of recommendation and a creative video that defines an applicant’s interest in the program.  (Visit the NDSR application webpage for more application information.)

For the 2015-16 class, five residents will be chosen for a 12-month residency at a prominent institution in the Washington, D.C. area.  The residency will begin in June, 2015, with an intensive week-long digital stewardship workshop at the Library of Congress. Thereafter, each resident will move to their designated host institution to work on a significant digital stewardship project. These projects will allow them to acquire hands-on knowledge and skills involving the collection, selection, management, long-term preservation and accessibility of digital assets.

We are also pleased to announce the five institutions, along with their projects, that have been chosen as residency hosts for this class of the NDSR. Listed below are the hosts and projects, chosen after a very competitive round of applications:

  • District of Columbia Public Library: Personal Digital Preservation Access and Education through the Public Library.
  • Government Publishing Office: Preparation for Audit and Certification of GPO’s FDsys as a Trustworthy Digital Repository.
  • American Institute of Architects: Building Curation into Records Creation: Developing a Digital Repository Program at the American Institute of Architects.
  • U.S. Senate, Historical Office: Improving Digital Stewardship in the U.S. Senate.
  • National Library of Medicine: NLM-Developed Software as Cultural Heritage.

The inaugural class of the NDSR was also held in Washington, DC in 2013-14. Host institutions for that class included the Association of Research Libraries, Dumbarton Oaks Research Library, Folger Shakespeare Library, Library of Congress, University of Maryland, National Library of Medicine, National Security Archive, Public Broadcasting Service, Smithsonian Institution Archives and the World Bank.

George Coulbourne, Supervisory Program Specialist at the Library of Congress, explains the benefits of the program: “We are excited to be collaborating with such dynamic host institutions for the second NDSR residency class in Washington, DC. In collaboration with the hosts, we look forward to developing the most engaging experience possible for our residents.  Last year’s residents all found employment in fields related to digital stewardship or went on to pursue higher degrees.  We hope to replicate that outcome with this class of residents as well as build bridges between the host institutions and the Library of Congress to advance digital stewardship.”

The residents chosen for NDSR 2015 will be announced by early April 2015. Keep an eye on The Signal for that announcement. For additional information and updates regarding the National Digital Stewardship Residency, please see our website.

See the Library’s official press release here.

New CopyTalk webinar archive available / District Dispatch

Sign reading "Best Practices next exit"

Photo by Barry Dahl

An archive of the CopyTalk webinar “Introducing the Statement of Best Practices in Fair Use of Collections Containing Orphan Works of Libraries, Archives and Other Memory Institutions” is now available. The webinar was hosted by ALA in December 2014 and was presented by Dave Hansen (UC Berkeley and UNC Chapel Hill) and Peter Jaszi (American University).

In this webinar, the speakers introduce the “Statement of Best Practices in Fair Use of Collections Containing Orphan Works for Libraries, Archives, and Other Memory Institutions.” This Statement, the most recent community-developed best practices in fair use, is the result of intense discussion group meetings with over 150 librarians, archivists, and other memory institution professionals from around the United States to document and express their ideas about how to apply fair use to collections that contain orphan works, especially as memory institutions seek to digitize those collections and make them available online. The Statement outlines the fair use rationale for use of collections containing orphan works by memory institutions and identifies best practices for making assertions of fair use in preservation and access to those collections.

Watch the webinar

CopyTalks are scheduled for the first Thursday of even numbered months.

Archives of two earlier webinars are also available:

International copyright, with Janice Pilch from Rutgers University Library

Open licensing and the public domain: tools and policies to support libraries, scholars and the public, with Tom Vollmer from Creative Commons

The post New CopyTalk webinar archive available appeared first on District Dispatch.

Getting Started with GIS / LITA

Coming for the New Year: Learning Opportunities with LITA

LITA will have multiple learning opportunities available over the upcoming year, including hot topics to keep your brain warm over the winter. Starting off with:

Getting Started with GIS (Geographic Information Systems)

Instructor: Eva Dodsworth, University of Waterloo

Offered: January 12 – February 9, 2015, with asynchronous weekly lectures, tutorials, assignments, and group discussion. There will be one 80-minute lecture to view each week, along with two tutorials and one assignment that will take 1-3 hours to complete, depending on the student. Moodle login info will be sent to registrants the week prior to the start date.

Web Course Costs: LITA Member: $135 / ALA Member: $195 / Non-member: $260

Register Online, page arranged by session date (login required)

Here’s the Course Page

Getting Started with GIS is a three-week course modeled on Eva Dodsworth’s LITA Guide of the same name. The course provides an introduction to Geographic Information Systems (GIS) in libraries. Through hands-on exercises, discussions and recorded lectures, students will acquire skills in using GIS software programs, social mapping tools, map making, digitizing, and researching for geospatial data. This three-week course provides introductory GIS skills that will prove beneficial in any library or information resource position.

No previous mapping or GIS experience is necessary. Some of the mapping applications covered include:

  • Introduction to Cartography and Map Making
  • Online Maps
  • Google Earth
  • KML and GIS files
  • ArcGIS Online and Story Mapping
  • Brief introduction to desktop GIS software
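
For a taste of what the “GIS files” in the list above look like under the hood, here is a minimal sketch (my own illustration, not course material) that builds a GeoJSON point feature, the lightweight interchange format that web-mapping tools such as ArcGIS Online consume; the place name and coordinates are illustrative.

```python
import json

# A single point of interest expressed as a GeoJSON Feature.
# Note that GeoJSON coordinates are [longitude, latitude], the
# reverse of the "lat, long" order people usually say aloud.
feature = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [-80.5449, 43.4723],  # approx. University of Waterloo
    },
    "properties": {"name": "Dana Porter Library"},
}

collection = {"type": "FeatureCollection", "features": [feature]}
print(json.dumps(collection, indent=2))
```

A file containing nothing more than this text, saved with a .geojson extension, can be dropped into most modern mapping tools and rendered as a point on a map.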

Participants will gain the following GIS skills:

  • Knowledge of popular online mapping resources
  • Ability to create an online map
  • An introduction to GIS, GIS software and GIS data
  • An awareness of how other libraries are incorporating GIS technology into their library services and projects

Instructor: Eva Dodsworth is the Geospatial Data Services Librarian at the University of Waterloo Library where she is responsible for the provision of leadership and expertise in developing, delivering, and assessing geospatial data services and programs offered to members of the University of Waterloo community. Eva is also an online part-time GIS instructor at a number of Library School programs in North America.

Register Online, page arranged by session date (login required)

Re-Drawing the Map Series

Don’t forget the final session in the series is coming up January 6, 2015. You can attend this final single session or register for the series and get the recordings of the previous two sessions on Web Mapping and OpenStreetMaps. Join LITA instructor Cecily Walker for:

Coding maps with Leaflet.js

Tuesday January 6, 2015, 1:00 pm – 2:00 pm Central Time
Instructor: Cecily Walker

Ready to make your own maps and go beyond a directory of locations? Add photos and text to your maps with Cecily as you learn to use the Leaflet JavaScript library.

Register Online, page arranged by session date (login required)

Webinar Costs: LITA Member $39 for the single session and $99 for the series.

Check out the series web page for all cost options.

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4268 or Mark Beatty,


Another round of foolishness with the DMCA / District Dispatch

Man face palms

Photo by hobvias sudoneighm

It’s that time again when the U.S. Copyright Office accepts proposals for exemptions to the anti-circumvention provision of the Digital Millennium Copyright Act (DMCA).


The DMCA (which added chaff to the Copyright Act of 1976) includes a new Chapter 12 regarding “technological protection measures,” which is another name for digital rights management (DRM). The law says that it is a violation to circumvent (= hack) DRM that has been used by the rights holder to protect access to digital content. One cannot, for example, break a passcode that protects access to an online newspaper without being a subscriber.

Here’s the problem: Sometimes DRM gets in the way of actions that are not infringements of copyright. Let’s say you have lawful access to an e-book (you bought the book, fair and square), but you are a person with a print disability, and you need to circumvent to enable text-to-speech (TTS) functionality which has been disabled by DRM. This is a violation of the circumvention provision. One would think that this kind of circumvention is reasonable, because it simply entails making a book accessible to the person that purchased it. Reading isn’t illegal (in the United States).

Because Congress thought lawful uses of protected content may be blocked by technology, it included in the DMCA a process to determine when circumvention should be allowed: the 1201 rulemaking. Every three years, the Copyright Office accepts comments from people who want to circumvent technology for lawful purposes. These people must submit a legal analysis of why an exemption should be allowed, and provide evidence that a technological impediment exists. The Copyright Office reviews the requests, considers if any requests bear scrutiny, holds public hearings, reads reply comments, writes a report, and makes a recommendation to the Librarian of Congress, who then determines if any of the proposals are warranted. (The whole rigmarole takes 5-6 months.) An exemption allows people with print disabilities to circumvent DRM to enable TTS for three years. After that, the exemption expires and the entire process starts over again. It is time-consuming and costly, requiring the collection of evidence and legal counsel. The several days of public hearings are surreal. Attendees shake their heads in disbelief. Everyone moans and groans, including the Copyright Office staff. I am not exaggerating.

Ridiculous? Undoubtedly.

One would think that rights holders would just say “sure, go ahead and circumvent e-books for TTS, we don’t care.” But they do care. Some rights holders think allowing TTS will cut into their audiobook market. Some rights holders think that TTS is an unauthorized public performance and therefore an infringement of copyright. Some authors do not want their books read aloud by a computer, feeling it degrades their creative work. This madness can be stopped if Congress eliminates, or at least amends, this DMCA provision. Why not make exemptions permanent?

In the meantime…

The Library Copyright Alliance (LCA), of which ALA is a member, participates in the triennial rulemaking. Call us crazy. We ask, “What DRM needs to be circumvented this time around?” This question is hard to answer because it is difficult to know what library users can’t do that is a lawful act because DRM is blocking something. We solicit feedback from the library community, but response is usually meager because the question requires proving a negative.

For the last couple of rulemaking cycles, LCA focused on an exemption for educators (and students in media arts programs) that must circumvent DRM on DVDs in order to extract film clips for teaching, research and close study. To be successful, we need many examples of faculty and teachers who circumvent DRM to meet pedagogical goals or for research purposes. Right now, this circumvention allows educators to exercise fair use. BUT this fair use will no longer be possible if we cannot prove it is necessary.

For those librarians and staff who work with faculty, we ask for examples! We want to extend the exemption to K-12 teachers, so school librarians: we need to hear from you as well. Heed this call! Take a moment to help us survive this miserable experience on behalf of educators and learners.

NOTE: Ideally, we would like examples on or before January 15th, 2015, but will accept examples through January 28th, 2015


Contact Carrie Russell at ALA’s Office for Information Technology Policy at Or call 800.941.8478.

The post Another round of foolishness with the DMCA appeared first on District Dispatch.

Economic Failures of HTTPS / David Rosenthal

Bruce Schneier points me to Assessing legal and technical solutions to secure HTTPS, a fascinating, must-read analysis of the (lack of) security on the Web from an economic rather than a technical perspective by Axel Arnbak and co-authors from Amsterdam and Delft universities. Do read the whole paper, but below the fold I provide some choice snippets.

Arnbak et al point out that users are forced to trust all Certificate Authorities (CAs):
A crucial technical property of the HTTPS authentication model is that any CA can sign certificates for any domain name. In other words, literally anyone can request a certificate for a Google domain at any CA anywhere in the world, even when Google itself has contracted one particular CA to sign its certificate.
Many CAs are untrustworthy on their face:
What’s particularly troubling is that a number of the trusted CAs are run by authoritarian governments, among other less trustworthy institutions. Their CAs can issue a certificate for any Web site in the world, which will be accepted as trustworthy by browsers of all Internet users.
The security practices of even leading CAs have proven to be inadequate:
three of the four market leaders got hacked in recent years and that some of the “security” features of these services do not really provide actual security.
Customers can't actually buy security, only the appearance of security:
Information asymmetry prevents buyers from knowing what CAs are really doing. Buyers are paying for the perception of security, a liability shield, and trust signals to third parties. None of these correlates verifiably with actual security. Given that CA security is largely unobservable, buyers’ demands for security do not necessarily translate into strong security incentives for CAs.
There's little incentive for CAs to invest in better security:
Negative externalities of the weakest-link security of the system exacerbate these incentive problems. The failure of a single CA impacts the whole ecosystem, not just that CA’s customers. All other things being equal, these interdependencies undermine the incentives of CAs to invest, as the security of their customers depends on the efforts of all other CAs.
They conclude:
Regardless of major cybersecurity incidents such as CA breaches, and even the Snowden revelations, a sense of urgency to secure HTTPS seems nonexistent. As it stands, major CAs continue business as usual. For the foreseeable future, a fundamentally flawed authentication model underlies an absolutely critical technology used every second of every day by every Internet user. On both sides of the Atlantic, one wonders what cybersecurity governance really is about.
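
The “any CA can vouch for any domain” property the authors describe is easy to observe on your own machine. The short sketch below (an illustration of the trust model, not anything from the paper) uses Python’s standard library to list the root CAs that your system, and therefore every TLS connection you make, implicitly trusts:

```python
import ssl

# Build a client-side TLS context the way browsers and HTTP libraries do:
# it silently trusts every root CA certificate installed on this system.
ctx = ssl.create_default_context()

stats = ctx.cert_store_stats()   # counts of loaded certs, CRLs, and CA certs
roots = ctx.get_ca_certs()       # metadata for each trusted root

print(f"Trusted root CAs on this machine: {stats['x509_ca']}")
for ca in roots[:3]:
    # Each issuer listed here can sign a certificate for ANY domain,
    # and this software will accept it without complaint.
    print(ca.get("issuer"))
```

Any one of the organizations in that list, however obscure, is a single point of failure for the whole system, which is exactly the weakest-link externality the paper describes.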

Are QR Codes Dead Yet? / LITA

The QR Code Is Alive (Meme)

It’s a meme!

Flipping through a recent issue of a tech-centric trade publication that shall not be named, I was startled to see that ads on the inside flap and the back cover both featured big QR codes. Why was I startled? Because techies, including many librarians, have been proclaiming the death of the QR code for years. Yet QR codes cling to life, insinuating themselves even into magazines on information technology. In short, QR codes are not dead. But they probably ought to be.

Not everywhere or all at once, no. I did once see this one librarian at this one conference poster session use his smartphone to scan a giant QR code. That was the only time in five years I have ever seen anyone take advantage of a QR code.

When reading a print magazine, I just want to roll with the print experience. I don’t want to grab my phone, type the 4-digit passcode, pull up the app, and hold the camera steady. I want to read.

I’d rather snap a photo of the page in question. That way, I can experience the ad holistically. I also can explore the website at leisure rather than being whisked to a non-mobile optimized web page where I must fill out 11 fields of an online registration form to which UX need not apply.

So . . . Should I Use A QR Code?

Thursday Threads: Google Maps is Good, DRM is Bad, and Two-factor Authentication can be Ugly / Peter Murray

Looking at maps, East Carolina University Digital Collections.

Receive DLTJ Thursday Threads:

by E-mail

by RSS

Delivered by FeedBurner

Three threads this week: how mapping technologies have come such a long way in the past few years; why explaining digital rights management is bad for your sanity; and a cautionary tale for those trying to be more conscious about securing their digital lives.

Feel free to send this to others you think might be interested in the topics. If you find these threads interesting and useful, you might want to add the Thursday Threads RSS Feed to your feed reader or subscribe to e-mail delivery using the form to the right. If you would like a more raw and immediate version of these types of stories, watch my Pinboard bookmarks (or subscribe to its feed in your feed reader). Items posted there are also sent out as tweets; you can follow me on Twitter. Comments and tips, as always, are welcome.

The Huge, Unseen Operation Behind the Accuracy of Google Maps

The maps behind those voices are packed with far more data than most people realize. On a recent visit to Mountain View, I got a peek at how the Google Maps team assembles their maps and refines them with a combination of algorithms and meticulous manual labor—an effort they call Ground Truth. The project launched in 2008, but it was mostly kept under wraps until just a couple years ago. It continues to grow, now covering 51 countries, and algorithms are playing a bigger role in extracting information from satellite, aerial, and Street View imagery.

- The Huge, Unseen Operation Behind the Accuracy of Google Maps, by Greg Miller, Wired, 8-Dec-2014

A fascinating look at the application of machine learning to map-making. What used to be hand-done (and is still hand-done with projects like Open Street Map) is now machine done with human review.

Things That Make the Librarian Angry: Enforcing artificial scarcity is a bad role for a public institution

Having a waiting list for library ebooks is really stupid, on the face of it. As a librarian I’m pretty savvy about digital content—enough to know that patrons want it, lots of it. However, we have a short list of ways that we can offer it in a lendable fashion. At work I keep my game face on. At home I just want to tell people the truth, the frustrating truth: offering digital content in this way has short term benefits but long term negative consequences.

- Things That Make the Librarian Angry, by Jessamyn West, The Message — Medium, 12-Dec-2014

Jessamyn leads the layperson through the limited choices that librarians need to make as they select ebooks paired with her frustration at not being able to always say what she wants to say to patrons. Digital is different, and when we try to make it behave like physical objects we find that the analogous processes break down.

The Dark Side of Apple’s Two-Factor Authentication

I’d turned two-factor on my Apple ID in haste when I read Mat Honan’s harrowing story about how his Mac, iPhone and other devices were wiped when someone broke into his iCloud account. That terrified me into thinking about real security for the first time.
When I finally had time to investigate the errors appearing on my machine, I discovered that not only had my iCloud account been locked, but someone had tried to break in. Two-factor had done its job and kept the attacker out, however, it had also inadvertently locked me out.
The Apple support page relating to lockouts assured me it would be easy to recover my account with a combination of any two of either my password, a trusted device or the two-factor recovery key. When I headed to the account recovery service, dubbed iForgot, I discovered that there was no way back in without my recovery key. That’s when it hit me; I had no idea where my recovery key was or if I’d ever even put the piece of paper in a safe place.

- The Dark Side of Apple’s Two-Factor Authentication, by Owen Williams, The Next Web, 8-Dec-2014

Two-factor authentication — when you sign into a system with both something you know (your password) and something you have (like an app that generates a sequence of numbers based on the current time) — is an important step to increase the security of your online identity. But as with all things dealing with security (i.e. choosing strong, unique passwords and not sharing accounts), it isn’t always easy to do it right. It takes effort and forethought (as in this case, the need to print and safely store that long string of random letters) to do effectively.
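
For the curious, the “app that generates a sequence of numbers based on the current time” is usually an implementation of TOTP (RFC 6238). Here is a minimal Python sketch of the algorithm; this is an illustration of the standard, not any vendor’s actual code:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second counter."""
    now = time.time() if for_time is None else for_time
    return hotp(secret, int(now // step), digits)

# Your phone and the server share `secret`; both can compute the same
# short-lived code, so possession of the secret is what is being proven.
print(totp(b"12345678901234567890"))
```

Because the code depends only on the shared secret and the clock, losing the secret (or, as in the story above, the recovery key that backs it up) means losing the ability to prove “something you have.”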

Thank You to Our Outgoing CEO / Open Knowledge Foundation

This is a joint blog post by Open Knowledge CEO Laura James and Open Knowledge Founder and President Rufus Pollock.

In September we announced that Laura James, our CEO, is moving on from Open Knowledge and we are hiring a new Executive Director.

From Rufus: I want to express my deep appreciation for everything that Laura has done. She has made an immense contribution to Open Knowledge over the last 3 years and has been central to all we have achieved. As a leader, she has helped take us through a period of incredible growth and change and I wish her every success on her future endeavours. I am delighted that Laura will be continuing to advise and support Open Knowledge, including joining our Advisory Council. I am deeply thankful for everything she has done to support both Open Knowledge and me personally during her time with us.

From Laura: It’s been an honour and a pleasure to work with and support Open Knowledge, and to have the opportunity to work with so many brilliant people and amazing projects around the world. It’s bittersweet to be moving on from such a wonderful organisation, but I know that I am leaving it in great hands, with a smart and dedicated management team and a new leader joining shortly. Open Knowledge will continue to develop and thrive as the catalyst at the heart of the global movement around freeing data and information, ensuring knowledge creates power for the many, not the few.

Sufia - 4.3.1 / FOSS4Lib Recent Releases

Release Date: 
Wednesday, December 17, 2014

Last updated December 17, 2014. Created by Peter Murray on December 17, 2014.

The 4.3.1 release of Sufia includes the ability to store users' ORCID identifiers and display them in the user profile, and includes enhancements to usage statistics. It also includes support for Blacklight 5.8 and a good number of bugfixes. Thanks to Carolyn Cole, Michael Tribone, Valerie Maher, Adam Wead, Misty DeMeo, and Mike Giarlo for their work on this release.

View the upgrade notes and a complete changelog on the release page:

GPO is now the Government Publishing Office / District Dispatch

In addition to funding the federal government, H.R. 83, the appropriations bill that was signed into law last night by President Obama, officially changes the name of the federal agency that helps to ensure a more transparent government. Language in section 1301 renames the Government Printing Office to the Government Publishing Office, and while the agency will still retain its well-known initials (GPO), this name will better represent its work moving forward.

Davita Vance-Cooks, now titled the Director of the Government Publishing Office (previously Public Printer), stated in the agency’s press release: “This is a historic day for GPO. Publishing defines a broad range of services that includes print, digital, and future technological advancements. The name Government Publishing Office better reflects the services that GPO currently provides and will provide in the future.” ALA looks forward to continuing our work with the newly renamed agency!

The post GPO is now the Government Publishing Office appeared first on District Dispatch.

DPLA Contributes to National History Day in Missouri / DPLA

Serendipity can still top search. I learned about the Digital Public Library of America (DPLA) not online, but in a print article about efforts in Minnesota to share its history digitally. It was intriguing. Thus began my exploration of DPLA: search, sign up, receive some emails, open and read a few, get intrigued and before you know it you are applying to be a DPLA Community Rep.

Brent Schondelmeyer, DPLA Community Rep in Missouri

I live in Missouri – in the vast Middle Border country – which is an odd amalgamation of several histories: border state, Midwest, gateway to the West, and a state approaching the bicentennial of its entrance into the Union. Missouri: the land of Mark Twain, Jesse James, Harry Truman, Scott Joplin, Charlie Parker and Jon Hamm. I was intrigued by the way in which DPLA provides a new way to discover, organize and share a history already known, and to deliver it digitally. The approach is disruptive – often a feature of technology innovation – and challenging to old-school approaches to doing history. Being involved with local and state historical societies, you can get a skewed view of the general public interest, given that these membership organizations are challenged to renew their base and reassess their role.

But that harsh assessment changes after attending a National History Day in Columbia (MO). It had been some time since I had seen that many folks at a history-related event outside of a local appearance by David McCullough. I left that experience renewed in the thought that local and regional history has a promising, but perhaps different, future. Young students want to discover, share and interpret stories of places, events, and ideas. But they likely will do it in new ways. Search and discovery begin not at the library, but on an electronic device.

This year DPLA is an official sponsor of National History Day in Missouri. With the excellent assistance of the DPLA staff, teaching guides and materials were prepared sharing DPLA resources related to this year’s theme: Leadership and Legacy in History. Also, a prize was created for the student whose work makes the best use of DPLA-related resources at the state finals next spring. My simple hope is that these young folks will move from searching online to helping get the history of our state online. Otherwise we may lose much near-history of the Show-Me State, or have it hidden to others. We have extensive records of centuries past – paper records stored, few finding aids and limited public hours for research in libraries and archives. Getting online – the indexes, if not actual documents, images and audio files – is essential. And before more recent history is lost – not to the dustbins of history, but to landfills – as this “greatest generation” leaves us, we have a chance to learn what they did, knew and experienced.

Working Together to Clarify Approaches to Library Linked Data / HangingTogether

As we announced earlier in the month, we have been communicating and collaborating with the Library of Congress over our different approaches to library linked data.

The Library of Congress is developing BIBFRAME, which is slated to eventually replace MARC by providing the added benefits that accrue to using a linked data solution for library metadata. Meanwhile, we here at OCLC have a different set of use cases, largely around syndicating library data to the wider web, and we have chosen to base our efforts on the metadata standard most widely adopted by web search engines: Schema.org.

We feel that these approaches are not in competition, and that by better understanding our different approaches we can all learn how best to make our data assets available on the web as linked data. The first major step in this process is a whitepaper, currently in collaborative development by OCLC and LC staff, that will compare and contrast our approaches. The goal is to publish it in time for ALA Midwinter at the end of January 2015, so watch for it!

About Roy Tennant

Roy Tennant works on projects related to improving the technological infrastructure of libraries, museums, and archives.

Register Now for LITA Midwinter Institutes / LITA

Whether you’ll be attending Midwinter or are just looking for a great one-day continuing education event in the Chicago/Midwest area, we hope you’ll join us.

When? All workshops will be held on Friday, January 30, 2015, from 8:30 a.m. to 4:00 p.m. at McCormick Place in Chicago, IL.

Cost for LITA Members: $235 (ALA $350 / Non-ALA $380, see below for details)

Here’s this year’s terrific line up:

Developing mobile apps to support field research
Instructor: Wayne Johnston, University of Guelph Library

Researchers in most disciplines do some form of field research. Too often they collect data on paper, which is not only inefficient but vulnerable to data loss. Surveys and other data collection instruments can easily be created as mobile apps, with the resulting data stored on the campus server and immediately available for analysis. The apps also enable added functionality, such as improved data validity through the use of authority files and the capture of GPS coordinates. This support for field research represents a new way for academic libraries to connect with researchers within the context of a broader research data management strategy.

Introduction to Practical Programming
Instructor: Elizabeth Wickes, University of Illinois at Urbana-Champaign

This workshop will introduce foundational programming skills using the Python programming language. There will be three sections to this workshop: a brief historical review of computing and programming languages (with a focus on where Python fits in), hands on practice with installation and the basics of the language, followed by a review of information resources essential for computing education and reference. This workshop will prepare participants to write their own programs, jump into programming education materials, and provide essential experience and background for the evaluation of computing reference materials and library program development. Participants from all backgrounds with no programming experience are encouraged to attend.

From Lost to Found: How User Testing Can Improve the User Experience of Your Library Website
Instructors: Kate Lawrence, EBSCO Information Services; Deirdre Costello, EBSCO Information Services; Robert Newell, University of Houston

When two user researchers from EBSCO set out to study the digital lives of college students, they had no idea the surprises in store for them. The online behaviors of “digital natives” were fascinating: from students using Google to find their library’s website, to what research terms and phrases students consider another language altogether: “library-ese.” Attendees of this workshop will learn how to conduct usability testing and participate in a live testing exercise. Participants will leave the session with the knowledge and confidence to conduct user testing that will yield actionable and meaningful insights about their audience.

More information about Midwinter Workshops.

Registration Information:
LITA members get one third off the cost of Mid-Winter workshops. Use the discount promotional code: LITA2015 during online registration to automatically receive your member discount. Start the process at the ALA web sites:

Conference web site:
Registration start page: 
LITA Workshops registration descriptions:

When you start the registration process and BEFORE you choose a workshop, you will encounter the Personal Information page. On that page there is a field to enter the discount promotional code LITA2015. If you do so, the discounted price of $235 will be automatically displayed and applied on the workshop selection page, and the discounted total will be reflected in the Balance Due line on the payment page.


Please contact the LITA Office if you have any registration questions.

Goobi / FOSS4Lib Updated Packages

Last updated December 17, 2014. Created by Peter Murray on December 17, 2014.

Goobi is an open source software application for digitisation projects and workflow management in libraries, museums and archives.

Goobi allows you to model, manage and supervise freely definable production processes. It supports importing data from library catalogues, scanning, content-based indexing, and the digital presentation of results in standardised formats.


This is How I Work (Bryan J. Brown) / ACRL TechConnect

Editor’s Note: This post is part of ACRL TechConnect’s series by our regular and guest authors about The Setup of our work.


After being tagged by Eric Phetteplace, I was pleased to discover that I had been invited to take part in the “This is How I Work” series. I love seeing how other people view work and office life, so I’m happy to see this trend make it to the library world.

Name: Bryan J. Brown (@bryjbrown)

Location: Tallahassee, Florida, United States

Current Gig: Web Developer, Technology and Digital Scholarship, Florida State University Libraries

Current Mobile Device: Samsung Galaxy Note 3 w/ OtterBox Defender cover (just like Becky Yoose!). It’s too big to fit into my pants pocket comfortably, but I love it so much. I don’t really like tablets, so having a gigantic phone is a nice middle ground.

Current Computer: 15-inch MacBook Pro w/ 8GB of RAM. I’m a Linux person at heart, but when work offers you a free MBP you don’t turn it down. I also use a Thunderbolt monitor in my office for dual-screen action.

Current Tablet: 3rd gen. iPad, but I don’t use it much these days. I bought it for reading books, but I strongly prefer to read them on my phone or laptop instead. The iPad just feels huge and awkward to hold.

One word that best describes how you work: Structured. I do my best when I stay within the confines of a strict system and/or routine that I’ve created for myself; it helps me keep the chaos of the universe at bay.

What apps/software/tools can’t you live without?

Unixy stuff:

  • Bash: I’ve tried a few other shells (tcsh, zsh, fish), but none have inspired me to switch.
  • Vim: I use this for everything, even journal entries and grocery lists. I have *some* customizations, but it’s pretty much stock (except I love my snippets plugin).
  • tmux: Like GNU Screen, but better.
  • Vagrant: The idea of throwaway virtual machines has changed the way I approach development. I do all my work inside Vagrant machines now. When I eventually fudge things, I can just run ‘vagrant destroy’ and pretend it never happened!
  • Git: Another game changer. I shouldn’t have waited so long to learn about version control. Git has saved my bacon countless times.
  • Anaconda: I’m a Python fan, but I like Python 3 and the scientific packages. Most systems only have Python 2, and a lot of the scientific packages fail to build for obscure reasons. Anaconda takes care of all that nonsense and allows you to have the best, most current Python goodness on any platform. I find it very comforting to know that I can use my favorite language and packages everywhere no matter what.
  • Todo.txt-CLI: A command line interface to the Todo.txt system, which I am madly in love with. If you set it to save your list to Dropbox, you can manage it from other devices, too. My work life revolves around my to-do list which I mostly manage at my laptop with Todo.txt-CLI.


  • Dropbox: Keeping my stuff in order across machines is a godsend. All my most important files are kept in Dropbox so I can always get to them, and being able to put things in a public folder and share the URL is just awesome.
  • Google Drive: I prefer Dropbox for plain storage, but the ability to write documents/spreadsheets/drawings/surveys at will, store them in the cloud, share them with coworkers, and have them write along with you is too cool. I can’t imagine working in a pre-Drive world.
  • Trello: I only recently discovered Trello, but now I use it for everything at work. It’s the best thing for keeping a group of people on track with a large project, and moving cards around is strangely satisfying. Also you can put rocket stickers on cards.
  • Quicksilver for Mac: I love keyboard shortcuts. A lot. Quicksilver is a Mac app for setting up keyboard shortcuts for everything. All my favorite apps have hotkeys now.
  • Todo.txt for Android: A nice mobile interface for the Todo.txt system. One of the few apps I’ve paid money for, but I don’t regret it.
  • Plain.txt for Android: This one is kind of hard to explain until you use it. It’s a mobile text editor for taking notes that get saved in Dropbox, which is useful in more ways than you can imagine. Plain.txt is my mobile interface to the treasure trove of notes I usually write in Vim on my laptop. I keep everything from meeting notes to recipes (as well as the previously mentioned grocery lists and journal entries) in it. Second only to Todo.txt in helping me stay sane.

What’s your workspace like?

My office is one of my favorite places. A door I can shut, a big whiteboard and lots of books and snacks. Who could ask for more? I’m trying out the whole “standing desk” thing, and slowly getting used to it (but it *does* take some getting used to). My desk is multi-level (it came from a media lab that no longer exists where it held all kinds of video editing equipment), so I have my laptop on a stand and my second monitor on the level above it so that I can comfortably look slightly down to see the laptop or slightly up to see the big display.


What’s your best time-saving trick?

Break big, scary, complicated tasks into smaller ones that are easy to do. It makes it easier to get started and stay on track, which almost always results in getting the big scary thing done way faster than you thought you would.

What’s your favorite to-do list manager?

I am religious about my use of Todo.txt, whether from the command line or with my phone. It’s my mental anchor, and I am obsessive about keeping it clean and not letting things linger for too long. I prioritize things as A (get done today), B (get done this week), C (get done soon), and D (no deadline).

I’m getting into Scrum lately, so my current workflow is to make a list of everything I want to finish this week (my sprint) and mark them as B priority (my sprint backlog, either moving C tasks to B or adding new ones in manually). Then, each morning I pick out the things from the B list that I want to get done today and I move them to A. If some of the A things are complicated I break them into smaller chunks. I then race myself to see if I can get them all done before the end of the day. It turns boring day-to-day stuff into a game, and if I win I let myself have a big bowl of ice cream.
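The Todo.txt system this workflow relies on is just plain text: one task per line, with an optional “(A) ” prefix marking priority. A minimal sketch of the morning promote-a-task step described above (the helper functions here are my own illustration, not part of Todo.txt-CLI itself):

```python
import re
from typing import List, Optional

# A prioritized task starts with "(A) " through "(Z) ".
PRIORITY_RE = re.compile(r"^\((?P<pri>[A-Z])\) ")

def priority(task: str) -> Optional[str]:
    """Return the task's priority letter, or None if it has none."""
    m = PRIORITY_RE.match(task)
    return m.group("pri") if m else None

def with_priority(tasks: List[str], pri: str) -> List[str]:
    """All tasks carrying the given priority letter."""
    return [t for t in tasks if priority(t) == pri]

def promote(task: str, new_pri: str) -> str:
    """Re-prioritize a task, replacing any existing '(X) ' prefix."""
    return f"({new_pri}) " + PRIORITY_RE.sub("", task)

todo = [
    "(A) finish sprint report",
    "(B) review pull requests",
    "(C) clean up wiki",
    "buy ice cream",
]

# Morning routine: pick a B (sprint backlog) task for today
# by promoting it to A.
today = promote(with_priority(todo, "B")[0], "A")
print(today)  # (A) review pull requests
```

Because the format is plain text, the same promotion can be done with Todo.txt-CLI’s `pri` command or by hand in any editor, which is much of the system’s appeal.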

Besides your phone and computer, what gadget can’t you live without?

Probably a nice, comfy pair of over-the-ear headphones. I hate earbuds, they sound thin and let in all the noise around you. I need something that totally covers my ears to block the outside world and put me in a sonic vacuum.

What everyday thing are you better at than everyone else?

I guess I’m pretty good at the whole “Inbox Zero” thing. I check my email once in the morning and delete/reply/move everything accordingly until there’s nothing left, which usually takes around 15 minutes. Once you get into the habit it’s easy to stay on top.

What are you currently reading?

  • The Information by James Gleick. I’m reading it for Club Bibli/o, a library technology book club. We just started, so you can still join if you like!
  • Pro Drupal 7 Development by Todd Tomlinson and John K. VanDyk. FSU Libraries is a Drupal shop, so this is my bread and butter. Or at least it will be once I get over the insane learning curve.
  • Buddhism Plain and Simple by Steve Hagen. The name says it all, Steve Hagen is great at presenting the core parts of Buddhism that actually help you deal with things without all the one hand clapping nonsense.

What do you listen to while you work?

Classic ambient artists like Brian Eno and Harold Budd are great when I’m in a peaceful, relaxed place, and I’ll listen to classical/jazz if I’m feeling creative. Most of the time, though, it’s metal, which is great for decimating to-do lists. If I really need to focus on something, any kind of music can be distracting, so I just play static. This blocks all the sound outside my office and puts me in the zone.

Are you more of an introvert or an extrovert?

Introvert for sure. I can be sociable when I need to, but my office is my sanctuary. I really value having a place where I can shut the door and recharge my social batteries.

What’s your sleep routine like?

I’ve been an early bird by necessity since grad school; the morning is the best time to get things done. I usually wake up around 4:30am so I can hit the gym when it opens at 5am (I love having the whole place to myself). I start getting tired around 8pm, so I’m usually fast asleep by 10pm.

Fill in the blank: I’d love to see _________ answer these same questions.

Richard Stallman. I bet he’d have some fun answers.

What’s the best advice you’ve ever received?

Do your best. As simple as it sounds, it’s a surprisingly powerful statement. Obviously you can’t do *better* than your best, and if you try your best and fail, then there’s nothing to regret. If you just do the best job you can at any given moment, you’ll have the best life you can. There are lots of philosophical loopholes buried in that perspective, but it’s worked for me so far.

Sufia 4.3.1 released / Hydra Project

We are pleased to announce the release of version 4.3.1 of the Sufia gem.  (The 4.3.0 version was a misfire, so the upgrade path goes from 4.2.0 directly to 4.3.1.)

The 4.3.1 release of Sufia includes the ability to store users’ ORCID identifiers and display them in the user profile, and includes enhancements to usage statistics. It also includes support for Blacklight 5.8 and a good number of bugfixes.

Thanks to Carolyn Cole, Michael Tribone, Valerie Maher, Adam Wead, Misty DeMeo, and Mike Giarlo for their work on this release.

View the upgrade notes and a complete changelog on the release page:

Submit Your OR2015 Proposal: Conference System Now Open / Hydra Project

Of interest to the Hydra Community:

The Tenth International Conference on Open Repositories, OR2015, will be held June 8-11, 2015 in Indianapolis (Indiana, USA). The organizers are pleased to invite you to contribute to the program.

The conference theme is LOOKING BACK, MOVING FORWARD: OPEN REPOSITORIES AT THE CROSSROADS. It is an opportunity to reflect on and to celebrate the transformative changes in repositories, scholarly communication, and research data over the last decade. More critically, however, it will also help to ensure that open repositories continue to play a key role in supporting, shaping, and sharing those changes and an open agenda for research and scholarship.

The organizers invite you to review the full call for proposals and to submit your proposal by January 30, 2015. There are several different formats provided to encourage your participation in this year’s conference, all described on the OR2015 website.


The Open Repositories Steering Committee is pleased to announce the release of the new OR Code of Conduct. The Code of Conduct underscores the OR Conference core value of openness by providing a welcoming and positive experience for everyone, whether they are in a formal session or a social setting, or are taking part in activities online.


  • 30 January 2015: Deadline for submissions and Scholarship Programme applications
  • 27 March 2015: Submitters notified of acceptance to general conference
  • 10 April 2015: Submitters notified of acceptance to Interest Groups
  • 8-11 June 2015: OR2015 conference

The conference system is now open and is linked from the conference web site. We look forward to welcoming you to Indianapolis!



Program Co-Chairs

  • Holly Mercer, University of Tennessee
  • William J Nixon, University of Glasgow
  • Imma Subirats, FAO of the United Nations


Essential Tools for the Essentialist Life / LITA


At this time of year, I’m always feeling rushed and a bit worn down. I recently read the book Essentialism: The Disciplined Pursuit of Less by Greg McKeown. While some of the essentialist lifestyle seems a bit impossible in this day and age, I was challenged to think about the essentials in my life.

I love technology (I blog for LITA!), but that doesn’t mean I want to be or should be using technology all the time. It is amazing how technology use can creep into all areas of my life, causing me to work weird hours, look at my Twitter account instead of talking to the people I am with, etc. I work hard to create a buffer in my life to allow time for sleep, leisure, and to minimize stress.

To create this buffer, I have discovered that there are tools to assist me in both my professional and personal life. Here are a few of my favorite technology tools (yes, technology tools!) that allow me to stay organized and ultimately, help me to minimize stress to create more time for other things.

Electronic Calendars: I use my work calendar extensively to keep track of meetings and appointments. On my phone and iPad I sync my personal and work calendars for easy referral.

Wunderlist: I recently started using Wunderlist to help me keep track of everything from shopping lists to what I need to bring to a meeting. My favorite thing about this app is that I can share my lists with people! The shared feature makes tag-teaming the shopping so much easier!

Instapaper: This app helps me keep track of all the articles I want to read, but don’t have time to when I see them. I send articles to Instapaper from Twitter and then read them when I have more time.

Feedly: This is my favorite app for keeping up-to-date with all the blogs I follow. It has both a web interface and an app that I use on my iPhone and iPad.


The next book I hope to read is The Art of Stillness: Adventures in Going Nowhere by Pico Iyer.  I want to tackle this one over the holiday break.


Editorial: These Are A Few Of Our Favorite Things / In the Library, With the Lead Pipe

Darlington Bookplate

In our last editorial of the year, the In the Library with the Lead Pipe Editorial Board is looking back at 2014. As we did in early January, we’re sharing some of our favorite non-Lead Pipe articles, essays, speeches, or posts from the previous twelve months.


In honor of Lead Pipe’s new status as a CC-BY journal, I’m only considering works published in journals that have adopted CC-BY licensing.

Evviva Weinraub Lajoie, Trey Terrell, Susan McEvoy, Eva Kaplan, Ariel Schwartz, and Esther Ajambo. Using Open Source Tools to Create a Mobile Optimized, Crowdsourced Translation Tool. Code4Lib Journal, 24.

How badass is this project? Too badass to format its citation correctly. It would be a crime to et al anyone associated with this article or even obscure their names by Lastname, Firstnaming them, because what they’ve done and how they’ve written it up fills me with hope for libraries, library journals, and open source technology, and even an extra jolt of hope for the world. A team of 2.5 FTEs and 4 student employees at Oregon State University Libraries & Press teamed with Maria’s Libraries, a nonprofit that works in rural Kenya to, well, do what the paper says. One challenge, other than the ones related to technological infrastructure and money: 40 spoken languages. When people ask if libraries are still relevant, tell them this story. And it’s not like the library and press at Oregon State isn’t busy serving its own students, too. Among other things, they’re publishing open textbooks, and writing up that project beautifully as well (see below).

Also, be sure to read Kristina Spurgin’s, “Getting What We Paid for: a Script to Verify Full Access to E-Resources,” and Kelley McGrath’s superb editorial introduction to Issue 26, “On Being on The Code4Lib Journal Editorial Committee.”

Browning, R. Creating an Online Television Archive, 1987–2013. International Journal of Digital Curation, 9(1), 1-11.

Lagoze, C. eBird: Curating Citizen Science Data for Use by Diverse Communities. International Journal of Digital Curation, 9(1), 71-82.

Robinson, A. From Princess to Punk: Digitisation in the Fashion Studio. International Journal of Digital Curation, 9(1), 292-312.

The International Journal of Digital Curation is a great journal! I don’t know why I hadn’t encountered it before researching this year-end write-up, though now I’m going to make it a point to read through its archives. The journal’s papers and articles are all so interesting and well written that I refuse to limit myself to just one. Do yourself a favor. Read all three.

Maxwell, J., & Armen, H. Dreams Reoccurring: The Craft of the Book in the Age of the Web. Journal of Electronic Publishing, 17(1).

The money quote: “A big part of what makes books and book culture loveable and compelling is the craft of the book. Publishing is of course an industrial activity—it is perhaps the very prototype for industrial activity. And in literature there is, of course, art. But craft is a third category in between. Beyond storytelling, beyond reaching an audience, beyond filtering and curating and marketing, there is also the business of making things. And especially: making things that last.”

Even without the Hüsker Dü allusion in the title or the Frank Chimero quote in the text itself, I would have found myself nodding appreciatively throughout this essay. I think we can take it as given that there is more good stuff to read now than at any time in history, almost certainly by multiple orders of magnitude. But I’m not convinced there’s more great stuff, perhaps because the mechanisms that encourage “more good” simultaneously discourage, and possibly even punish, great.

Royster, P. Foxes Propose New Guidelines for Henhouse Design: Comments on NISO’s Proposed Open Access Metadata Standards. Journal of Librarianship and Scholarly Communication, 2(3).

And I quote, “Frankly, the publishers need to put their house in order before presuming to prescribe new metadata standards that will perpetuate their uneven and self-serving administration of the rights they have wrested from the academic laboring class.” Hallelujah!

Also, be sure to read Micah Vandegrift and Josh Bolick, “‘Free to All’: Library Publishing and the Challenge of Open Access,” and (as I alluded to above) Shan Sutton and Faye Chadwell, “Open Textbooks at Oregon State University: A Case Study of New Opportunities for Academic Libraries and University Presses.”

Real, B., Bertot, J. C., Jaeger, P. T. Rural public libraries and digital inclusion: Issues and challenges. Information Technology and Libraries, 33(1), 6-24.

Rock solid scholarship on an important, overlooked topic. The kind of article ITAL does well.


The pieces I’ve chosen aren’t academic in nature, but each resonated deeply for a variety of reasons.

Joy, Erica. The Other Side of Diversity. Medium, 4 Nov 2014.

Diversity in tech workforces was a hot issue this year, what with the rise of #Gamergate and the continued efforts of Blacks and other People of Colour to draw attention to the lack of ethnic diversity in tech companies. Erica Joy’s article grabbed my attention because she wrote poignantly about how it feels to be “the only” minority in the room, and the bargains we must make with ourselves and others to exist in these spaces. Making your way never comes without a price, and Joy writes about this clearly and unflinchingly.

West, Jessamyn. Things That Make The Librarian Angry. Medium, 12 Dec 2014.

When a piece begins with the quote “Enforcing artificial scarcity is a bad role for a public institution,” public librarians can’t help but take notice. Jessamyn West argues that the deals that libraries make with publishers to provide ebooks to patrons result in cognitive dissonance for professionals who talk reverently of easy access to information and intellectual freedom. West admits that being a librarian who provides access to DRM-protected materials places her in a personal moral quandary. She suggests that the solution to this problem may rest with intellectual workers daring to step forward and challenge these restrictions.


Not especially relating to libraries in any way, but I have been absolutely loving the rise of makeup tutorials as social commentary. See Megan MacKay’s Ray Rice Inspired Makeup Tutorial and Jian Ghomeshi Inspired Makeup Tutorial and tadelesmith’s GamerGate Makeup Tutorial. I would love to read more on this phenomenon.

Sarah Wanenchak, Apple’s Health App: Where’s the Power?

In line with recent discussions about whether libraries/librarians are capable of being neutral, and whether neutral is even a desirable goal, this article discussed how design cannot be neutral because it will always reflect the designers’ assumptions.

“The design of things – pretty much all things – reflects assumptions about what kind of people are going to be using the things, and how those people are going to use them. That means that design isn’t neutral. Design is a picture of inequality, of systems of power and domination both subtle and not. Apple didn’t consider what people with eating disorders might be dealing with; that’s ableism. Apple didn’t consider what menstruating women might need to do with a health app; that’s sexism.”

See also, Astra Taylor and Joanne McNeil, The Dads of Tech

#critlib Twitter chats

I reactivated my Twitter account just to follow/participate in these. Fantastic librarians having really important conversations “about critical perspectives on library practice.”

Nashville Library, All About the Books, No Trouble

Ending on a cute note:

“Our Nashville Public Library team celebrates library cards in this adaptation of Meghan Trainor’s performance of “All About That Bass,” as seen on The Tonight Show with Jimmy Fallon. We had a little bit of fun showcasing how easily Nashvillians can borrow, download and stream books, music and movies with a free library card.”


It seems like my picks are always about the big picture. I blame it on my reaction to the “in the weeds” nature of librarians, and my need to step back, reflect, and think beyond the walls of what I know and with which I am comfortable.

Critical Journeys: How 14 librarians came to embrace critical practice by Robert Schroeder. Library Juice Press, 2014

Through 14 interviews, Bob Schroeder captures current thinking and reflection about critical theory from a diversity of librarians.  What I love so much about this book is that an interview with a relatively new-to-the-profession librarian, Dave Ellenwood, exists side-by-side with established career librarians and theorists such as John Buschman. (I stumbled upon Buschman’s Dismantling the Public Sphere while researching a paper in library school, and have since been trying to remain tied to theory in my practice.) Throughout the interviews many themes emerge: the woefully poor job library schools do in incorporating critical theory into their curricula; the reflective balance needed to put theory into practice; and a commitment on behalf of these librarians to improve their communities through engagement with critical ideas. Additionally, I am impressed that Bob has taken traditional scholarship (as defined by traditional promotion and tenure committees) and morphed it into a series of stories and relationships. As librarians I don’t think we take enough time to learn from one another in the way that Bob has presented his work. It is evident that his approach to learning from these individuals is very much indebted to his engagement with and immersion in indigenous research methods. This book and the librarians featured in it ask us to reflect on our own practice, and it will hopefully inspire future readers as it has inspired me.

Lawrence, Eton. “Strategic thinking: A discussion paper.” Research Directorate, Public Service Commission of Canada (1999).

So what’s a 15-year-old white paper doing on my list? This year, as with every year, the library where I work is trying to plan. The problem is that libraries in general seem to approach planning as crisis aversion, with little forethought. Our roadblocks to progress are seemingly endless, yet if we cannot position ourselves beyond implementing the band-aids of temporary staffing, covering the costs of journal inflation, and servilely reacting to the needs and whims of boards, university presidents, other administrative leaders, and the individuals we serve, we are positioning ourselves for obsolescence. Lawrence’s paper outlines how to think strategically: “…strategic thinking involves thinking and acting within a certain set of assumptions and potential action alternatives as well as challenging existing assumptions and action alternatives, potentially leading to new and more appropriate ones” (p. 4). I’d say that as librarians and libraries we’re not good at doing this. This paper proffers food for thought for all of us. Send it to your boss and your boss’s boss today.

Wheelahan, L. (2007). How competency‐based training locks the working class out of powerful knowledge: a modified Bernsteinian analysis. British Journal of Sociology of Education, 28(5), 637–651. doi:10.1080/01425690701505540

Because of the work I’ve been doing this year on the internally grant-funded Digital Badges for Creativity and Critical Thinking project at my institution, I’ve been thinking a lot about learning outcomes, how to integrate library learning outcomes and content into courses, and generally how to improve students’ critical thinking skills. As such, I’ve spent a portion of the year reading literature in this arena, and yet I’m bothered by what seems to be an acknowledged theme and concern about competency-based education: the neoliberalization of it. Yet there doesn’t seem to be much concern within the library profession about this trend as a whole. (Maybe we’re too concerned with staffing band-aids and budget cuts to reflect on this dangerous trend affecting education as a whole?)

Leesa Wheelahan’s paper (and further, her 2012 book, Why Knowledge Matters in Curriculum) outlines for me some serious concerns I have about the growing learning outcomes or competency-based approach to education, and yet I remain passionate about learning outcomes. Wheelahan’s analysis has made me reflective and concerned: how can I work with learning outcomes, improve students’ skills and critical thinking, and also empower students at the same time? Wheelahan contends that competency-based educational approaches, rather than liberating students, keep students locked into the same social and economic power structures in which they currently exist. Competency-based approaches do not liberate and empower students; they merely reinforce existing social and economic standing, and deny students access to critical knowledge, discovery, empowerment, and transformation. As a profession, we are late to engage with these ideas.


I’m on a leave of absence from my position as a librarian this academic year, so I took a step away from library literature. Here are some other things I came across:

Rodenberg, Ryan M. A Law Professor Walks Into a Creative Writing Workshop. Pacific Standard Magazine. 16 Sep 2014.

So much academic writing is awful, awful, awful. Rodenberg nails it with his suggestion:

“First, tenure-track assistant professors in the sciences and professional programs should be actively encouraged to take at least one creative non-fiction writing workshop during their pre-tenure period… If professors are to have any chance of being anything even close to a quasi-public intellectual, this should be mandatory. A poorly written academic article on an already-esoteric topic is destined to have zero impact.”

Cohen-Rottenberg, Rachel. 10 Questions About Why Ableist Language Matters, Answered. Everyday Feminism. 7 Nov 2014.

This article was originally published in 2013, but I didn’t see it until it was crossposted in November of this year. Reading the piece opened my eyes to the different ways language is used as a tool of oppression, and I have been thinking about it ever since.

King, Andrew David. The Weight of What’s Left [Out]: Six Contemporary Erasurists on Their Craft. The Kenyon Review Blog. 6 Nov 2012.

Yes, yes, it was published in 2012… but this is so my jam and I just came across the interview this year. Erasure poems are created by removing words from an existing text and framing the result on the page as a poem. This craft interview between King and six writers has further grounded my own work—I am currently working on a manuscript of erasure poems sourced from Shia LaBeouf media interviews.

House, Naomi. Why I Quit My Library Job and Why I No Longer Want One. INALJ. 29 July 2014.

As always, House is inspiring and level-headed. Her post was one of the things that prompted me to think long and hard about my own career options, encouraging me to take a leap into the unknown.

Schwartz, Tony, and Porath, Christine. Why You Hate Work. The New York Times Sunday Review. 30 May 2014.

I have… thoughts about the way we work. As I have grappled with these thoughts over the last year, I have sought out different perspectives on why I might be feeling dissatisfied or disengaged. It’s heartening to know that I’m not the only one encountering unsustainable work models—and that there are things we can all do better. A good read for any supervisor or manager.

Thompson, Derek. Quit Your Job. The Atlantic. 5 Nov 2014.

“…young people aren’t any more likely to quit today than they were in the 1970s or 1980s. But once they leave, young people today are more likely to try out an entirely new job.” This article kind of blew my mind.


Edson, Michael Peter. Dark Matter. Medium, 19 May 2014.

In this piece Michael Peter Edson compares online engagement in the GLAM sector to astrophysics when Vera Rubin discovered ‘dark matter’ in the 1960s. Focussing particularly on museums, Edson powerfully argues that whilst we may feel that digitisation, online engagement, and the sharing of data have come a long way in the last two or three decades, in reality we are barely starting:

Despite the best efforts of some of our most visionary and talented colleagues, we’ve been building, investing, and focusing on only a small part of what the Internet can do to help us accomplish our missions. 90% of the universe is made of dark matter—hard to see, but so forceful that it seems to move every star, planet, and galaxy in the cosmos. And 90% of the Internet is made up of dark matter too—hard for institutions to see, but so forceful that it seems to move humanity itself.

What I like about this piece is that Edson is not criticising or sniping. Rather, he argues that there are enormous untapped opportunities for museums, libraries, and other cultural institutions to further our missions by embracing the open and read-write nature of the web. Edson tells the story of Rubin’s discovery as a way of sharing the excitement and wonder she felt, and the advances that became possible once her discovery was confirmed.

Markman, Chris & Zavras, Constantine. BitTorrent and Libraries: Cooperative Data Publishing, Management and Discovery. D-Lib Magazine, March/April 2014.

In this article Markman and Zavras explain how the BitTorrent protocol works, and argue that it provides huge opportunities for libraries to increase both efficiency and openness. Whilst I’m certainly no BitTorrent expert, this article makes it easy to understand the principles. Markman and Zavras point out that BitTorrent is extremely efficient at transferring data (this is one of the reasons it is favoured by people downloading large video files), making it ideal for libraries with slow or poor quality internet connections, or with large data transfer needs. Other opportunities are less obvious, but just as useful. From DIY LOCKSS to better analytics, Markman and Zavras provide plenty of intriguing reasons to take a look at BitTorrent—and none of them involve Game of Thrones.

Song, Steve. The Morality of Openness. Many Possibilities, 19 June 2014.

Language matters. In June, Steve Song highlighted the problems with ‘openness’ as a goal. Song argues that whilst open access to information is important, most of the time what people really want is something slightly different. His post resonated with me, as an open-access advocate who works for local government. Those who argue for completely transparent government are often also the same people who complain that the public service is often inefficient and risk-averse. These are simply two sides of the same coin, however. Song argues that often when people say ‘openness’ they really mean ‘trust’. This is an important discussion for those of us working in libraries of all types.

Your Turn

What’s the best thing you read in 2014?

  1. Bob is a co-worker and good colleague of mine. He wrote a series of two articles for Lead Pipe this year on indigenous research methods: Exploring Critical and Indigenous Research Methods with a Community: Part 1 – The Leap and Exploring Critical and Indigenous Research Methods with a Research Community: Part II – The Landing.
  2. My search for neolibera* AND education in Library, Information Science & Technology Abstracts retrieved a mere 15 articles, the most relevant of which was Joshua Beatty’s September 2014 Lead Pipe article, “Locating Information Literacy within Institutional Oppression.”

Google Sunsets Freebase – Good News For Wikidata? / Richard Wallis

Google announced yesterday that it is the end of the line for Freebase, and they have “decided to help transfer the data in Freebase to Wikidata, and in mid-2015 we’ll wind down the Freebase service as a standalone project”.

As well as retiring access for data creation and reading, they are also retiring API access – not good news for those who have built services on top of them.  The timetable they shared for the move is as follows:

Before the end of March 2015
– We’ll launch a Wikidata import review tool
– We’ll announce a transition plan for the Freebase Search API & Suggest Widget to a Knowledge Graph-based solution

March 31, 2015
– Freebase as a service will become read-only
– The website will no longer accept edits
– We’ll retire the MQL write API

June 30, 2015
– We’ll retire the Freebase website and APIs
– The last Freebase data dump will remain available, but developers should check out the Wikidata dump

The crystal ball gazers could probably have predicted a move such as this when Google employed the then lead of Wikidata, Denny Vrandečić, a couple of years back. However they could have predicted a load of other outcomes too. ;-)

In the long term this should be good news for Wikidata, but in the short term they may have a severe case of indigestion as they attempt to consume data that will, in some estimations, treble the size of Wikidata, adding about 40 million Freebase facts to its current 12 million.  It won’t be a simple copy job.

Loading Freebase into Wikidata as-is wouldn’t meet the Wikidata community’s guidelines for citation and sourcing of facts — while a significant portion of the facts in Freebase came from Wikipedia itself, those facts were attributed to Wikipedia and not the actual original non-Wikipedia sources. So we’ll be launching a tool for Wikidata community members to match Freebase assertions to potential citations from either Google Search or our Knowledge Vault, so these individual facts can then be properly loaded to Wikidata.

There are obvious murmurings on the community groups about things such as how strict the differing policies for confirming facts are, and how useful the APIs are. There are bound to be some hiccups on this path – more of an arranged marriage than one of love at first sight between the parties.

I have spent many a presentation telling the world how Google have based their Knowledge Graph on the data from Freebase, which they got when acquiring Metaweb in 2010.

So what does this mean for the Knowledge Graph?  I believe it is a symptom of the Knowledge Graph coming of age as a core feature of the Google infrastructure.  They have used Freebase to seed the Knowledge Graph, but now that seed has grown into a young tree fed by the twin sources of Google search logs, and the rich nutrients delivered by structured data embedded in millions of pages on the web. Following the analogy, the seed of Freebase, as a standalone project/brand, just doesn’t fit anymore with the core tree of knowledge that Google is creating and building.  It is no coincidence that they’ll “announce a transition plan for the Freebase Search API & Suggest Widget to a Knowledge Graph-based solution”.

As for Wikidata, if the marriage of data is successful, it will establish Wikidata as the source for open structured data on the web and for facts within Wikipedia.

As the live source for information that will often be broader than the Wikipedia it sprang from, I suspect Wikidata’s rise will spur the eventual demise of that other source of structured data from Wikipedia – DBpedia.  How, in the long term, can a transformation of occasional Wikipedia dumps compete with a live, evolving, broader source?  Such a demise would be a slow process however – DBpedia has been the de facto link source for such a long time, its URIs are everywhere!

However you see the eventual outcomes for Freebase, Wikidata, and DBpedia, this is big news for structured data on the web.

MarcEdit 6 Update Posted / Terry Reese

A new update has been posted.  The changes are noted below:

  • Enhancement: Installation changes – for administrators, the program will now allow for quiet (silent) installations and an option to prevent users from enabling automated updates.  Some IT admins have been asking for this for a while.  The installation program will take a command-line option, autoupdate=no, to turn this off.  The way this is disabled (since MarcEdit manages individual profiles) is that a file will be created in the program directory that, if present, will prevent automatic updates.  This file will be removed (or not recreated) if this command-line option isn’t set – so users doing automated installations will need to remember to always set this value if they wish to prevent this option from being enabled.  I’ve also added a note in the Preferences window indicating if the administrator has disabled the option.
  • Bug Fix: Swap Field Task List – one of the limiters wasn’t being passed (the process one field per swap limiter)
  • Bug Fix: Edit Field Task List – when editing a control field, the positions text box wasn’t being shown. 
  • Bug Fix: Edit Field Regular Expression options – when editing control fields, the edit field function evaluated the entire field data – not just the items to be edited.  So, if I wanted to use a regular expression to evaluate two specific values, I couldn’t.  This has been corrected.
  • Enhancement: Linked Data Linker – added support for FAST headings. 
  • Bug Fix: Linked Data Linker – when processing data against LC’s service, some of the fuzzy matching was causing values to be missed.  I’ve updated the normalization to correct this.
  • Enhancement: Edit Subfield Data – Moving Field data – an error could occur if the destination field is a control field and the control field is shorter than the position the data should be moved to.  An error check has been added to ensure this error doesn’t pop up.
  • Bug Fix: Auto Translation Plug-in – updated code because some data was being dropped on translation, meaning that it wouldn’t show up in the records.

Update can be found at: or via the automated updating tool.  The plug-in updates can be downloaded via the Plug-in Manager within the MarcEdit Application.


Usability Testing for Greater Impact: A Primo Case Study / Information Technology and Libraries

This case study focuses on a usability test conducted by four librarians at Texas Tech University. Eight students were asked to complete a series of tasks using OneSearch, the TTU Libraries’ implementation of the Primo discovery tool. Based on the test, the team identified three major usability problems, as well as potential solutions. These problems typify the difficulties patrons face while using library search tools, but have a variety of simple solutions.


Practical Relevance Ranking for 11 Million Books, Part 3: Document Length Normalization / Library Tech Talk (U of Michigan)

Relevance is a complex concept which reflects aspects of a query, a document, and the user as well as contextual factors. Relevance involves many factors such as the user's preferences, task, stage in their information-seeking, domain knowledge, intent, and the context of a particular search. This post is the third in a series by Tom Burton-West, one of the HathiTrust developers, who has been working on practical relevance ranking for all the volumes in HathiTrust for a number of years.

Families and Work Institute seeks library and museum input / District Dispatch

If your library or museum has organized programs to help children develop their skills and their minds, the Families and Work Institute (FWI) asks that you fill out their brief survey. Data pulled from this survey will aid FWI in developing a national report on the roles that libraries and museums play in supporting children, families, and the professionals who work with them.

For the past 14 years, Mind in the Making, a program of FWI, has been sharing research on what we can do to help children thrive now and in the future. They have been calling attention to the importance of early brain development and promoting Executive Function skills, which study after study reveals have been a critical missing piece in efforts to promote school readiness and school, work and life success.

Surveys should be submitted by December 22, 2014 to be included. More information is available at the IMLS blog, Museums and Libraries: Be a Part of our Brain Building Journey.

The post Families and Work Institute seeks library and museum input appeared first on District Dispatch.

Leaving Mozilla as staff / Jenny Rose Halperin

December 31 will be my last day as paid staff on the Community Building Team at Mozilla.

One year ago, I settled into a non-stop flight from Raleigh, NC to San Francisco and immediately fell asleep. I was exhausted; it was the end of my semester and I had spent the week finishing a difficult databases final, which I emailed to my professor as soon as I reached the hotel, marking the completion of my coursework in Library Science and the beginning of my commitment to Mozilla.

The next week was one of the best of my life. While working, hacking, and having fun, I started on the journey that has carried me through the past exhilarating months. I met more friendly faces than I could count and felt myself becoming part of the Mozilla community, which has embraced me. I’ve been proud to call myself a Mozillian this year, and I will continue to work for the free and open Web, though currently in a different capacity as a Rep and contributor.

I’ve met many people through my work and have been universally impressed with your intelligence, drive, and talent. To David, Pierros, William, and particularly Larissa, Christie, Michelle, and Emma, you have been my champions and mentors. Getting to know you all has been a blessing.

I’m not sure what’s next, but I am happy to start on the next step of my career as a Mozillian, a community mentor, and an open Web advocate. Thank you again for this magical time, and I hope to see you all again soon. Let me know if you find yourself in Boston! I will be happy to hear from you and pleased to show you around my hometown.

If you want to reach out, find me on IRC: jennierose. All the best wishes for a happy, restful, and healthy holiday season.

Down with the divide – support the E-rate program / District Dispatch

Librarians know how essential the E-rate has been and will be to meeting their communities’ needs for high-speed, broadband internet service and public access to the internet in the 21st century. That’s why ALA fought hard to create the program and, for the past 18 months, to encourage the Federal Communications Commission to dramatically increase the program’s funding and streamline its application procedures.

Libraries did it! Starting in 2015, an additional $1.5 billion will be available to libraries across the country that they can use to further narrow and ultimately close the digital divide . . . and funds will be easier to apply for.

According to the FCC, the E-rate modernization will make the program more efficient, maximize the use of ratepayer funds, and will provide support for libraries and schools across the country.  An FCC fact sheet notes that “the demand for broadband is growing at least 50% per year, which means that total bandwidth costs will continue to grow even with significant broadband price reductions…Chairman Wheeler’s…$1.5 billion cap increase is consistent with all schools and libraries achieving the long-term goals…because Wi-Fi within every classroom and library space is an essential element of 21st century learning.”

Some Members of Congress have expressed concerns with the FCC action and may not fully appreciate the urgent need in our library community for E-rate modernization. Some have gone as far as questioning the justification of the E-rate program’s existence at all.

Your help is needed to ensure Congress does not overturn the additional E-rate funding for our patrons. We urge you to contact your members of Congress during the December recess and inform them of what services E-rate enables you to supply to your community.

Our message to Congress:

  • Libraries across the country are far behind the broadband capacity they need. A 21st century E-rate program with additional funding will allow libraries to offer state-of-the-art connectivity and critical services to patrons. Many patrons can only access the internet through libraries.
  • The current 20th Century E-rate program has failed to keep pace with inflationary cost increases, leaving commercially available connectivity cost-prohibitive. Bringing the program into the 21st Century ensures libraries can secure affordable high-speed connectivity for their patrons.
  • The increasing demands on patrons to connect to the internet – for employment and entrepreneurship, education, community engagement, and individual empowerment – have created a tremendous need for greater bandwidth and faster access.
  • E-rate modernization benefits patrons at libraries of all sizes and in communities across the country, whether urban, suburban, or rural.
  • Please provide Congress with examples of the range of programs and services you offer to patrons benefiting the local community.

We have previously reported on E-rate here and here. Click here to take action now.

The post Down with the divide – support the E-rate program appeared first on District Dispatch.

From DIY to Working Together: Using a Hackerspace to Build Community : keynote from Scholars Portal Day 2014 / Mita Williams

On December 3rd, I gave a keynote for Scholars Portal Day.  The slide deck was made using BIG and is available online.  Thank you to Scholars Portal for inviting me to be with one of my favourite communities.

You can’t tell how many apples are in a seed.

In May of 2010, I, along with Art Rhyno, Nicole Noel, and the late and sorely missed Jean Foster, hosted an unconference at the central branch of the Windsor Public Library.

Unconferences are seemingly no longer in vogue, so just in case you don’t know, an unconference is a conference where the topics of discussion are determined by those in the room who gather and disperse in conversation as their interests dictate. 

The unconference was called WEChangeCamp and it was one of several ChangeCamp unconferences that occurred across the country at that time.

At this particular unconference, 40 people from the community came together to answer this question: “How can we re-imagine Windsor-Essex as a stronger and more vibrant community?”

And on that day the topic of a Windsor Hackerspace was suggested by a young man who I later learned was working on his doctorate in electrical engineering.  What I remember of that conversation four years ago was Aaron explaining the problem at hand: he and his friends needed regular infusions of money to rent a place to build a hackerspace so they needed a group of people who would pay monthly membership fees. But they couldn’t get paying members until they could attract them with a space.

Shortly thereafter, Aaron - like so many other young people in Windsor - left the city for work elsewhere. It’s a bit of an epidemic here. We have the second highest unemployment rate in Canada and it’s been said that the youth unemployment rate in Windsor is at a staggering 20%.

In Aaron’s case, he moved to Palo Alto, California to do robotics work in an automotive R&D lab.

In the meantime back in Windsor, in May 2012, I helped host code4lib North at the University of Windsor.  We had the pleasure to host librarians from many OCUL libraries over those two days as well as staff from the Windsor Public Library. Also in the audience was Doug Sartori. Doug had helped in the development of the WPL’s CanGuru mobile library application. He came to code4lib North because he was curious about the first generation Raspberry Pi that John Fink of McMaster had brought with him.  You have to remember that in 2012 the Raspberry Pi - the $40 computer card - was still very new in the world.

A year later, in May 2013, Windsor got its first Hackerspace when Hackforge was officially opened. The Windsor Public Library graciously lent Hackforge the empty space in the front of their Central Branch that was previously a Woodcarver’s Museum.

When Hackforge launched, Doug Sartori was president and I was on the board of directors.

In our 20 months of existence, I’m proud to say that Hackforge has accomplished quite a lot for itself and for our community.

We’ve co-hosted three hackathons along with the local technology accelerator WETech Alliance.

The first hackathon was called HackWE - and it lasted a weekend, was hosted at the University of Windsor and was based on the City of Windsor’s Open Data Catalogue.

HackWE 2.0 was a 24-hour hackathon based on residential energy data collected by Smart Meters and was part of a larger Ontario Apps for Energy Challenge.

And the third, HackWE 3.0 - which happened just this past October - had events stretched over a week, based on open scientific data in celebration of Science and Technology Week.

We’ve hosted our own independent hackathons as well. Last year Hackforge held a two week Summer Games event for people who wanted to try their hand at video game design. Everyone who completed a game won a trophy.  My own video game won the prize for being the most endearing.

But in general, our members are more engaged in the regular activities of Hackforge.

They include our bi-weekly Tech Talks that our members give to each other and the wider public, on such topics as Amazon Web Services, slide rules, writing Interactive fiction with JavaScript, and using technology in BioArt.

We have monthly Maptime events in the space. Maptime is an open learning environment for all levels related to digital map making, but there is a definite emphasis on support for the beginner.

This photo is from our first Windsor Maptime event, which was dedicated to OpenStreetMap. There are Maptime chapters all around the world, and the next Maptime Toronto meeting is December 11th, if you are curious and if you are near or in the GTA.

The Hackforge Software Guild meets weekly to work on personal projects as well as to practice pair programming on coding challenges called katas.  For example, one of the first kata challenges was to write a program that would correctly write out the lyrics of 99 Bottles of Beer on the Wall, and one of the more recent was to score a bowling game.
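The 99-bottles kata is small enough to sketch here. A minimal Python take on it might look like the following (the function names and exact wording are my own, not necessarily what the guild wrote):

```python
def bottles(n):
    """Return '99 bottles', '1 bottle', or 'no more bottles' as appropriate."""
    if n == 0:
        return "no more bottles"
    return f"{n} bottle{'s' if n != 1 else ''}"

def verse(n):
    """One verse of the song, counting down from n bottles."""
    if n == 0:
        return ("No more bottles of beer on the wall, no more bottles of beer.\n"
                "Go to the store and buy some more, 99 bottles of beer on the wall.")
    return (f"{bottles(n)} of beer on the wall, {bottles(n)} of beer.\n"
            f"Take one down and pass it around, {bottles(n - 1)} of beer on the wall.")

def song():
    """The full song: verses from 99 down to 0, separated by blank lines."""
    return "\n\n".join(verse(n) for n in range(99, -1, -1))
```

Part of what makes it a good pairing exercise is that the edge cases (“1 bottle” vs. “2 bottles”, the final “no more bottles” verse) force a conversation about design, not just typing.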

We also have an Open Data Interest Group and we are going to launch our own Open Data portal for Windsor’s non-profit community in 2015.  We’re able to do this because this year we received Trillium funding to hire a part-time coordinator and to pay small stipends to people to help with this work.

Our first dataset is likely going to be a community asset map that was compiled by the Ford City Renewal group.  Ford City is one of several neighbourhoods in Windsor in which more than 30% of the population has an income at the poverty level. The average income for the City of Windsor as a whole isn’t actually that much less than the average for all of Canada - we’re just the most economically polarized urban area in the country.  That’s one of the reasons why, in January, Hackforge is going to be working with Ford City Renewal to host a build-your-own-computer event for young people in the neighborhood.

As well, our 3 year Trillium grant also funds another part-time coordinator who matches individuals seeking technology experience with non-profits, such as the Windsor Homeless Coalition, that need technology work and support.

Hackforge has also collaborated with the Windsor Public Library to put on co-hosted events such as the Robot Sumo contest.

And we’ve worked with the City of Windsor to produce persistence of vision bicycle wheels for their WAVES light and sound art festival.  I know it’s difficult to see, but in the photo on the screen is a bicycle wheel with a narrow set of lights strapped to three spokes of the wheel. When the wheel spins, the lights animate and give the impression that there’s an image in the wheel - it only works with the human eye, because of our persistence of vision, so it’s something that doesn’t really come across in a photo very well.

[here's a video!]

Also, the City of Windsor commissioned us to build a Librarybox for their event which I thought was really cool!

And like most other hackerspaces, we have 3D printers. We have robotic kits. We have soldering irons, and we have lots and lots of spare electronic and computer parts. But unlike most other hackerspaces, which charge their members $30 to $50 a month to join and make use of their space, our hackerspace is currently free to members, who pay for their membership with volunteer work.

This brings us to today in the last days of 2014.

2014 is also the year that Aaron came back to us from California. He’s now my fellow board member at Hackforge.  And, incidentally, so is Art Rhyno, who - if you don’t know - is a fellow librarian from the University of Windsor.

I was asked by Scholars Portal if I could share some of my experiences with Hackforge in light of today’s theme of building community.  And that is what my talk will be about today: how to use a hackerspace to build community. And I will do so by expanding on five themes. 

But as you know - we are only 2 years old, and so this talk is really about just the beginning steps we’ve been taking and those steps that we are still trying to take.  We admittedly have a long way to go.

Helping out with Hackforge has been a very rich and rewarding experience and I’ve learned much from it. And it’s also been hard work and sometimes it has been very time consuming.

All those decisions we made as we started our hackerspace were the first ones we’ve ever had to make for our new organization. This process was exhilarating but it also was occasionally exhausting.  Which brings us to our first theme:

Institutions reduce the choices available to their members

The reason why starting up an organization is so exhausting can be found in Ronald Coase’s work. Coase is famous for introducing the concept of transaction costs to explain the nature and limits of firms, which earned him the Nobel Prize in Economics in 1991. Now I haven’t read his Nobel prize winning work myself. I was first introduced to Coase when I read a book last year called The Org: The Underlying Logic of the Office by Ray Fisman and Tim Sullivan.

I also saw Coase referenced in a blog post by media critic Clay Shirky about the differences between new media and old media. It’s Shirky’s words on the screen right now:

These frozen choices are what gives institutions their vitality — they are in fact what make them institutions. Freed of the twin dangers of navel-gazing and random walks, an institution can concentrate its efforts on some persistent, medium-sized, and tractable problem, working at a scale and longevity unavailable to its individual participants.

Further on in his post Shirky explains what he means by this through an example of what happens at a daily newspaper:

The editors meet every afternoon to discuss the front page. They have to decide whether to put the Mayor’s gaffe there or in Metro, whether to run the picture of the accused murderer or the kids running in the fountain, whether to put the Biker Grandma story above or below the fold. Here are some choices they don’t have to make at that meeting: Whether to have headlines. Whether to be a tabloid or a broadsheet. Whether to replace the entire front page with a single ad. Whether to drop the whole news-coverage thing and start selling ice cream. Every such meeting, in other words, involves a thousand choices, but not a billion, because most of the big choices have already been made.

When you are starting a new organization or any new venture, really, every small decision can sometimes seem to bog you down.  There is navel-gazing and there are random walks.

We got bogged down at the beginning of Hackforge. We actually received the keys to the space in the Windsor Public Library in October of 2012.  Why the delay? We had decided that we would launch the opening of our space with a homemade keypass locking system for the doors because we thought it wouldn’t take much time at all. 

And if we were considering how long it would take one talented person to build such a system by themselves, then maybe we would have been right. But instead, we were very wrong. And looking back at it now, it seems obvious why this was the case:

We had a set of people who had never worked together before, who didn’t necessarily even speak the same programming languages, working without an authority structure, in a scarcely born organization with no promise that we would succeed or survive, nor any sure promise of reward.

Now it’s very important for me to say this so I’m absolutely clear - I am not complaining about our volunteers!!!

Hackforge would not have succeeded if it weren’t for those very first volunteers who made Hackforge happen in those early days when we were starting with nothing.

And the same holds to this day. When we say that Hackforge is made of volunteers, what we are really saying is that Hackforge = volunteers. 

Our volunteers are especially remarkable because -- like all  volunteers - they give up their own time that’s left over after their pre-existing commitments to work, school, family and friends. In volunteer work, every interaction is a gift. But, that being said, not every promise in a volunteer organization is one that is fulfilled. Sometimes you learn the hard way that first thing on Tuesday means 3pm.

But the delay wasn’t just from the building of the system. Once it was built, we then had to make sure that the keypass system was okay with the library and okay with the fire marshall. And we had to figure out who was going to make the key cards, how they were going to be distributed, and how we would decide who would get a keycard to the space and who would not. Ultimately, it took us 8 months to figure all of this out.

I wanted to explicitly mention this observation because I’ve noticed, within our own institution of libraries, that when a new group or committee is started up, there is the occasional individual who interprets the slow going and long initial discussions of the first meetings as, at best, extreme inefficiency, and at worst, a sign of imminent failure.

When in fact, we should recognize that slow starts are normal.

Culture is to an organization as community is to a city

New organizations and new ventures happen slowly and, furthermore, they should happen slowly, because each decision made further defines the “how” of “what an organization is”. Are we, as an organization, formal or informal? Who takes the minutes at meetings? Do we need to give a notice of motion? Do we do our own books or do we hire an accountant? Do we provide food at our events? Do we sell swag or do we give it away? How should we fundraise? How do we deal with bad actors? Every decision further defines the work that we do.

It’s very important to take these steps slowly in order to make sure that the way you do things matches up with why you do things. As I think we can appreciate in libraryland, once institutions reduce the choices of their members, it is very difficult - although not impossible - to open them up again for rethinking and refactoring.

One of the reasons why Hackforge has been very successful in its brief existence is that it was formed with clearly articulated reasons and clear guiding principles that continue to help us shape the form of our work. And I know this because the vision of what Hackforge should be was told to me when I was invited to serve on the board when Hackforge began and, I can attest to the fact, it is the same as the one we have now.

Now, there are many different types of hacker and makerspaces: some are dedicated to artists, others to entrepreneurs, while others are dedicated to the hobbyist. Hackforge - in less than 140 characters - has been described like this: Hackforge supports capacity building in the community and supports a culture of mentorship and inclusivity.

More specifically, we exist to help with youth retention in Windsor. We aim to be a place where individuals who work or want to work in technology can find support from each other.

I know it might sound strange to you that we believe that our local IT industry needs support, especially when we read about the excesses of Silicon Valley on a regular basis.

But in Windsor, there are not many options for those with a technology background to find work and so, despite the impression we give to those pursuing a career in STEM, tech jobs in Windsor can be poorly paid and the working conditions can be very problematic.

Many of the provisions in the labour law - the ones that entitle employees to set working hours, to breaks between and within shifts, to overtime and even time to eat - have exemptions for those who work in IT.  I’ve been told that the only way to get a raise while working in IT in this town is to find a better paying job.

The IT industry sometimes treats people as if they were machines themselves.

Hackforge was built as a response to this environment. It was built in hopes that it could help grow something better. At Hackforge we know our strength does not come from the machines that we have in our space, but from our amazing members and the time and work that they give to others.

I mean, we love 3D printers because they are a honeypot that brings curious folks into our space, but the secret is we are not really about 3D printers.

And yet if you look at the media coverage we receive, you would think we’re just another makerspace that loves 3D printers and robots.

This is why it is SO important to be visible with your values, which is our second theme.

Show your work

One of the challenges that we have at Hackforge is that we don’t have very many women in our ranks.  Women make up half of our board of directors but our larger membership is not representative of the Windsor community and it’s likely not representative in the other aspects of identity, for that matter, either.

We knew that if we wanted to change this situation, it would require sustained work on our part. And so when we had our official launch of Hackforge last year, we hosted, as part of the event, a Women in Technology Panel that featured four women who work in IT, including the very successful Girl Develop IT from Detroit, all of whom both shared their experiences and offered strategies to make the field of technology a more inclusive environment and a better place for everyone.

In the audience for that panel discussion was a representative of WEST, a local non-profit group whose name stands for Women’s Enterprise Skills Training. Starting next year, with the support of another Ontario Trillium grant, Hackforge and WEST will be launching a project that will offer free computer skills training workshops for women, try to create a community of support, and continue to advocate for women in the IT field.

So I can’t stress this enough. You have to do your work in public if you want your future collaborators to find you.

I have also another Women in Technology story to start our third theme.

So remember I told you about unconferences? Well, the Hackforge members who run the Software Guild do something similar. Sometimes, instead of coding, the folks do something like this: they write down all the things they want to talk about, vote for the topics, and then discuss the most-voted topics within strict time limits. But they don’t call it an unconference:

They call it LEAN COFFEE.

I love it. It’s so adorable.

Anyway, at one of these Lean Coffee sessions, our staff coordinator suggested the topic Women in Technology.  And the response she received was this: We know there’s a problem because Hackforge doesn’t have enough women. But we are not sure how to fix this. 

I found this statement very encouraging.

It’s sad, but in these times, when people can admit that there’s a problem without any deflection or allocation of blame, it is actually very refreshing.

I mean, within librarianship we have some organizations who consistently organize speaking events made up of mostly men. Whenever I raise this matter, I am usually told that if the speaking topic is not about gender, then it’s not about gender. In other words, they tell me that there is no problem.

But sometimes there is a problem.

Look at this photo: from it you would never guess that it was taken in a city that is over 80% African American. This photo is from the first meeting of Maptime Detroit that I attended last month. One of the first things said during the evening’s introduction was a simple statement by the organizer: “I want to acknowledge who isn’t in this room.” And what followed was a plan to hold the next Maptime meetings not in the mid-town Tech Incubator, but within the various neighbourhoods in the city, alongside partner organizations already working with Detroiters where they live.

So before we can be more inclusive, we need to recognize when we are not.

We can start by acknowledging who isn’t in the room. It isn’t hard to do.

Quinn Norton wrote a lovely essay about this called Count. Speaking of counting, we are now at theme four.

A mailing list is not a community

What you might find surprising is that, for a gathering of people who generally love love love the Internet, Hackforge really doesn’t have a strong online space for folks to hang out in, with the exception of our IRC channel. We used to have forum software, but it was so overwhelmed with spam on a daily basis that it was almost immediately rendered unusable.

Also, Hackforge doesn’t even have a listserv mailing list. 

And I would go as far as to say that one of the reasons why Hackforge has been as successful as we have been is, in part, because we *don’t* have a mailing list.

There’s a website called Running a Hackerspace that is a collection of animated gifs that metaphorically capture the essence of running a hackerspace. I think it’s particularly telling that there are many recurrent topics that arise on this Tumblr, like the complaints that folks don’t clean up after themselves.

(And this is when I confess that when I drop by Hackforge, I am also sometimes made sad).

But the most prevalent theme in the blog is mailing list rage.

You would think this would be a solved problem by now: how do you support project work that is done asynchronously and dispersed over geography? Many open source communities are finding that the traditional tools of mailing lists, forum software, and IRC channels are not doing enough to help their communities do good work together. More often than not, these technologies seem to be better at boosting the noise than the signal.

Distributed companies like WordPress are moving from IRC to software platforms such as Slack. As I’ve mentioned before, I’m involved with a largely self-organized group called Maptime and we also make use of Slack, which is essentially user-friendly IRC: chat and messaging along with images, file sharing, archiving, and social media capture.

At Hackforge, we’ve recently decided to use the Jira issue tracker to manage the hacking work that we need to do in the space, and we will be switching to Nation Builder software to manage our members and member communications. When activists, non-profits, and political parties are using software like Nation Builder to manage the contact info, the interests, and the fundraising of tens of thousands of people, it makes me wonder when libraries are going to start using similar software to manage the relationships they have with their communities.

And at a time when my neighbours who rent the skating rink for collective use employ volunteer management software to figure out whose turn it is to bring the hot chocolate, I would like to suggest that libraries could perhaps start using similar software to - at least - manage our internal work and communications as well. Good tools make great communication possible within organizations and our communities. They are worth the investment.

Invest in but do not outsource community management

Before I end my presentation with this last theme, I do want to offer a caveat to everything I’ve said. If you asked all of the people who have been involved in Hackforge - those who have come by our events, spent time in the space, or even volunteered some mentoring at an event - whether they felt they were part of a community, I think most of them would probably say no. I think we have a wonderful group of people who have contributed to Hackforge, and I think we have a group of people who have even found friends at Hackforge, but I think we still can’t call the whole of what we do "a community" - at least not yet.

Hackforge is approaching its 2nd birthday and this talk has been a wonderful excuse to reflect on what we do well and what we still need to work on.

What works for us are regular events, contests, and hackathons. We are well aware of the limitations of hackathons and how they produce imperfect work but, for us, it seems that pre-defined limits and deadlines produce more work and generate more interest and excitement than unstructured free time does.

Unlike many hackerspaces, we don’t tend to have many group projects. The door project - as you have learned - was one of our few group projects, and that one took longer than expected. In our early months, we also had an LED sign project that was never completed and actually resulted in some people leaving Hackforge in frustration.

We are a volunteer organization and as such, by the process of evolution, we are a place for the patient and the forgiving. Sometimes we have gotten our first impressions wrong.

One of the largest challenges I think we have as an organization is to be more accessible to beginners. In fact, that’s the feedback that we’ve been getting.

Aaron recently gave a tech talk about tech talks, and the message he received was that Hackforge should provide more sessions for beginners. And this is a particular challenge that we haven’t really addressed yet. We’re lucky that Hackforge has people who are both generous with their time and not afraid of public speaking and giving tech talks. But many of our speakers don’t preface their talks with an introduction that a newbie could understand. They are so excited to have fellow experts in the crowd that they jump right into the code or electrical specs or what have you.

Likewise, it’s amazing and wonderful that we have regular supportive events like our members’ coding katas, in which those who work with software can practice and share their coding practice with others. But at the moment, we don’t really have anything for those who want to learn how to code. And you might not be shocked to hear this, but Hackforge’s machines - like our 3D printers - lack even the most basic documentation on how to use them.

Without expanding the work of communicating, documenting, explaining, and teaching, Hackforge won’t be able to attract new members. 

Hackforge started as a top-down organization. Our job as a board has been to build the systems that will allow more of the day-to-day work of Hackforge to move from the board to our community and program managers. We were able to hire our managers in the middle of this year and already they have made wonderful contributions to Hackforge. Our next challenge will be how to move more of the operational work from the managers to the members themselves.

In other words, the challenge for Hackforge is to ensure that the work that needs to be done - all of that communicating, documenting, explaining, teaching - is embraced by all of its members as a community of practice. And through this practice, it’s hoped, we can build a community.

So, those are my five themes for building community with a hackerspace:

Institutions reduce the choices available to their members (so choose carefully)
Show your work (so future collaborators can find you).
Acknowledge who isn’t in the room (Count is only the start).
A mailing list is not a community (Invest in tools that do better).
Invest but do not outsource community management.

The work of figuring out how to get a bunch of people to come together and face a shared challenge isn’t just the way to build a community. It’s also how political movements begin. It’s also how a game begins. I would like to thank Scholars Portal for giving me the opportunity to begin Scholars Portal Day with you all.

Hardware I/O Virtualization / David Rosenthal

Timothy Prickett Morgan has an interesting post entitled A Rare Peek Into The Massive Scale Of AWS. It is based on a talk by Amazon's James Hamilton at the re:Invent conference. Morgan's post provides a hierarchical, network-centric view of the AWS infrastructure:
  • Regions, 11 of them around the world, contain Availability Zones (AZ).
  • The 28 AZs are arranged so that each Region contains at least 2 of them, and each AZ contains up to 6 datacenters.
  • Morgan estimates that there are close to 90 datacenters in total, each with 2000 racks, burning 25-30MW.
  • Each rack holds 25 to 40 servers.
AZs are no more than 2ms apart measured in network latency, allowing for synchronous replication. This means the AZs in a region are only a couple of kilometres apart, which is less geographic diversity than one might want, but a disaster still has to have a pretty big radius to take out more than one AZ. The datacenters in an AZ are not more than 250us apart in latency terms, close enough that a disaster might take all the datacenters in one AZ out.

Below the fold, some details and the connection between what Amazon is doing now, and what we did in the early days of NVIDIA.

Amazon uses custom-built hardware, including network hardware, and their own network software. Doing so is simpler and more efficient than using generic hardware and software, because they only need to support a very restricted set of configurations and services. In particular, they build their own network interface cards (NICs). The reason is particularly interesting to me, as it is to solve exactly the same problem that we faced as we started NVIDIA more than two decades ago.

The state of the art of PC games, and thus PC graphics, was based on Windows, at that stage little more than a library on top of MS-DOS. The game was the only application running on the hardware. It didn't have to share the hardware with, and thus need the operating system (OS) to protect it from, any other application. Coming from the Unix world, we knew how the OS shared access to physical hardware devices, such as the graphics chip, among multiple processes while protecting them (and the operating system) from each other. Processes didn't access the devices directly; they made system calls which invoked device driver code in the OS kernel that accessed the physical hardware on their behalf.

We understood that Windows would have to evolve into a multi-process OS with real inter-process protection. Our problem, like Amazon's, was two-fold: latency and the variance of latency. If games were to provide arcade performance on mid-90s PCs, there was no way the game software could take the overhead of calling into the OS to perform graphics operations on its behalf. It had to talk directly to the graphics chip, not via a driver in the OS kernel.

If there had been only a single process doing graphics, such as the X server, this would not have been a problem. Using the Memory Management Unit (MMU), the hardware provided to mediate access of multiple processes to memory, the OS could have mapped the graphics chip's I/O registers into that process' address space. That process could then access the graphics chip with no OS overhead. Other processes would have to use inter-process communication to request graphics operations, as X clients do.
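Conceptually, mapping a device's register window into a process looks just like an ordinary memory mapping. Here is a minimal Python sketch of the idea, using a temporary file as a stand-in for the hardware register window (mapping real registers requires driver support; the offset name and value below are purely illustrative):

```python
import mmap
import os
import struct
import tempfile

REG_WINDOW_SIZE = 4096  # one page, a typical register aperture size

# A temporary file stands in for the device's register window; on real
# hardware the OS would map the chip's physical I/O registers instead.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, REG_WINDOW_SIZE)
regs = mmap.mmap(fd, REG_WINDOW_SIZE)

# Once the mapping exists, each "register" access is a plain memory
# write -- no system call, hence no per-operation OS overhead.
CONTROL_REG = 0x10  # hypothetical register offset
regs[CONTROL_REG:CONTROL_REG + 4] = struct.pack("<I", 0xDEADBEEF)

value = struct.unpack("<I", regs[CONTROL_REG:CONTROL_REG + 4])[0]
print(hex(value))  # 0xdeadbeef

regs.close()
os.close(fd)
os.unlink(path)
```

The point of the sketch is the access pattern: after the one-time setup, the process reads and writes "registers" directly, which is exactly why mapping a single process' registers was easy and sharing them among many processes was the hard part.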

SEGA's Virtua Fighter on NV1
Because we expected there to be many applications simultaneously doing graphics, and they all needed low, stable latency, we needed to make it possible for the OS safely to map the chip's registers into multiple processes at one time. We devoted a lot of the first NVIDIA chip to implementing what looked to the application like 128 independent sets of I/O registers. The OS could map one of the sets into a process' address space, allowing it to do graphics by writing directly to these hardware registers. The technical name for this is hardware I/O virtualization; we pioneered this technology in the PC space. It provided the very low latency that permitted arcade performance on the PC, despite other processes doing graphics at the same time. And because the competition between the multiple process' accesses to their virtual I/O resources was mediated on-chip as it mapped the accesses to the real underlying resources, it provided very stable latency without the disruptive long tail that degrades the user experience.

Amazon's problem was that, like PCs running multiple graphics applications on one real graphics card, they run many virtual machines (VMs) on each real server. These VMs have to share access to the physical network interface card (NIC). Mediating this in software in the hypervisor imposes both overhead and variance. Their answer was enhanced NICs:
The network interface cards support Single Root I/O Virtualization (SR-IOV), which is an extension to the PCI-Express protocol that allows the resources on a physical network device to be virtualized. SR-IOV gets around the normal software stack running in the operating system and its network drivers and the hypervisor layer that they sit on. It takes milliseconds to wade down through this software from the application to the network card. It only takes microseconds to get through the network card itself, and it takes nanoseconds to traverse the light pipes out to another network interface in another server. “This is another way of saying that the only thing that matters is the software latency at either end,” explained Hamilton. SR-IOV is much lighter weight and gives each guest partition on a virtual machine its own virtual network interface card, which rides on the physical card.
This, as shown on Hamilton's graph, provides much less variance in latency:
The new network, after it was virtualized and pumped up, showed about a 2X drop in latency compared to the old network at the 50th percentile for latency on data transmissions, and at the 99.9th percentile the latency dropped by about a factor of 10X.
The importance of reducing the variance of latency for Web services at Amazon scale is detailed in a fascinating, must-read paper, The Tail At Scale by Dean and Barroso.
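The core arithmetic of that paper is easy to reproduce: when a request fans out to many servers and must wait for the slowest reply, even rare per-server slowness dominates. A quick sketch with illustrative numbers (not Amazon's):

```python
# Tail-at-scale arithmetic: a fanned-out request is slow if ANY of the
# servers it touches is slow.
p_slow = 0.01   # fraction of requests each individual server serves slowly
fanout = 100    # number of servers a single user request touches

# Probability that at least one of the `fanout` servers hits its tail.
p_request_slow = 1 - (1 - p_slow) ** fanout
print(f"{p_request_slow:.0%} of fanned-out requests see at least one slow server")
```

With these numbers, roughly 63% of user requests experience tail latency even though each server is slow only 1% of the time, which is why shrinking the variance matters more than shrinking the median.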

Amazon had essentially the same problem we had, and came up with the same basic hardware solution - hardware I/O virtualization.

Gifts for archivists and librarians: from the practical to the luxurious / HangingTogether

We asked for suggestions for gifts that would be suitable for librarians or archivists and the community responded! Thank you so much for all the wonderful and thoughtful gift ideas!

Here are the nominations: if you have other ideas please leave them in the comments below. To ensure that you get what you want, think about leaving this page on computers in your reading room or information commons — I’m sure that certain someone will get the hint.

Lumio Book Lamp

Practical gifts: some information professionals are very focused on getting the job done. For these folks, a gift that helps them do the work at hand is just the thing. Gifts in this category include:

  • A mobile scanner: Laura suggests that perhaps the Flip-Pal might be useful for those who are zipping around “scanning madly.”
  • Of course it’s not all about shelving books or arranging collections. We also attend lots of meetings and conferences. How about a fountain pen? Nadia Nasr suggests the Cross Stratford as a nice looking model that’s affordable.
  • For all that professional reading, what about a book-shaped lamp? Lumio’s book lamp (although pricey) was suggested by Stephanie as being “pretty rad.” Comes in dark walnut and blonde maple to complement any decor.
  • What is more painful than losing your place in a book? Hunting around for a bookmark. Lynn Jones suggests the Albatros Bookmark — you never need to look for your bookmark because it’s in the book — it also places itself. Comes in packs of 6.
  • What about a card catalog shaped flash drive? These will be available soon from Unshelved. Thanks to Carol Street for the suggestion.
The Archivist Wine

Food and drink: everyone likes to eat and drink. Here are some suggestions vetted by librarians and archivists.

  • The chefs among us might appreciate cookbooks from historical societies. Melissa M. loves her cookbooks from the King’s Landing Historical Settlement in New Brunswick. I couldn’t find those online but you can find plenty of good ideas in Cookbook Finder. I noted that King’s Landing does have an historic inn that serves period food, so check with your local historical society!
  • Beer for archivists: Although I normally hate to reinforce stereotypes about archivists that involve either attics or cellars, I was pleased to hear Jill Tatem’s nomination for Cellar Dweller, which is only available at the Great Lakes Brewing Company in Cleveland. Since this is the site of the 2015 Society of American Archivists annual meeting, I know many archivists will take a rain check on this brew.
  • A toast to archivists! From Sonoma Estate Vintners, the Archivist. Pick your poison: cab, chardonnay, or pinot noir. The description includes the word “appraise” so you know you are in the right place.


Pride and Prejudice Tote

Clothing and accessories: suggestions range from items that are practical to those that show your style.

  • Melissa M. says, “every processing archivist could use steel-toed boots (required for the first archival job I ever had, and I actually managed to find quite a stylish pair).” Melissa was not able to find her boots, which fetched compliments outside the workplace, but perhaps something like these engineer boots would work.
  • To go with your boots, perhaps some library card socks from NYPL? (Hat tip to Bruce Washburn.)
  • You can wear your heart on your sleeve, and now you can wear your favorite book, as a t-shirt, or water-resistant tote. From Lithographs. Also available: posters and (temporary) tattoos. From Lorcan Dempsey and Pam Kruger.
  • A favorite from last year was the microfiche jewelry from Oinx. Styles have been updated and now you can spread the “I’d rather be fiching” message via t-shirt and bumper sticker.
Mini Hollinger Document Cartons

Little luxuries: sometimes it’s the little things

  • Candles are a great seasonal gift. You can choose between The Archivist candles from Greenmarket (lots to choose from, particularly if you like the idea of “fragrance records accumulated to preserve moments, stories, and people they represent”) and Library candles from Paddywax (which feature scents that will conjure your favorite author). Thanks to Casey Davis and Carol Street for calling these to our attention!
  • Hollinger boxes are a staple for archivists, and mini document boxes have long been a popular giveaway at conferences — so popular that Hollinger now sells them as a separate item. Jennifer suggests that in addition to being just plain adorable, they would be the perfect way to pop the question.
  • Cream for hands, dried out from processing documents and handling other materials, was a popular item on last year’s list. This year, Melissa M. recommends Lush’s Charity Pot lotion.

Can’t buy happiness: of course, the things that everyone really wants can’t be purchased. At the top of almost every information professional’s wish list is space (to put anything, as our anonymous contributor put it). Another thing that we’d all like to see is reflected in this lovely blog post by Maarja Krusten:

…the greatest gift you can give archivists and librarians is the opportunity to share physically and virtually the knowledge found in their collections and holdings.

Now, that sentiment is something I think we can all get behind! Happy holidays to all of you!

Link roundup December 15, 2014 / Harvard Library Innovation Lab

Beach balls, stickers, books, moveable cities, and figuring out what you want. This is a good batch of links.

Eyeo 2014 – Santiago Ortiz

The City That Is Moving Down the Road

Best Books of 2014 : NPR

Spinning Beach Ball of Death

“Elementary!” A Sleuth Activity for Personal Digital Archiving / Library of Congress: The Signal

Sherlock Holmes and Doctor Watson. “The Adventure of Silver Blaze,” in The Strand Magazine. Illustration by Sidney Paget (1860-1908). On Wikimedia.

As large institutions and organizations continue to implement preservation processes for their digital collections, a smattering of self-motivated information professionals are trying to reach out to the rest of the world’s digital preservation stakeholders —  individuals and small organizations — to help them manage their digital collections.

Part of that challenge is just making people aware that:
1. Their digital possessions are at risk of becoming inaccessible
2. They need to take responsibility for preserving their own stuff
3. The preservation process is easy.

The Library of Congress offers personal digital archiving resources and takes an active role in outreach. [Watch for the announcement of Personal Digital Archiving 2015 next April in New York City.] And we are always happy to discover novel approaches by our colleagues to teaching personal digital archiving. Consider the work of one group of information professionals from Georgia.

The Society of Georgia Archivists, the Atlanta Chapter of ARMA International and the Georgia Library Association have collaborated on a curriculum for a personal digital archiving workshop that addresses the basic problems and solutions. Among the steps they outline, they emphasize the need to make files “findable.”

To that end they devised an activity called “Find the Person in the Personal Digital Archive” (the activity data set and all the workshop materials are free and available for download, reuse and remixing). The premise is simple and the game is fun but it drives home an important message about organizing your files. The producers created a folder filled with files and sub-folders — messy, disorganized files; pointless sub-folders; mis-named files; highly personal files mixed with business files; encrypted files and obsolete file formats, many sourced from the Open Preservation Foundation’s Format Corpus — and they invite people to participate in a forensics activity, to look through all the files and directories and try to piece together some information about the owner of the files.

Sample of random files in a folder

Courtesy of the Society of Georgia Archivists.

As the user looks through the folder, there are questions to answer, such as “How would you describe the contents?”, “How did the creator of the archive name and arrange the files?” and “How do the features of the archive (such as file names, organization scheme, file format, etc.) make some of the records easy to understand and some of them impossible to understand?”

Though the goal is to deduce the identity and fate of the owner through various clues and “Aha!” moments, in doing the activity the user ends up making judgments about what is useful (like descriptive file and folder names) and what is not (files called “things.xml” and “untitled.txt”). Poring over a fake mess such as this drives home a point: how do you organize your own personal stuff? If someone, such as a loved one, had to go through your digital files, how easy or difficult would it be for them to find specific files and make sense of it all? Are you leaving a mess for someone else to trudge through?

Wendy Hagenmaier, the outreach manager for the Society of Georgia Archivists, is one of the workshop producers. Hagenmaier wanted to reach beyond her community to demystify digital archives stewardship and explain to the general public why digital preservation matters and how they can preserve their own stuff. She researched other like-minded organizations in Georgia to find interested parties for the workshop. “This topic really seems to be taking off in public libraries,” said Hagenmaier, “and genealogists are very much interested in personal digital archiving, though I don’t know if the topic comes up in their circles on its own.”

Woman giving a slide presentation.

Michelle Kirk, president of the Atlanta Chapter of ARMA, gives a presentation. Photo courtesy of the Society of Georgia Archivists.

Hagenmaier — and her colleagues Michelle Kirk, Cathy Miller and Oscar Gittemeier — geared the workshop toward information professionals and encouraged the workshop attendees to go out and teach the workshop to others so that the message will reach the general public in a sort of trickle-down effect. So far she has presented the curriculum at a “train the trainer” webinar, a workshop and at a Georgia State Archives genealogy event.

The Society of Georgia Archivists also offers a Personal Digital Archiving Workshop Outreach Grant to help information professionals in Georgia promote the idea that librarians, archivists and records managers are a source of expertise for assisting individuals (the public, family members, students, corporate employees, etc.) with their personal digital archiving needs. The grant will be given to individuals who apply after hosting and teaching a workshop at their institutions or in their communities, using the curriculum materials designed by SGA, GLA and Atlanta ARMA.

Hagenmaier is fervent about getting the word out to people, making them aware that they casually create and use digital stuff in their everyday lives, so digital stewardship could and should be just as casual and effortless. She feels that knowledge of digital stewardship will empower people and assure them that their digital files can be safe if they keep them safe. She said that in the course of her work she sees in people a fear of the unknown, a huge anxiety about the fate of digital files. To illustrate her point she cites a moment during her genealogy conference presentation when she asked a group of genealogists, “How many of you think you will be able to access your digital files in ten years?” No one raised a hand.

“They are hopeful but not confident,” said Hagenmaier. “Personal digital archiving is still foreign to people. It is important for us to just get the word out that they can preserve their own stuff.”

Making Connections in the New Year / LITA

This new year, make a resolution to be more proactive, network and update your professional skills. Resolve to attend a professional conference, discussion or symposium!

Flickr, 2010

GameDevHacker Conference
New York, January 28

The GameDevHacker conference is just around the corner. Bringing together three segments of the gaming industry (gamers, developers and hackers), the conference aims to discuss where the industry is headed. The tagline for next year’s conference is “Past Trends and Future Bets.”


The Creativity Workshop
New York, February 20 – 23 & April 17 – 20

Do you have writer’s block, want to create dynamic programming or transform the way you view the digital arts? The Creativity Workshop is geared toward professionals in the sciences, business, arts and humanities. Two four-day workshops will be held in spring 2015.


2015 National STEM Education Conference
Cleveland, April 16-17

The typical STEMcon audience includes educators in the K-12 arena. However, if altering the current landscape of STEM education is important to you, STEMcon may be a great venue to voice those concerns. Participants will, among other topics, discuss using educational technology and bridging gender and ethnic divides in the science, technology, engineering and math fields.


8th Annual Emerging Technologies for Online Learning Symposium
Dallas, April 22 – 24

You may not be interested in STEM education at the K-12 level, but almost everyone in the information field has either facilitated or participated in online learning. Web-based technology will continue to change how instruction is delivered to students around the world. Network, share and learn about new trends with participants from around the nation.


Educause Annual Conference
Indianapolis and Virtual, October 27 – 30

If travel is an issue, Educause will hold a virtual conference in October of next year. The conference is geared toward IT professionals in higher education, but it can also be useful for students and novice practitioners. More information will be published in the spring of 2015.


Have a happy New Year LITAblog readers!