Planet Code4Lib

Code4libBC Day 2: Lightning Talks / Cynthia Ng

Day 2 of Code4libBC lightning talks! A Coal Miner’s History: Mapping Digitized Audio Interviews (Daniel Sifton): digitized audio interviews of coal miners; need to build support and resources for class use; built tool to visualize distribution of subjects in d3; maps of base locations using mapping tool and a lot of CSS, with student summaries … Continue reading Code4libBC Day 2: Lightning Talks

Registration open for the 24 January 2017 Mashcat event in Atlanta / Mashcat

We are excited to announce that registration is now open for the second face-to-face Mashcat event in North America, which will be held on January 24, 2017, at Georgia State University in Atlanta, Georgia. We invite you to view the schedule for the day as well as register at http://www.mashcat.info/2017-event/. We have a strict limit on the number of participants who can attend in person, so register early!

The event will also be streamed as a free webinar, so if you cannot attend in person, registration for the webinar will open in January.

If you run into any issues with registering, you can email gmcharlt AT gmail.com.

Be “more library” than ever / District Dispatch

In trying to make sense of the election results, a lot of people – including librarians – have wanted to “do” something to preserve democratic values. Increased civic engagement and advocacy is perhaps the obvious way to “do” something, but it is not effective unless many people are engaged, have a shared message and get off the couch. The March on Washington, Take Back the Night and peaceful Vietnam-era “end the war” demonstrations are prime examples of what mobilization can achieve, but does today’s public really have the willpower and enthusiasm to take collective action? Or can we take baby steps as librarians to incrementally make a difference?

One thought is to be “more library” than ever. You are at work anyway so it’s not really a big lift, right? Being more library means ensuring and increasing access to information for all people; building the digital and physical infrastructure to use technology to enhance learning and creativity; defending freedom of speech, intellectual freedom, and fair use; and protecting the very notion of sharing.

Here’s a great example of being “more library.”

The Wayback Machine of the Internet Archive, founded by Brewster Kahle, was mentioned on The Rachel Maddow Show last Tuesday. The Wayback Machine with its stored web page history was used by Rachel to uncover statements that Alabama Governor Bentley — embroiled in a sex scandal — now swears he never said. When Bentley’s longtime security chief Wendell Ray Lewis revealed details of the scandal for the investigation, he was terminated and filed an unlawful termination suit. The Governor said that “all of the outrageous claims” made by Lewis were “based on worn-out internet rumors, fake news and street gossip.” The Wayback Machine proved otherwise. (One could say that the Wayback Machine revealed “pre-truth.”) By archiving the nation’s web history, Kahle continues to advance the mission of libraries (aka “more library”), and it makes a difference every day.

Now Kahle is seeking funds to make an archived copy of the Wayback Machine and store it in Canada to protect its existence.

“On November 9th in America, we woke up to a new administration promising radical change,” Kahle writes. “It was a firm reminder that institutions like ours, built for the long-term, need to design for change. For us, it means keeping our cultural materials safe, private and perpetually accessible. It means preparing for a web that may face greater restrictions. It means serving patrons in a world in which government surveillance is not going away; indeed, it looks like it will increase.”

No matter what political party a librarian may be affiliated with, librarians believe in the fundamental tenets of librarianship (which look a lot like the fundamental tenets of our democracy). We all want fairness, public access to information and preservation of the cultural record. We know that libraries matter more now than ever before. My hope is that we will take this opportunity to shine, to protect the public interest and to really be “more library.”

The post Be “more library” than ever appeared first on District Dispatch.

Code4libBC Day 1: Lightning Talks / Cynthia Ng

Lightning talks from the first day of Code4libBC. Provincial Digital Library Update (Daniel Sifton/Caroline Daniels): looking at what others have done * DPLA: gave presentation at code4lib Portland about old system; new system based on PostgreSQL, Solr, Ruby-driven… * testing with VM using vagrant, ansible, elasticsearch, solr, ruby, rails console * Supplejack: 2nd generation platform, … Continue reading Code4libBC Day 1: Lightning Talks

CopyTalk webinar rescheduled / District Dispatch

 


The Thursday, December 1 CopyTalk webinar has been rescheduled for Thursday, January 5.

Due to technical difficulties, today’s CopyTalk webinar on the Section 108 video project has been rescheduled for January 5th at 2pm Eastern/11am Pacific. The URL for the rescheduled webinar is the same:

http://ala.adobeconnect.com/copytalk/

For additional details about the planned webinar, please check out our previous post.

The post CopyTalk webinar rescheduled appeared first on District Dispatch.

Code4libBC Presentation: Getting Things Done: Discovering Efficiencies in Workflow / Cynthia Ng

This lightning talk was presented at Code4lib BC 2016. For a copy of the slides, please see the presentation on SpeakerDeck (also below) or the version on GitHub. Context When I first started working in libraries, I did a lot of my work by myself. Certainly, I consulted other people that had an impact on … Continue reading Code4libBC Presentation: Getting Things Done: Discovering Efficiencies in Workflow

Wisdom is Learned: An Interview with Applications Developer Ashley Blewer / Library of Congress: The Signal

 


Blewer at the Association of Moving Image Archivists Conference, 2016

Ashley Blewer is an archivist, moving image specialist and developer who works at the New York Public Library. In her spare time she helps develop open source AV file conformance and QC software as well as standards such as Matroska and FFV1. She’s a three-time Association of Moving Image Archivists AV Hack Day hackathon winner and a prolific blogger and presenter who is committed to demystifying tech and empowering her peers in the library profession.

Describe what you do as an applications developer at the New York Public Library.

We have a lot of different applications here but I work specifically on the repository team and our priority right now is digital preservation and automated media ingest. So my day to day involves working on several different applications. We run different applications that run into each other — sets of microservice suites. I’m the monitor of these pipelines, getting images that have been digitized or video that has been digitized through to long-term digital preservation as well as enabling access on our various endpoints such as digitalcollections.nypl.org and archives.nypl.org. This involves communicating with other stakeholders, communicating with developers on my team and writing code for each of those applications, doing code review and pushing that live to the different applications… It’s very much a full stack position.

The job is more unique on my team because we work on such a broad array of applications. What I find exciting about this job is that I get to touch a lot of different types of code in my day job and I’m not just working on one application. Right now I’m working on dealing with a couple bugs related to associating URIs to subject headings in our metadata management system. Sometimes the application doesn’t work as it should so I do bug fixes in that regard. Some things that I will be working on this week are integrating a connection between our archives portal displaying video live within it rather than linking out to a different website, automating audio transcoding from preservation assets, and contributing some core functionality upgrades to our Digital Collections site. Recently something that I did that was more access-based was we migrated our display of video assets from a proprietary closed-source system to an open-source rendering system.

We follow loosely an agile planning system. Right now we meet weekly because our priorities are very vast and they’re changing pretty quickly, so every Monday we meet with stakeholders and we talk about all the things we need to tackle over the week and what needs to be done and then we get to work. There are around 16 total developers at NYPL but my team has three.

I was playing with some of the apps you’ve made, and I’m fascinated with the Barthes Tarot and the Portable Auroratone. Could you walk me through your creative process for these?


A card from the Barthes Tarot app.

These are good examples because they’re different in the sense that with the Barthes Tarot I was reading Barthes’ A Lover’s Discourse and thinking about how I could potentially use that in a randomized way to do fortune telling for myself. This is almost embarrassing, right, but maybe someone [would want to use it] to try to solve a romance-based problem, like getting their fortune told. I originally wanted to map it to I Ching, which was something that Barthes and other philosophers were interested in, but it ended up being too technically difficult, so I got lazy and downgraded it to tarot. And then I knew I could put this together by doing a random draw of the data and just pull that out. Technically it ended up not being too difficult of a problem to solve because I made it easier.

The Portable Auroratone is the opposite in that I found a [software] library that automatically generated really interesting colors and I wondered how I could use it in some sort of way. I thought about the Auroratone I had seen at some symposium [ Orphan Film Symposium 8, 2013 ] six years ago and I thought “Oh, ok, it kind of looked like that,” and I turned it into that. So one of these apps was me having a philosophical dilemma and the other one was me having a technical library that I wanted to integrate into something and I had to mesh an idea with that.

I get a lot of compliments on Twitter bots like @nypl_cats and @nypl_dogs which I also just made very quickly as a one off. I did that while I was finalizing my paperwork to work here, actually. I thought if I’m going to get this job I might as well learn how to use their API. The API is something else that I work on now so I was familiarizing myself with this tool that I will eventually push code to support.

You constantly share what you’re learning and advocate for continued learning in our profession through your blog, presentations, etc. How do you find the time to share so prolifically and why do you think it’s important to do so?


Yeah, I just came back from AMIA, and it’s at conferences that I really do remember why I do these things. As far as the first part of where I find the time, I don’t know, but I have been reflecting on how I’m maybe naturally introverted and this is something that I do to ramp up my own energy again, by working on something productive. Where other people might need to be out drinking with friends in order to chill, I need to be alone to chill, so it gives me more time to spend building different applications.

How do I summarize why I think this is important? I think about the positions I’ve been at and how I’ve thought about how I get to where I want to be and if those resources don’t exist then someone needs to build them. It’s so crucial to have a mentor figure in place to help you get to where you want to be and allowing people to discover that, especially related to technical issues. People just assume that the work I do in my day job now is much harder than it actually is, so if I can lower that barrier we can have more people learning to do it and more people can be more efficient in their jobs. Overall I think educating and empowering people helps the field much more substantially than if people are doing it alone in silos.


Can you talk about your career path to becoming a web applications developer?

I went to undergrad not really knowing what I wanted to do. I went to a state school because it was almost free and graphic design was the most practical of the art degrees you could get, and in a lot of ways librarianship is a practical advanced degree that people get as well. Coming to the point that I am now, which is in a very technical role at a library, I sort of see what I was doing as a response to the gendered feedback that I’d grown up with. I wrote an article about this before – where I didn’t necessarily feel comfortable studying something like computer science, but graphic design was still very computer-focused, technically focused, and maybe more “appropriate” for me to do. I was encouraged to do that as opposed to being discouraged from doing something that I was already good at, which would have been something like computer science.

What skills do digital librarians and archivists need? Is learning to code necessary?

A lot of people are getting on board with learning to code and how everybody has to do that and I don’t necessarily feel that’s true, that’s not everyone’s interest and skill set, but I do think having an understanding of how systems work and what is possible is one hundred percent required. Light skills in that regard help people go a long way. I think that – and this is echoed by people similar to me – once you realize how powerful writing a script can be and automating dull aspects of your job, the more that you’re inclined to want to do it. And like what I said earlier – the more efficient we can be the better we are as archivists.

You do so much to contribute to the profession outside of your work at NYPL as well- contributing to open source formats and workflows, sharing resources, building apps. How do you find time for it all and what else do you want to do?

I feel like I waste a lot of time in my down time. I feel that I’m not doing enough and people are like “How do you do so much?” But there’s so much work to be done! As far as what I want to do,  I don’t know, everything I’m doing right now. Maybe I’m like a child that’s still feasting on an endless amount of candy. Now I have these opportunities that I’ve wanted to have and I’m taking them all and saying yes to everything.


A lot of what I do may be considered homework. As a developer, the way to get better at developing is purely just to solve more development problems. Making small applications is the only way to boost your own skills. It’s not necessarily like reading OAIS and understanding it in the same way you might if you were an archivist doing archivist homework. [Referencing graphic design background] The first design you do is not going to be good so you just do it again and you do it again and it’s the same thing with programming. One of the things I try to articulate to archivists is that programming kind of hurts all the time. It takes a really long time to overcome, because yeah, in school, you read a book or you write a paper and you’re expected to produce this result that has to be an A. With programming you try something and that doesn’t work and you try it again and you try it again and you think “Oh I’m so stupid I don’t know what I’m doing,” and that’s normal. I know this about myself and I think that’s the hardest thing to overcome when you are trying to learn these skills. It’s refreshing that even the smartest senior developers that I work with who are just incredible at their jobs all the time, still will pound the desk and be like “I’m so stupid, I don’t get this!” Knowing that’s a normal part of how things get done is the hardest thing to learn.

I’m happy to constantly be failing because I feel like I’m always fumbling towards something. I do think librarians and archivists tend to be people that had very good grades without too much effort, moving forward in life and so as soon as they hit a wall in which they aren’t necessarily inherently good at something that’s when the learning cuts off and that’s when I try to scoop people up and say “Here’s a resource where it’s ok to be dumb.” Because you’re not dumb, you just don’t have as much knowledge as someone else.

What do you want to do next?

Closed captioning is one of the big problems I’m excited about solving next within NYPL or outside of NYPL, whichever. If you don’t have it and you have 200,000 video items and they all need closed captioning to be accessible how do you deal with that problem?


What are five sources of inspiration for you right now? 

Recompiler: Especially the podcast since I listen to it on my commute, it’s such a warm introduction to technical topics.

Halt & Catch Fire: Trying to find another thing to watch when I am sleepy but I really just only want to watch this show. The emphasis on women’s complex narratives and struggles/growth within this show is unlike any other show I’ve ever watched.

Shishito Peppers: Dude, one in every ten are hot! I thought this was a menu trying to trick me but it turns out it’s true! I like the surprise element of snacking on these.

Gödel, Escher, Bach: I feel like this is the programmer’s equivalent of Infinite Jest. Everyone says they’ll read it one day but never get around to it. It’s such a sprawling, complex book that ties together patterns in the humanities and technology. Anyway, I am trudging through it.

AA NDSR Blog: So inspiring to read about the work of emerging professionals in the field of a/v digital preservation!


 

New Lifeline broadband subsidy to be available 12/2—but options limited for now / District Dispatch

Starting December 2, new rules adopted by the Federal Communications Commission (FCC) governing the Lifeline program for low-income consumers will go into effect. Most significantly, the program subsidy may be applied for the first time to standalone broadband offered by eligible telecommunications carriers (ETCs) or Lifeline Broadband Providers (LBPs). It is important to note, however, that no new LBPs have been approved yet, and ETCs may seek forbearance from these rules. For this reason, there may be few Lifeline-eligible broadband options available to low-income consumers in the immediate term.

Starting December 2, the Lifeline subsidy may be applied to standalone broadband offered by Lifeline Broadband Providers.

Lifeline advocates (including ALA) continue to work with the FCC, the Universal Service Administrative Company (USAC), which administers Lifeline and other universal service programs like E-rate, and internet service providers to increase the available options and public awareness of these options. The most current information available for consumers about the program, eligibility and how to apply is available at www.LifelineSupport.org or by calling 888-641-8722 Ext. 1 or emailing LifelineSupport@usac.org for help.

For additional background, librarians and other digital inclusion advocates can review a recent archived USAC webinar, as well as read more about the program and recent rule changes here and here.

While not specific to the Lifeline program, non-profit EveryoneOn provides an online portal to explore low-cost broadband, low-cost devices and digital literacy training options by zip code, which is another resource librarians may share with patrons: www.EveryoneOn.org.

Stay tuned! Lifeline advocates are looking to spring 2017 to boost Lifeline awareness after more options have been added and new resources and information are available to help low-income people find the best service for them. We’ll keep you posted as we learn more.

The post New Lifeline broadband subsidy to be available 12/2—but options limited for now appeared first on District Dispatch.

Submit Your Nomination for a LITA Award / LITA

Did you know that LITA co-sponsors three different awards, all of which recognize achievements in the field of library technology? We’re currently accepting nominations for all of them, so nominate yourself or a colleague today!


LITA/Ex Libris Student Writing Award

The LITA/Ex Libris Student Writing Award is given for the best unpublished manuscript on a topic in the area of libraries and information technology written by a student or students enrolled in an ALA-accredited library and information studies graduate program. The winning article is published in LITA’s refereed journal, Information Technology and Libraries (ITAL). $1,000  award and a certificate.
Nomination form (PDF); February 28, 2017 deadline

LITA/Library Hi Tech Award For Outstanding Communication for Continuing Education

This Award recognizes outstanding achievement in educating the profession about cutting edge technology through communication in continuing education within the field of library and information technology. It is given to an individual or institution for a single seminal work, or a body of work, taking place within (or continuing into) the preceding five years. $1,000 award and a plaque.
Nomination form; January 5, 2017 deadline

LITA/OCLC Frederick G. Kilgour Award for Research in Library and Information Technology

This award recognizes research relevant to the development of information technologies, in particular research showing promise of having a positive and substantive impact on any aspect of the publication, storage, retrieval and dissemination of information or how information and data are manipulated and managed. $2,000 award, an expense paid trip to the ALA Annual Conference (airfare and two nights lodging), and a plaque.
Nomination instructions; December 31, 2016 deadline

BITAG on the IoT / David Rosenthal

The Broadband Internet Technical Advisory Group, an ISP industry group, has published a technical working group report entitled Internet of Things (IoT) Security and Privacy Recommendations. It's a 43-page PDF including a 6-page executive summary. The report makes a set of recommendations for IoT device manufacturers:
In many cases, straightforward changes to device development, distribution, and maintenance processes can prevent the distribution of IoT devices that suffer from significant security and privacy issues. BITAG believes the recommendations outlined in this report may help to dramatically improve the security and privacy of IoT devices and minimize the costs associated with collateral damage. In addition, unless the IoT device sector—the sector of the industry that manufactures and distributes these devices—improves device security and privacy, consumer backlash may impede the growth of the IoT marketplace and ultimately limit the promise that IoT holds.
Although the report is right that following its recommendations would "prevent the distribution of IoT devices that suffer from significant security and privacy issues," there are good reasons why this will not happen, and why even if it did the problem would persist. The Department of Homeland Security has a similar set of suggestions, and so does the Internet Society, both with the same issues. Below the fold I explain, and point out something rather odd about the BITAG report. I start from an excellent recent talk.

I've linked before to the work of Quinn Norton. A Network of Sorrows: Small Adversaries and Small Allies is a must-read talk she gave at last month's hack.lu examining the reasons why the Internet is so insecure. She writes:
The predictions for this year from some analysis is that we’ll hit seventy-five billion in ransomware alone by the end of the year. Some estimates say that the loss globally could be well over a trillion this year, but it’s hard to say what a real number is. Because in many ways these figures can’t touch the real cost of insecurity on the Internet. The cost of humiliation and identity theft and privacy traded away. The lost time, the worry. The myriads of tiny personal tragedies that we’ll never hear about.
These large numbers conflict with estimates from companies as to the cost of insecurity. As I mentioned in You Were Warned, Iain Thomson at The Register reported that:
A study by the RAND Corporation, published in the Journal of Cybersecurity, looked at the frequency and cost of IT security failures in US businesses and found that the cost of a break-in is much lower than thought – typically around $200,000 per case. With top-shelf security systems costing a lot more than that, not beefing up security looks in some ways like a smart business decision.

Romanosky analyzed 12,000 incident reports and found that typically they only account for 0.4 per cent of a company's annual revenues. That compares to billing fraud, which averages at 5 per cent, or retail shrinkage (ie, shoplifting and insider theft), which accounts for 1.3 per cent of revenues.
Note, however, that 0.4% of global corporate revenue is still a whole lot of money flowing to the bad guys. The reason for the apparent conflict is that, because companies are able to use Terms of Service to disclaim liability, the costs fall largely on the (powerless) end user. Norton uses an example:
One media report in the US estimated 8,500 schools in America have been hit with ransomware this year. Now, the reason why I think it’s really interesting to point out the American figures here is this is also a national system where as of last year, half of all students in US public schools qualify for poverty assistance. Those are the people paying these ransomwares. And it’s hard to get a real figure because most schools are hiding this when it happens.
Her audience was people who can fix the security problems:
most people who are pulling a paycheck in this field are not interacting with the pain that most people are experiencing from network insecurity. Because you end up working for people who pay. ... That high school can’t afford anyone in this room. And that means that so much of this pain and insecurity in the world isn’t readily visible to the people who work in the field, who are supposed to be fixing it.
The potential fixers are not putting themselves in the shoes of those suffering the problem:
Because in the end, one of the conflicts that comes up over this, one of the reasons why users are seen as a point of insecurity, is because getting the job done is more important than getting it done securely. And that will always be in conflict.
This is where Norton's talk connects to the BITAG report. The report's recommendations show no evidence of understanding how things look either to the end users, who are the ISP's customers, or to the manufacturers of IoT devices.

First, the view from the ISP's customers. They see advertising for webcam baby monitors or internet-enabled door locks. They think it would be useful to keep an eye on baby or open their front door from wherever they are using their smartphone. They are not seeing:
WARNING: everyone on the Internet can see your baby!
or:
WARNING: this allows the bad guys to open your front door!
They may even know that devices like this have security problems, but they have no way to know whether one device is more secure than another and, let's face it, none of these devices is actually "secure" compared to things people think of as secure, such as conventional door locks. They all have vulnerabilities that, with the passage of time, will be exploited. Even if the vendor followed the BITAG recommendations, there would be windows of time between the bad guys finding a vulnerability and the vendor distributing a patch when the bad guys would be exploiting it.

They are definitely not seeing a warning on the router they got from their ISP saying:
WARNING: this router gives the bad guys the password to your bank account!
After all, they pretty much have to trust their ISP. Nor are they seeing:
WARNING: This device can be used to attack major websites!
Even if the customer did see this warning, the fate of major websites is not the customer's problem.

Customers aren't seeing these warnings because no-one in the IoT device supply chain knows that these risks exist, nor is anyone motivated to find out. Even if they did know they wouldn't be motivated to tell the end user either prior to purchase, because it would discourage the purchase, or after the purchase, because thanks to Terms of Service it is no longer the vendor's problem.

Expecting end users to expend time and effort fixing the security issues of their IoT devices before disaster strikes is unrealistic. As Norton writes:
If you are sitting in this room, to some degree people are paying you to use a long password. People are paying you to worry about key management. If you are a trash collector or radiologist or a lawyer, this takes away from your work day.
Second, the view from the IoT device manufacturer. In June 2014 my friend Jim Gettys, who gained experience in high-volume low-cost manufacturing through the One Laptop Per Child project and the OpenWrt router software effort, gave a talk at Harvard's Berkman Center entitled (In)Security in Home Embedded Devices. It set out the problems IoT device manufacturers have in maintaining system security. It, and Bruce Schneier's January 2014 article The Internet of Things Is Wildly Insecure — And Often Unpatchable, which Jim inspired, are must-reads.

The IoT device supply chain starts with high-volume, low-margin chip vendors, who add proprietary "binary blobs" to a version of Linux. Original device manufacturers (ODMs), again a low-margin business, buy the chips and the software and build a board. The brand-name company buys the board, adds a user interface, does some quality assurance, puts it in a box and ships it. Schneier explains:
The problem with this process is that no one entity has any incentive, expertise, or even ability to patch the software once it’s shipped. The chip manufacturer is busy shipping the next version of the chip, and the ODM is busy upgrading its product to work with this next chip. Maintaining the older chips and products just isn’t a priority.
The result is:
the software is old, even when the device is new. For example, one survey of common home routers found that the software components were four to five years older than the device. The minimum age of the Linux operating system was four years. The minimum age of the Samba file system software: six years. They may have had all the security patches applied, but most likely not. No one has that job. Some of the components are so old that they’re no longer being patched.
Because the software is old, many of its vulnerabilities will have been discovered and exploited. No one in the supply chain has the margins to support life-long software support, quality assurance and distribution. Even if it were possible to provide these functions, a competitor providing them would price themselves out of the market. The BITAG recommendations would work in a different world, but in this one the supply chain has neither the ability nor the resources to implement them.

Bruce Schneier recently testified to the House Energy & Commerce Committee, pointing out the reason why, even if the BITAG recommendations were in effect, the problem wouldn't be solved:
These devices are a lower price margin, they’re offshore, there’s no teams. And a lot of them cannot be patched. Those DVRs are going to be vulnerable until someone throws them away. And that takes a while. We get security [for phones] because I get a new one every 18 months. Your DVR lasts for five years, your car for 10, your refrigerator for 25. I’m going to replace my thermostat approximately never. So the market really can’t fix this.
There are already enough insecure IoT devices on the network to bring down the Internet. Millions more are being added every week. And they aren't going away any time soon.

So, to conclude, what is odd about the report? As far as I can see, there is nothing in the report from the Broadband Internet Technical Advisory Group about what the Broadband Internet industry can do to fix the security issues the report raises. It lays the blame for the problem squarely on the IoT device industry. Very convenient, no?

There clearly are things the broadband industry could do to help. Intel's Schrecker has made one proposal, but it is equally impractical:
As for coping with the threat we face now, courtesy of millions of pathetically insecure consumer IoT devices, Schrecker’s proposed solution sounds elegantly simple, in theory at least: “Distribute, for example, gateways. Edge gateways that can contain a DDoS and are smart enough to talk to each other and help contain them that way.”
ISPs haven't deployed even the basic BCP38 filtering, which would ensure that packets had valid source addresses, and thus make DDoS attacks traceable. But they're going to buy and deploy a whole lot of new hardware? Note that the Mirai DDoS botnet technology has recently been upgraded to spoof source addresses:
Propoet also advertised another new feature, which is the ability to bypass some DDoS mitigation systems by spoofing (faking) the bot's IP address. Previous versions of the Mirai malware didn't include this feature.

2sec4u confirmed in a private conversation that some of the newly-spawned Mirai botnets can carry out DDoS attacks by spoofing IP addresses.
The upgraded technology is used in a botnet four times bigger than the one that took out Dyn last month. It rents for $50-60K/month, nothing compared to the damage it can do. Mirai has been updated with some zero-day exploits to which somewhere between 5M and 40M home routers appear to be vulnerable. Estimating 30% utilization of the 5M resource at $50K/month suggests Mirai-based botnets are a $2.2M/year business.
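
To make the BCP38 ingress-filtering point above a little more concrete, here is a toy sketch in Python of the source-address check that such filtering performs. This is purely illustrative: the routing table is made up, and real ISP equipment does this in the forwarding plane (for example via unicast reverse-path forwarding), not in application code.

    import ipaddress

    # Hypothetical routing table: customer prefix -> the port it is reachable on.
    ROUTES = {
        ipaddress.ip_network("203.0.113.0/24"): "customer1",
        ipaddress.ip_network("198.51.100.0/24"): "customer2",
    }

    def source_address_valid(src_ip, arrival_port):
        """Return True if src_ip belongs to a prefix routed via arrival_port."""
        addr = ipaddress.ip_address(src_ip)
        for prefix, port in ROUTES.items():
            if addr in prefix:
                return port == arrival_port
        return False  # unknown source prefix: treat as spoofed and drop

    # A packet claiming to come from customer2's address space but arriving on
    # customer1's port is spoofed and would be dropped under BCP38-style filtering.
    print(source_address_valid("198.51.100.7", "customer1"))  # False -> drop
    print(source_address_valid("203.0.113.9", "customer1"))   # True  -> forward

Dropping packets whose source addresses could not legitimately arrive on that port is what makes spoofed DDoS traffic traceable back to its true origin.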

Schrecker is right about the seriousness of the DDoS threat:
If the operators behind these IoT-enabled botnets were to “point them at industry” instead of smaller targets such as individual journalists’ websites, as happened with infosec researcher Brian Krebs, the impact on the world economy could be “devastating”, he added.
ISPs could do more to secure IoT devices, for example by detecting devices with known vulnerabilities and blocking access to and from them. But this would require a much higher level of customer support than current ISP business models can sustain. Again, an ISP that "did the right thing" would price themselves out of the market.

There is plenty of scope for finger-pointing about IoT security. Having industry groups focus on what their own industry could do would be more constructive than dumping responsibility on others whose problems they don't understand. But it appears that in all cases there are collective action and short-termism problems. Despite the potential long-term benefits, individual companies would have to take actions against their short-term interests, and would be out-competed by free-riders.

Charlie Wapner to serve as Senior Research Associate / District Dispatch

I am pleased to announce the appointment of Charlie Wapner as a Senior Research Associate in ALA’s Office for Information Technology Policy (OITP). In this role, Charlie will provide research and advice on the broad array of issues addressed by OITP, and especially as needed to advocate with the three branches of the federal government and communicate with the library community.

Charlie Wapner, newly appointed Senior Research Associate at OITP

Charlie will be familiar to District Dispatch readers, as he was a Senior Information Policy Analyst here in OITP in 2014-16. His contributions for ALA included the completion of two major reports. The first, “Progress in the Making: 3D Printing Policy Considerations Through the Library Lens,” attracted library and general press coverage (e.g., Charlie contributed to a piece by the Christian Science Monitor), and he was invited to write an article for School Library Journal.

OITP’s work on entrepreneurship was launched by Charlie through the development and publication of “The People’s Incubator: Libraries Propel Entrepreneurship” (.pdf), a 21-page white paper that describes libraries as critical actors in the innovation economy and urges decision makers to work more closely with the library community to boost American enterprise. The paper is rife with examples of library programming, activities and collaborations from across the country. Charlie’s work is the basis for our current policy advocacy and the creation of a brief on libraries and entrepreneurship and small business.

Charlie came to ALA in March 2014 from the Office of Representative Ron Barber (Ariz.), where he was a legislative fellow. Earlier, he also served as a legislative correspondent for Representative Mark Critz (Penn.). Charlie also interned in the offices of Senator Kirsten Gillibrand (N.Y.) and Governor Edward Rendell (Penn.). After completing his B.A. in diplomatic history at the University of Pennsylvania, Charlie received his M.S. in public policy and management from Carnegie Mellon University.

The post Charlie Wapner to serve as Senior Research Associate appeared first on District Dispatch.

The Non-Reader Persona / LibUX

The saga of the user experience of ebooks continues. An in-time-for-Thanksgiving breakdown by Pew Research Center’s Andrew Perrin looks at the demographics of Americans who don’t read any books whatsoever – and as bleak as that sounds, I think in the spirit of the weekend we should be thankful.

Why’s that? Well, we in libraries could do better about knowing who not to cater to.

This data helps us better understand our non-adopters.

Given the share that hasn’t read a book in the past year, it’s not surprising that 19% of U.S. adults also say they have not visited a library or a bookmobile in the past year. The same demographic traits that characterize non-book readers also often apply to those who have never been to a library. Andrew Perrin

A man sitting on a wooden bench in a grassy field, reading

@benwhitephotography

Who are “non-adopters”?

I am on record generally thinking that personas aren’t particularly useful in design, but there are three I like:

  • First adopters perceive an immediate need for a service or product. Once offered, they’re on board.
  • Late adopters probably see your service favorably – but there’s no rush. Maybe the price isn’t right, or it doesn’t quite solve a job they need done just yet. They’ll come around.
  • Non-adopters are uninterested and aren’t likely to use your service, period.

You organize your design and development strategy around these: first adopters will adopt, generate feedback, some income — or whatever numbers matter to your organization, whether that’s foot traffic, registration, and so on — and create word-of-mouth that in time will loop in late adopters. Each type of user values the features of your service differently, but because first adopters are core to reaching others, you prioritize your early efforts for them.

Identifying non-adopters is useful in the short term so you don’t waste your time catering to them. It sounds crass, but features that non-adopters like — and that first and late adopters don’t — aren’t to be mistaken for features that will actually engage non-adopters.

They’re red herrings.

Are non-adopters driving our decision making?

Earlier this year in an episode about library usage and trends for Metric: A UX Podcast, we observed how the support for libraries in a separate Pew survey outweighed their actual usage, and feedback about which services to provide differed noticeably between those who use libraries and those who don’t. As the trends in public libraries move toward makerspaces, 3d-printing and the like, libraries need to be very clear about who precisely is asking for these.

Bar chart that visualizes traditional library usage.

When asked why they visit public libraries in person, large numbers of library users cite fairly traditional reasons. These include borrowing printed books (64% of library visitors do this, down slightly from the 73% who did in 2012, but similar to the 66% who did so in 2015) or just sitting and reading, studying, or engaging with media (49%, identical to the share who did so in 2012). John B. Horrigan

It’s hard to tell whether this chart demonstrates actual interest in the use of 3d printers or other high-tech devices, or whether these services weren’t yet available in the respondents’ community. I’d guess for many it was the latter. We can probably chalk some of this up to lack of awareness.

Even so, the trend is clear.

Libraries are putting real steam behind this service category. At this time there are 730 libraries plotted in Amanda’s map of 3d printers in libraries – and growing.

The question is whether meaningful investment in these features engages users as much as, or more than, others. Do we know? Libraries don’t need to make a profit, but there is some concern about the impact failure might have on experimentation in the future – let alone the overall impact on community support during election season.

Appealing to the wrong users might have gross consequences for the user experience of everyone else – especially if it knocks libraries off the user-centric bandwagon altogether.

What better way to scare library administration from iterative design thinking than going full-bore without the prerequisite user research, burning time and budget into projects that patrons don’t care about?

Non-adopters in the long-term

In the long term, non-adopters deserve a second look. They define the boundaries of our practical decision-making but they also represent potential users of new services.

For most organizations and companies, non-adopters are a lost cause. The target audience of adopters is narrowly defined by use cases. Reaching non-adopters demands a tangential service that meets an entirely unrelated need, but the overhead of designing, developing, and supporting these can be too much.

Libraries are unique in that “disparate community services” — academic or public — are sort of what they’re about. Collecting and distributing, teaching, entertaining, and advocating exemplify this, and that mix now defines what people think libraries do and why public support is so high. It doesn’t seem that much of a stretch to branch into software development, W3C standards-making, the blockchain, makerspaces, 3d printing, media labs, coworking, and more.

Organizationally, libraries are pre-positioned to extend into new service categories more naturally than others.

The challenge is to iterate sustainably.

Non-readers are likely to not be library users

Or, more optimistically, non-readers are likely to not be library users yet. There are opportunities to engage them, but the point of this whole thread is to not make light of the risk when you are budget- or time- or talent-constrained.

Andrew determined non-readers tend to be

  • adults with a high-school degree or less
  • less likely to own smartphones or tablets
  • at or below a household income of $30,000 per year
  • potentially older: 29% of adults ages 50 and up have not read a book in the past year

How the non-reader persona impacts library design

The lack of a smartphone doesn’t rule out that non-readers use the web. In fact, we know from the kind of work we do both that the digital divide is real and, more importantly, that libraries play an important role bridging that gap by providing free internet access — even lending devices, in some cases — having done so since the ’90s. Increasingly, even reluctant internet users must become internet users when applying for work or participating in government, assistance for which also falls within the boundaries of what libraries do.

None of this really matters, however, if the library web presence, which is increasingly the cornerstone for even tangible library services (like circulation), isn’t designed to reach the greatest common denominator of device support. There are people who intentionally don’t own a smartphone, but for many the device, the data plan, and internet access are cost prohibitive. These users might have old phones, old browsers, low data thresholds, slow internet, or simply a lack of familiarity or comfort with the internet.

To have any hope of reaching these folks, our websites must

  • be reasonably backward compatible with older browsers
  • fit as many device shapes and screen sizes as possible
  • go easy on the page weight (see “what does my site cost“?)
  • be accessible

let alone emphasizing easy onboarding of new patrons in our physical spaces, ensuring here also accessibility, findability, and affordance.

This means that library websites that aren’t progressively enhanced, mobile-first, responsive, lightweight and fast (use this system of measurements) are almost guaranteed to fail to engage this group.
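
The page-weight point above is easy to sanity-check yourself. The sketch below is a rough illustration only, not a substitute for tools like the “what does my site cost” service linked above: the URL is a placeholder, and it ignores fonts, lazy-loaded assets and compression. It fetches a page, finds its images, scripts and stylesheets, and totals the bytes transferred.

    # Rough page-weight check: total the bytes of a page and its linked assets.
    # Requires the third-party requests package (pip install requests).
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    import requests

    class AssetCollector(HTMLParser):
        """Collect the URLs of images, scripts and stylesheets on a page."""
        def __init__(self):
            super().__init__()
            self.assets = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag in ("img", "script") and attrs.get("src"):
                self.assets.append(attrs["src"])
            elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
                self.assets.append(attrs["href"])

    def page_weight(url):
        page = requests.get(url, timeout=10)
        total = len(page.content)
        collector = AssetCollector()
        collector.feed(page.text)
        for asset in collector.assets:
            try:
                total += len(requests.get(urljoin(url, asset), timeout=10).content)
            except requests.RequestException:
                pass  # skip assets that fail to load
        return total

    print(f"{page_weight('https://example.org/') / 1024:.0f} KB")  # placeholder URL

A multi-megabyte total on a library homepage is a strong hint that patrons on old devices, low data caps or slow connections are being left behind.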

Registration opens for National Library Legislative Day 2017 / District Dispatch

Library advocates attending NLLD 2016 stand in a photobooth, holding signs like "Library Strong" and "I love libraries"

Photo Credit: Adam Mason

We are happy to announce that registration for the 43rd annual National Library Legislative Day is open. This year, the event will be held in Washington, D.C. on May 1-2, 2017, bringing hundreds of librarians, trustees, library supporters, and patrons to the capital to meet with their Members of Congress and rally support for library issues and policies. As in previous years, participants will receive advocacy tips and training, along with important issue briefings prior to their meetings. Featured issues include:

  • Library funding
  • Privacy and surveillance reform
  • Copyright modernization
  • Access to government information
  • Affordable broadband access
  • Net neutrality protection

Participants at National Library Legislative Day have the option of taking advantage of a discounted room rate by booking at the Liaison. To register for the event and find hotel registration information, please visit the website.

Want to see a little more? Check out the photos from last year!

We also offer a scholarship opportunity to one first-time participant at National Library Legislative Day. Recipients of the White House Conference on Library and Information Services Taskforce (WHCLIST) Award receive a stipend of $300 and two free nights at a D.C. hotel. For more information about the WHCLIST Award, visit our webpage.

I hope you will consider joining us!

For more information or assistance of any kind, please contact Lisa Lindle, ALA Washington’s Grassroots Communications Specialist, at llindle@alawash.org or 202-628-8140.

The post Registration opens for National Library Legislative Day 2017 appeared first on District Dispatch.

OpenTrialsFDA presents prototype as finalist for the Open Science Prize / Open Knowledge Foundation

For immediate release

Open Knowledge International is thrilled to announce that the OpenTrialsFDA team is presenting its prototype today at the BD2K Open Data Science Symposium in Washington, DC as finalist for the Open Science Prize. The Open Science Prize is a global science competition to make both the outputs from science and the research process broadly accessible. From now until 6 January 2017, the public is asked to help select the most promising, innovative and impactful prototype from among the six finalists – of which one will receive the grand prize of $230,000.

OpenTrialsFDA is a collaboration between Dr. Erick Turner (a psychiatrist-researcher and transparency advocate), Dr. Ben Goldacre (Senior Clinical Research Fellow in the Centre for Evidence Based Medicine at the University of Oxford) and the team behind OpenTrials at Open Knowledge International.  

OpenTrialsFDA works on making clinical trial data from the FDA (the US Food and Drug Administration) more easily accessible and searchable. Until now, this information has been hidden in the user-unfriendly Drug Approval Packages that the FDA publishes via its data portal Drugs@FDA. These are often just images of pages, so you cannot even search for a text phrase in them. OpenTrialsFDA scrapes all the relevant data and documents from the FDA documents, runs optical character recognition (OCR) across all documents, links this information to other clinical trial data, and now presents it through a new user-friendly web interface at fda.opentrials.net.
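
For readers curious what the scrape-and-OCR step looks like in practice, here is a minimal sketch in Python. This is not the OpenTrialsFDA code: the document URL is a placeholder, and pdf2image/pytesseract simply stand in for whatever OCR stack the team actually uses.

    # Download a scanned drug-approval PDF and make its text searchable via OCR.
    # Assumes: pip install requests pdf2image pytesseract (plus poppler and tesseract).
    import requests
    from pdf2image import convert_from_path
    import pytesseract

    def ocr_review(pdf_url, out_path="review.pdf"):
        """Fetch an image-only PDF and return the text extracted from each page."""
        with open(out_path, "wb") as fh:
            fh.write(requests.get(pdf_url, timeout=60).content)
        pages = convert_from_path(out_path)  # render each page to an image
        return "\n".join(pytesseract.image_to_string(p) for p in pages)

    text = ocr_review("https://example.org/some-drug-approval-package.pdf")
    print("fluoxetine" in text.lower())  # a text search the raw page images cannot support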


Explore the OpenTrialsFDA search interface

Any user can type in a drug name, and see all the places where this drug is mentioned in an FDA document. Users can also access, search and present this information through the application programming interfaces (APIs) the team will produce. In addition, the information has been integrated into the OpenTrials database, so that the FDA reports are linked to reports from other sources, such as ClinicalTrials.gov, EU CTR, HRA, WHO ICTRP, and PubMed.

The prototype will provide the academic research world with important information on clinical trials in general, improving the quality of research, and helping evidence-based treatment decisions to be properly informed. Interestingly, the FDA data is unbiased, compared to reports of clinical trials in academic journals. Dr. Erick Turner explains: “With journal articles everything takes place after a study has finished, but with FDA reviews, there is a protocol that is submitted to the FDA before the study has even started. So the FDA learns first of all that the study is to be done, which means it can’t be hidden later. Secondly it learns all the little details, methodological details about how the study is going to be done and how it is going to be analyzed, and that guards against outcome switching.”

Dr Ben Goldacre: “These FDA documents are hugely valuable, but at the moment they’re hardly ever used. That’s because – although they’re publicly accessible in the most literal sense of that phrase – they are almost impossible to search, and navigate. We are working to make this data accessible, so that it has the impact it deserves.”

Voting for the Open Science Prize finalists is possible through http://event.capconcorp.com/wp/osp; more information on OpenTrialsFDA is available from fda.opentrials.net/about and from the team’s video below.

 

Editor’s notes

Dr. Ben Goldacre
Ben is a doctor, academic, writer, and broadcaster, and currently a Senior Clinical Research Fellow in the Centre for Evidence Based Medicine at the University of Oxford. His blog is at www.badscience.net and he is @bengoldacre on twitter. Read more here. His academic and policy work is in epidemiology and evidence based medicine, where he works on various problems including variation in care, better uses of routinely collected electronic health data, access to clinical trial data, efficient trial design, and retracted papers. In policy work, he co-authored this influential Cabinet Office paper, advocating for randomised trials in government, and setting out mechanisms to drive this forwards. He is the co-founder of the AllTrials campaign. He engages with policy makers. Alongside this he also works in public engagement, writing and broadcasting for a general audience on problems in evidence based medicine. His books have sold over 600,000 copies.

Dr. Erick Turner
Dr. Erick Turner is a psychiatrist-researcher and transparency advocate. Following a clinical research fellowship at the NIH, he worked for the US Food and Drug Administration (FDA), acting as gatekeeper for new psychotropic drugs seeking to enter the US market. In 2004 he published a paper drawing researchers’ attention to the Drugs@FDA website as a valuable but underutilized source of unbiased clinical trial data. Dissatisfied with the continuing underutilization of Drugs@FDA, he published a paper in the BMJ in order to encourage wider use of this trove of clinical trial data.

Open Knowledge International
https://okfn.org   
Open Knowledge International is a global non-profit organisation focussing on realising open data’s value to society by helping civil society groups access and use data to take action on social problems. Open Knowledge International addresses this in three steps: 1) we show the value of open data for the work of civil society organizations; 2) we provide organisations with the tools and skills to effectively use open data; and 3) we make government information systems responsive to civil society.

Open Science Prize
https://www.openscienceprize.org/res/p/finalists/
The Open Science Prize  is a collaboration between the National Institutes of Health and the Wellcome Trust, with additional funding provided by the Howard Hughes Medical Institute of Chevy Chase, Maryland.  The Open Data Science Symposium will feature discussions with the leaders in big data, open science, and biomedical research while also showcasing the finalists of the Open Data Science Prize, a worldwide competition to harness the innovative power of open data.

pockets of people / Harvard Library Innovation Lab

we hosted a bunch of amazing visitors earlier this week (knight prototype workshop!) and we were fortunate enough to gather everyone for dinner. after drinks were served, i used my phone’s camera and swooped into each booth aka pocket of people.

swooping into these pockets of people is surprisingly meaningful and rich — i very much get a distinct sense for the vibe/mood/energy at each table. this swoop in and pan pattern is deep.

what should i do with these clips? feels like there’s some coolness here but i can’t seem to grab it. ideas?

ALA seeks nominations for 2017 James Madison awards / District Dispatch

The American Library Association’s (ALA) Washington Office is calling for nominations for two awards to honor individuals or groups who have championed, protected and promoted public access to government information and the public’s right to know.

The James Madison Award, named in honor of President James Madison, was established in 1986 to celebrate an individual or group who has brought awareness to these issues at the national level. Madison is widely regarded as the Father of the Constitution and as the foremost advocate for openness in government.

James Madison Award logo

The Eileen Cooke Award honors an extraordinary leader who has built local grassroots awareness of the importance of access to information. Cooke, former director of the ALA Washington Office, was a tireless advocate for the public’s right to know and a mentor to many librarians and trustees.

Both awards are presented during Freedom of Information (FOI) Day, an annual event on or near March 16, Madison’s birthday.

Nominations should be submitted to the ALA Washington Office no later than January 20, 2017. Submissions should include a statement (maximum one page) about the nominee’s contribution to public access to government information, why it merits the award and one seconding letter. Please include a brief biography and contact information for the nominee.

Send e-mail nominations to Jessica McGilvray, Deputy Director for the ALA Office of Government Relations, at jmcgilvray@alawash.org. Submissions can also be mailed to:

James Madison Award / Eileen Cooke Award
American Library Association
Washington Office
1615 New Hampshire Avenue, NW
Washington, D.C. 20009-2520

The post ALA seeks nominations for 2017 James Madison awards appeared first on District Dispatch.

CopyTalk webinar: Section 108 video project / District Dispatch

Starting in the late 1970s academic libraries built collections of VHS titles with an emphasis on supporting classroom teaching. On average, academic libraries have more than 3,000 VHS tapes.

Eclipsed by robust and rapid adoption of DVDs, the VHS era is now over. But a crisis is looming for libraries. Of the hundreds of thousands of VHS recordings commercially released, a substantial number were never released on DVD or in streaming format. To compound matters, industry experts estimate that various forces converging against VHS (the age of tapes, irreparable and irreplaceable equipment, the retirement of VHS technicians) will ultimately make the format inaccessible by 2027.

Under Section 108 of U.S. copyright law, libraries have a remedy available for this problem. The law allows duplication of some items if, prior to duplication, a reasonable search has determined that an unused copy of the title is not available.

This session presents a cooperative database, established to capture the search efforts for current distribution of VHS video titles, and to identify titles eligible for duplication under Section 108.

Our speaker will be deg farrelly, who has been a media librarian for 40 years, the last 25 at Arizona State University. He has played instrumental roles at multiple companies in the development of streaming video collections and licensing, including the first patron-driven acquisition (PDA), the first subscription and the first evidence-based acquisition (EBA) models. Co-investigator of two national studies, The Survey of Academic Library Streaming Video (2013) and Academic Library Streaming Video Revisited (2015), farrelly writes and presents frequently on issues related to streaming video.

Join us Thursday, December 1 at 2 p.m. Eastern/11 a.m. Pacific for our hour-long free webinar!

Go to http://ala.adobeconnect.com/copytalk/ and sign in as a guest. You’re in.

This free webinar program is brought to you by OITP’s copyright education subcommittee. Space is limited, but all CopyTalk webinars are archived.

The post CopyTalk webinar: Section 108 video project appeared first on District Dispatch.

German DSpace User Group Meeting 2017 / FOSS4Lib Upcoming Events

Date: Thursday, September 21, 2017, 09:00 to 17:00

Following meetings in 2014, 2015, and 2016, we are happy to announce that there will be a fourth German DSpace User Group Meeting in 2017. The German DSpace User Group Meeting 2017 will be organized by Fraunhofer IRB, Stuttgart University Library and The Library Code. It will take place at University of Stuttgart on Thursday, 21st September 2017. So please, save the date!  Further information will be sent out.
Information about the German DSpace User Group may be found here.

MarcEdit Update / Terry Reese

In what’s become a bit of a tradition, I took some of my time over the Thanksgiving holiday to work through a few things on my list and put together an update (posted last night). Updates were made to all versions of MarcEdit and cover the following topics:

Windows/Linux:

* Enhancement: Dedup Records – addition of a fuzzy match option
* Enhancement: Linked Data tweaks to allow for multiple rules files
* Bug Fix: Clean Smart Characters can now be embedded in a task
* Enhancement: MARC Tools — addition of a MARC=>JSON processing function
* Enhancement: MARC Tools — addition of a JSON=>MARC processing function
* Behavior Change: SPARQL Browser updates — tweaks make it simpler at this point, but this will let me provide better support
* Dependency Updates: Updated Saxon XML Engine
* Enhancement: Command-Line Tool: MARC=>JSON; JSON=>MARC processes added to the command-line tool
* Enhancement: continued updates to the Automatic updater (due to my webhost continuing to make changes)
* removal of some deprecated dependencies

Mac OS

* Enhancement: Dedup Records – addition of a fuzzy match option
* Enhancement: Linked Data tweaks to allow for multiple rules files
* Enhancement: MARC Tools — addition of a MARC=>JSON processing function
* Enhancement: MARC Tools — addition of a JSON=>MARC processing function
* Behavior Change: SPARQL Browser updates — tweaks make it simpler at this point, but this will let me provide better support
* Dependency Updates: Updated Saxon XML Engine
* Enhancement: continued updates to the Automatic updater (due to my webhost continuing to make changes)
* Enhancement: Linked data enhancement — allow selective collection processing
* Enhancement: MarcEditor: Smart Character Cleaner added to the Edit ShortCuts menu
* removal of some deprecated dependencies
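
Both changelogs above mention the new MARC=>JSON and JSON=>MARC functions. For readers who want a feel for what MARC-in-JSON output looks like, here is a minimal sketch using the pymarc Python library rather than MarcEdit itself; the file name is a placeholder.

    from pymarc import MARCReader

    # Read binary MARC and print one MARC-in-JSON document per record.
    with open("records.mrc", "rb") as fh:
        for record in MARCReader(fh):
            print(record.as_json())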

A couple of notes about the removal of deprecated dependencies: these were mostly related to a SPARQL library that I’d been using, but having some trouble with due to changes a few institutions have been making. It was mostly a convenience set of tools for me, but it was big and bulky. So, I’m rebuilding exactly what I need from core components and shedding the parts that I don’t require.

A couple of other notes: I’ll be working this week on adding the Edit Shortcuts functionality to the Mac version’s task manager (which will bring the Windows and Mac versions back together). I’ll also be doing a little video recording of some of the new features, just to provide some quick documentation of the changes.

You can download from the website (http://marcedit.reeset.net/downloads) or, assuming my webhost hasn’t broken it, via the automatic downloader. And I should note, the automatic downloader will now work differently: it will attempt a download, but if my host causes issues, it will automatically direct your browser to the file for download following this update.

–tr

70TB, 16b docs, 4 machines, 1 SolrCloud / State Library of Denmark

At Statsbiblioteket we maintain a historical net archive for the Danish parts of the Internet. We index it all in Solr and we recently caught up with the present. Time for a status update. The focus is performance and logistics, not net archive specific features.

Hardware & Solr setup

Search hardware is 4 machines, each with the same specifications & setup:

  • 16 true CPU-cores (32 if we count Hyper Threading)
  • 384GB RAM
  • 25 SSDs @ 1TB (930GB really)

Each machine runs 25 Solr 4.10 instances @ 8GB heap, each instance handling a single shard on a dedicated SSD. The exception is machine 4, which has only 5 shards because it is still being filled. Everything is coordinated by SolrCloud as a single collection.


Netarchive SolrCloud search setup

As the Solr instances are the only things running on the machines, it follows that there is at least 384GB-25*8GB = 184GB of free RAM for disk cache on each machine. As we do not specify Xms, this varies somewhat, with 200GB free RAM on last inspection. As each machine handles 25*900GB = 22TB of index data, the amount of disk cache is 200GB/22TB = 1% of the index size.

Besides the total size of 70TB / 16 billion documents, the collection has some notable high-cardinality fields, used for grouping and faceting:

  • domain & host: 16 billion values / 2-3 million unique
  • url_norm: 16 billion values / ~16 billion unique
  • links: ~50 billion values / 20 billion+ unique

Workload and performance

The archive is not open to the public, so the amount of concurrent users is low, normally just 1 or 2 at a time. There are three dominant access patterns:

  1. Interactive use: The typical request is a keyword query with faceting on 4-6 fields (domain, year, mime-type…), sometimes grouped on url_norm and often filtered on one or more of the facets.
  2. Corpus probing: Batch extraction jobs, such as using the stats component for calculating the size of all harvested material, for a given year, for all harvested domains separately.
  3. Lookup mechanism for content retrieval: Very experimental and used similarly to CDX-lookups + Wayback display. Such lookups are searches for 1-100 url_field:url pairs, OR’ed together, grouped on the url_field and sorted on temporal proximity to a given timestamp.
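
As a rough illustration of access pattern 1, here is what such a request might look like against a plain Solr 4.x select handler, sketched in Python with the requests library. The collection name and the field names (domain, crawl_year, content_type, url_norm) are assumptions, not the archive’s actual schema.

    import requests

    params = {
        "q": "klima",                    # keyword query
        "wt": "json",
        "rows": 10,
        "facet": "true",
        "facet.field": ["domain", "crawl_year", "content_type"],
        "facet.limit": 20,
        "group": "true",
        "group.field": "url_norm",       # collapse revisits of the same URL
        "fq": "crawl_year:2015",         # filter picked from a facet value
    }
    resp = requests.get("http://localhost:8983/solr/netarchive/select", params=params)
    print(resp.json()["grouped"]["url_norm"]["matches"])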

For various reasons, we do not have separate logs for the different scenarios. To give an approximation of interactive performance, a simple test was performed: extract all terms matching more than 0.01% of the documents, use those terms to create fake multi-term queries (1-4 terms) and perform searches for the queries in a single thread.
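
A bare-bones sketch of that benchmark, with a placeholder term list and Solr URL standing in for the real extraction of frequent terms:

    import random, time, requests

    frequent_terms = ["danmark", "skole", "musik", "vejr", "politik"]  # placeholder list
    rng = random.Random(42)

    for _ in range(5):
        # Build a fake 1-4 term query from the frequent-term list and time it.
        q = " ".join(rng.sample(frequent_terms, rng.randint(1, 4)))
        start = time.time()
        r = requests.get("http://localhost:8983/solr/netarchive/select",
                         params={"q": q, "wt": "json", "rows": 0})
        hits = r.json()["response"]["numFound"]
        print("%r: %d hits in %.2fs" % (q, hits, time.time() - start))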


Non-faceted (blue) and faceted (pink) search in the full net archive, bucketed by hit count

The response times for interactive use lie within our stated goal of keeping median response times < 2 seconds. It is not considered a problem that queries with 100M+ hits take a few more seconds.

The strange low median for non-faceted search in the 10⁸ bucket is due to query size (number of terms in the query) impacting search speed. The very fast single-term queries dominate this bucket, as very few multi-term queries gave enough hits to land in it. The curious can take a look at the measurements, where the raw test result data are also present.

Moral: these are artificial queries and should only be seen as a crude approximation of interactive use. More complex queries, such as grouping on billions of hits or faceting on the links field, can take minutes. The worst case discovered so far is 30 minutes.

Secret sauce

  1. Each shard is fully optimized and the corpus is extended by adding new shards, which are built separately. The memory savings from this are large: no need for the extra memory needed for updating indexes (which requires 2 searchers to be temporarily open at the same time), and no need for large segment→ordinal maps for high-cardinality faceting.
  2. Sparse faceting means lower latency, lower memory footprint and less GC. To verify this, the performance test above was re-taken with vanilla Solr faceting.

Vanilla Solr faceting (blue) and sparse faceting (pink) search in the full net archive, bucketed by hit count

Lessons learned so far

  1. Memory. We started out with 256GB of RAM per machine. This worked fine until all 25 Solr JVMs on a machine had expanded up to their 8GB Xmx, leaving ~50GB, or 0.25% of the index size, free for disk cache. At that point performance tanked, which should not have come as a surprise, as we tested this scenario nearly 2 years ago. Alas, quite foolishly, we had relied on the Solr JVMs not expanding all the way up to 8GB. Upping the memory per machine to 384GB, leaving 200GB or 1% of index size free for disk cache, ensured that interactive performance was satisfactory.
    An alternative would have been to lower the heap for each Solr. The absolute minimum heap for our setup is around 5GB per Solr, but that setup is extremely vulnerable to concurrent requests or memory heavy queries. To free enough RAM for satisfactory disk caching, we would have needed to lower the heaps to 6GB, ruling out faceting on some of the heavier fields and in general having to be careful about the queries issued. Everything works well with 8GB, with the only Out Of Memory incidents having been due to experiment-addicted developers (aka me) issuing stupid requests.
  2. Indexing power: Practically all the indexing work is done in the Tika-analysis phase. It took about 40 CPU-core years to build the current Version 2 of the index; in real time it took about 1 year. Fortunately the setup scales practically linearly, so next time we’ll try to allocate 12 power houses for 1 month instead of 1 machine for 12 months.
  3. Automation: The current setup is somewhat hand-held. Each shard is constructed by running a command, waiting a bit less than a week, manually copying the constructed shard to the search cloud and restarting the cloud (yes, restart). In reality it is not that cumbersome, but a lot of time was wasted when the indexer had finished with no one around to start the next batch. Besides, the excessively separated index/search setup means that the content currently being indexed into an upcoming shard cannot be searched.

Looking forward

  1. Keep the good stuff: We are really happy about the searchers being non-mutable and on top of fully optimized shards.
  2. Increase indexing power: This is a no-brainer and “only” a question of temporary access to more hardware.
  3. Don’t diss the cloud: Copying raw shards around and restarting the cloud is messy. Making each shard a separate collection would allow us to use the collections API for moving them around and an alias to access it all as a single collection.
  4. Connect the indexers to the searchers: As the indexers only handle a single shard at a time, they are fairly easy to scale so that they can also function as searchers for the shards being built. The connection is trivial to create if #3 is implemented.
  5. Upgrade the components: Solr 6 will give us JSON faceting, Streaming, SQL-ish support, graph traversal and more. These aggregations would benefit both interactive use and batch jobs. We could do this by upgrading the shards with Lucene’s upgrade tool, but we would rather perform a whole re-index, as we also need better data in the index. A story for another time.
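
As a concrete illustration of point 3, the Collections API already supports this kind of aliasing; the sketch below (collection and alias names invented) shows the single call that would expose per-shard collections as one logical collection:

    import requests

    solr = "http://localhost:8983/solr"
    shard_collections = ["netarchive_shard%d" % i for i in range(1, 81)]

    # CREATEALIAS makes "netarchive" a queryable alias spanning every shard collection.
    requests.get(solr + "/admin/collections", params={
        "action": "CREATEALIAS",
        "name": "netarchive",
        "collections": ",".join(shard_collections),
    })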

 


DPLA and Library of Congress Announce New Collaboration / DPLA

The Library of Congress today signed a memorandum of understanding (MOU) with the Digital Public Library of America to become a Content Hub; it will ultimately share a significant portion of its rich digital resources with DPLA’s database of digital content records.

The first batch of records will include 5,000 items from three major Library of Congress map collections – the Revolutionary War, Civil War, and panoramic maps.

“We are pleased to make the Digital Public Library of America a new door through which the public can access the digital riches of the Library of Congress,” said Librarian of Congress Carla Hayden. “We will be sharing some beautiful, one-of-a-kind historic maps that I think people will really love. They are available online and I hope even more people discover them through DPLA.”

“We couldn’t be more thrilled to collaborate closely with the Library of Congress, to work with them on the important mission of maximizing access to our nation’s shared cultural heritage,” said DPLA’s Executive Director Dan Cohen, “and we deeply appreciate not only the Library’s incredible collections, but also the great efforts of the Librarian and her staff.”

“The Library of Congress’s extraordinary resources will be exponentially more available to everyone in the United States through DPLA. This partnership will benefit everyone, from curious thinkers to scholars,” said Amy Ryan, President of DPLA’s Board of Directors.


“The United States of America laid down from the best authorities, agreeable to the Peace of 1783,” one of 5,000 Library of Congress maps that will soon be discoverable in DPLA. More info

The Digital Public Library of America, the product of a widely shared vision of a national digital library dating back to the 1990s, was launched with a planning process that brought together 40 leaders from libraries, foundations, academia and technology projects in October 2010, followed by an intense community planning effort that culminated in 2013. Its aim was to overcome the silo effect to which many digitization efforts were subject. Based in Boston, the board of directors includes leading public and research librarians, technologists, intellectual property scholars, and business experts from across the nation. Its goal is to create “an open, distributed network of comprehensive online resources that would draw on the nation’s living heritage from libraries, universities, archives, and museums in order to educate, inform, and empower everyone in current and future generations.”

The Library of Congress expects to add a significant portion of its digital items over time, beyond the original trio of map collections, drawing on other collections such as photos, maps and sheet music.

Library of Congress items already appear in the DPLA database. Earlier in this decade, the Library digitized more than 100,000 books in its collections as part of its membership in the HathiTrust and the Biodiversity Heritage Library, both current partners with the DPLA. As a result, those books are already in the DPLA’s collections through those partners.

The Digital Public Library of America strives to contain the full breadth of human expression, from the written word, to works of art and culture, to records of America’s heritage, to the efforts and data of science. Since launching in April 2013, it has aggregated more than 14 million items from more than 2,000 institutions. The DPLA is a registered 501(c)(3) non-profit.

The Library of Congress is the world’s largest library, offering access to the creative record of the United States—and extensive materials from around the world—both on site and online. The Library is the main research arm of the U.S. Congress and the home of the U.S. Copyright Office. Explore collections, reference services and other programs and plan a visit at loc.gov, access the official site for U.S. federal legislative information at congress.gov, and register creative works of authorship at copyright.gov.

Talks at the Library of Congress Storage Architecture Meeting / David Rosenthal

Slides from the talks at last September's Library of Congress Storage Architecture meeting are now on-line. Below the fold, links to and commentary on three of them.

Fontana's 2015 analysis
Robert Fontana updated his invaluable survey of storage media trends with 2015 numbers and more detailed discussion. You need to go read the whole thing; extracts cannot do it justice.

Many of the conclusions he drew are similar to those from my post The Future of Storage and earlier posts:
  • The Kryder rates for tape, hard disk and NAND flash are similar, and in the 20%/yr range. The days of 40%/yr are gone for good.
  • The impact of this change on the costs of storing data for the long haul has yet to sink in. As Fontana says "Storage is more valuable, less replaceable, and must be reliable for longer time periods".
  • No medium's $/GB is outpacing the others by a huge margin, although over time flash is gaining ground.
  • Total Exabytes shipped (per Fontana's exabytes-shipped chart) is increasing linearly, not exponentially, at around 77EB/yr. Storage is not undergoing Mind-Boggling Growth; the IDC numbers for "data generated" have nothing to do with storage demand.
  • Total revenue is about flat, now with more of the dollars going to flash and less to hard disk.
  • Last year flash shipped 83EB and hard disk shipped 565EB. For flash to displace hard disk immediately would require 32 new state-of-the-art fabs at around $9B each, or nearly $300B in total investment. So it is not going to happen.
  • But over the next 4 years Fontana projects NAND flash shipments will grow to 400EB/yr versus hard disk shipments perhaps 800EB/yr. So there will be continued gradual erosion of hard disk market share.
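
To make the Kryder-rate point concrete, here is a small back-of-the-envelope sketch (illustrative rates only, not Fontana's data) of how long a 10x drop in $/GB takes at 20%/yr versus the old 40%/yr:

    # Years for media cost ($/GB) to fall by 10x at a constant Kryder rate.
    for label, rate in [("20%/yr", 0.20), ("40%/yr", 0.40)]:
        years, price = 0, 1.0
        while price > 0.1:
            price *= (1 - rate)
            years += 1
        print("%s: ~%d years for a 10x drop in $/GB" % (label, years))

At 40%/yr the drop takes about 5 years; at 20%/yr it takes more than a decade, which is why the economics of long-term storage now look so different.
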
I've long admired the work of Kestutis Patiejunas on Facebook's long-term storage systems. He and Sam Merat presented An Optical Journey: Building the largest optical archival data storage system at Facebook. They described the collaboration between Facebook and Panasonic to get the prototype optical storage system that attracted attention when it was announced in 2014 into production. They deployed 10s of Petabytes of 100GB BluRay disks and found a disk failure rate of 0.015%. In 2017 they expect to deploy 100s of PB of a second-generation system with 300GB disks, and in 2018 to achieve Exabyte scale with a third generation using 500GB disks. Panasonic showed the production hardware at the meeting.

I've consistently been skeptical of the medium-term prospects for DNA storage, as in my post debunking Nature's reporting on a paper from Microsoft. Karin Strauss and Luis Ceze, from the team behind that paper, presented A DNA-Based Archival Storage System. Despite my skepticism, I believe this team is doing very important work. The reason is the same as why Facebook's work on optical storage is interesting; it is the system aspects not the media themselves that are important.

The Microsoft team are researching what a DNA-based storage system would look like, not just trying to demonstrate the feasibility of storing data in DNA. For example, they discuss how data might be encoded in DNA to permit random access. Although this is useful research, the fact remains that DNA data storage requires a reduction in relative synthesis cost of at least 6 orders of magnitude over the next decade to be competitive with conventional media, and that currently the relative write cost is increasing, not decreasing.

Git for Data Analysis – why version control is essential for collaboration and for gaining public trust. / Open Knowledge Foundation

Openness and collaboration go hand in hand. Scientists at PNNL are working with the Frictionless Data team at Open Knowledge International to ensure collaboration on data analysis is seamless and their data integrity is maintained.

I’m a computational biologist at the Pacific Northwest National Laboratory (PNNL), where I work on environmental and biomedical research. In our scientific endeavors, the full data life cycle typically involves new algorithms, data analysis and data management. One of the unique aspects of PNNL as a U.S. Department of Energy National Laboratory is that part of our mission is to be a resource to the scientific community. In this highly collaborative atmosphere, we are continuously engaging research partners around the country and around the world.

Image credit: unsplash (public domain)

One of my recent research topics is how to make collaborative data analysis more efficient and more impactful. In most of my collaborations, I work with other scientists to analyze their data and look for evidence that supports or rejects a hypothesis. Because of my background in computer science, I saw many similarities between collaborative data analysis and collaborative software engineering. This led me to wonder, “We use version control for all our software products. Why don’t we use version control for data analysis?” This thought inspired my current project and has prompted other open data advocates like Open Knowledge International to propose source control for data.

Openness is a foundational principle of collaboration. To work effectively as a team, people need to be able to easily see and replicate each other’s work. In software engineering, this is facilitated by version control systems like Git or SVN. Version control has been around for decades and almost all best practices for collaborative software engineering explicitly require version control for complete sharing of source code within the development team. At the moment we don’t have a similarly ubiquitous framework for full sharing in data analysis or scientific investigation. To help create this resource, we started Active Data Biology. Although the tool is still in beta-release, it lays the groundwork for open collaboration.


The original use case for Active Data Biology is to facilitate data analysis of gene expression measurements of biological samples. For example, we use the tool to investigate the changing interaction of a bacterial community over time; another great example is the analysis of global protein abundance in a collection of ovarian tumors. In both of these experiments, the fundamental data consist of two tables: 1) a matrix of gene expression values for each sample; 2) a table of metadata describing each sample. Although the original instrument files used to generate these two simple tables are often hundreds of gigabytes, the actual tables are relatively small.

To work effectively as a team, people need to be able to easily see and replicate each other’s work.

After generating data, the real goal of the experiment is to discover something profoundly new and useful – for example how bacteria growth changes over time or what proteins are correlated with surviving cancer. Such broad questions typically involve a diverse team of scientists and a lengthy and rigorous investigation. Active Data Biology uses version control as an underlying technology to ease collaboration between these large and diverse groups.

Active Data Biology creates a repository for each data analysis project. Inside the repository live the data, analysis software, and derived insight. Just as in software engineering, the repository is shared by various team members and analyses are versioned and tracked over time. Although the framework we describe here was created for our specific biological data application, it is possible to generalize the idea and adapt it to many different domains.

An example repository can be found here. This dataset originates from a proteomics study of ovarian cancer. In total, 174 tumors were analyzed to identify the abundance of several thousand proteins. The protein abundance data is located in this repository. In order to more easily analyze this with our R-based statistical code, we also store the data in an Rdata file (data.Rdata). Associated with this data file is a metadata table which describes the tumor samples, e.g. age of the patient, tumor stage, chemotherapy status, etc. It can be found at metadata.tsv. (For full disclosure, and to calm any worries, all of the samples have been de-identified and the data is approved for public release.)

Data analysis is an exploration of data, an attempt to uncover some nugget which confirms a hypothesis. Data analysis can take many forms. For me it often involves statistical tests which calculate the likelihood of an observation. For example, we might observe that a set of genes with correlated expression patterns is enriched for a particular biological process. What is the chance that this observation is random? To answer this, we use a statistical test (e.g. a Fisher’s exact test). As the specific implementation might vary from person to person, having access to the exact code is essential. There is no “half-way” sharing here. It does no good to describe analyses over the phone or through email; your collaborators need your actual data and code.
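
For readers unfamiliar with the test mentioned above, here is a hedged sketch of a Fisher’s exact test for gene-set enrichment using SciPy; the 2x2 counts are invented for illustration, and this is not the project’s actual R code.

    from scipy.stats import fisher_exact

    #                      in process   not in process
    # correlated gene set       30             70
    # rest of the genome       400           9500
    table = [[30, 70],
             [400, 9500]]

    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    print("odds ratio = %.2f, p = %.2e" % (odds_ratio, p_value))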

In Active Data Biology, analysis scripts are kept in the repository. This repository had a fairly simple scope for statistical analysis. The various code snippets handled data ingress, dealt with missing data (a very common occurrence in environmental or biomedical data), performed a standard test and returned the result. Over time, these scripts may evolve and change. This is exactly why we chose to use version control, to effortlessly track and share progress on the project.

We should note that we are not the only ones using version control in this manner. Open Knowledge International has a large number of GitHub repositories hosting public datasets, such as atmospheric carbon dioxide time series measurements. Vanessa Bailey and Ben Bond-Lamberty, environmental scientists at PNNL, used GitHub for an open experiment to store data, R code, a manuscript and various other aspects of analysis. The FiveThirtyEight group, led by Nate Silver, uses GitHub to share the data and code behind their stories and statistical exposés. We believe that sharing analysis in this way is critical for both helping your team work together productively and also for gaining public trust.

At PNNL, we typically work in a team that includes both computational and non-computational scientists, so we wanted to create an environment where data exploration does not necessarily require computational expertise. To achieve this, we created a web-based visual analytic which exposes the data and capabilities within a project’s GitHub repository. This gives non-computational researchers a more accessible interface to the data, while allowing them access to the full range of computational methods contributed by their teammates. We first presented the Active Data Biology tool at Nature’s Publishing Better Science through Better Data conference. It was here that we met Open Knowledge International. Our shared passion for open and collaborative data through tools like Git led to a natural collaboration. We’re excited to be working with them on improving access to scientific data and results.

On the horizon, we are working together to integrate Frictionless Data and Good Tables into our tool to help validate and smooth our data access. One of the key aspects of data analysis is that it is fluid; over the course of investigation your methods and/or data will change. For that reason, it is important that the data integrity is always maintained. Good Tables is designed to enforce data quality; consistently verifying the accuracy of our data is essential in a project where many people can update the data.

One of the key aspects of data analysis is that it is fluid…For that reason, it is important that the data integrity is always maintained.

One of our real-world problems is that clinical data for biomedical projects is updated periodically as researchers re-examine patient records. Thus the metadata describing a patient’s survival status or current treatments will change. A second challenge, discovered through experience, is that there are a fair number of entry mistakes, typos and instances of incorrect data formatting. Working with the Open Knowledge International team, we hope to reduce these errors at their origin by enforcing data standards on entry, and continuously throughout the project.
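
This is not Good Tables itself, just a minimal sketch of the kind of entry-level checks described above, run against a metadata table such as metadata.tsv; the column names and allowed values are assumptions.

    import pandas as pd

    meta = pd.read_csv("metadata.tsv", sep="\t")

    problems = []
    if meta["patient_id"].duplicated().any():
        problems.append("duplicate patient IDs")
    if not meta["vital_status"].isin(["alive", "deceased"]).all():
        problems.append("unexpected vital_status values")
    if (meta["age_at_diagnosis"] < 0).any():
        problems.append("negative ages")

    print(problems if problems else "metadata passes basic checks")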

I look forward to data analysis having the same culture as software engineering, where openness and sharing has become the norm. To get there will take a bit of education as well as working out some standard structures/platforms to achieve our desired goal.

New add-ons: the #makeITopen campaign by 4Science / DuraSpace News

From Susanna Mornati, Head of Operations 4Science

A new opportunity for add-ons, which we call the #makeITopen campaign, could represent a turning point in the Community support services already offered by 4Science.

At 4Science we truly believe in the value of openness and we are committed to releasing everything we develop as open source. In order to do so, we need the Community’s support and participation.

Announcing the German DSpace User Group Meeting 2017 / DuraSpace News

Following meetings in 2014, 2015, and 2016, we are happy to announce that there will be a fourth German DSpace User Group Meeting in 2017. The German DSpace User Group Meeting 2017 will be organized by Fraunhofer IRB, Stuttgart University Library and The Library Code. It will take place at University of Stuttgart on Thursday, 21st September 2017. So please, save the date!  Further information will be sent out.

Information about the German DSpace User Group may be found here.

NOW AVAILABLE: Fedora 4.6.1 and 4.7.0 Releases / DuraSpace News

From Andrew Woods, Fedora Tech Lead

Austin, TX: The Fedora team is proud to announce two releases: Fedora 4.6.1 and 4.7.0. The Fedora 4.7.0 release includes many fixes and improvements, documented below, as well as an upgrade to the backend datastore. Accordingly, Fedora 4.6.1 includes a patch to the underlying ModeShape that generates backups suitable for restore to Fedora 4.7.0.

Spotlighting the value of libraries in Washington, DC / District Dispatch

On November 17, the American Library Association (ALA) partnered with the Internet Association to hold a public session on advancing economic opportunity, targeted to the policy community in Washington, D.C. The session, chaired by ALA President Julie Todaro, was held at the Google DC office. The panel discussion was moderated by a reporter from The Hill and included representatives from Yelp and the Internet Association. The audience included a broad cross-section of Washington policy folks. The event was a great opportunity to educate an important national audience on the role and value of libraries in society.

ALA President-elect Jim Neal was a speaker at this session. In his remarks, Jim concisely articulated the many ways that libraries contribute to national goals and how library values strengthen libraries’ ability to serve the nation’s communities.

Four panelists sitting in chairs on a platform

“Here Comes Everybody” panelists (left to right): Ali Breland (moderator), The Hill; Chris Hooton, Internet Association; Jim Neal, ALA; Laurent Crenshaw, Yelp

Here are Jim’s remarks based on his presentation and responses to questions at the session:

Libraries, 120,000 of all types, public, school, academic, government, corporate, are an essential component of the national information infrastructure, and are critical leaders in their communities. We stand for freedom, learning, collaboration, productivity and accessibility. We are trusted, helping to address community concerns and championing our core values, including democracy, diversity, intellectual freedom and social responsibility.

By bringing together access to technology and Internet services and by providing a wide range of information resources, community knowledge and expert information professionals, 21st century libraries transform communities and lives, promote economic development, bridge the digital divide in this country and are committed to equity of access.

Libraries are centers for research and development. Libraries support literacy, in all of its elements. Libraries are spaces for convening, collaborating and creating. Libraries help people find training and jobs. Libraries are at the core of education and scholarship. Libraries provide access to basic and emerging technologies and the education which enables their effective use.

It has been the practice of the American Library Association to evaluate priorities, and program and funding opportunities in the context of a new Administration and Congress. We must sustain and grow federal funding for universal service, for broadband and wireless deployment in schools and libraries. We must create funding for library and school construction and renovation. We must focus these efforts on underserved communities, in our cities and in our rural areas. Individuals without dependable and open Internet access and without digital skills are clearly at a disadvantage when it comes to economic opportunity and quality of life. We must maintain and expand federal investment in our nation’s libraries.

People come to libraries physically and virtually for a variety of reasons. To read, to learn, to do schoolwork. To find job training and secure employment. To file taxes. To research community services, and health concerns. Libraries support developers, freelancers, contractors, not-for-profit organizations, small business owners, and researchers. Libraries provide materials and services for the print-disabled. Libraries serve the homeless, veterans, immigrants, prisoners, and the many individuals who are seeking to make transitions and to improve their lives.

Libraries need to look beyond the programs and the funding. We must forge radical new partnerships with the first amendment, civil rights, and technology communities to advance our information policy interests and our commitment to freedom, diversity and social justice. We must prepare for the “hard ball” of the policy wars. We must fight for net neutrality, for balanced copyright and fair use, for privacy and confidentiality in the face of expanded national security surveillance, for intellectual freedom and first amendment principles, for voting rights, for the transition of immigrants to citizenship, for the dignity of all individuals. We must fight against hate in all of its bigoted manifestations.

Libraries are about education, employment, entrepreneurship, empowerment and engagement. But we are also about the imperatives of individual rights and freedoms, and about helping and supporting the people in our communities.

The post Spotlighting the value of libraries in Washington, DC appeared first on District Dispatch.

All the Books / Karen Coyle

I just joined the Book of the Month Club. This is a throwback to my childhood, because my parents were members when I was young, and I still have some of the books they received through the club. I joined because my reading habits are narrowing, and I need someone to recommend books to me. And that brings me to "All the Books."

"All the Books" is a writing project I've had on my computer and in notes ever since Google announced that it was digitizing all the books in the world. (It did not do this.) The project was lauded in an article by Kevin Kelley in the New York Times Magazine of May 14, 2006, which he prefaced with:

"What will happen to books? Reader, take heart! Publisher, be very, very afraid. Internet search engines will set them free. A manifesto."

There are a number of things to say about All the Books. First, one would need to define "All" and "Books". (We can probably take "the" as it is.) The Google scanning projects defined this as "all the bound volumes on the shelves of certain libraries, unless they had physical problems that prevented scanning." This of course defines neither "All" nor "Books".

Next, one would need to gather the use cases for this digital corpus. Through the HathiTrust project we know that a small number of scholars are using the digital files for research into language usage over time. Others are using the files to search for specific words or names, discovering new sources of information about possibly obscure topics. As far as I can tell, no one is using these files to read books. The Open Library, on the other hand, is lending digitized books as ebooks for reading. This brings us to the statement that was made by a Questia sales person many years ago, when there were no ebooks and screens were those flickery CRTs: "Our books are for research, not reading." Given that their audience was undergraduate students trying to finish a paper by 9:30 a.m. the next morning, this was an actual use case with actual users. But the fact that one does research in texts one does not read is, of course, not ideal from a knowledge acquisition point of view.

My biggest beef with "All the Books" is that it treats them as an undifferentiated mass, as if all the books are equal. I always come back to the fact that if you read one book every week for 60 years (which is a good pace) you will have read 3,120 books. Up that to two books a week and you've covered 6,240 of the estimated 200-300 million books represented in WorldCat. The problem isn't that we don't have enough books to read; the problem is finding the 3,000-6,000 books that will give us the knowledge we need to face life, and be a source of pleasure while we do so. "All the Books" ignores the heights of knowledge, of culture, and of art that can be found in some of the books. Like Sarah Palin's response to the question "Which newspapers form your world view?", "all of them" is inherently an anti-intellectual answer, either by someone who doesn't read any of them, or who isn't able to distinguish the differences.

"All the Books" is a complex concept. It includes religious identity; the effect of printing on book dissemination; the loss of Latin as a universal language for scholars; the rise of non-textual media. I hope to hunker down and write this piece, but meanwhile, this is a taste.

New judicial rule poses massive privacy threat / District Dispatch

Ever hear of Rule 41 of the Federal Rules of Criminal Procedure? Neither has practically anyone else, including Members of Congress. Unless Congress says “wait” before Dec. 1, the amended Rule will grant federal law enforcement authorities sweeping new powers to remotely hack into computers or computer systems (maybe yours or your library’s) to neutralize a cybersecurity threat that they think those computers are helping to distribute.

Computer code with the words Data Breach and Cyber Attack highlighted

Source: http://www.globalresearch.ca/wp-content/uploads/2016/11/cyber-attack-data-breach.jpg

Congress has until just December 1 to pass a bill delaying the effective date of the new Rule so it can at least hold a hearing on the intended and unintended consequences of this potentially disruptive, privacy-invasive change. That bill is the “Review the Rule Act” (S.3475; H.R.6341), which would delay the implementation of changes to Rule 41 until July 1, 2017.

There’s still time to stop Rule 41 from going into effect without Congressional scrutiny, but not much! Please, tell your Congressperson and both of your Senators to cosponsor and vote for the Review the Rule Act without delay.

(For more details about Rule 41 and the serious problems that its adoption could produce, please see this November 21 article in The Hill newspaper and the letter just sent to House and Senate leaders by ALA and 25 others to which it refers.)

The post New judicial rule poses massive privacy threat appeared first on District Dispatch.

Want to buy a can opener? / Peter Murray

This has to be among the weirdest pieces of unsolicited mail I’ve ever received. Nigerian prince? That is so yesterday. Virtual pharmacy? Too much effort. No, what we want to sell you is a can opener!

Hi Sir/Madam,

Glad to hear that you’re on the market for can opener.

We specialize in this field for 5 years, with good quality and pretty competitive price. Also we have our own professional designers to meet any of your requirements.

Should you have any questions, call me, let’s talk details.

Yours Sincerely,

Jenny

EAST VIGOR TRADE CO., LTD
ADDRESS: NO.30, FUAN STREET, JIANGNAN NEW TOWN, YANGJIANG, GUANGDONG, CHINA
TEL: 86-662-3670199 / 3666679
FAX: 86-662-3666679

This came through the general information account at my employer, Index Data. Needless to say, as a boutique software development firm specializing in creating applications for libraries we are not in the market for a can opener.

Curious, I tried to find the company — it is in the center of this map on Fu’an St:


That is a highly packed area! I don’t think what Google thinks are roads are actually there. Looking at the map makes me wonder what life is like in that area.

I searched their website for can openers, but I couldn’t find one. They do have knives and cookie cutters and silicone kitchen utensils, so I wonder if can openers are something they are looking to expand into.

Russian spice tea / Coral Sheldon-Hess

My house smells amazing right now, because I am making my great-grandmother’s Russian spice tea, to bring to my friends’ Thanksgiving dinner. It’s a delightful winter party drink, and it’s also good to make if you have a household full of people fighting a cold.

And, speaking of cold and flu season, because the recipe was already on my mind, I also made the instant version, so that we can drink it all winter. It’s good when you’re feeling well, but it feels like magic when you are stuffy or have a bit of a sore throat. (You could throw in more cloves for extra numbing effect—who needs Chloraseptic?)

The instant version isn’t something you’d confuse for the real thing. It’s a tasty beverage in its own right, but it is different from the fresh version. It’s good to have on hand, though: I don’t know about you, but I don’t always have the energy to juice 11 citruses. (With arthritis, I don’t actually ever; Dale juiced more than half of the citruses today.)

I’m sharing both recipes here, so that you can also enjoy one or both of these winter beverages. (Fair warning: they both contain a fair bit of sugar.)

Nanny’s Russian spice tea

(with a few modifications by Coral)

Ingredients:

  • 8 cups water
  • 3/4 tsp ground cloves
  • 1 tsp allspice
  • 1/2 tsp ginger
  • 2 cinnamon sticks (more would be OK)
  • 3 bags of black tea
  • 3 lemons
  • 8 oranges
  • 2 cups sugar

Directions:

Juice the citruses. (Keep the peels, and you can use them for candied citrus peels. Don’t listen to Martha Stewart; you don’t have to include grapefruits, and lemon and lime peels are both great, candied.)

Put the spices and the water into a pot, and bring to a boil.

Remove from heat, add the tea bags, and let it stand for 10 minutes.

Add the citrus juice and sugar.

Enjoy hot. If you don’t drink it all right away, it’ll last in the refrigerator for a few days and can be reheated by the mugful.

Instant Russian spice tea

(with a few modifications by Coral)

Ingredients:

  • 2 cups of Tang powder
  • 1 pouch of Wyler’s lemonade mix (I used 6 of those little “make a bottle of water into lemonade” pouches, and that worked fine)
  • 1.5 cups of unsweetened instant tea powder
  • 3 tsp cinnamon
  • 1 scant tsp cloves
  • 1/2 tsp allspice
  • 1/2 tsp ginger (more would be fine)
  • 3/4 cup sugar

Directions:

Mix all of the ingredients up. Store in an airtight container (or several — one for home, one for work, etc.).

To drink: Mix two heaping teaspoons per cup of hot water.

 

(The image from the post header shows both types of Russian spice tea, together.)

Django project update / Brown University Library Digital Technologies Projects

Recently, I worked on updating one of our Django projects. It hadn’t been touched for a while, and Django needed to be updated to a current version. I also added some automated tests, switched from mod_wsgi to Phusion Passenger, and moved the source code from subversion to git.

Django Update

The Django update didn’t end up being too involved. The project was running Django 1.6.x, and I updated it to the 1.8.x LTS release. Django migrations were added in Django 1.7, and as part of the update I added an initial migration for the app. In my test script, I needed to add a django.setup() call for the new Django version, but otherwise there weren’t any code changes required.
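
For anyone making a similar jump, here is a minimal sketch of the standalone-script pattern: under Django 1.7+ a script that uses the ORM outside manage.py needs django.setup() after pointing at the settings module. The project and app names below are placeholders, not this project’s actual code.

    import os
    import django

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    django.setup()  # required since Django 1.7; populates the app registry

    # Only import models after setup() has run.
    from myapp.models import Item
    print(Item.objects.count())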

Automated Tests

This project didn’t have any automated tests. I added a few tests that exercised the basic functionality of the project by hitting different URLs with the Django test client. These tests were not comprehensive, but they did run a significant portion of the code.
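
A sketch of that style of smoke test (placeholder URLs, not the project’s actual routes): hit a few pages with Django’s test client and assert on the status codes.

    from django.test import TestCase

    class BasicViewTests(TestCase):
        def test_home_page_loads(self):
            response = self.client.get("/")
            self.assertEqual(response.status_code, 200)

        def test_search_page_loads(self):
            response = self.client.get("/search/?q=test")
            self.assertEqual(response.status_code, 200)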

mod_wsgi => Phusion Passenger

We used to use mod_wsgi for serving our Python code, but now we use Phusion Passenger. Passenger lets us easily run Ruby and Python code on the same server, and different versions of Python if we want (e.g. Python 2.7 and Python 3). (The mod_wsgi site has details of when it can and can’t run different versions of Python.)
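
For a Django app, Passenger’s Python entry point is conventionally a passenger_wsgi.py that exposes a module-level application callable; here is a minimal sketch, with a placeholder project name.

    # passenger_wsgi.py
    import os
    from django.core.wsgi import get_wsgi_application

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    application = get_wsgi_application()  # Passenger looks for this name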

Subversion => Git

Here at the Brown University Library, we used to store our source code in subversion. Now we put our code in Git, either on Bitbucket or Github, so one of my changes was to move this project’s code from subversion to git.

Hopefully these changes will make it easier to work with the code and maintain it in the future.

Tax Forms Outlet Program deadline approaches / District Dispatch

Close up image of a US tax form, available to libraries through the Tax Forms Outlet Program

Source: 401(K) 2012

Libraries participating in the Tax Forms Outlet Program (TFOP) for filing season 2017 should be aware that orders for 2016 tax forms can now be placed. The deadline to submit orders is November 28, 2016.

TFOP participants can submit orders via email to pdf.orders@eforms.enterprise.irs.gov using Form 8635, Order for Tax Forms Outlet Program (TFOP). To ensure that you receive the products once they have been printed and are ready to be released, the IRS recommends that you order as early as possible.

If you need assistance with placing your order, please contact the IRS TFOP Administrator at WI.TFOP.Administrator@irs.gov.

The post Tax Forms Outlet Program deadline approaches appeared first on District Dispatch.

Conference Report: Digital Library Federation 2016 Forum / Library of Congress: The Signal

The Digital Library Federation (DLF) 2016 Forum was held alongside the DLF Liberal Arts Colleges Pre-Conference and Digital Preservation 2016 this year from November 6-10 at the Pfister Hotel in Milwaukee, Wisconsin.

Self-described as a “meeting place, marketplace, and congress” of digital librarians from member institutions and the wider community, the conference, under the leadership of Bethany Nowviskie, set a welcome precedent of accessibility and inclusivity this year. As registration began, DLF released a major revision to their Code of Conduct, expanding the statement to include appropriate models of behavior for the event (such as giving the floor to under-represented viewpoints) and detailing what behaviors may qualify as harassment (such as “sustained disruption of talks or other events”).

Other efforts included publishing a guide to creating accessible presentations, encouraging DLF community members to vote on the program, offering the option to list a preferred gender pronoun on conference name tags and sponsoring an Ally Skills Workshop that taught “simple, everyday ways to support women in their workplaces and communities” that took place on November 8th.

This ethical intentionality set the tone of the forum, whose keynotes and panels resonated around a central theme of professional self-critique and care. What are our responsibilities as digital librarians? What can we do better? How do our actions reflect who we care for and who we don’t?

Jarret M. Drake, Digital Archivist at Princeton University’s Seeley G. Mudd Manuscript Library, opened the DLF Liberal Arts Colleges Pre-Conference with his keynote “Documenting Dissent in the Contemporary College Archive: Finding our Function within the Liberal Arts,” in which he argued that college archives should document student protests and activist efforts that are critical of the campus in an effort to stop re-occurring injustices.

Stacie Williams, the Learning Lab Manager at the University of Kentucky’s Special Collections Research Center, gave the DLF Forum keynote titled “All Labor is Local.” Speaking from her experience as a mother of two, Williams highlighted the necessity of care work in making all other types of labor possible. She called for all librarians to evaluate their organizations, tools and systems with a caregiver’s approach: do they meet basic needs, sustain societal functionality and/or alleviate pain? Williams urged libraries to prioritize our impact on local communities, to stop unpaid and underpaid digitization labor, and to stop exploiting student labor in general. She highlighted Mukurtu, an open-source content management system that empowers indigenous communities to manage their digital heritage on their own terms, as an example of a care-based project that embodied these ideals.

Bergis Jules, the University and Political Papers Archivist at the University of California, Riverside library, opened Digital Preservation 2016 with his keynote “Confronting Our Failure of Care Around the Legacies of Marginalized People in Archives,” in which he pointed to a library profession, archives and funding agencies dominated by white perspectives that have failed to care for the legacies of marginalized groups and that share responsibility for the eradication and/or distortion of those legacies. He praised community archive projects such as the Digital Transgender Archive, A People’s Archive of Police Violence in Cleveland, and The South Asian American Digital Archive as models we should look to as we evaluate how our own collections represent or silence marginalized groups.

National Digital Initiatives is interested in approaches to computational use of library collections, and we were pleased to see a large representation of digital scholarship-themed panels at the forum. Much like at the Collections as Data symposium we hosted earlier this year, practitioners focused on people over tools in their presentations. At the #t2d panel, “Managing Scope and Scale: Applying the Incubator Model to Digital Scholarship,” librarians from UCLA, the University of Nebraska-Lincoln, the University of Michigan and Florida State described their efforts to build digital scholarship communities on campus and facilitate research projects. Programs such as Florida State University’s Project Enhancement Network and Incubator (PEN & Inc.) and UCLA’s Digital Research Start-Up Partnerships for Graduate Students (DREsSUP) enroll faculty, librarians and graduate students in collaborative projects that encourage mutual skill-building and ongoing mentorship. Representatives from the University of Nebraska-Lincoln, who have graduated three cohorts from their Digital Scholarship Incubator program to date, also discussed the shared challenge of balancing the need for iterative, responsive support for fellows with a set curriculum.

Paige Morgan, Digital Humanities Librarian at the University of Miami, and Helene Williams, Senior Lecturer at the University of Washington Information School, presented the results of their quantitative investigation of the role of digital humanities (DH) & digital scholarship (DS) during the #t5b: DH panel. Studying job ads from 2009 to 2016, they found skills for these types of positions have changed and expanded significantly over time, from an emphasis on digitization and databases (2010) to data management, analysis, project management and understanding the scholarly communication process (2016). Skills associated with copyright and rights management consistently increased over the period.


Morgan’s tweet featuring the Tableau visualization of DH/DS competencies by Morgan and Williams. For the full worksheet, click here.

Later in this session, Matt Burton and Aaron Brenner from the University of Pittsburgh used adult learning theory to ground their talk “Avoiding techno-service-solutionism.” Organizations that want to cultivate a DH culture among staff, they argued, should shift their thinking toward mutual inquiry and a provision of differences, moving from a bounded-set (“in” and “out”) model of thinking to a centered set that assumes everyone is on a vector heading toward a DH future.


See their slide deck and accompanying references here.

This visualization of membership models struck me as representative of the forum as a whole. Many attendees at DLF pushed back against a static, inherently exclusive definition of librarianship, illustrating instead, either literally or by example, a dynamic definition that recognizes we are all on the same vector and must use our work to facilitate meaning-making and care with each other and our communities.

To see NDI’s presentation at Digital Preservation 2016, see this Signal post.

The next DLF forum will be held from October 23-25 in Pittsburgh, Pennsylvania.

 

Initiatives at the Library of Congress (Digital Preservation 2016 Talk) / Library of Congress: The Signal

Here’s the text of the presentation I gave during the Initiatives panel at Digital Preservation 2016, held in collaboration with the DLF Forum on November 10, 2016. This presentation is about what the National Digital Initiatives division has been up to in FY16 and what’s coming up in FY17. For a report on the DLF Forum, see this Signal post.

NDSA_slide1
Hello! I’m Jaime Mears and I am from the Library of Congress, with National Digital Initiatives, a division of National and International Outreach.

Among our goals, we are looking for strategic partnerships that help increase access, awareness and engagement with our collections. So, as I present what we’ve done and what we plan to do, please think about whether there may be any synergies with efforts at your own institution.

NDSA_slide2

National Digital Initiatives is a small and agile team created in 2015 to maximize the benefit of the Library of Congress' digital collections. Kate Zwaard, our chief, previously managed the Digital Repository Development team and led efforts to ingest three petabytes of digital collections. You may have come across some of Mike Ashenfelder's communications work for the Library on The Signal, and Abbey Potter was formerly an NDSA organizer and a program officer for NDIIPP. I'm the newest member of the group; I was hired fresh off of my time as a National Digital Stewardship Resident, where I built a lab for personal digital archiving.

NDSA_slide3

I'm not new to the Library, though. Back in 2012, I worked as an intern for about eight months in the Manuscript Division, where I had the privilege of helping process the Charles and Ray Eames collection. The Eameses were the makers of the Eames chair and various modern works of art across formats like film, graphic design and architecture.

In an effort to impose order at Ray’s office at their design studio in California, Ray’s team used to clear her desk periodically, put everything in a shopping bag, large envelope, basket, silk scarf or whatever was handy, and slap a piece of masking tape on it with the date. In processing the collection at the Library, the decision was made to preserve these bundles as Andy Warhol-esque time capsules.

It was my job to go through these bags and essentially help make them accessible for researchers. The strategy was to group together provocative pieces, and pieces indicative of the color palettes or design themes she was collecting at the time, into arrangements that would entice someone to investigate the rest: sort of a visual index into the capsules.

NDSA_slide5

Four years, three library jobs and one MLS degree later, I'm back at the Library of Congress, and I recognize that even in this new role I'm still doing something very similar with the National Digital Initiatives team: provoking exploration, getting users to engage with the Library's material and staff, and surfacing important work that often goes unseen.

NDSA_slide6

We try to highlight digital projects happening across the Library. For example, here is the debut of the Library's new homepage, the first of many rollouts of a massive site redesign.

NDSA_slide7

This is Natalie Buda Smith, the supervisor of our User Experience team, which is working on the redesign. I had the pleasure of interviewing her for The Signal last month, and we discussed what it's like to do a redesign like this for the Library of Congress, which has had a website for over 20 years. Because the Library of Congress was such an early implementer of digital collections, the growing pains of refreshing the interface and making the collections easily accessible can't be overstated. One of her goals is bringing a sense of the joy of discovery to the user, and part of her team's strategy for the homepage was to tell a story as a way of showing that we are more than just our holdings.

NDSA_slide8

So what other strategies does NDI use to tell stories, to provoke, to invite? How do we show that the Library is more than just a collection of things in specific subjects?

NDSA_slide10

There are multiple strategies; I can't call them buckets because they're not mutually exclusive. In fact, they tend to build on one another and bleed together, and, most importantly, all depend on multiple stakeholders both inside and outside of the Library to be successful.

NDSA_slide11

We host events, we highlight collections, we highlight our efforts and those of our colleagues both internally and externally, we develop programs, we facilitate experimentation with and investigation of our digital collections (usually the final product is a report or an example project), and we partner with outside GLAM practitioners. In time, we hope to include data journalists, artists, local community organizations and lifelong learners. Our team is dedicated to a "rising tide lifts all boats" philosophy.

I will discuss six initiatives that we undertook this year and then, at the end, lay out our game plan for FY17.

NDSA_slide12

We began with a basic question: "Who out there is using our collections?" Traditionally, most units have answered this through monthly web metrics reports, but we wanted profiles that were more detailed, to give us some direction in our outreach. We partnered with our communications office and Library Services, and came up with three priority areas we wanted profiled: two about types of audiences (undergraduate and graduate students, and writers and creative professionals) and one about content (who was using our public domain and rights-cleared content).

We hired a marketing firm to build profiles of these priority areas, and we learned about important user-behavior trends and ways that we could target these groups. We will use the information from the audience analysis to help design and execute a digital-outreach campaign for existing and upcoming LC digital collections.

NDSA_slide13
To facilitate experimentation with and investigation of some of our dark archives, we partnered with Library of Congress Web Services, the Law Library, and the non-profit Archives Unleashed team (Ian Milligan, Matthew Weber and Jimmy Lin). Together we hosted the Library's first hackathon.

NDSA_slide14

Scholars came from around the world and had two and a half days to partner up, form a query, choose data sets to investigate and present their findings. You can read about this event on The Signal.

NDSA_slide15

This was the first time some of our web archives had ever been used by researchers. Here you see a word cloud that one team generated from the Supreme Court nominations web archive. We discovered that there were both a Justice Roberts and a Senator Roberts, which threw off our text-mining efforts. Law Librarian Andrew Weber was on this team and gave the researchers context for why they were seeing skewed results in the word cloud. That was a great example of engaging curators with researchers.
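To make the Roberts problem concrete, here is a minimal, hypothetical sketch (not the team's actual code) of why raw word frequencies, the kind a word cloud is drawn from, lump the two Robertses together, and how counting title-plus-surname pairs keeps them apart.

    # Why a frequency-based word cloud conflates Justice Roberts and Senator Roberts
    # (invented sentences; not the hackathon team's data or code).
    from collections import Counter
    import re

    docs = [
        "Justice Roberts presided over the hearing.",
        "Senator Roberts questioned the nominee.",
        "Senator Roberts issued a statement on the nomination.",
    ]
    tokens = [t for d in docs for t in re.findall(r"[a-z]+", d.lower())]

    # Unigram counts merge both people into one big "roberts" entry.
    print(Counter(tokens)["roberts"])        # 3

    # Title + surname bigrams separate the two figures.
    bigrams = Counter(zip(tokens, tokens[1:]))
    print(bigrams[("justice", "roberts")])   # 1
    print(bigrams[("senator", "roberts")])   # 2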

NDSA_slide16

We co-hosted DPLAfest 2016 with the National Archives and Records Administration and the Smithsonian Institution.

NDSA_slide17

We hosted a summit in September called Collections as Data. You may have seen it on Twitter under the hashtag #AsData. We invited leaders and experts from organizations that are collecting, preserving, using and providing researchers access to digital collections that are used as data, and we asked them to share best practices and lessons learned. The event featured speakers such as data artist Jer Thorp of the Office for Creative Research, data curator Thomas Padilla of UC Santa Barbara, Maciej Ceglowski of Pinboard and Marisa Parham, director of the Five College Digital Humanities Project.

NDSA_slide18

Data humanism surfaced as a uniting theme of the day’s talks. Although data is a human invention, and one that is often very personal, it can be used to:

  • dehumanize people through data collection without consent or collaboration
  • create biased metadata and technical barriers to entry
  • produce design that is falsely interpreted as neutral.

Solutions suggested during the day included collecting and describing data sets with creators, cultivating diverse user communities to benefit from the data, and being transparent about decisions taken by libraries as they make these sets available.

So far, the archived video stream has been viewed over 8,000 times, so we feel the conference resonated with a lot of people who are thinking about how to support data scholarship.

NDSA_slide19

The day after the conference, we brought together 30 Library of Congress staff members and 30 invited guests to discuss how the Library could improve support for data scholarship. We are currently working on some of those recommendations and looking into partnerships. Next month we will publish a report on the conference by Thomas Padilla, as well as a series of visualizations by Oliver Bendorf.

NDSA_slide20

We opened an internal call for staff to apply to experiment with Library of Congress digital collections on a part-time, temporary assignment.

NDSA_slide21

We had a number of colleagues apply, and we chose Tong Wang, a repository developer, and Chris Adams, a developer from the World Digital Library, to be our inaugural fellows.

Seeing our developers use the privilege of time to explore the collections and play with them was a really joyous part of this year for us. We hope to expand the program to a wider, external pool of applicants next round.

NDSA_slide22

We asked two outside experts to do a proof-of-concept for a digital scholars lab in partnership with the Library’s John W. Kluge Center. The Kluge Center hosts a number of senior scholars and post-doctoral fellows, including most recently Dame Wendy Hall. Our goal in this pilot is to demonstrate what a lightweight implementation of using collections as data could look like. There will be a workflow to demo with some of our web archives.

NDSA_slide23

If you've noticed a theme rising out of these initiatives to increase engagement with our collections, you wouldn't be wrong. Our focus right now is enabling computational use of our digital collections, although we foresee these themes changing from year to year, depending on what's happening in the GLAM community at large and on where we can be most helpful internally at the Library.

So what's happening now and what's coming up?

Library Innovation Fellowship: As you heard before, we opened this up internally last year, but we are currently investigating funding approaches and hope to complete the fellowship this year.

Digital Scholars Lab Implementation with the Kluge Center: Final report and pilot for the Digital Scholars Lab will be done by the end of December, and we will leverage those recommendations to begin piloting enhanced support for digital scholarship in the Kluge Center.

Lab Site Visits: We visited MITH, DCIC and the University of Richmond's Digital Scholarship Lab, and we will continue to visit labs in libraries, archives, museums and media organizations along the East Coast throughout the year to learn how they are serving collections as data and engaging their communities.

Hackathon: We are currently co-organizing a hackathon for the spring that will include an introductory workshop on analytical tools and methods.

Annual Summit: Following on the success of the Collections as Data summit held in September 2016, NDI will host another conference in a similar style. We’d love to hear suggestions about topics.

Architecture Design Environment Summit: We will partner with other Library of Congress divisions to host a symposium next fall exploring preservation and access of Architecture, Design, and Engineering software and file formats.

Partnership Development: Last and most important, we are looking for opportunities to collaborate. Do you have a lab that we could visit? Are you or your colleagues interested in co-hosting a hackathon with us? Are you also looking to enhance access to your digital collections and want to connect? Talk to us. As you've seen, everything we've accomplished has been through collaboration.

You can email us at ndi@loc.gov. You can also check us out on The Signal as we write about our initiatives and other cool digital projects happening at the Library and in the broader GLAM community. Thanks so much.

NDSA_slide25

 

OpenBudgets.eu launches collection of fiscal transparency tools for journalists and civil society organisations / Open Knowledge Foundation

Berlin, November 21, 2016 – Today, the beta version of OpenBudgets.eu is officially released to the public. The Horizon 2020-funded project seeks to advance transparency and accountability in the fiscal domain by providing journalists, CSOs, NGOs, citizens and public administrations with the state-of-the-art tools needed to effectively process and analyze financial data.

For the beta version, we have developed tools around the three pillars of the project: data analytics, citizen engagement, and journalism. In the realm of data analytics, we present a time series forecasting algorithm that integrates with OpenSpending and predicts and visualizes the development of budgets into the future. As for citizen engagement, the participatory budgeting interface lets users preview the interaction with the budget allocation process. Finally, the highly praised ‘budget cooking recipes’ website highlights the journalistic value of budget data by listing cases in which it has been used to investigate corruption.
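As a rough illustration of the forecasting idea (this is not the OpenBudgets.eu algorithm, and the budget figures are invented), a simple trend fitted to past annual totals can be extrapolated forward:

    # Fit a linear trend to past budget totals and extrapolate two years ahead
    # (hypothetical figures in millions of euros; illustration only).
    import numpy as np

    years = np.array([2011, 2012, 2013, 2014, 2015, 2016])
    budget_m_eur = np.array([40.2, 41.0, 42.5, 43.1, 44.8, 45.5])

    slope, intercept = np.polyfit(years, budget_m_eur, deg=1)
    for year in (2017, 2018):
        print(year, round(slope * year + intercept, 1))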


Tools, data and stories will be continuously added and improved over the coming months as three large-scale pilot scenarios in the domains of participatory budgeting, data journalism and corruption prevention are launched to gain further insights. These insights will feed into the overall platform for fiscal data.

OpenBudgets.eu develops tools for the analysis of fiscal data. On the fiscal data platform, users can upload, visualise, analyse and compare financial data. Specific tools will be offered to our target audiences: municipalities, participatory budgeting organisations and journalists. Municipalities can use micro-sites to publish their budget and spending data on designated websites, participatory budgeting organisations can use decision-making and monitoring tools to support the process, and journalists are provided with tailor-made tooling and tutorials.

OpenBudgets.eu has launched a call for tender for the improvement of transparency and modernization of budget and spending data directed at municipalities, regional governments, and qualified legal entities. Find more information on the website.

OpenBudgets.eu is an EU-funded project run by an international consortium of nine partners:
Open Knowledge International, Journalism++, Open Knowledge Greece, Bonn University, Fraunhofer IAIS, Open Knowledge Foundation Deutschland, Fundación Civio, Transparency International-EU, and University of Economics, Prague.

Press contact:
Anna Alberts – Open Knowledge Foundation Germany
openbudgets.eu | @OpenBudgetsEU
anna.alberts@okfn.de

Read the blog post on the prototype launch of OpenBudgets.eu: http://openbudgets.eu/post/2016/11/18/OBEU-prototype-launch/

Press releases available in other languages: http://openbudgets.eu/press/

Library Lockdown: An escape room by kids for the community / In the Library, With the Lead Pipe


In Brief

Hoping to bring the unexpected to Nebraska City, the Morton-James Public Library applied for an ALA Association for Library Service to Children Curiosity Creates grant to undertake an ambitious project: build an escape room. In a library storage room. With children. The hope was that by trying something completely different, we could increase interest in the library throughout the community and build a sense of ownership in the participants, while encouraging creativity and having a lot of fun. Library Lockdown was a four-month program that brought several dozen kids, ages 8 to 13, together to build a fully functioning escape room. Their creation, the Lab of Dr. Morton McBrains, is now open for business.

Introduction

It all began with a "what if?" and a "why not?" Well, really it started with a large storage room and a grant solicitation. The result was a transformation not only of a space in the library, but of the library's space in the community. In the spring of 2016, we guided a group of kids in building an escape room in the Morton-James Public Library. It was an extraordinarily fun (and time-consuming!) project, and while our goals were mainly focused on what the participants and the community would take away from it, we were probably the ones who learned the most.

We shared some of our reflections in a short Library Journal article (Thoegersen & Thoegersen, 2016), but wanted to share our experiences in more depth. Our hope is that, after reading this, you will be inspired to create your own flavor of Library Lockdown in your own library. But first, a little exposition.

The Place


Aerial View of Morton-James Public Library, Morton-James Public Library (CC-BY 4.0)

Morton-James Public Library serves the community of Nebraska City, in southeast Nebraska. Nebraska City has a population of 7,289 (U.S. Census Bureau, 2014). The population is predominantly white (91.5%) and 10.9% Hispanic/Latino. The percentage of people with income below the poverty level was 15.1%, which is higher than the percentage for the state of Nebraska (12.9%) but comparable to the United States as a whole (15.6%).

Built in 1897 (with additions in 1933 and 2002), the public library building is beautiful and historic; it’s listed on the National Register of Historic Places (Nebraska State Historical Society, 2012). It can also seem a bit of a maze and has lots of nooks and crannies. One such nook was a rather large storage room full of used books, holiday decorations, and a miscellany of craft supplies. This room also had some water damage and moldy carpet that needed to be replaced. It was clearly in need of some love and perhaps a new purpose, as well.

The Project

In August 2015, we became aware of the Curiosity Creates grants administered by the Association for Library Service to Children (ALSC) division of the American Library Association (ALA). Thanks to a donation from Disney, over seventy $7,500 grants were to be awarded to libraries in order "to promote exploration and discovery for children ages 6 to 14" through programming that promotes creativity (American Library Association, 2015). The grant could be used to grow an existing library program or to develop completely new programming. Grant recipients were notified in October 2015 and were expected to implement their project and complete a project report by May 31, 2016.

Inspired by the availability and purpose of this grant, the recent popularity of escape rooms around the world, and the breakoutEDU movement, Rasmus hit upon an intriguing idea: What if we turned this neglected storage room into an escape room? And why not give kids a chance to build it themselves?

Interlude: So, what is an escape room?

The escape room concept was created by 35-year-old Takao Kato in Kyoto, Japan in 2007 as a way to have a real-life adventure like those he encountered in literature as a child (Corkill, 2009). Escape rooms are like a real-life video game, where you and your friends are "locked"1 in a room and must search for clues and solve puzzles to determine how to get out. "In order to escape the room the player must be observant of his/her surroundings and use their critical thinking skills as well as elements in the room to aid in their escape" (The Escape Game Nashville, 2015). Escape rooms come in many shapes and sizes but generally include the following:

  1. A story
  2. Puzzles, clues, and riddles
  3. A time limit
  4. A lot of fun

Library Lockdown

Thus the idea of Library Lockdown was born. The plan was for the program to run during the spring of 2016. The group of kids would meet at the library weekly, first learning about escape rooms and puzzles, and then creating the story, decorations, and puzzles for one of their own. After being awarded the ALSC grant in October 2015, library staff went to work clearing out the storage room. The carpet was replaced with the help of a separate, local grant. Library Lockdown was advertised in person by circulation staff, in the local paper, and in local schools. Potential participants were asked to fill out a registration form and a photo waiver.

Based on the registration forms received, Saturdays were selected as the weekly meeting time. The first meeting was February 13th, and two dozen kids showed up. We had meetings every Saturday until the grand opening on May 25th.2

The number of kids attending each meeting varied from ten to nearly thirty, with usually around fifteen present. Nearly three dozen kids participated in at least one meeting, but there was a core group of about ten that attended the majority of the meetings, which provided continuity from week to week.

The format for the meetings was generally five to ten minutes of introducing the day's activities and forty-five to sixty minutes of work, followed by lunch (paid for by the grant). The first few weeks were focused on having the kids solve puzzles on a theme (appropriately, the first week's theme was Valentine's Day).

Children playing with cards to illustrate one of the puzzles in the lockdown room.

During week three, the kids selected the theme for the room (zombies) and also decided they wanted to make a zombie movie that would play before and during an escape room run. Then they began making puzzles themselves and creating different parts of the room. For every subsequent week (except the week we filmed the movie), we planned for at least four separate groups, each supervised by an adult. These groups started out very broad:

  1. The tech group was provided with old electronics, a laptop, a couple of Makey Makey kits and a Squishy Circuits kit to use as the basis of a puzzle.
  2. The storytellers brainstormed about the backstory for the room, as well as the screenplay for the zombie movie.
  3. The puzzle group was tasked with coming up with ideas for puzzles.
  4. The “zombiemakers” were given costume makeup and old clothes to practice zombie makeup and create zombie clothes for the movie.

As the weeks progressed, group work became far more specific; each group would have a specific task related to a particular piece of the room, e.g. a particular puzzle or props. The final meetings involved pulling everything together and setting up and testing the room.


Creating decorations for the Lab of Dr. Morton McBrains; Morton-James Public Library (CC-BY 4.0)

The grand opening event was on Wednesday, May 25th. The families of all of the participants were invited, along with members of the community. The Nebraska City Tourism and Commerce organization held a ribbon cutting, which was covered by the local paper and radio station (Partsch, 2016; Hannah & Swanson, 2016). The city’s mayor and his family were the official first group to attempt to escape the room, and, while they were “locked” in solving puzzles, everyone else was in the library gallery having a pizza party and solving puzzle boxes that were on their tables. Over 100 people attended the grand opening event, including twenty-two of the Library Lockdown participants. Every participant received their own lock and a family pass to the commercial escape room, Escape This, which opened in Nebraska City just as Library Lockdown wrapped up.

The Library Lockdown escape room is now open for business and is free for anyone to play, though donations are accepted. Groups must book the room in advance by calling the library. The room can be booked any day the library is open from thirty minutes after opening to an hour and a half before close. Generally, only one group may play the room per day, ensuring that there is time to reset the room for the next group and that it isn’t taking up too much staff time. Since its opening, twenty-five groups have played the room (over 150 individuals), and there are currently eight reservations through December 2016. The groups have been families, work groups, school classes, scouting troops, and groups of friends. The escape room will likely remain open until there is a new, interesting idea for how to repurpose the room once again.

Fostering creativity

“Although creativity is a complex and multifaceted construct, for which there is no agreed-upon definition, it is viewed as a critical process involved in the generation of new ideas, the solution of problems, or the self-actualization of individuals…” (Esquivel, 1995, p. 186)

When we discussed what we wanted kids to take away from their participation in Library Lockdown, we relied heavily on a white paper by Hadani & Jaeger (2015), highlighted by ALSC and published by the Center for Childhood Creativity. This paper introduces seven components of creativity: Imagination & Originality, Flexibility, Decision Making, Communication & Self-Expression, Motivation, Collaboration, and Action & Movement. For each component, the authors present a body of research explaining its role in fostering creativity and provide strategies and examples for incorporating each component into projects.

As part of the grant application, we were asked to identify which of the seven components were most critical for the project. We chose to focus on two: Imagination & Originality, and Collaboration. While all of the components played a role in the project, we felt these two were the most vital to success and used the strategies suggested by Hadani & Jaeger for these components as a guide when planning Library Lockdown meetings.

Imagination was key because the kids were starting with an empty room and no story. They had to consider many possibilities and visualize the details of the escape room. Hadani & Jaeger provided five strategies for promoting imagination, which we found to be some of the most valuable guidance, especially during the early weeks of the project.

  • Generate ideas by building on other ideas: This was how we approached puzzle making in the beginning. We created dozens of puzzles for the kids to try out (some thought up by ourselves, many modified from those found through amazing resources like breakoutEDU). We would then ask the kids to build a similar, but new, puzzle. This didn’t always lead to the creation of functional puzzles, but it did help put kids in the mindset of creating their own puzzles.
  • Generate lots of ideas: We used this strategy for determining the theme and story for the room. At the third session, after spending a few weeks having the kids solve and modify puzzles, we had the kids pick a theme for the room. They split into groups and, guided by an adult, wrote down as many possible themes as they could think of. They came up with some pretty awesome themes, like Candy Land, a haunted library, and Star Wars, before selecting "zombie" by voting for their favorites.

Drafts of the story for the Library Lockdown escape room; Morton-James Public Library (CC-BY 4.0)

  • Plenty of imaginary play and unstructured time: Though we often provided very structured tasks for the kids to work on during meetings, we also included several opportunities for freer, less structured activities. This included giving groups a room full of craft and other materials and puzzle books, and letting them spend the entire time attempting to make puzzles. This time did not result in many strong, functional puzzles, but allowed the kids to experiment and play.

Library Lockdown participants working together to create a puzzle with Play-Dough; Morton-James Public Library (CC-BY 4.0)

  • Encourage new ideas and building on others’ ideas: We generally had kids work in groups of three to six, each led by an adult, which allowed us to guide conversations and ensure each kid was able to share their ideas in a positive environment.

Given the time constraints and the variety of tasks involved, the kids had to collaborate and rely on each other to ensure that the puzzles, props, and story formed a cohesive whole. One of Hadani and Jaeger’s suggested strategies for promoting collaboration was providing “project-based opportunities that are structured to avoid merely splitting of tasks in favor of sharing and co-creating.” Given the number of participants and the amount and variety of work that needed to be accomplished, logistically, we had to split participants into multiple groups, each with a different purpose or task. However, each group had to work together to achieve their individual objectives, and all of the groups fed into the same shared goal.

Child playing with iPad and solving a puzzle.

A good example was a puzzle that involved a robot maze. Using grant money, we purchased Dash and Dot, a set of programmable robots. The group working on the puzzle first worked together to program the robots so that by pushing buttons on Dot, Dash would move. At subsequent meetings, they determined how wide the maze paths had to be for Dash to be able to move easily, designed several iterations of the maze on paper, measured out and colored in where the walls would be, and painted the maze. At every step, they worked on these tasks together. Sometimes this was out of necessity (it’s pretty hard to use a chalk line alone), but mostly they were doing tasks together that they could have done individually. This allowed them to problem solve, share ideas, and create something better together.

Psychological ownership

A major outcome we were interested in fostering was a sense of ownership of the library among the community, especially the youth participating in Library Lockdown. Pierce et al. (2003) define psychological ownership as “that state where an individual feels as though the target of ownership or a piece of that target is ‘theirs’ (i.e., it is MINE!)” The project was embarked upon with the hope that creating a physical space would instill a sense of accomplishment and pride, and the children would develop a sense of ownership over the library. Though we did not attempt to measure the participants’ feelings of ownership in any formal way, many of the participants started referring to the escape room as “their room,” and those who played the room with their family or school groups pointed out the puzzles they helped make and enjoyed watching their teammates attempt to solve them. From being on a first name basis with the library director, to having access to a room off limits to everyone else, to keeping secrets about the project from family and friends (can’t give away clues or answers to the puzzles!), we noticed participants definitely had an increased level of comfort in the library space and were clearly very proud of what they had accomplished.

Engaging the community

Working with groups and individuals in the community proved to be both easy and rewarding. From planning to opening, we sought to involve a variety of groups in the project. Initially, we approached principals and teachers at local schools to advertise and help recruit students to participate. This also led to a few teachers volunteering to help.

When the group decided to make a zombie movie, we approached an acquaintance at the local radio station, who volunteered to direct, film, and edit the movie for free. The city police chief also agreed to be interviewed about the zombie invasion. The city mayor and his family readily agreed to be the first official group to try out the escape room. This drummed up publicity and provided a final, grand opening event for the group to celebrate their accomplishment. Though no one declined the invitation to participate, we had a couple of people who offered to help, but had to back out for various personal reasons.   


Nebraska City Chief of Police being interviewed about the Zombie Apocalypse; Morton-James Public Library (CC-BY 4.0)

Gameplay and game mechanics

The entire project was a clear case of 'process' over 'product'. As we explained above, our main goals involved engaging the kids and inspiring them to be creative. Having an escape room was the secondary goal, and, in order to ensure that the room worked well as an experience for the community, we had to make sure the gameplay was there. Whether or not you enjoy a game is closely related to the flow of the experience (Sweetser & Wyeth, 2005).

Gameplay hinges on both variety and functioning mechanics. Variety means that you aren't simply solving riffs on the same word-replacement puzzle. Functioning mechanics means that each puzzle can actually be solved, but, more importantly, it relates to balance.

When explaining the importance of a balanced experience to the kids, we kept coming back to the video-game metaphor. Imagine a racing game. If the track you are racing on is a straight line, the best car always wins and no skill is required. On the other end of the spectrum, the track can curve so much that it beats even the most skilled driver. Neither of those experiences is a lot of fun: the first is boring and the second frustrating.

We ran a puzzle in which the kids had to find a key to a lock, then tasked them with creating a similar experience for another group. The first thing we heard was a gleeful "they will never find it." Then we reminded them of the video game and the racetrack: we want players to find the key. Impossible isn't fun.

Structuring creativity

Going into the project, we had intended to give a lot of freedom to the participants. We would create the framework and ensure that the project was progressing according to the timeline, but creative choices would be made by the children. The stories and the puzzles would be their own. However, as the project progressed, we realized we needed to put some limits on where they could express their creativity. Too much freedom resulted in several problems, including paralysis (uncertainty about how to proceed), unfeasible or impractical ideas, and a disregard for time constraints. We realized that, given our short timeframe and the logistics involved, we would need to create a bit more structure around the creativity.

There were two main ways we approached this. The first was what we were already trying to do: give them wide latitude for creativity, then funnel their ideas into a realistic plan. This creative brainstorming was more effective for items like the story and decorations, less so for puzzle making. We allowed the kids to express wild ideas, and then we took those ideas and adapted them into a realistic plan that we could implement. For example, once a zombie theme had been selected, many kids expressed a strong desire to actually dress up as zombies. Of course, this wouldn't make a whole lot of sense as part of an escape room that would be open for months. Instead, we made the zombie movie to provide the backstory and ambience for the room and give the kids their chance to be zombies for a day.

The second approach became essential in the last weeks of the project: providing a realistic frame that had a creative component. We would present the basics of a puzzle and offer specific ways that they could make decisions, be creative and contribute. One puzzle involved mathematical equations on a whiteboard and several zombie head cutouts. We explained the puzzle to the kids. They colored and laminated the zombie heads (the laminator was definitely a favorite!). Using examples from math books, they helped fill the whiteboard with equations. They had a blast doing this (and slipping their names in there, too).

 "Beauty is in the eye of the zombie-holder", one of the puzzles created as part of Library Lockdown; Morton-James Public Library (CC-BY 4.0)

“Beauty is in the eye of the zombie-holder”, one of the puzzles created as part of Library Lockdown; Morton-James Public Library (CC-BY 4.0)

Our kids responded well to having a clear goal in mind that was smaller and more manageable than the room as a whole, but that still allowed them to play and infuse their ideas into the project.

Building repeatability, building modularity

As we worked on the project, we thought about its adaptability. How could this concept translate into a variety of contexts, especially considering that most libraries would not have $7,500 to spend on similar projects, or such a large space and/or staff time to devote to them? In addition, we grew concerned about how best to tie the room and the various puzzles together, as well as how we might make the room enjoyable for a variety of skill levels. We were also mindful of the time it might take to reset the room and ensure that it was reset correctly.

The solution for all of these concerns was simple: modularity. When initially considering the escape room, we imagined the puzzles would be linear and tie into each other. Imagine, you find a key early on and it does seemingly nothing. Your team struggles on, finds more clues and solves more puzzles until the very end, when someone remembers that key from the beginning, and it all makes sense.

While the temptation to tie every puzzle into a sequence and the bigger narrative certainly was compelling, we learned that a clear division between individual puzzles makes sense for a project like this. The end result was ten puzzles separated conceptually as well as physically. Each puzzle corresponded with a locked box, and each box contained a code to be typed into a computer terminal. Once any eight of the codes had been entered, the terminal provided the code to the safe (which, of course, held the antidote to the zombie apocalypse). Ambitious or completionist players could attempt to solve the final two puzzles for bragging rights if they still had time remaining.
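As a concrete sketch of how such a terminal could work (the puzzle names, codes and safe combination below are hypothetical, not the room's actual values), the logic needs little more than a set of accepted codes and a threshold:

    # Hypothetical escape-room terminal: accept codes from locked boxes and reveal
    # the safe combination once enough distinct codes have been entered.
    PUZZLE_CODES = {
        "robot maze": "4417", "zombie heads": "8080", "key hunt": "7325",
        "card trick": "1093", "play-dough": "5566", "whiteboard math": "2741",
        "lego build": "9012", "movie poster": "3358", "lock riddle": "6184",
        "bookshelf cipher": "0476",
    }
    CODES_REQUIRED = 8   # change this single number to raise or lower the difficulty
    SAFE_CODE = "0525"   # combination to the safe holding the antidote

    def run_terminal():
        solved = set()
        while len(solved) < CODES_REQUIRED:
            entry = input("Enter a code from a locked box: ").strip()
            matches = {name for name, code in PUZZLE_CODES.items() if code == entry}
            if matches - solved:
                solved |= matches
                print(f"Accepted. {CODES_REQUIRED - len(solved)} codes to go.")
            else:
                print("That code doesn't help you.")
        print(f"The safe combination is {SAFE_CODE}. Grab the antidote!")

    if __name__ == "__main__":
        run_terminal()

Raising or lowering CODES_REQUIRED is the single-number difficulty adjustment described in the list below.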

This approach provided benefits both during the creation of the escape room and after it opened as well:

  • Having distinct puzzles simplified the planning process, which was a boon given our time constraints and the age of the kids working on the room (most of whom were nine or ten).
  • Since all of the puzzles are self-contained, it is very easy for staff to quickly check if each puzzle is ready to go when resetting the room for a new group.
  • We are able to raise and lower the difficulty of the room by changing the number of solved puzzles required to win, by changing a number in the computer program where players enter the codes.
  • We can remix the gameplay to accommodate groups of different sizes and skill levels. Recently, the local middle school asked to bring their students to try the room. Each group had about a dozen students and only thirty minutes, so we had them ignore the overall goal of the room, divided them into smaller teams, and instructed them to work on one puzzle at a time. Once a team solved a puzzle, we helped them reset it so that another team could attempt it, and they moved on to another puzzle.
  • Other libraries can use the same basic format and modify it to fit their time, budgetary, and space limitations.  

Feedback

Participants who attended the grand opening event (22 of the 35 total participants) were asked to complete a short survey about their experience in Library Lockdown. They were asked three Likert-scale questions ("How much fun did you have?", "Would you do it again?", "Is the library fun?"), to which they provided generally positive responses (averages of 4.5, 4.5, and 4.3, respectively). They were also asked what they liked best about Library Lockdown. Three of the answers were very broad ("the whole thing", "everything", "building it"); eight answers mentioned making or solving puzzles; and nine mentioned unique, specific items:

  • Computers
  • Working with locks
  • The story making
  • Watching movie
  • The mayor trying to get out
  • Build lego sets
  • The people
  • Making play-dough shapes
  • eating

Our interpretation of these responses is that the group really enjoyed the puzzle-based structure of Library Lockdown, but that many of the kids responded to vastly different aspects of the project.

We also did informal interviews with parents during and after the project and got some very positive feedback from them on both the project and the final result. In addition, the program was effective overall for creating hype and awareness of the library and reminding our community that we are still very much around. The head of the local radio/television station mentioned how the library seemed to have changed recently, stating, "You guys are doing all this stuff now." Most of the library's programming outside of Library Lockdown during the preceding months was not new at all, but having a flagship event like this seems to have raised our visibility in the community.

Final thoughts

This was a gratifying project, and we really enjoyed working on it. We also learned a great deal along the way. It was a massive time commitment: the group met for ninety minutes every Saturday from February until May, and we spent a combined ten to fifteen hours per week planning, preparing, and cleaning up. Enthusiasm for the project, for kids, and for puzzles was a requirement. Anywhere from ten to almost thirty kids would attend each session, which made planning and logistics a challenge: How much space and food do we need? How many volunteers or staff members need to be present? How many activities do we need to have ready? And, during the later weeks, how are we going to bring this all together?!

By the final weeks, we had gotten very good at being prepared. We had structured activities and backup plans. We also ensured we had enough staff and volunteers to maintain a high staff-to-kid ratio (1:4), which allowed us to be more flexible when something unexpected happened (or when more kids showed up than expected).

We wanted this project to bring in a diverse group of kids, and in many ways it did. There was a near 50-50 split of boys and girls, and, though Nebraska City is predominantly white, multiple cultural backgrounds were represented in our group. Still, there were a few areas in which we wish we had been more successful. Though we advertised outside the library in hopes of attracting non-library users, we had few participants who weren't already library users. We also advertised directly to the Hispanic and homeschool populations in our city; though several families returned the registration forms, none ultimately attended any of the sessions. In future projects, we will attempt to communicate more closely with potential participants in these groups to better understand what kept them from attending.

Though not a major issue for us, it was important to consider any laws that may be applicable to the room itself. We asked the local fire marshal to review the room for any potential issues, and he approved of the use, as long as the door remained unlocked and unobstructed. The Library Lockdown room is wheelchair accessible and navigable. All of the puzzles can be solved by someone with physical limitations or who is deaf or hard of hearing with little or no assistance. The instructions for the room are written and are also spoken aloud. In addition, reservations are scheduled to allow for a staff person to be present in the room and provide assistance as needed.

We were aware that not every puzzle would work out, so we chose to err on the side of having too many. That way, if something broke – which happened – a back-up puzzle was ready to go. We ended up having ten puzzles in the final room.

You will want to do some beta-testing before having the general public go through the room. We first invited some of our volunteers who had not been involved in the project to try it. This gave us a sense of what worked and what didn’t work. It also instilled some confidence in the project as a whole, as they all seemed to thoroughly enjoy themselves.

We built the room with the intention of not having an employee or volunteer staff it, but we found that having someone along to guide the participants and explain the rules made a difference. Hints are totally OK; good-natured teasing as well. The typical group will make it out of the room with five to ten minutes left on the clock, which is ideal.

Though we were lucky enough to receive a generous award, which allowed us to purchase many materials for Library Lockdown, we certainly could have undertaken a similar project on a smaller budget. We took advantage of the resources on breakoutEDU, used easily accessible craft supplies, repurposed items that were already at the library (including various things hiding in the storage room before it was cleaned out), and received donations of food from local restaurants. These are strategies that other libraries can take advantage of if interested in building an escape room of their own.

Library Lockdown was a unique opportunity for Morton-James Public Library to bring something different to Nebraska City. Instead of a one-off program, it brought kids into the library weekly to work together on a major project. Based on informal comments and survey responses, the participants enjoyed the puzzle-based nature and variety of activities that Library Lockdown provided. Local press coverage kept the community interested, and the resultant fully-functioning escape room allows the project to continue to engage (Hannah, 2016a; Hannah, 2016b; Mancini, 2015). It is something that made an impression on these kids and our community, and we most certainly will not forget the wonderful things that can be made by asking the simple question: what if?

Acknowledgements

Many thanks to ALSC and Disney for the generous grant that funded this project. Thank you to our reviewers, Lauren Bradley and Ian Beilin for helping us revise and improve this article.

References

American Library Association (2015, August 25). “$7,500 curiosity creates grant from ALSC.” ALAnews. Last accessed November 2, 2016 from http://www.ala.org/news/press-releases/2015/08/7500-curiosity-creates-grant-alsc

Browndorf, M. (2014). “Student library ownership and building the communicative commons”, Journal of Library Administration, 54(2), 77-93, DOI: 10.1080/01930826.2014.903364

Corkill, E. (2009, December 20). “Real escape game brings its creator’s wonderment to life.” The Japan Times. Last accessed October 31, 2016 at http://www.japantimes.co.jp/life/2009/12/20/to-be-sorted/real-escape-game-brings-its-creators-wonderment-to-life

The Escape Game Nashville (2015, May 4). “The History of Escape Games.” Last accessed October 31, 2016 at https://nashvilleescapegame.com/2015/05/04/the-history-of-escape-games

Esquivel, G.B. (1995). "Toward an educational psychology of creativity, part 1." Educational Psychology Review, 7(2), pp. 185-202. http://www.jstor.org/stable/23359326.

Hadani, H. & Jaeger, G. (2015). Inspiring a generation to create: 7 critical components of creativity in children [white paper]. Retrieved October 16, 2016, from ResearchGate: https://www.researchgate.net/publication/277301842_Inspiring_a_generation_to_create_7_critical_components_of_creativity_in_children

Hannah, J. (2016a, March 7). “Morton James Public Library to film zombie movie with new escape room.” News Channel Nebraska. Last accessed October 31, 2016 at http://kwbe.com/local-news/morton-james-public-library-to-film-zombie-movie-with-new-escape-room/

Hannah, J. (2016b, April 9). “Zombies run wild in Nebraska City.” News Channel Nebraska. Last accessed October 31, 2016 at http://ncn21.com/local-news/zombies-run-wild-in-nebraska-city/

Hannah, J. & Swanson, D. (2016, May 25). “Bequettes Solve Escape Room Puzzles to Save Nebraska City.” News Channel Nebraska. Last accessed October 31, 2016 at http://ncn21.com/local-news/bequettes-solve-escape-room-puzzles-to-save-nebraska-city/

Mancini, J. (2015, November 25). “Morton-James receives grant for ‘escape room.’” Nebraska City News Press. Last accessed November 3, 2016 at http://www.ncnewspress.com/article/20151125/NEWS/151129931

Nebraska State Historical Society (2012). “Nebraska National Register sites in Otoe County”. Last accessed October 31, 2016 at http://www.nebraskahistory.org/histpres/nebraska/otoe.htm

Partsch, T. (2016, June 2). "Mayor's family emerge victorious from library Escape Room." Nebraska City News-Press. Last accessed October 31, 2016 at http://www.ncnewspress.com/news/20160602/mayors-family-emerge-victorious-from-library-escape-room

Pierce, J.L., Kostova, T. & K.T. Dirks (2003). “The state of psychological ownership: Integrating and extending a century of research.” Review of General Psychology, 7(1), 84-107, DOI: 10.1037/1089-2680.7.1.84

Sweetser, P., & Wyeth, P. (2005). "GameFlow: A model for evaluating player enjoyment in games." Computers in Entertainment (CIE), 3(3), 3-3.

Thoegersen, R. & Thoegersen, J. (2016). “Pure Escapism.” Library Journal, 141(12), pg 24. http://lj.libraryjournal.com/2016/07/opinion/programs-that-pop/pure-escapism-progams-that-pop

U.S. Census Bureau (2014). “2010-2014 American community survey 5-year estimates.” Last accessed November 4, 2016 at http://factfinder.census.gov/bkmk/table/1.0/en/ACS/14_5YR/DP03/1600000US3133705

  1. Not really, since the fire marshal would not approve.
  2. Except for Arbor Day weekend, that is. Nebraska City is “home of Arbor Day”, and most of the kids were participating in the annual Arbor Day Parade.

UMich Library tried agile – and they kind of liked it / LibUX

Melissa J. Baker-Young writes on Library Tech Talk about experimenting with agile-like project management in the making of Fulcrum.

“We can do that. We’re Agile!”

One can count on this statement to be made at least once during any number of project meetings. True, at the start, it was tinged heavily with sarcasm, and, admittedly, it was me saying it. However, these days, nineteen months into a three year project, while this statement still regularly attends meetings, the skepticism that surrounded it has faded and it’s become a declaration of empowerment.

After several sprints, however, it became clear that we all had to drop the excuses we were using for not populating the board if we wanted to produce a good product on time. While I wouldn’t claim our practices are perfect, I would say that the board serves as the project’s source of truth and the entire team works hard to keep it so.