Planet Code4Lib

Open Knowledge International does IODC2015! / Open Knowledge Foundation

It’s that time of the year again. That time when the international open data community descends on an unsuspecting city for a jam-packed week of camps, meet-ups, hacks and conference events. Next week, open data enthusiasts will be taking over Ottawa and Open Knowledge will be there in full force! As we don’t want to miss an opportunity to meet with anyone, we have put together a list of events that we will be involved in and ways to get in touch. We have also started collecting this information in a spreadsheet!

The School of Data team is arriving early for the second annual School of Data Summer Camp. Every year we strive to bring the entire School of Data community together for three intense days to plan future activities, to learn from each other, to improve our skills and ways of working and to give new School of Data fellows the opportunity to meet their future collaborators! This year’s School of Data Summer Camp will take place at the HUB Ottawa. We’ll have a meet and greet on one of the evenings for School of Data family and friends – so watch this space for details, or follow @SchoolofData on Twitter.

On Tuesday, we are partnering with Open North, the Sunlight Foundation, Iniciativa Latinoamericana de Datos Abiertos (ILDA) and Aspiration Tech to put on the Open Data Con Unconference.

Wednesday is going to be a busy day as we will be spread out across three events – CKANCon, organised by the CKAN association, the Opening Parliaments Fringe Event and the Open Data Con Research Symposium, where we will be presenting new work on measuring and assessing open data initiatives and on “participatory data infrastructures”.

At the International Open Data Conference, Open Knowledge team members are co-organising or presenting at the following sessions:

As you can probably see, the week is going to be a busy one and we are aware that it will be difficult to schedule meetings with everyone! To accommodate, the Open Knowledge team and the entire School of Data family are organising informal drinks at The Brig Pub from 7:30 p.m. on Thursday evening! We would love for you to come say hello in person, or you can always find Pavel (Open Knowledge’s new CEO!), Zara, Milena, Jonathan, Mor, Sander, Katelyn, School of Data and of course Open Knowledge on Twitter!

Safe travels and we will see you in Ottawa!

Institution-wide ORCID Adoption Test in U.K. Shows Promise / Peter Murray

Via Gary Price’s announcement on InfoDocket comes word of a cost-benefit analysis for the wholesale adoption of ORCID identifiers by eight institutions in the U.K. The report, Institutional ORCID, Implementation and Cost Benefit Analysis Report [PDF], looks at the perspectives of stakeholders, a summary of findings from the pilot institutions, a preliminary cost-benefit analysis, and a 10-page checklist of consideration points for higher education institutions looking to adopt ORCID identifiers in their information systems. The most interesting bits of the executive summary came from the part discussing the findings from the pilot institutions.

Perhaps surprisingly, technical issues were not the major issue for most pilot institutions. A range of technical solutions to the storage of researchers’ ORCID iDs were utilised during the pilots. … Of the eight pilot institutions, only one chose to bulk create ORCID iDs for their researchers, the others opted for the ‘facilitate’ approach to ORCID registration.

Most pilot institutions found it relatively easy to persuade senior management about the institutional benefits of ORCID but many found it difficult to articulate the benefits to individual researchers. Several commented that staff saw it as ‘another level of bureaucracy’ and it was also noted that concurrent Open Access (OA), REF and ORCID activities can make the message confused, as they overlap. … Clear and effective messages (as short and precise as possible), creating a well-defined brand for ORCID and the targeting of specific audiences and audience segments were identified as being especially important.

One thing I found surprising in the report was the lack of any mention of the usefulness of ORCID identifiers in the linked data universe. The word “linked” appeared six times in the report; five of the six mentions talk about connections between campus systems and ORCID. It would seem that some of the biggest benefits of ORCID iDs will come when they appear as the object of a subject-predicate-object triple in data published and consumed by various systems on the open web; that is, as part of the linked open data cloud.
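For example, a publisher’s linked data might name an author by ORCID iD as the object of exactly such a triple; here is a hypothetical sketch in Turtle (the DOI and the iD below are made up for illustration):

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .

# Hypothetical: a published work whose creator is identified by an ORCID iD
<https://doi.org/10.9999/example.12345>
    dcterms:creator <https://orcid.org/0000-0000-0000-0000> .
```

Any system consuming this data can then follow the ORCID iD to disambiguate the author and connect the work to everything else asserted about that researcher on the open web.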

Library Linked Data in the Cloud / HangingTogether

A book that a few of our colleagues have been working on for quite some time has now been released: Library Linked Data in the Cloud: OCLC’s Experiments with New Models of Resource Description. You can also preview it on Google Books.

OCLC Research has been working with linked data for years, and we have developed processes for mining our MARC record database into linked and linkable entities. This book reports on a lot of that work, the problems we ran into and some of the solutions we created.

The main sections are:

  1. Library Standards and the Semantic Web
  2. Modeling Library Authority Files
  3. Modeling and Discovering Creative Works
  4. Entity Identification Through Text Mining
  5. The Library Linked Data Cloud

There are likely few people who have had as much experience parsing library data into linked data triples as the authors of this book and their OCLC Research colleagues. Therefore, anyone seeking to create or use library linked data would do well to study this book. You can take my word for it.

About Roy Tennant

Roy Tennant works on projects related to improving the technological infrastructure of libraries, museums, and archives.

IMLS announces new immigration webinar for public libraries / District Dispatch

Next week, the Institute of Museum and Library Services (IMLS) and U.S. Citizenship and Immigration Services (USCIS) will host a free webinar for public librarians on the topic of immigration and U.S. citizenship. Join in to learn more about what resources are available to assist libraries that provide immigrant and adult education services. The webinar will provide an overview of how libraries can expand these services and even acquire free materials to display.

Webinar Details
Date: May 27, 2015
Time: 2:00 – 3:00 p.m. EDT
Click here to register

Participation in previous webinars on this topic is not required. Registration is not required, but the agencies recommend that you check your system for compatibility in advance.

This series was developed as part of a partnership between IMLS and USCIS to ensure that librarians have the necessary tools and knowledge to refer their patrons to accurate and reliable sources of information on immigration-related topics. To find out more about the partnership and the webinar series, visit the Serving New Americans pages on the IMLS and USCIS websites.

The post IMLS announces new immigration webinar for public libraries appeared first on District Dispatch.

A New Interface and New Web Archive Content at Loc.gov / Library of Congress: The Signal

The following is a guest post by Abbie Grotke, Lead Information Technology Specialist on the Web Archiving Team, Library of Congress.

Archived version of a Member of Congress Official Web Site - Barack Obama


Recently the Library of Congress launched a significant amount of new Web Archive content on the Library’s Web site, as a part of a continued effort to integrate the Library’s Web Archives into the rest of the loc.gov web presence.

This is our first big release since we launched the first iteration of collections into this new interface back in June 2013. The earlier approach to presenting archived web sites made it difficult to increase the amount of content available, so in a “one step back, two steps forward” move, the interface has been simplified and should be more familiar to those working with Web Archives at other institutions: item records point to archived web sites displayed in an open-source version of the Wayback Machine. This simplification allowed the Library to increase the number of sites available in this interface from just under 1,000 to over 5,800. The most recent harvested sites now publicly available were captured in March-April 2012. The simplified approach should also allow us to catch up on moving more current content into the online collections.

There are now 21 named collections available in the new interface; some had been available in our old interface but are newly migrated; other content is entirely new. With this launch, we are particularly excited about the addition of the United States Congressional Web Archives, which for the first time allows researchers to access content collected from December 2002 up through April 2012. Each record covers the sessions in which a particular member of Congress served, such as Barack Obama as senator during two sessions, or Kirsten E. Gillibrand serving in both the House and Senate, represented in one record despite a URL change.

Other newly available collections include the Burma/Myanmar General Election 2010 Web Archive, Egypt 2008 Web Archive, Laotian General Election 2011 Web Archive, Thai General Election 2011 Web Archive, Vietnamese General Election 2011 Web Archive and the Winter Olympic Games 2002 Web Archive.

We still have some work to do to move the U.S. Election Web Archives from our old interface, so for the time being researchers interested in those collections will need to refer back to the old site. Eventually we will combine the separate Election collections into one U.S. Election Archive that will allow better searchability and access, migrate them over, and then “turn off” the old interface.

We hope researchers will enjoy access to these new web archive collections.

Navigating Conferences Like a Pro… When You’re a Rookie / LITA

I’ve recently attended some of my first conferences/meetings post-MLIS and I thought I’d pass on the information I learned from my experience navigating them for the first time.


Courtesy of Jatenipet. Pixabay 2014

Always be prepared to promote

This is the most dreaded aspect of networking. It essentially implies schmoozing and self-aggrandizement, but if you think of it simply as socializing, you’ll realize it’s an essential part of getting to know others in the profession and the roles they play in their organizations. If you’re new to the information profession, it can be a great opportunity to ask other professionals about the path they took to enter the industry. More often than not, when they find that you’re new to the profession, they’ll offer you advice. They’ll be curious to know what your career goals are and why you’re attending. This is a great opportunity to ask for their business card or contact information. If you find that you’ve built a good rapport and want to become more familiar with their work or organization, you should offer your business card (more on this later).


The thing about promotion

If you’re at a conference on behalf of an organization, then you’re on the company dollar, and your mission is to network, learn and share. Since I was attending to learn more about the profession and to network, I couldn’t talk shop about procedures and management. If you are attending on behalf of an organization, you’re expected to create professional networks and trace them back to your institution. It sounds intimidating, but if you allow yourself to soak up as much information as possible, while being open about what works and doesn’t for your information environment, you’ll find others may want to emulate your framework and share theirs in return.


You have leverage too

Believe it or not, the pros don’t know everything. Sometimes when you’re new to a profession you can become caught up in what you don’t know and the list of skills you need to reach that ever-distant “next level.” I was very surprised to find that many of the resources I was familiar with were unknown to individuals working in the digital records management and archives field. I introduced The Signal digital preservation blog and the Cancer Imaging Archive into a conversation, and a few individuals took genuine interest in my explanation of their services. While earning your degree or working in different information environments, you are exposed to a variety of resources and ideas that others aren’t aware of. Don’t count yourself out; you have something to add to the conversation.


Think outside the box

There is no need to be intimidated about approaching new acquaintances during a professional conference. Most of the time you’re meeting people who remember what it’s like to be at the start of a new career. It can be exciting and informative to strike up a conversation with a presenter. There is nothing wrong with inquiring about lunch plans and meeting outside of the conference venue during scheduled breaks. The relaxed atmosphere of a restaurant is where funny stories of the trade get passed along and you get to know each other on a personal level. Several factors account for good networking, and an outgoing personality is one of them; just remember to stay respectful.


Handy business cards

If you’re using a conference to network for future employment, then you need to have business cards. At larger conferences you can be one among hundreds of attendees, and business cards are a great way to establish that you’re prepared and professional. However, providing an acquaintance with your contact information is not enough: ask for their card as well if you want to continue the conversation after the conference concludes. It’s likely that they’ll never look at your business card again, so it’s important to follow up with an e-mail to remind them of the highlights of your conversation and how you’d like to collaborate going forward.

If you’re hoping to enter a new field post-graduation, at a minimum your business card should include: your name, degree(s) and university, your phone number and e-mail. You can also add a specialization to encompass your career trajectory such as Librarian, Electronic Resource Specialist or Certified Webmaster. For points of contact beyond your phone number and e-mail, providing your website, online portfolio or LinkedIn URL is a great way to showcase your web presence. If you can connect with another professional’s LinkedIn, you will not only increase their awareness of you, but you will be exposed to their extended network as well.


An added bonus

If you are networking for employment, one thing you don’t want to do is outright ask another attendee about potential employment with their organization. I’ve seen this happen, and it can be off-putting for the person being asked as well as anyone else in the conversation. If you’re a new graduate or changing careers, the conversation will naturally flow into questions about your career plans. If the person you’re speaking with feels inclined to mention an upcoming opportunity, then it is an added bonus. Otherwise, enjoy yourself and take advantage of the learning opportunity. You’ll be in a room filled with like-minded professionals, and everyone wants to make the most of the experience.


Are you planning on attending any conferences this year? What takeaways do you have from conferences you’ve attended in the past? Let me know in the comments section.

NOW AVAILABLE: Fedora 4.2.0 Release / DuraSpace News

From Andrew Woods, Technical Lead for Fedora, on behalf of the Fedora team.

Winchester, MA – The Fedora team is pleased to announce that Fedora 4.2.0 was released on May 19, 2015 and is now available.

The focus of the 4.2.0 release was twofold:

  • Establish an Audit Service

  • Create and exercise tooling for Fedora 3 to 4 migrations

Reminder: Apply now for the Oakley Memorial Scholarship / District Dispatch

Robert Oakley


Reminder: The application window for the Robert L. Oakley Memorial Scholarship, which supports research and advanced study for librarians in their early-to-mid careers, closes on June 1, 2015. The annual $1,000 scholarship, developed by the American Library Association and the Library Copyright Alliance, supports librarians interested in intellectual property, public policy and copyright law.

Applicants should provide a statement of intent for use of the scholarship funds. The statement should describe the applicant’s interest and background in intellectual property, public policy and/or copyright, and their impacts on libraries and the ways libraries serve their communities. It should also explain how the applicant and the library community will benefit from the applicant’s receipt of the scholarship. Statements should be no longer than three pages (1,000 words), and the applicant’s resume or curriculum vitae should be included in the application.

Applications must be submitted via e-mail to Carrie Russell, crussell[at]alawash[dot]org. Awardees may receive the Robert L. Oakley Memorial Scholarship up to two times in a lifetime. Funds may be used for equipment, expendable supplies, travel necessary to conduct research, conference attendance, release from library duties or other reasonable and appropriate research expenses.

The award honors the life accomplishments and contributions of Robert L. Oakley. Professor and law librarian Robert Oakley was an expert on copyright law who wrote and lectured on the subject. He served on the Library Copyright Alliance representing the American Association of Law Libraries and played a leading role in advocating for U.S. libraries and the public they serve at many international forums, including those of the World Intellectual Property Organization and the United Nations Educational, Scientific and Cultural Organization.

Oakley served as the United States delegate to the International Federation of Library Associations Standing Committee on Copyright and Related Rights from 1997-2003. Mr. Oakley testified before Congress on copyright, open access, library appropriations and free access to government documents and was a member of the Library of Congress’ Section 108 Study Group. A valued colleague and mentor for numerous librarians, Oakley was a recognized leader in law librarianship and library management who also maintained a profound commitment to public policy and the rights of library users.

The post Reminder: Apply now for the Oakley Memorial Scholarship appeared first on District Dispatch.

Should LITA oppose Elsevier’s new sharing policy? / LITA

It’s come to the LITA Board’s attention that the Confederation of Open Access Repositories (COAR) is circulating a statement against Elsevier’s new sharing policy. (You can find that policy here.) COAR is concerned that the policy imposes long embargoes on open access content (up to four years), applies retroactively, and restricts authors’ choice of Creative Commons licenses. Numerous individuals and library organizations, including ALA and ACRL, have signed on to this statement; the LITA Board is discussing doing likewise.

But we represent you, the members! So tell us what you think. Should LITA sign on?

Rewrite git repo URLs / Casey Bisson

A question on a mailing list I’m on introduced me to a git feature that was new to me: it’s possible to have git rewrite repository URLs to always use HTTPS or git+ssh, etc.

This one-liner seems to force https:

git config --global url.https://github.com/.insteadOf git://github.com/

Or you can add rules like these to your ~/.gitconfig. Note that the two groups below are opposite alternatives, so pick one:

# Alternative 1: use https instead of git and git+ssh
[url "https://github.com/"]
  insteadOf = git://github.com/
[url "https://github.com/"]
  insteadOf = git@github.com:
# Alternative 2: fetch over git, but push over git+ssh, instead of https
[url "git://github.com/"]
  insteadOf = https://github.com/
[url "git@github.com:"]
  pushInsteadOf = git://github.com/
[url "git@github.com:"]
  pushInsteadOf = https://github.com/

On the Spirit of Creative Exploration at DPLAfest 2015 / DPLA

It’s been just about a month since the conclusion of the second annual DPLAfest in Indianapolis, where about 300 people gathered to discuss the current and future state of digital collections and digital publishing. Those in attendance included librarians, archivists, publishers, authors, developers, and other interested members of the community. My own participation at the conference was generously made possible by a DPLA + DLF Cross-Pollinator Travel Grant, which allowed me to share with and learn from the DPLA community during the two-day event.

Put simply, my time at the DPLAfest was inspiring, based largely on my interactions with the DPLA community, which I found to be vibrant, welcoming, ingenious and altogether energizing. A fellow travel grant recipient, Laura Wrubel, has recently written on her own experiences at the conference. In her post, Laura identified a tangible excitement and a feeling of momentum that accompanied the conference, and I must agree wholeheartedly. An unmistakable atmosphere of positivity and possibility flowed through the conference and its attendees. Allow me to share an example.

My participation in the conference revolved mainly around the hackathon. On the first day of the conference, a large group of attendees learned how to access the DPLA API, and from there we split into smaller groups to discuss possible app ideas. My fellow brainstormers in this process included Alexandra Murray, Laura Wrubel, and Brandon Locke. The four of us were mostly DPLA newcomers, attending the conference in an exploratory mode. Even as novices, however, we were received as peers by the DPLA community. As we talked through app ideas among ourselves and with the larger group, the support we received from DPLA community members was immediate and enthusiastic. DPLA staff Audrey Altman, Mark Breedlove, Mark Matienzo, and Tom Johnson were all on hand to provide direction and feedback as we worked first through ideas and then through code.

Working with Mark Matienzo’s Dial-a-DPLA app during the DPLAfest hackathon (source)

Inspired by Historical Cats, Mark Matienzo’s Dial-a-DPLA, and the encouragement we received from others in the room, we aimed to create a Twitter-based app that could serve as a whimsical window into the DPLA collection. Borrowing from Mark Sample’s DPLAbot, we reconfigured some nifty Node.js code so that our new bot matched food terms with DPLA metadata and produced Tweets that encouraged hungry Twitter users to take a look at food-related items in the DPLA collection. We dubbed this little program “DinnerPLAnsbot.” The final app would not have been possible without some well-timed and much-needed JavaScript assistance from Chad Nelson in the final few moments before we unveiled DinnerPLAnsbot at the Developer Showcase on the second day of the conference. At every turn during the DPLAfest, I felt the support of a talented and thoughtful community that wanted to help us build with the DPLA collection.
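DinnerPLAnsbot itself was written in Node.js; as a rough illustration of the kind of request such a bot makes, here is a small Python sketch that builds a DPLA item-search URL against the public API endpoint (the search term, placeholder key and `page_size` value are illustrative, not taken from the original bot):

```python
from urllib.parse import urlencode

# DPLA's public item-search endpoint (API v2)
DPLA_SEARCH = "https://api.dp.la/v2/items"

def build_food_query(term: str, api_key: str, page_size: int = 5) -> str:
    """Build a DPLA item-search URL for a food-related keyword."""
    params = urlencode({"q": term, "page_size": page_size, "api_key": api_key})
    return f"{DPLA_SEARCH}?{params}"

# A bot would fetch this URL and pick a record from the JSON response to tweet.
print(build_food_query("gingerbread", "YOUR_API_KEY"))
```

Matching the chosen record’s title and link with a food term is then all the bot needs to compose a tweet pointing hungry readers at the collection.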

DinnerPLAnsbot team at the DPLAfest developer showcase. Left to right: Brandon Locke, Scott W. H. Young, Alexandra Murray, Laura Wrubel (source)

In thinking back to when I first learned of the DPLA, the earliest experience I could recall was the Fall 2013 meeting of the Coalition for Networked Information, where I attended a presentation by Dan Cohen. The DPLA was a mere eight months into its existence at that time, and I remember being impressed not only with its mission and vision, but also with the sticker on Dan’s laptop:

Scott Young - DPLAfest photo 1

In the 16 months since, the DPLA has grown into a fully-realized digital library platform that now provides access to over 10,000,000 metadata-rich records from over 1,600 contributing institutions, with a killer API to boot. With such a diverse collection of digital objects, a vibrant, supportive community, and a forward-thinking vision for digital collections and digital services, the DPLA represents a spirit of creative exploration (demonstrated nicely in the DPLA App Library).

A query of the DPLA API shows that my institution, Montana State University Library, has so far contributed 1,632 records to the DPLA through the Mountain West Digital Library. We plan to increase our contribution by several more thousand records over the next several months, including records from the Acoustic Atlas, our newly-formed collection of natural sound recordings from the American West. My participation in the DPLAfest has served to strengthen my own interest and motivation for exploring, building, and creating with digital library objects, and I am now even more excited to see the DPLA’s collection and community continue to grow and strengthen.

To find out more about Scott, visit his personal website or follow him on Twitter.

IMLS awards libraries National Medals / District Dispatch

(Left to right) Cecil County Public Library Director Denise Davis, Cecil County Community Member Thomas Cousar and Michelle Obama with National Medal.


Earlier this week, First Lady Michelle Obama joined Institute of Museum and Library Services (IMLS) Acting Director Maura Marx to present 2015 National Medals for Museum and Library Service to ten exemplary libraries and museums for their service to their communities. Now in its 21st year, the National Medal is the nation’s highest honor conferred on libraries and museums, and celebrates institutions that make a difference for individuals, families, and communities.

National Medal recipients include:

(Left to right) Erica Jesonis, Chief Librarian for Information Technology; Morgan Miller, Assistant Director for Public Service; U.S. Rep. Andy Harris (R-MD); Denise Davis, Cecil County Public Library Director; Frazier Walker, Community Relations Specialist.


During the event, First Lady Michelle Obama said to the recipients: “The services that you all provide are not luxuries. Just the opposite. Every day your institutions are keeping so many folks in this country from falling through the cracks. In many communities our libraries and museums are the places that help young people dream bigger and reach higher for their futures, the places that help new immigrants learn English and apply for citizenship…the places where folks can access a computer and send out a job application so they can get back to work and get back to the important process of supporting their families.”

Denise Davis, director of the Cecil County Public Library in Elkton, Md., spoke about receiving the prestigious recognition:

Public libraries have a powerful role in creating opportunities by keeping the doors to knowledge open, allowing creativity to flourish, and never letting barriers become insurmountable.

The next deadline for nominating a library or museum is October 1, 2015. Learn more about the National Medal at www.imls.gov/medals.

The post IMLS awards libraries National Medals appeared first on District Dispatch.

The K-12 Web Archiving Program: Preserving the Web from a Youthful Point of View / Library of Congress: The Signal

This article is being co-published on the Teaching With the Library of Congress blog and was written by Butch Lazorchak and Cheryl Lederle.

If you believe the Web (and who doesn’t believe everything they read on the Web?), it boastfully celebrated its 25th birthday last year. Twenty-five years is long enough for the first “children of the Web” to be fully-grown adults, just now coming of age to recognize that the Web that grew up around them has irrevocably changed.

In this particular instance, change is good. It’s only by becoming aware of what we’re losing (or have already lost) that we’ll be spurred to action to preserve it. We’ve been aware of the value of the historic web for a number of years here at the Library of Congress, and we’ve worked hard to understand how to capture the Web through the Library’s Web Archiving program and the work we’ve done with partners at the Memento project and through the International Internet Preservation Consortium.

K-12 Web Archiving Program.


But let’s go back to those “children of the Web.” Nostalgia is a powerful driver for preservation, but most preservation efforts are driven by full-grown adults. If they’re able to bring a child’s perspective to their work it’s only through the prism of their own memory, and in any event, the nostalgic items they may wish to capture may not be around anymore by the time they get to them. What’s needed is not just a nostalgic memory of the web, but efforts to curate and capture the web with a perspective that includes the interests of the young. And who better to represent the interests of the young than children and teenagers themselves! Luckily the Library of Congress has such a program: the K-12 web archiving program.

The K-12 Web Archiving program has been operating since 2008, engaging dozens of schools, large and small, and hundreds of students from across the U.S. in understanding what the Web means to them, and why it’s important to capture it. In partnership with the Internet Archive, the program enables schools to set up their own web capture tools and choose sets of web resources to collect; resources that represent the full range of youthful experience, including popular culture, commerce, news, entertainment and more.

Cheryl Lederle, an Educational Resource Specialist at the Library of Congress, notes that the program builds student awareness of the internet as a primary source as well as how quickly it can change. The program might best be understood through the reflections of participating teachers:

  • “The students gained an understanding of how history is understood through the primary sources that are preserved and therefore the importance of the selection process for what we are digitally preserving. But, I think the biggest gain was their personal investment in preserving their own history for future generations. The students were excited and fully engaged by being a part of the K-12 archiving program and that their choices were being preserved for their own children someday to view.” – MaryJane Cochrane, Paul VI Catholic High School
  • “The project introduced my students to historical thinking; awareness of digital data as a primary source and documentation of current events and popular culture; and helped foster an appreciation and awareness of libraries and historical archives.” – Patricia Carlton, Mount Dora High School

And participating students:

  • “Before this project, I was under the impression that whatever was posted on the Internet was permanent. But now, I realize that information posted on the Internet is always changing and evolving.”
  • “I find it very interesting that you can look back on old websites and see how technology has progressed. I want to look back on the sites we posted in the future to see how things have changed.”
  • “I was surprised by the fact that people from the next generation will also share the information that I have collected.”
  • “They’re really going to listen to us and let us choose sites to save? We’re eight!”

Collections from 2008-2014 are available for study on the K-12 Web Archiving site, and the current school year will be added soon. Students examining these collections might:

  • Compare one school’s collections from different years.
  • Compare collections preserved by students of different grade levels in the same year.
  • Compare collections by students of the same grade level, but from different locations.
  • Create a list of Web sites they think should be preserved and organize them into two or three collections.

What did your students discover about the value of preserving Web sites?

Unrecoverable read errors / David Rosenthal

Trevor Pott has a post at The Register entitled Flash banishes the spectre of the unrecoverable data error in which he points out that while disk manufacturers' quoted Bit Error Rates (BER) for hard disks are typically 10^-14 or 10^-15, SSD BERs range from 10^-16 for consumer drives to 10^-18 for hardened enterprise drives. Below the fold, a look at his analysis of the impact of this difference of up to 4 orders of magnitude.

When a disk in a RAID-5 array fails and is replaced, all the data on other drives in the array must be read to reconstruct the data from the failed drive. If an unrecoverable read error (URE) is encountered in this process, one or more data blocks will be lost. RAID-6 and up can survive increasing numbers of UREs.

It has been obvious for some time that, as hard disks got bigger without a corresponding decrease in BER, RAID technology had a problem: the probability of encountering a URE during reconstruction was going up, and thus so was the probability of losing data when a drive failed. As Trevor writes:
Putting this into rather brutal context, consider the data sheet for the 8TB Archive Drive from Seagate. This has an error rate of one URE per 10^14 bits read. That is one URE every 12.5TB. That means Seagate will not guarantee that you can fully read the entire drive twice before encountering a URE.
Let's say that I have a RAID 5 of four 5TB drives and one dies. There is 12TB worth of data to be read from the remaining three drives before the array can be rebuilt. Taking all of the URE math from the above links and dramatically simplifying it, my chances of reading all 12TB before hitting a URE are not very good.
With 6TB drives I am beyond the math. In theory, I shouldn't be able to rebuild a failed RAID 5 array using 6TB drives that have a 10^-14 BER: I will encounter a URE before the array is rebuilt, and then I’d better hope the backups work.
So RAID 5 for consumer hard drives is dead.
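Trevor's back-of-the-envelope numbers are easy to check. If each bit read fails independently with probability equal to the BER, the chance of reading N bits with no URE is (1 - BER)^N, which for tiny BER is well approximated by exp(-N * BER). A quick sketch of that calculation, using the 12TB rebuild read and the BER figures quoted above (the independent-error assumption is, of course, a simplification):

```python
import math

def p_no_ure(bytes_to_read: float, ber: float) -> float:
    """Probability of reading `bytes_to_read` bytes with no
    unrecoverable read error, given a bit error rate `ber`
    (expected errors per bit read), assuming independent errors."""
    bits = bytes_to_read * 8
    # (1 - ber)**bits underflows for huge bit counts, so use the
    # standard approximation (1 - x)^n ~ exp(-n*x) for tiny x.
    return math.exp(-bits * ber)

TB = 1e12
# Trevor's rebuild: 12TB to read from the surviving drives.
print(p_no_ure(12 * TB, 1e-14))  # ~0.38 with a 10^-14 hard-disk BER
print(p_no_ure(12 * TB, 1e-16))  # ~0.99 with a 10^-16 consumer-SSD BER
```

With a 10^-14 BER the rebuild succeeds without a URE well under half the time, which is Trevor's point; the two extra orders of magnitude on consumer flash push it back above 99%.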
Well, yes, but RAID-5, and RAID in general, is just one rather simple form of erasure coding. There are better forms of erasure coding for long-term data reliability. I disagree with Trevor when he writes:
There are plenty of ways to ensure that we can reliably store data, even as we move beyond 8TB drives. The best way, however, may be to put stuff you really care about on flash arrays. Especially if you have an attachment to the continued use of RAID 5.
Trevor is ignoring the economics. Hard drives are a lot cheaper for bulk storage than flash. As Chris Mellor pointed out in a post at The Register about a month ago, each byte of flash contains at least 50 times as much capital investment as a byte of hard drive. So flash will be a lot more expensive, even if not 50 times as expensive. For the sake of argument, let's say it is 5 times as expensive. To a first approximation, cost increases linearly with the replication factor, but reliability increases exponentially. So, instead of a replication factor of 1.2 in a RAID-5 flash array, for the same money I can have a replication factor of 12.2 in a hard disk array. Data in the hard disk array would be much, much safer for the same money. Or, if I used a replication factor of 2.5, the data would still be a great deal safer for 40% of the cost.
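The linear-cost/exponential-reliability point can be illustrated with a toy model: if each independent copy of an object is lost with some probability p over a given period, the data is lost only when all r copies are lost, so the loss probability is p^r while the cost is roughly proportional to r. (The per-copy loss probability below is purely hypothetical, and fractional replication factors stand in for erasure-coding overhead.)

```python
def p_data_loss(p_copy_loss: float, r: float) -> float:
    """Probability that all r independent copies are lost."""
    return p_copy_loss ** r

p = 1e-3  # hypothetical chance of losing any one copy over some period
for r in (1.2, 2.5, 12.2):  # the replication factors discussed above
    print(f"r={r}: cost ~{r}x one copy, loss probability {p_data_loss(p, r):.1e}")
# Roughly doubling the cost (r=1.2 -> r=2.5) shrinks the loss
# probability by about four orders of magnitude.
```

Independence of copy failures is the big assumption here; correlated failures (same batch of drives, same data center) eat into the exponential gain, which is one reason real long-term systems diversify media and locations.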

NOW AVAILABLE: DSpace 5.2! / DuraSpace News

From Hardy Pottinger, on behalf of the DSpace 5.2 Release Team, and all the DSpace developers.
 
Winchester, MA – The DSpace developers are pleased to formally announce that DSpace 5.2 is now available. DSpace 5.2 is a bug-fix release and contains no new features.
DSpace 5.2 can be downloaded immediately at either of the following locations:
 
• SourceForge: https://sourceforge.net/projects/dspace/files/

SKOS and Wikidata / Ed Summers

For #DayOfDH yesterday I created a quick video about some data normalization work I have been doing using Wikidata entities. I may write more about this work later, but the short version is that I have a bunch of spreadsheets with author names in a variety of formats and transliterations, which I need to collapse to a unique identifier so that I can provide a unified display of the data per unique author. For example, my spreadsheets have information for Fyodor Dostoyevsky using the following variants:

  • Dostoeieffsky, Feodor
  • Dostoevski
  • Dostoevski, F. M.
  • Dostoevski, Fedor
  • Dostoevski, Feodor Mikailovitch
  • Dostoevskii
  • Dostoevsky
  • Dostoevsky, Fiodor Mihailovich
  • Dostoevsky, Fyodor
  • Dostoevsky, Fyodor Michailovitch
  • Dostoieffsky
  • Dostoieffsky, Feodor
  • Dostoievski
  • Dostoievski, Feodor Mikhailovitch
  • Dostoievski, Feodore M.
  • Dostoievski, Thedor Mikhailovitch
  • Dostoievsky
  • Dostoievsky, Feodor Mikhailovitch
  • Dostoievsky, Fyodor
  • Dostojevski, Feodor
  • Dostoyeffsky
  • Dostoyefsky
  • Dostoyefsky, Theodor Mikhailovitch
  • Dostoyevski, Feodor
  • Dostoyevsky
  • Dostoyevsky, Fyodor
  • Dostoyevsky, F. M.
  • Dostoyevsky, Feodor Michailovitch
  • Dostoyevsky, Feodor Mikhailovich

So, obviously, I wanted to normalize these. But I also want to link the name up to an identifier that could be useful for obtaining other information, such as an image of the author, a description of their work, possibly link to works by the author, etc. I’m going to try to map the authors to Wikidata, largely because there are links from Wikidata to other places like the Virtual International Authority File, and Freebase, but there are also images on Wikimedia Commons, and nice descriptive text for the people. As an example here is the Wikidata page for Dostoyevsky.

To aid in this process I created a very simple command line tool and library called wikidata_suggest, which uses Wikidata's suggest API to interactively match a string of text to a Wikidata entity. If Wikidata doesn't have any suggestions, the utility falls back to scanning a page of Google search results for a Wikipedia page, and will optionally let you use that text instead.
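Once each variant has been interactively confirmed against an entity, the collapsing step itself is simple. A minimal sketch of that step — the mapping below is illustrative and only partially filled in, and Q991 is, I believe, Wikidata's identifier for Fyodor Dostoyevsky:

```python
from collections import defaultdict

# Hypothetical output of the interactive matching pass: each messy
# variant has been confirmed against a Wikidata entity.
variant_to_qid = {
    "Dostoeieffsky, Feodor": "Q991",
    "Dostoevski, F. M.": "Q991",
    "Dostoyevsky, Fyodor": "Q991",
    # ... one entry per confirmed variant
}

def collapse(rows):
    """Group spreadsheet rows by the entity their author-name variant
    was matched to; unmatched variants group under the raw string."""
    by_entity = defaultdict(list)
    for row in rows:
        key = variant_to_qid.get(row["author"], row["author"])
        by_entity[key].append(row)
    return dict(by_entity)

rows = [
    {"author": "Dostoyevsky, Fyodor", "title": "Crime and Punishment"},
    {"author": "Dostoevski, F. M.", "title": "The Idiot"},
]
print(collapse(rows))  # both rows land under "Q991"
```

Keying on the QID rather than a chosen "canonical" name string is what makes the later linking work — the identifier is what connects out to VIAF, Freebase, Wikimedia Commons images, and descriptive text.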

SKOS

Soon after tweeting about the utility and the video I made about it, I heard from Alberto, who works on the NASA Astrophysics Data System and was interested in using wikidata_suggest to try to link the Unified Astronomy Thesaurus up to Wikidata.

Fortunately the UAT is made available as a SKOS RDF file. So I wrote a little proof of concept script named skos_wikidata.py that loads a SKOS file, walks through each skos:Concept and asks you to match the skos:prefLabel to Wikidata using wikidata_suggest. Here’s a quick video I made of what this process looks like:
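The shape of that walk is easy to sketch. The real skos_wikidata.py presumably uses a proper RDF library, but for a straightforward RDF/XML serialization even the standard library can pull out the skos:Concept / skos:prefLabel pairs; the concept below is a made-up stand-in for a UAT entry:

```python
import xml.etree.ElementTree as ET

SKOS = "http://www.w3.org/2004/02/skos/core#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

# A tiny, illustrative stand-in for one entry in the UAT SKOS file.
doc = f"""<rdf:RDF xmlns:rdf="{RDF}" xmlns:skos="{SKOS}">
  <skos:Concept rdf:about="http://example.org/uat/1">
    <skos:prefLabel xml:lang="en">Black holes</skos:prefLabel>
  </skos:Concept>
</rdf:RDF>"""

def pref_labels(skos_xml: str):
    """Walk every skos:Concept and yield (concept URI, prefLabel):
    the pairs that would be handed to wikidata_suggest."""
    root = ET.fromstring(skos_xml)
    for concept in root.iter(f"{{{SKOS}}}Concept"):
        yield (concept.get(f"{{{RDF}}}about"),
               concept.findtext(f"{{{SKOS}}}prefLabel"))

print(list(pref_labels(doc)))
```

Each (URI, label) pair then becomes one interactive prompt: the label goes to the suggest API, and the confirmed QID gets recorded against the concept URI.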

I guess this is similar to what you might do in OpenRefine, but I wanted a bit more control over how the data was read in, modified and matched up. I’d be interested in your ideas on how to improve it if you have any.

It’s kind of funny how Day of Digital Humanities quickly morphed into Day of Astrophysics…

Panel to discuss ebook lending growth at 2015 ALA Annual Conference / District Dispatch

A leading panel of library and publishing experts will provide an update on the library ebook lending market and discuss the best ways for libraries to advance library access to digital content at the 2015 American Library Association’s (ALA) Annual Conference in San Francisco. The interactive session, “Making Progress in Digital Content,” takes place from 10:30 to 11:30 a.m. on Sunday, June 28, 2015. The session will be held at the Moscone Convention Center in room 2018 of the West building.

During the session, an expert panel of library leaders from ALA’s Digital Content Working Group (DCWG) will provide insights on the most promising opportunities available to advance library access to digital content. Organizational leaders will discuss ALA’s efforts toward exploiting digital content access opportunities. Audience input will be sought to inform ALA priorities in this area. The program features DCWG co-chairs Carolyn Anthony and Erika Linke, along with additional guest panelists.

Speakers

  • Carolyn Anthony, co-chair, ALA Digital Content Working Group; director, Skokie Public Library (Illinois); immediate past-president, Public Library Association
  • Erika Linke, co-chair, ALA Digital Content Working Group; associate dean of Libraries and director of Research and Academic Services, Carnegie Mellon University Libraries

View all ALA Washington Office conference sessions

The post Panel to discuss ebook lending growth at 2015 ALA Annual Conference appeared first on District Dispatch.

Jobs in Information Technology: May 20, 2015 / LITA

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Director of Information Technology, Douglas County Libraries, Castle Rock, CO

Associate Product Owner, The Library Corporation (TLC), Inwood, WV

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

“First Rule of Usability? Don’t Listen to Users” / Jonathan Rochkind

An interesting brief column, now 15 years old, from noted usability expert Jakob Nielsen, which I saw posted today on reddit: First Rule of Usability? Don’t Listen to Users

Summary: To design the best UX, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior. Users do not know what they want.

I’m reposting here, even though it’s 15 years old, because I think many of us haven’t assimilated this message yet, especially in libraries, and it’s worth reviewing.

An even worse version of trusting users' self-reported claims, I think, is trusting user-facing librarians' self-reported claims about what they have generally noticed users self-reporting. It’s like taking the first problem and adding a game of ‘telephone’ to it.

Nielsen’s suggested solution?

To discover which designs work best, watch users as they attempt to perform tasks with the user interface. This method is so simple that many people overlook it, assuming that there must be something more to usability testing. Of course, there are many ways to watch and many tricks to running an optimal user test or field study. But ultimately, the way to get user data boils down to the basic rules of usability:

  • Watch what people actually do.
  • Do not believe what people say they do.
  • Definitely don’t believe what people predict they may do in the future.

Yep. If you’re not doing this, start. If you’re doing it, you probably need to do it more.  Easier said than done in a typical bureaucratic inertial dysfunctional library organization, I realize.

It also means we have a professional obligation to watch what our users do, to determine how to make things better for them, and then to watch again to see if it worked. That’s what makes us professionals. We cannot simply do what users say; that would be an abrogation of our professional responsibility, and it does not actually produce good outcomes for our patrons. Again, yes, this means we need library organizations that allow us to exercise our professional responsibilities and give us the resources to do so.

For real, go read the very short article. And consider what it would mean to develop in libraries taking this into account.


Filed under: General

Libraries for Tomorrowland / District Dispatch

[The following article was written by Christopher Harris, director of the School Library System for the Genesee Valley Educational Partnership in New York. Chris also serves as a Youth and Technology Fellow for the American Library Association’s Office for Information Technology Policy (OITP).]
Panelists at the Miles from Tomorrowland event.

“Miles from Tomorrowland,” a relatively new show from Disney Junior, has plotted a course to bring computer science to young viewers. Perhaps more importantly, the team behind the show has made intentional choices to encourage young girls to consider computer science and other science, technology, engineering, and mathematics (STEM) fields as great options for the future. The show has been developed as a collaborative effort between Disney, Google, and National Aeronautics and Space Administration (NASA). On Monday, the three organizations gathered at the Google offices in Washington, D.C. to showcase the success and think about future plans. But what does this have to do with libraries? Patience…first we must consider the backstory.

Following in the footsteps of the original Star Trek where creator Gene Roddenberry cast Nichelle Nichols as Lt. Uhura in one of the first professional roles for an African-American female, “Miles from Tomorrowland” showcases women in key positions. In the show, a family is travelling through space working for the Tomorrowland Transportation Authority. The captain of the family ship is the mother, and Miles’ big sister Loretta is the computer science whiz who guides Miles through his adventures and is always ready to save the day. What are we currently doing in our libraries—and what more might we be doing—to also highlight women in STEM and leadership fields?

Miles from Tomorrowland show. Photo by Disney.

Nichelle Nichols’ portrayal of Uhura on Star Trek was inspirational to many women in the 1970s. She was even hired by NASA to help bring more women and minorities into the space program; a mission that resulted in the recruitment of the first six women astronauts – including the very notable Sally Ride – in 1978.

The idea of having a female astronaut as captain of the ship in “Miles from Tomorrowland” was based on real-life astronaut Dr. Yvonne Cagle. Dr. Cagle, the second African-American woman selected by NASA for astronaut training, completed her training in 1998 and has been working as a strong advocate for getting more young women into STEM fields ever since.

The statistics are a bit frightening when you really stop and consider them. In 1984, one year after Sally Ride became the first American woman in space and at the height of Adm. Grace Hopper’s media coverage as a female computer scientist, 37 percent of those entering computer science fields were women, according to research from Google. Things have not improved; in fact, the situation has become quite alarming. In 2009, women made up only 18 percent of those pursuing computer science. Today, despite making up roughly half of the U.S. workforce, women fill less than 25 percent of STEM-related (pdf) jobs in the country.

“Miles from Tomorrowland” is hoping to change things. Creator Sascha Paladino spoke yesterday about the birth of his twin sons as inspiration for the show. He wanted his sons to see positive female role models in the captain and a big sister who used computer programming to solve problems. Quick…name three books from your library that feature female military commanders, female scientists, or female computer programmers.

For older readers, the first challenge is relatively easy thanks to some amazing science-fiction series like David Weber’s Honor Harrington or Elizabeth Moon’s (herself a former computer specialist with the Marines) Vatta and Serrano books. Octavia Butler’s work also stands out for both promoting women characters and highlighting people of color as protagonists. For younger readers? Margaret Bechard’s “Star Hatchling” and Madeleine L’Engle’s “A Wrinkle in Time” come to mind. But where are the mainstream series like Junie B. Jones or Cam Jansen where girls are shown being interested in and excelling at STEM fields?

What I took away from the event yesterday is that there is a huge amount of interest in encouraging young girls to consider STEM careers. Google, Disney, and NASA are working on this through “Miles from Tomorrowland” for very young viewers, but also in other ways. Google’s Made with Code project empowered girls as coders able to take over their state’s Christmas tree at the White House and program the lights.

What I also heard yesterday is that more work is needed. Two Congresswomen, Representative Susan Brooks (R-IN) and Representative Suzan DelBene (D-WA), are working to highlight the importance of STEM in schools and the need to encourage girls to go into STEM fields. They have co-sponsored an effort to have computer programming added to ESEA as a required field for K-12 education. Who better to lead this instructional effort than school librarians? Who better to support the early learning that can accompany efforts like “Miles from Tomorrowland,” and its extension beyond the classroom, than public librarians?

So then the question. The whole point of this post. Are you ready to step up and participate in this effort? Google’s research shows that one of the key contributing factors to young women considering a STEM field in college is encouragement from a parent or other adult (teacher, guidance counselor…librarian?). What will you do in your library to encourage girls and young women to become interested in STEM fields? How will you support their learning to help them build self-confidence as STEM experts? How will you help bridge the gender divide in STEM careers to better help the country fill the numerous current and future STEM field job openings?

How will your library and you as a librarian become a champion of STEM learning for all – but especially for the currently underrepresented such as women and minorities?


The post Libraries for Tomorrowland appeared first on District Dispatch.

The NDSR Boston Residents Reflect on their “20% Projects” / Library of Congress: The Signal

The following is a guest post by the entire group of NDSR-Boston residents as listed below. For their final posting, the residents present an overview of their individual professional development projects.

Rebecca Fraimow (WGBH)


One of the best things about this year’s NDSR in Boston  is the mandate to dedicate 20% of our time to projects outside of the specific bounds of our institution. Taking coursework, attending conferences, creating workshops — it’s all the kind of stuff that’s invaluable in the archival profession but is often hard to make time for on top of a full-time job, and I really appreciated that NDSR explicitly supported these efforts.

While I definitely took advantage of the time for my own personal professional development — investing time in Python and Ruby on Rails workshops and Harvard’s CopyrightX course, as well as presentations at AMIA, Code4Lib, Personal Digital Archives, NEA and NDSR-NE — the portion of my 20% that I’ve most appreciated is the opportunity to expand the impact of the program beyond the bounds of the immediate NDSR community. With the support of the rest of the Boston cohort, I partnered with my WGBH mentor, Casey Davis, to lead a series of workshops on handling audiovisual analog and digital material for students at the Simmons School of Library and Information Science. It was fantastic to get a chance to share the stuff I’ve learned with the next generation of archivists (and, who knows, maybe some of the next round of NDSR residents!).

As a cohort, we’ve also teamed up to design workflows and best practice documents for the History Project — a Boston-based, volunteer-run LGBT archive with a growing collection of digitized and born-digital items. This project is also, I think, a really great example of the ways that the program can make an impact outside of the relatively small number of institutions that host residents, and illustrates how valuable it is to keep expanding the circle of digital preservation knowledge.

Samantha Dewitt (Tufts University)


The NDSR residency has been a terrific experience for me, with the Tufts project proving to be a very good fit. Having been completely preoccupied with the subject of open science and Research Data Management in these past nine months, I am finding it hard to let go of the topic and I endeavor to continue working on this particular corner of the digital preservation puzzle. These days, data sharing and research data management frequently arise as topics of conversation in relation to research universities. Consequently, I had little trouble finding ways to add digital data preservation to my “20%” time. I looked forward to sharing the subject with my NDSR cohort whenever possible!

In November, our group attended a seminar on data curation at the Harvard-Smithsonian Center for Astrophysics. Several weeks later, I was able to meet with Dr. Micah Altman (MIT) to explore the subject of identifying and managing confidential information in research. Also in November, the Boston Library Consortium & Digital Science held a workshop at Tufts on Better Understanding the Research Information Management Landscape. Mark Hahnel, founder of Figshare, and Jonathan Breeze, CEO of Symplectic, spoke. This spring, Eleni Castro, research coordinator and data scientist at Harvard, met with our group to discuss the university’s new Dataverse 4.0 beta. Finally, in April, I was excited to be able to attend the Research Data Access and Preservation Summit in Minneapolis, MN. It has been a busy nine months!

Joey Heinen (Harvard Libraries)


The “20%” component of the National Digital Stewardship Residency is a great way for us to expand our interests, learn more about emerging trends and practices in the field and also to stay connected to any interests that might not align with our projects. My 20% involved a mixture of continuing education opportunities, organizing talks and tours and contributing to group projects which serve specific institutions or the field at large. For continuing education I learned some of the basics of Python programming through the Data Scientist Training for Librarians at Harvard.

For talks and tours, I organized visits to the Northeast Document Conservation Center (largely to learn about the IRENE Audio Preservation System) and to the Harvard Art Museum’s Registration and Digital Infrastructure and Emerging Technologies departments. I also co-organized an event entitled “Catching Waves: A Panel Discussion on Sustainable Digital Audio Delivery” (webex recording available soon on Harvard Library’s YouTube Channel). For developing resources, I participated in the AMIA/DLF 2014 Hack Day in a group that developed a tool for comparing the output of three A/V characterization tools (see the related blog post) and also designed digital imaging and audio digitization workflows for the History Project.

Finally, I participated in NDSR-specific panels at the National Digital Stewardship Alliance – Northeast meeting (NDSA-NE) and the Spring New England Archivists conference as well as individually at the recent American Institute for Conservation of Historic and Artistic Works conference. All in all I am pleased with the diversity of the projects and my level of engagement with both the local and national preservation communities. (As a project update, here is the most recent iteration of the Format Migration Framework (pdf)).

Tricia Patterson (MIT Libraries)


Two weeks left to go! And I ended up doing so much more than I initially anticipated during my residency. My project was largely focused on diagrammatically and textually documenting the low-level workflows of our digitization and digital preservation management processes, some of the results of which can be seen on the Digital Preservation Management workshop site. But beyond the core of the project, so much else was accomplished. I helped organize both an MIT host event and a field trip to the JFK Library and Museum for my NDSR compadres. Joey Heinen and I co-organized a panel on sustainable digital audio delivery, replete with stellar panelists from both MIT and Harvard. I collaborated with my NDSR peers on a side assignment for the History Project. I also shared my work with colleagues at many different venues: presenting at the New England Music Library Association, giving a brown bag talk at MIT, writing on our group blog, being accepted to present with my MIT colleagues at the International Association of Sound and Audiovisual Archives conference, and, in the final days of my residency, presenting at the Association for Recorded Sound Collections conference.

All in all, a lot has been crammed into nine brief months: engaging in hands-on experience, enhancing my technological and organizational knowledge, forging connections in the digital preservation community and beyond. It really ended up being a vigorous and dynamic catapult into the professional arena of digital preservation. Pretty indispensable, I’d say!

Jen LaBarbera (Northeastern University)


Though my project focused specifically on creating workflows and roadmaps for various kinds of digital materials, I found myself becoming more and more intrigued by the conceptual challenges of digital preservation for the digital humanities. Working on this project as part of a residency meant that I had some flexibility and was given the time and encouragement to pursue topics of interest, even if they were only indirectly related to my project at Northeastern University.

As a requirement of the residency, each resident had to plan and execute an event at their host institution, and we were given significant latitude to define that event. Instead of doing the standard tour and in-person demonstration of my work at Northeastern, Giordana Mecagni and I chose to reach out to some folks in our library-based Digital Scholarship Group to host a conversation exploring the intersections between digital preservation and digital humanities. The response from the Boston digital humanities and library community was fantastic; people were eager to dive into this conversation and talk about the challenges and opportunities presented in preserving the scholarly products of the still fairly new world of digital humanities. We had a stellar turnout from digital humanities scholars and librarians from all over the Boston area, from institutions within the NDSR Boston cohort and beyond. We didn’t settle on any concrete answers in our conversation, but we were able to highlight the importance of digital preservation within the digital humanities world.

My experience with NDSR Boston will continue to be informative and influential as I move on to the next step in my career, as the lead archivist at Lambda Archives of San Diego in sunny southern California. From the actual work on my project at Northeastern to the people we met through our “20%” activities – e.g. touring NEDCC, attending Rebecca’s AV archiving workshops at Simmons, working with the History Project to develop digital preservation plans and practices – I feel much more prepared to responsibly preserve and make available the variety of formats of digital material that will inevitably come my way in my new position at this LGBTQ community archive.

Developing and implementing a technical framework for interoperable rights statements / DPLA

Farmer near Leakey, holding a goat he has raised. Near San Antonio, 1973. National Archives and Records Administration. http://dp.la/item/234c76f4c6cc16488ddedbe69a7da297

Within the Technical Working Group of the International Rights Statements Working Group, we have been focusing our efforts on identifying a set of requirements and a technically sound and sustainable plan to implement the rights statements under development. Now that two of the Working Group’s white papers have been released, we realized it was a good time to build on the introductory blog post by our Co-Chairs, Emily Gore and Paul Keller. Accordingly, we hope this post provides a good introduction to our technical white paper, Recommendations for the Technical Infrastructure for Standardized International Rights Statements, and more generally, how our thinking has changed throughout the activities of the working group.

The core requirements

The Working Group realized early on that there was a need for a common namespace for rights statements in the context of national and international projects that aggregate cultural heritage objects. We saw the success of the work undertaken to develop and implement the Europeana Licensing Framework, but realized that a more general framework was needed so that it could be leveraged beyond the Europeana community. In addition, we established that there was a clear need to develop persistent, dereferenceable URIs that provide human- and machine-readable representations.

In non-technical terms, this identifies a number of specific requirements. First, the persistence requirement means that the URIs need to remain the same over time, so we can ensure that they can be accessed consistently over the long term. The “dereferenceability” requirement states that when we request a rights statement by its URI, we need to get a representation back for it, either human-readable or machine-readable depending on how it’s requested. For example, if a person enters a rights statement’s URI in their web browser’s address bar, they should get an HTML page in response that presents the rights statement’s text and more information. By comparison, if a piece of software or a metadata ingestion process requests the rights statement by its URI, it should get a machine-readable representation (say, using the linked data-compatible JSON-LD format) that it can interpret and reuse in some predictable way.
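The dereferenceability requirement described above boils down to HTTP content negotiation: the same rights-statement URI answers with HTML or JSON-LD depending on what the requester asks for. A toy version of that dispatch logic (this is only a sketch of the idea, not the working group's actual implementation, and real content negotiation also weighs Accept-header q-values):

```python
def choose_representation(accept_header: str) -> str:
    """Choose the representation served for a rights-statement URI
    based on the HTTP Accept header: machine-readable JSON-LD for
    software, HTML for everything else."""
    if "application/ld+json" in accept_header.lower():
        return "json-ld"
    return "html"  # browsers send text/html (or */*)

print(choose_representation("text/html,application/xhtml+xml;q=0.9"))  # html
print(choose_representation("application/ld+json"))  # json-ld
```

The key design property is that both answers describe the same resource at the same persistent URI, so a statement cited in metadata today still dereferences correctly years later.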

Beyond these requirements, we also identified the need for both the machine-readable representation to provide specific kinds of additional information where appropriate, such as the name of the statement, the version of the statement, and where applicable, the jurisdiction where the rights statement applies. Finally, and most importantly, we needed a consistent way to provide translations of these statements that met the above requirements for dereferenceability, since they are intended to be reused by a broad international community of implementers.

Data modeling and implementation

After some discussion, we decided the best implementation for these rights statements was to develop a vocabulary implemented using the Resource Description Framework (RDF) and the Simple Knowledge Organization System (SKOS) standards. These standards are broadly used throughout the Web, and are both used within the Europeana Data Model  and the DPLA Metadata Application Profile. We are also looking at the Creative Commons Rights Expression Language (ccREL) and Open Digital Rights Language (ODRL) models to guide our development. At this stage, we have a number of modeling issues still open, such as which properties to use for representing various kinds of human-readable documentation or providing guidance on how to apply community-specific constraints and permissions. Deciding whether (and how) rights statements can be extended in the future is also an intriguing point. We are looking for feedback on all these topics!
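To make the modeling concrete, here is what a machine-readable rights statement might look like as a SKOS concept serialized in JSON-LD, built as a plain dictionary. The URI, labels, and property choices are illustrative only, not the finalized vocabulary:

```python
import json

# A hypothetical rights statement modeled as a SKOS concept in
# JSON-LD: multilingual prefLabels plus a version, per the
# requirements discussed above. All values are made up.
statement = {
    "@context": {
        "skos": "http://www.w3.org/2004/02/skos/core#",
        "dcterms": "http://purl.org/dc/terms/",
    },
    "@id": "http://rightsstatements.org/vocab/Example/1.0/",
    "@type": "skos:Concept",
    "skos:prefLabel": [
        {"@value": "In Copyright", "@language": "en"},
        {"@value": "Urheberrechtlich geschützt", "@language": "de"},
    ],
    "dcterms:hasVersion": "1.0",
}

print(json.dumps(statement, indent=2, ensure_ascii=False))
```

Because SKOS labels carry language tags, translations satisfy the multilingual requirement without minting per-language URIs: one persistent identifier, many labels.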

As part of the process, we have been managing our draft implementation of the RightsStatements.org data model in a GitHub repository to ease collaboration across the technical subgroup. As the proposed rights statements are finalized following the discussion period on the two white papers, we will be working to provide a web server that hosts the machine-readable and human-readable versions of the rights statements in accordance with our requirements. To guide our implementation, we are building on the Best Practice Recipes for Publishing RDF Vocabularies, with a slight modification to better support the Working Group's multilingual requirements. Advice from the technical experts in our community on this approach is also highly welcome.
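One way to picture the multilingual requirement is as an extra negotiation step: inspecting the client's Accept-Language header before serving a translation. The sketch below illustrates that step only; the set of available languages and the fallback to English are invented for the example:

```python
# Sketch of a language-selection step layered on top of the Best
# Practice Recipes. The available translations are invented here; a
# real server would draw them from the published statements.

def pick_language(accept_language: str, available=("en", "de", "fr")) -> str:
    """Return the first requested language we can serve, falling back to English."""
    for part in accept_language.split(","):
        # Strip quality values ("de;q=0.9") and region subtags ("de-DE").
        lang = part.split(";")[0].strip().lower()[:2]
        if lang in available:
            return lang
    return "en"

print(pick_language("de-DE,en;q=0.8"))  # a German-preferring client
print(pick_language("pt-BR"))           # no Portuguese translation available
```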

The end of the public feedback period has been set for Friday, 26 June 2015, but the Technical Working Group will try to answer comments on the white paper on a regular basis, in the hope of fostering a continuous, healthy stream of discussion.

Acknowledgements

The technical work on implementing the rights statements has been deeply collaborative, and would not have been possible without the dedicated efforts of the members of the Technical Working Group:

  • Bibliothèque Nationale de Luxembourg: Patrick Peiffer
  • Digital Public Library of America: Tom Johnson and Mark Matienzo
  • Europeana Foundation: Valentine Charles and Antoine Isaac
  • Kennisland: Maarten Zeinstra
  • University of California San Diego: Esmé Cowles
  • University of Oregon: Karen Estlund
  • Florida State University: Richard J. Urban

“I’m Just Really Comfortable:” Learning at Home, Learning in Libraries / In the Library, With the Lead Pipe

BMCC student reading at home

Matteo, a Borough of Manhattan CC student, drew a picture of himself sitting on the sofa at home reading for his research paper.

In Brief: While commuter students may use their college or university libraries, student centers, or other campus locations for academic work, as commuters they will likely also create and negotiate learning spaces in their homes. Our research with urban commuter undergraduates revealed that finding space for their academic work at home was difficult for many students whose needs collided with the needs of other residents using those locations for non-academic purposes. Understanding the details of students’ off-campus academic workspaces can inform the design of learning spaces in academic libraries.

Jessica,1 a 20-year-old student at Hunter College in New York City, woke up at about 6:00 a.m. one spring day and got ready for school. She left her home in the Bronx where she lived with her mother and uncle and his family, including her three-year-old cousin, at around 7:00 a.m. She took a bus to the subway station, then rode two different subway trains down to Hunter’s 68th street main campus, switching from the express to the local. Jessica was attending college full-time and her four classes that day began at 8:10 a.m. and continued through to 4:00 p.m., with two ninety-minute breaks during which she did homework in the library, ate lunch in the cafeteria, and relaxed by using the computers in the library.

After her classes were over Jessica traveled back to the Bronx. Her commute home was via a different subway line and took slightly longer than in the morning; she arrived home just after 6:00 p.m. In the evening she helped her mother cook dinner, then watched a soap opera in her private bedroom until about 11:00 p.m. Jessica multi-tasked during this time, doing some of her homework while watching TV. At 11:00 p.m. she switched off the TV and began working on an essay for her theater class. Jessica went to sleep at around 2:00 a.m.; since she did not have any classes the following day, she “didn’t really mind staying up that late.”

Jessica had a laptop and did some of her schoolwork at home late into the evening. However, with all of the family members living together, especially her toddler cousin, her home was often noisy and she sometimes had difficulty doing her homework there, even though she had her own bedroom. When she was on campus, Jessica often spent the time between classes working on the second floor of the library, sometimes using a computer to review course materials or print out an essay; she did not always bring her laptop to campus as she sometimes had heavy textbooks to carry. Jessica’s home was located close to the New York Botanical Garden, which she considered a retreat: she often went there to do her academic work when the weather was mild.

• • • • •

Millions of U.S. college students, like Jessica, commute to their campus and classes rather than spend their undergraduate years in a residence hall. While commuter students may take advantage of their college or university libraries, student centers, or other campus locations for their academic work, as commuters it is likely that they need to engage in scholarly work in their homes as well. This article discusses research on ways that urban commuter college students create and negotiate learning spaces in their homes, complementing more traditional study spaces such as libraries. Understanding the details of students’ off-campus academic workspaces—the comforts and pitfalls, the benefits and constraints—can inform the design of learning spaces in libraries and on campus.

This research is part of a study we conducted of undergraduate scholarly habits at the City University of New York (CUNY). We interviewed students to learn about the locations where they do their academic work, their reasons for choosing those spaces, and their successes and challenges in using them. We met with students at six CUNY colleges: Brooklyn College, City College, and Hunter College, which offer Baccalaureate and Masters degrees; New York City College of Technology (City Tech), which offers Associate and Baccalaureate degrees; and Bronx Community College and Borough of Manhattan Community College (BMCC), which offer Associate degrees. CUNY is the largest urban public university in the U.S. and enrolls approximately 270,000 students in degree-granting programs at twenty-four undergraduate and graduate schools in the five boroughs of New York City. CUNY students reflect the astounding diversity of New York City, and 44.8% of undergraduates were first-generation college students in 2012 (CUNY Office of Institutional Research and Assessment, 2012a). While we did not collect detailed demographic information about the students we interviewed, we did learn a great deal about how each student’s particular living situation influenced their learning spaces, as will be shown below.

At each of the six colleges we interviewed thirty students using three ethnographic research methods: photo surveys, mapping diaries, and retrospective research process interviews.2 In the photo surveys, we gave each student a list of twenty objects or locations related to their lives as students, for example “your favorite place to study” or “a place in the library you don’t like,” and interviewed each student when they returned the photos to us. For the mapping diaries, we asked each student to log their activities, including time and location, over the course of one typical school day, and also to draw their progress through the day. We also conducted retrospective research process interviews with students in which we asked them to describe and draw the activities they engaged in as they completed work on a research assignment. Our visual and interview data illuminated much detail about students’ experiences as scholars outside the classroom.

CUNY is overwhelmingly a commuter university, and students spend much of their time away from campus at home, work, and other locations. Because commuting to campus was a time-consuming process, many students expressed a preference—or a need—to create their academic spaces in their homes rather than in the college library or other campus location. Some students told us that they chose to do their scholarly work at home because it was a familiar place, with all of the comforts to which they were accustomed. Others chose to complete their schoolwork at home because of external factors, for example, if their classes or job ended for the day after the college library was closed. Still others acknowledged the allure of customizing their study setting when doing scholarly work in their homes. However, many students noted constraints of space, noise level, or the activities of other residents while working at home. These students, like Jessica, struggled with the impact of the overlapping activities of others on their own desire to create a meaningful academic space, and engaged in a variety of strategies in an attempt to mitigate those effects.

Published research on strategies used by undergraduate commuter students to create learning spaces at home is slim. The Library Study at Fresno State used interviews and student photographs to delve into the details of students’ school days, and conducted interviews in student homes as well as the dormitories. Many of their findings were congruent with what we learned at CUNY: a student often needed to create a space for their scholarly work in a room or area not intended for or solely devoted to their academic work. One Fresno student complained about his noisy roommates, and others found themselves “impacted by space limitations, shared bedrooms, and younger siblings” (Delcore, Mullooly, & Scroggins, 2009, pp. 17, 21). However, the students interviewed at Fresno also spoke of rooms used for schoolwork in addition to the living room, kitchen, and bedrooms—for example, a crafts room, office, or garage (Delcore et al., 2009, pp. 15, 17). Most students at CUNY described much more constricted home spaces, reflecting the high population density of New York City and the predominance of apartment buildings over other types of residences.

Far more research, most of it quantitative, has been undertaken to ascertain the academic and social benefits and constraints for undergraduates who live in residence halls. Krause suggests that the “quality and nature of social interactions have a significant impact on university retention rates,” and that building relationships with peers can be more challenging for commuter students than for residential students (2007, p. 28). Shushok, Scales, Sriram, and Kidd report finding many advantages for undergraduates who live in dormitories on campus, including more participation in extracurricular activities, more frequent interactions with peers and faculty members, more positive perceptions of the campus climate, higher satisfaction with the college experience, greater personal growth and development, more effort and involvement in both the academic and social experiences of the college, and a higher rate of persistence and degree completion (2011, p. 14).

However, a recent review of the literature found that published studies yielded a mix of evidence of both advantages and disadvantages to college students living in residence halls. Sanchez (2012) undertook a predominantly survey-based study at the University of Nebraska-Lincoln to compare the experiences of first year students who lived in a residence hall with those who lived at home and commuted to campus. Students who enjoyed living in the dorm reported that it was convenient to campus and kept them connected to the university, yet others noted difficulties with roommates and excessive noise in the dorms. Students who lived at home mentioned their appreciation for the money saved as well as the familiarity of their homes, rooms, and families (also found by Green, 2012, p. 98), yet also noted the inconvenience of commuting to campus and the feeling of disconnection from college life.

The Advantages of Studying at Home

Since the vast majority of CUNY students live off-campus and commute to college, it is difficult to generalize the definition, physical layout, or social structure of home for all CUNY students. Almost all of the students we spoke with lived in one of the four more residential outer boroughs of New York City—The Bronx, Brooklyn, Staten Island, and Queens—either in an apartment or house.3 A small number lived in Manhattan or in one of the surrounding suburbs. High-rise apartment dwelling predominates in New York City; over half of all occupied residences are apartments in buildings with at least ten units (United States Census Bureau, 2012).

Surveys conducted by the university have gathered data on the household composition and family obligations of its students that can help us envision our students’ residences. While there are differences visible across the six colleges where we interviewed students, broadly speaking the household composition and family obligations for students at each of the campuses are similar: most CUNY undergraduates are unmarried and live with members of their family, most often parents or guardians. A minority of undergraduates support children with whom they may or may not live; this proportion is higher at the community colleges and the comprehensive college than at the senior colleges. No more than 11% of the undergraduates at CUNY colleges live alone, and 12% or less live with friends or roommates (CUNY Office of Institutional Research and Assessment, 2012b).

Most of the students we spoke with preferred to do their academic work at a library, usually their own college library, as we have discussed elsewhere (Regalado & Smale, forthcoming 2015). A similar study at the residential University of Rochester also revealed that more students chose to use the library for their academic work rather than work in their dorm rooms (Briden & Jones, 2013, p. 26). However, we did meet a sizable number—over a third of the students we interviewed—who most often chose to engage with their academic work at home. The reasons these students gave for choosing to create a learning space at home were varied and centered broadly on themes of comfort, customization, and, ultimately, familiarity.

The importance of a location with “a lot of comfort” to accomplish academic work was noted by most of the students who considered home, rather than the library or elsewhere, to be their preferred study location. Soft seating options emerged as a priority, in notable contrast to the straight-backed, unpadded chairs and other furniture often available for student use in the library. Though many academic libraries do offer upholstered armchairs, lounge chairs, or sofas, our observations indicate that these types of seating are in the minority at the six libraries we visited. CUNY libraries are typically highly space-constrained, and since soft seating tends to invite lounging and conversation, many of these libraries have opted not to include it in an attempt to maintain a quiet environment for studying students. Many of those students for whom comfort was a priority mentioned a preference for studying on a sofa or couch in their homes, most often in the living room or other common area of the apartment or house in which they lived.

A remarkable number of students indicated that their own bed was the preferred study location at home; numerous students photographed their beds for the prompts “your favorite place to study” or “a place at home where you study.” While a bed is certainly comfortable, given its association with sleep it is perhaps not the home location most conducive to work, as a couple of students pointed out to us during their interviews. Those students who preferred to study on their beds may also appreciate the privacy afforded to them in their bedrooms, even those who share a bedroom, though none mentioned that in our interviews.

Well, now that you’re talking about libraries, in general I prefer studying in my bed than at the library. Just because this library specifically doesn’t have many comfortable seats. [. . .] And I just really like to be comfortable when I’m studying.
– Samantha, Hunter College

The option to eat and drink while engaging in their academic work was also important to those students who liked to study at home, and seemed to fit well with their common desire for a comfortable workspace. For many, access to food and drink was a critical component of their academic workspace, and some preferred to study in the kitchen or dining room of their apartment or house. For these students the academic space was imposed on the food preparation and consumption space that typically predominates in those locations.

I love the kitchen. [. . .] 
Because it’s so close to food. The lighting is really bright, not right there, but the lighting is pretty, it’s like right on top so it hits me, right. [. . .] And, I dunno, just really the food. And it’s comfortable for me ‘cause I’m always in the kitchen eating, so it’s good for me.
– Maya, Hunter College

The opportunity to customize their academic workspace in the ways each found most conducive to study was also mentioned by many of the students we interviewed who chose to study primarily at home.4 While this customization took a variety of individualized forms for different students, it typically included reference to the ability to have all of their academic materials available to them and organized in the way they saw fit, rather than the smaller subset of items that each was able to carry to campus on any particular day. Student responses to the photo prompt “the place you keep your books and school materials” displayed a wide range of storage solutions. As expected, many students stored their academic materials on bookshelves or at their desks, both traditional locations for scholarly supplies, while a substantial number of student photos revealed under the bed and on a dresser as common storage places.

The ways that each student used their home as an academic workspace also included strategies to customize areas of the apartment or house that were not dedicated to academic activity. As noted above, many students studied in their kitchens or at a common dining table while eating and drinking. Especially in small apartments that may not have dining rooms or room for desks in bedrooms, the common dining table in the kitchen or living area may also be used by the residents of the house for many activities that require the use of a table and chairs. A Bronx CC student described studying at the kitchen table alongside her elementary school-aged child while they both did homework, a strategy that required them to move their academic supplies to another location during mealtimes. Perhaps the most unusual use of residential space for academic activities that we heard about during our research was shared by a Hunter College student who was studying for the MCAT entrance exam for medical school. She had created study guides for herself by writing the materials she needed to memorize for the test onto large sheets of poster board which she hung up in her family’s bathroom so that she could review that content easily in that location.

Bronx CC student's dining room table

Isabella, a Bronx CC student, shared her preferred workspace at home: the common table in her apartment. She had school-aged children and told us that they would all do their schoolwork together at the table.

Time was an important factor for students in considering their preferred learning spaces, and the opportunity to study at a time that was convenient to them was a high priority for students who preferred to do their scholarly work at home. Many students indicated that they simply did not have time to study during the day as school and work schedules did not allow for it. Some wished that their college library was open later—the libraries at the colleges we visited typically close by 11:00 p.m. during the regular semester, with longer hours during exam weeks. However, some students also noted that they did not feel comfortable commuting from campus to their homes late at night, which impacted their choice of study location even with longer library hours.

City College student's drawing of home

On the fifth page of her mapping diary Sarah, a City College student, drew some of the frequently mentioned comforts of home, including food, bed, and lighting.

Ultimately, the reasons cited by the students we met who showed a strong preference for doing their academic work at their apartment or house converged on the theme of familiarity. Home was a familiar place, comfortable and easy to customize, and always open with food and drink available. For these students, the familiarity of their homes gave them a sense of ease while studying that outweighed any potential negative aspects of creating learning spaces at their residence. Students expressed this to us in many different ways, all of which suggest that at home it was easier for them to get their schoolwork done without having to make adjustments based on location.

Yeah, I’d rather study at home. [. . .] It’s more comfortable, and it’s not a lot to adjust to, it’s just at home, you already know the environment.
–Jayden, City Tech

The Challenges of Academic Work at Home

While some students were able to successfully create an academic space in their homes, we spoke with others who were frustrated by their attempts to get scholarly work done at home in locations primarily associated with residential, family, or leisure activities.

For some students it was the physical limitations of home that constrained its use for academic purposes. We met students who did not have a desk to use for their schoolwork, perhaps the piece of furniture most often associated with the activities of a scholar. Other students we spoke with did not have access to a bookshelf, drawer, or other expected items of furniture to store their academic books, folders, and other supplies. A City Tech student noted that while she did not have a desk for herself in her room, there was a child-sized desk for her daughter there. A Brooklyn College student shared his school materials storage solution with us: the ottoman in his family’s living room, which opened up to reveal storage space.

The component of the residential landscape that seemed to impact most significantly on students’ creation of academic space was the presence of multiple other people engaged in a variety of activities. This affected each student in different, sometimes concurrent, ways. Some students lacked the privacy they desired to accomplish their academic work, perhaps sharing a bedroom or desk with one or more people. Indeed, the space constraints of city living emerged in the accounts of a number of students, in particular those whose personal and sleeping space was also a common living area, as one Bronx CC student notes about studying at home where the living room is also his bedroom.

I mean, it depends on . . . ’cause sometimes my cousin can have company coming in and out of the house. “What’s up bro? What’s up bro?” Every five minutes, you know? [. . .] Then sometimes it could be . . . I dunno. Just what I need. Sometimes I need to be at home. I need to study and to be outside the school. ‘Cause sometimes it’s even more so at the school, whereas at home there’s only four people that are in the house at any given moment.
–Marcus, Bronx CC

While some students were unaffected by competing activities in the kitchen or dining room at home, one Hunter College student revealed that the temptation to eat and drink whenever the impulse struck her drove her to study in the library instead. Indeed, none of the college libraries allowed food or included a café or other eating options within the library itself.

Interviewer: And if you had a choice would you prefer to study at home, or study here on campus somewhere?
Student: I would prefer to study on campus because I find that I’m much more productive on campus. The main reason is because the campus doesn’t have a refrigerator. I tend to have a lot of things to snack on when I’m doing work at home, so then I’m halfway through my work and I realize I have no snacks. Then I go to the kitchen, but then on the way to the kitchen I get distracted by other things . . . like the TV. And then, I’m like, “Huh, let me sit down and have my snack and watch TV,” and my work never gets done. But [on campus], since there’s no kitchen I can’t get distracted. Just sit at my desk and do what I do.
–Yasmin, Hunter College

Many students shared the information and communications technologies necessary for their academic work—most often desktop computers, laptops, or printers—with some or all of the residents of their homes. Students’ access to these technologies was thus sometimes impeded, and they occasionally had to schedule their academic work around their family members’ or roommates’ competing needs. Further, the noise produced by the non-academic activities of other members of the household—from televisions, video games, conversations, and other sources—was also noted by some students as a potential (and actual) distraction. Both students who had a private bedroom and those who did not mentioned that the members of their household engaged in activities that produced noise, sometimes to an extent that they found disruptive to their scholarly work. The need to negotiate use of home spaces is highlighted by a Brooklyn College student.

My house is very noisy, it’s a lot of noise, and the thing about it is if I can’t study on campus, I would go home and sit in this little corner [. . .] everybody knows that while I’m in the living room and I’m studying, they can’t come in and watch T.V.
–Damerae, Brooklyn College

Brooklyn College student living room

Damarae, a Brooklyn College student, photographed a small side table in his living room in response to the prompt, “your favorite place to study;” note the TV visible on the left.

Finally, because creating an academic space at home was often a challenge and a frustration for students who did not have a private room in their home, some students ended up using the college library even when it was not their first choice, though others made do at home. The challenge of finding an adequate place to create academic space was met creatively by one successful Bronx CC student we spoke with, who had several younger siblings at home, but the same solution would not work for many.

Interviewer: “A place at home where you study.” What is this place? . . . It looks like the hallway outside your apartment.
Student: Yeah, this is a hallway. You can see the other different apartments along the hall.
Interviewer: Why this place?
Student: Because it’s a quiet area. Sometimes when kids are running around, “Aaaaargh!” I need to find some quiet, so what I do is that I just sit in the hallway. It’s not uncomfortable; it’s pretty cool. You know, I know everyone on my floor, so, they’re like “Oh, you’re studying again!” But they admire me, you know, for putting so much hard work into my schoolwork.
–Audrey, Bronx CC

Bronx CC student's apartment hallway

Audrey, a Bronx CC student, took a photo of the hallway of her apartment building, where she sometimes does her academic work in order to avoid noisy younger siblings.

Conclusions: Home Away from Home

While many CUNY students preferred to study in the comfort of their own homes, for others the college library represented the best possible option for an effective study space. What can college and university libraries, and campuses more generally, learn from the experiences of the students we interviewed? One possibility is to consider initiatives that create a home-like or dormitory-like experience on campus for commuter students. Seattle University’s Collegium program provides “home-like environments in which commuter and transfer students can renew themselves between classes, meet with classmates and faculty in a relaxed setting, have conversations with friends, [and] enjoy a snack” (Seattle University, 2013). The Collegium program uses space on campus, not in the library, but perhaps library space could be converted to meet some of these needs, too.

Commuter students will likely always have the need to engage in some coursework at home, despite the distractions and constraints that many experience when they attempt to construct learning spaces there. While some students can be successful using the college library as a learning space, others cannot. To better accommodate commuter students within the bounds of the campus it is worthwhile to consider how to ensure that the library is an inviting location in which students can successfully accomplish their academic work. Indeed, recent research on student retention confirms the critical importance of “adequate facilities in the library, student center, and academic buildings for students to study, type papers, and make copies of course materials while on campus,” that “are open around times that classes are scheduled, including in the evenings and on weekends” (Braxton et al., 2014, p. 62).

College and university libraries that serve commuter students may consider adding features or services to specifically accommodate their needs. Lockers or lockable carrels that can be reserved by students may relieve them of the need to carry their readings and other course supplies back and forth between home and school. Study rooms or other spaces for solo (rather than group) use may provide the opportunity for students to spread out in the way that is most comfortable for them. Quiet workspace in college or library computer labs may also be useful to students and, if not possible, perhaps laptop or other device loans can fulfill students’ needs for academic access to computers.

It is important to note the tensions that may emerge when considering a comfortable space for commuter students to accomplish their academic work in the college or university library. Many institutions with large commuter populations may not have the physical space or the financial ability to create ample space for students to lounge, eat, relax, and otherwise unwind between their academic commitments. Because the library is generally considered to be a quiet, safe, and (we hope) inviting location on campus, the tendency to use any “comfortable space” in the library as a lounge will likely always be strong among students who truly need such a space. However, on campuses that experience extreme space constraints, as is true at some CUNY colleges, adding comfortable seating may encourage conversation and lead to conflict with students who rely on the library as a quiet place for their academic work. Additionally, while allowing food and drink in the library might be seen by some students as encouraging leisure activity, for others eating while studying may be an essential part of how they make their academic space work for them. Small libraries in large colleges must grapple with these tensions as they consider how best to configure their limited physical spaces.

Local conditions and student needs are likely to vary a great deal; thus even a brief user inquiry is valuable to ensure that decisions about how to shape campus spaces are based on actual student needs, not just those assumed by college faculty, staff, and administrators (cf. Foster & Gibbons, 2007). At City Tech we are using the results of this research as we plan to renovate one area of the library to provide more seating and study options for students. At Brooklyn College our research results have already informed small scale decisions such as the addition of express computers for email and quick printing without sign-in; they are also being brought to the table as we develop a strategic space plan, including a possible re-consideration of our no-food policy. As our research has shown, understanding the commuter student experience off-campus can provide invaluable data to inform the creation and maintenance of appropriate learning spaces for commuter students on campus.

Acknowledgements

This article was strengthened by the thoughtful and insightful feedback of our external peer reviewer Henry Delcore, our Lead Pipe peer reviewer Erin Dorney, and our Lead Pipe editor Ellie Collier. Many thanks for working with us! We would also like to thank the CUNY students and faculty we interviewed for their time and effort spent speaking with us, taking photos, and drawing their experiences. We appreciate their willingness to share and the opportunities that this project has provided us to make improvements in our libraries. This work has been supported in part by grants from The City University of New York PSC-CUNY Research Award Program as well as a CUNY Fellowship Leave.

References

Braxton, J. M., Doyle, W. R., Hartley, III, H. V., Hirschy, A. S., Jones, W. A., & McLendon, M. K. (2014). Rethinking college student retention. San Francisco: Jossey-Bass.

Briden, J., & Jones, S. (2013). Picture my work. In N. F. Foster (Ed.), Studying students: A second look (pp. 25-44). Chicago: Association of College and Research Libraries. http://hdl.handle.net/1802/28781

CUNY Office of Institutional Research and Assessment. (2012a). A Profile of Undergraduates at CUNY Senior and Community Colleges: Fall 2012. City University of New York. http://cuny.edu/about/administration/offices/ira/ir/data-book/current/student/ug_student_profile_f12.pdf

CUNY Office of Institutional Research and Assessment. (2012b). 2012 Student Experience Survey. City University of New York. http://www.cuny.edu/about/administration/offices/ira/ir/surveys/student/SES2012FinalReport.pdf

Delcore, H. D., Mullooly, J., & Scroggins, M. (2009). The Library Study at Fresno State. Fresno, CA: Institute of Public Anthropology, California State University. http://www.csufresno.edu/anthropology/ipa/thelibrarystudy.html

Foster, N. F., & Gibbons, S. (2007). Studying students: The undergraduate research project at the University of Rochester. Chicago: Association of College and Research Libraries. http://hdl.handle.net/1802/7520

Green, D. (2012). Supporting the academic success of Hispanic students. In L. M. Duke and A. D. Asher (Eds.), College libraries and student culture: What we now know (pp. 87-108). Chicago: American Library Association.

Krause, K. D. (2007). Social involvement and commuter students: The first-year student voice. Journal of the First-Year Experience & Students in Transition, 19(1), 27–45.

Mizrachi, D. (2011). How do they manage it? An exploratory study of undergraduate students in their personal academic information ecologies (Dissertation). University of California, Los Angeles.

Regalado, M., & Smale, M. A. (forthcoming, 2015). “I am more productive in the library because it’s quiet:” Commuter students in the college library. College & Research Libraries. http://crl.acrl.org/content/early/2015/02/05/crl14-696.full.pdf+html

Sanchez, S. E. (2012, April 18). Freshman year living arrangements and college experiences for local students (Masters Thesis). University of Nebraska-Lincoln, Lincoln, Nebraska. http://digitalcommons.unl.edu/cehsedaddiss/90

Seattle University. (2013). Collegia Program. http://www.seattleu.edu/ctsl/collegia/

Shushok, F., Scales, T. L., Sriram, R., & Kidd, V. (2011). A tale of three campuses: Unearthing theories of residential life that shape the student learning experience. About Campus, 16(3), 13–21. doi:10.1002/abc.20063 http://works.bepress.com/rishi_sriram/6/

U. S. Census Bureau. (2009). American Housing Survey (AHS): New York City. Retrieved May 20, 2013, from https://www.census.gov/housing/ahs/data/newyork.html

  1. All student names in this article are pseudonyms.
  2. Our complete research protocols, including interview questions and time log templates, are available on our project website.
  3. We did not explicitly ask students if they lived in apartments or houses. Somewhat confusingly, many New Yorkers tend to use the term “house” to refer to their home regardless of whether they live in an actual house or an apartment.
  4. Research has shown that students who live in residence halls also customize their academic workspace, sometimes extensively. Mizrachi (2011, pp. 240-256) discussed undergraduate UCLA students’ customization of their dorm rooms—especially their desks—to facilitate creation of their learning spaces.

Quality in HathiTrust (Re-Posting) / Library Tech Talk (U of Michigan)

Skew in a Google-digitized volume in HathiTrust

This is a re-posting of a HathiTrust blog post. HathiTrust receives well over a hundred inquiries every month about quality problems with page images or OCR text of volumes in HathiTrust. That’s the bad news. The good news is that in most of these cases, there is something HathiTrust can do about it. A new blog post is intended to shed some light on the thinking and practices about quality in HathiTrust.

019: Links Should Open in the Same Window / LibUX

Where should links open – and does it matter? In this episode of the podcast, we explore the implications on the net user experience of such a seemingly trivial preference.

Links

You can listen to LibUX on Stitcher, find us on iTunes, or subscribe to the straight feed. Consider signing up for our weekly newsletter, the Web for Libraries.

The post 019: Links Should Open in the Same Window appeared first on LibUX.

Fedora 4 Project Update IV / Islandora

As the project entered the fourth month, work continued on migration planning and mapping, migration-utils, and Drupal integration.

Migration work was split between working on migration-utils, migration mappings, data modeling (furthering Portland Common Data Model compliance), and working with the Islandora (Fedora 4 Interest Group), Fedora (Fedora Tech meetings), and Hydra (Hydra Metadata Working Group) communities on the preceding items. In addition, the Audit Service, a key requirement of an Islandora community fcrepo3-to-fcrepo4 migration, finished the second phase of the project. Community stakeholders are currently reviewing it and providing feedback.

Work on migration-utils focused mainly on applying a number of mappings (outlined here) to the utility, adding support for object-to-object linking, and providing documentation on how to use the utility. This work can be demonstrated by building the Islandora 7.x-2.x Vagrant Box, cloning the migration-utils repository, and pointing migration-utils at a fcrepo3 native filesystem or a directory of exported FOXML.

As for object modeling and inter-community work, an example of this work is the below image of a sample Islandora Large Image object modeled in the Portland Common Data Model. This model will continue to evolve as the communities work together in the various Hydra Metadata Working Group sub-working groups.

Islandora-PCDM-Fedora4.jpg

On the Drupal side of things, work was started on Middleware Services, a middleware layer that will use the Fedora 4 REST API and the Drupal Services modules to provide an API for the majority of interactions between the two systems. In addition, a few Drupal modules have been created to leverage this: islandora_basic_image, islandora_collection, and islandora_dcterms.

In addition, the team has been exploring options for RDF integration and support in Drupal, as well as how to handle editing (Islandora XML Forms) the various descriptive metadata schemas the community uses. This is captured in a few issues in the issue queue: #27 and #28. Due to the importance of the issue, a special Fedora 4 Interest Group meeting was held to discuss how to proceed with this functionality in Islandora 7.x-2.x. The group's consensus was to solicit use cases from the community to better understand how to proceed with 7.x-2.x.

Work will continue on the migration and Drupal sides of the project into May.

How Google Crawls Javascript / David Rosenthal

I started blogging about the transition the Web is undergoing from a document to a programming model, from static to dynamic content, some time ago. This transition has very fundamental implications for Web archiving; what exactly does it mean to preserve something that is different every time you look at it? Not to mention the vastly increased cost of ingest, because executing a program takes far more computation, potentially an unlimited amount, than simply parsing a document.

The transition has big implications for search engines too; they also have to execute rather than parse. Web developers have a strong incentive to make their pages search engine friendly, so although they have enthusiastically embraced Javascript they have often retained a parse-able path for search engine crawlers to follow. We have watched academic journals adopt Javascript, but so far very few have forced us to execute it to find their content.

Adam Audette and his collaborators at Merkle | RKG have an interesting post entitled We Tested How Googlebot Crawls Javascript And Here’s What We Learned. It is aimed at the SEO (Search Engine Optimization) world but it contains a lot of useful information for Web archiving. The TL;DR is that Google (but not yet other search engines) is now executing the Javascript in ways that make providing an alternate, parse-able path largely irrelevant to a site's ranking. Over time, this will mean that the alternate paths will disappear, and force Web archives to execute the content.
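The archiving implication can be made concrete with a crude heuristic: check whether the text you care about appears in the static HTML a parser would see, or only materializes after script execution. The sketch below uses only Python's standard library; the pages and phrase in the usage note are invented for illustration, and real crawl triage is of course far more involved than this.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the text present in static markup, ignoring script bodies."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script:
            self.chunks.append(data)

def visible_in_static_html(html, phrase):
    """True if the phrase is present without executing any Javascript."""
    parser = TextExtractor()
    parser.feed(html)
    return phrase in " ".join(parser.chunks)
```

A page that only emits its content via `document.write` (or a client-side framework) fails this check even though a browser, or Googlebot, would see the text, which is exactly the gap an executing archive crawler has to close.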

Ending “bulk collection” of library records on the line in looming Senate vote / District Dispatch

Man peers through American flag as though it is a window

Image Source: PolicyMic

Last week, the House of Representatives voted overwhelmingly, 338 to 88, for passage of the latest version of the USA FREEDOM Act, H.R. 2048. The bill — and the battle to achieve the first meaningful reform of the USA PATRIOT Act since it was enacted 14 years ago — now shifts to the Senate. There, the outcome may well turn on the willingness of individual voters to overwhelm Congress with demands that USA FREEDOM either be passed without being weakened, or that the now infamous “library provision” of the PATRIOT Act (Section 215) and others slated for expiration on June 1 simply be permitted to “sunset” as the Act provides if Congress takes no action. Now is the time for all librarians and library supporters — for you — to send that message to both of your US Senators. Head to the action center to find out how.

For the many reasons detailed in yesterday’s post, ALA and its many private and public sector coalition partners have strongly urged Congress to pass the USA FREEDOM Act of 2015 without weakening its key, civil liberties-restoring provisions. Already a finely-tuned compromise that delivers fewer privacy protections than last year’s Senate version of the USA FREEDOM Act, this year’s bill simply cannot sustain further material dilution and retain ALA’s (and many other groups’) support. The Obama Administration also officially endorsed and called for passage of the bill.

Unfortunately, the danger of the USA FREEDOM Act being blocked entirely or materially weakened is high. The powerful leader of the Senate, Mitch McConnell of Kentucky, is vowing to bar consideration of H.R. 2048 and, instead, to provide the Senate with an opportunity to vote only on his own legislation (co-authored with the Chair of the Senate Intelligence Committee) to reauthorize the expiring provisions of the PATRIOT Act with no privacy-protecting or other changes whatsoever. Failing the ability to pass that bill, Sen. McConnell and his allies have said that they will seek one or more short-term extensions of the PATRIOT Act’s expiring provisions.

Particularly in light of last week’s ruling by a federal appellate court that the government’s interpretation of its “bulk collection” authority under Section 215 was illegally broad in all key respects, ALA and its partners from across the political spectrum vehemently oppose any extension without meaningful reform of the USA PATRIOT Act of any duration.

The looming June 1 “sunset” date provides the best leverage since 2001 to finally recalibrate key parts of the nation’s surveillance laws to again respect and protect library records and all of our civil liberties. Please, contact your Senators now!

Additional Resources

House Judiciary Committee Summary of H.R. 2048

Statement of Sen. Patrick Leahy, lead sponsor of S. 1123 (May 11, 2015)

Open Technology Institute Comparative Analysis of select USA FREEDOM Acts of 2014 and 2015

“Patriot Act in Uncharted Legal Territory as Deadline Approaches,” National Journal (May 10, 2015)

“N.S.A. Collection of Bulk Call Data Is Ruled Illegal,” New York Times (May 7, 2015)

The post Ending “bulk collection” of library records on the line in looming Senate vote appeared first on District Dispatch.

Call for Writers / LITA

blogger memememe courtesy of Michael Rodriguez

The LITA blog is seeking regular contributors interested in writing easily digestible, thought-provoking blog posts that are fun to read (and hopefully to write!). The blog showcases innovative ideas and projects happening in the library technology world, so there is a lot of room for contributor creativity. Possible post formats could include interviews, how-tos, hacks, and beyond.

Any LITA member is welcome to apply. Library students and members of underrepresented groups are particularly encouraged to apply.

Contributors will be expected to write one post per month. Writers will also participate in peer editing and conversation with other writers – nothing too serious, just be ready to share your ideas and give feedback on others’ ideas. Writers should expect a time commitment of 1-3 hours per month.

Not ready to become a regular writer but you’d like to contribute at some point? Just indicate in your message to me that you’d like to be considered as a guest contributor instead.

To apply, send an email to briannahmarshall at gmail dot com by Friday, May 29. Please include the following information:

  • A brief bio
  • Your professional interests, including 2-3 example topics you would be interested in writing about
  • If possible, links to writing samples, professional or personal, to get a feel for your writing style

Send any and all questions my way!

Brianna Marshall, LITA blog editor

EFF chief to keynote Washington Update session at Annual Conference / District Dispatch

Cindy Cohn, Legal Director and General Counsel for the EFF. Photographed by Erich Valo.


For decades, the Electronic Frontier Foundation (EFF) and the American Library Association (ALA) have stood shoulder to shoulder on the front lines of the fight for privacy online, at the library and in many other spheres of our daily lives. EFF Executive Director Cindy Cohn will discuss that proud shared history and the uncertain future of personal privacy during this year’s 2015 ALA Annual Conference in San Francisco. The session, titled “Frenetic, Fraught and Front Page: An Up-to-the-Second Update from the Front Lines of Libraries’ Fight in Washington,” takes place from 8:30 to 10:00 a.m. on Saturday, June 27, 2015, at the Moscone Convention Center in room 2001 of the West building.

Before becoming EFF’s Executive Director in April of 2015, Cohn served as the award-winning group’s legal director and general counsel from 2000–2015. In 2013, the National Law Journal named Cohn one of the 100 most influential lawyers in America, noting: “If Big Brother is watching, he better look out for Cindy Cohn.” In 2012, the Northern California Chapter of the Society of Professional Journalists awarded her the James Madison Freedom of Information Award.

During the conference session, Adam Eisgrau, managing director of the ALA Office of Government Relations, will provide up-to-the-minute insight from the congressional trenches on key federal privacy legislation “in play,” including the current status of efforts to reform the USA PATRIOT Act and the Freedom of Information Act (FOIA), as well as copyright reform, net neutrality, and federal library funding. Participants will have the opportunity to pose questions to the speakers.

Speakers

  • Cindy Cohn, executive director, Electronic Frontier Foundation
  • Adam Eisgrau, managing director, Office of Government Relations, American Library Association

View all ALA Washington Office conference sessions

The post EFF chief to keynote Washington Update session at Annual Conference appeared first on District Dispatch.

Finish Off Your Digital Preservation To-do List with ArchivesDirect / DuraSpace News

Winchester, MA  Everyone has a different set of priorities when it comes to planning for digital preservation. Here are some examples of items that might appear on a typical digital preservation to-do list:

1. leverage hosted online service to manage preservation process

2. apply different levels of preservation to different types of content

3. do more than back up content on spare hard drives

4. keep copies in multiple locations

5. make sure content remains viable

Lucidworks Fusion 1.4 Now Available / SearchHub

We’ve just released Lucidworks Fusion 1.4! This version is a Short-Term Support, or “preview,” release of Fusion. There are a lot of new features in this version. Some of the highlights:

Security

Fusion has always provided fine-grained security control on top of Solr. In version 1.4, we’ve significantly enhanced our integration with enterprise security systems.

Kerberos

We now support setting up Fusion as a Kerberos-protected service. You will be able to authenticate to Kerberos in your browser or API client, and instead of providing a password to Fusion, Fusion will validate you and allow (or disallow) access via Kerberos mechanisms.

LDAP Group Mapping

We’ve enriched our LDAP directory integration. In the past, we’ve been able to use LDAP to authenticate users and perform document-level security trimming. We can now additionally determine the user’s LDAP group memberships, and use those memberships to assign them to Fusion roles.

Alerting

We’ve introduced pipeline stages to send alerts, one each in the indexing and query pipelines. With these stages, you can send emails or Slack messages in response to documents passing through those pipelines. Emails are fully templated, so you can customize the content and include data from matching documents. And you’ll soon also be able to add other alerting methods besides email and Slack. A simple use for these is to set up notifications whenever a document matching a set of queries is crawled. Look for a post from our CTO (who wrote the code!) published here for more info on using alerting.

Logstash Connector

We add new connector integrations to Fusion all the time, but the Logstash Connector deserves special note. For those of you collecting machine data, it’s been possible to configure Logstash to ship logs to a Fusion pipeline or Solr index. The new Fusion Logstash Connector does this too, but makes it easier to install, configure, and manage. We include an embedded Logstash installation, so that you can start, stop, and edit your Logstash configuration right from the Fusion Datasource Admin GUI. You can use any standard Logstash plugin (including the network listeners, file tailing inputs, grok filter, or other format filters), and Fusion will automatically send the results into Fusion. There, you can do further Fusion pipeline processing, simple field mappings, or just index straight into Solr.

Apache Spark

Fusion now includes Apache Spark and the ability to use Spark to run complex analytic jobs. For now, the Fusion event aggregations and signals extractions can run in Spark for faster processing. In future releases, we expect to allow you to write and run more types of jobs in Spark, taking advantage of any of Spark’s powerful features and rich libraries.

Solr 5.x

As of Fusion 1.4, we officially support running Fusion against Solr 5.x clusters. We will still ship with an embedded Solr 4.x installation until we have validated repackaging and upgrades for existing Fusion/Solr 4.x customers, but new customers are free to install Solr 5.x, start it up in the SolrCloud cluster mode (bin/solr start -c), and use Fusion and all Fusion features with the new version.

As you can see, we’re quickly adding new capabilities to Fusion and these latest features are just a preview of what’s on the way. Stay tuned for much more! Download Lucidworks Fusion, read the release notes, or learn more about Lucidworks Fusion.
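Since Fusion sits on top of standard Solr, the flavor of request it brokers is easy to sketch. The snippet below simply builds a URL for Solr's standard select handler; the host, collection name, and query are hypothetical, and a real Fusion deployment would route requests through Fusion's own API and security layers rather than hitting Solr directly.

```python
from urllib.parse import urlencode

def solr_select_url(base, collection, query, rows=10):
    """Build a URL for Solr's standard /select handler.

    base: Solr root, e.g. "http://localhost:8983/solr" (hypothetical host).
    """
    params = urlencode({"q": query, "wt": "json", "rows": rows})
    return f"{base}/{collection}/select?{params}"

# e.g. search a (hypothetical) "logs" collection for error-level events:
url = solr_select_url("http://localhost:8983/solr", "logs", "level:ERROR")
```

The point of a layer like Fusion is that the same query can be intercepted for security trimming, pipeline processing, or alerting before it ever reaches an index URL like this.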

The post Lucidworks Fusion 1.4 Now Available appeared first on Lucidworks.

Yahoo YBoss spell suggest API significantly increases pricing / Jonathan Rochkind

For a year or two, we’ve been using the Yahoo/YBoss/YDN Spelling Service API to provide spell suggestions for queries in our homegrown discovery layer. (Which provides UI to search the catalog via Blacklight/Solr, as well as an article search powered by EBSCOHost api).

It worked… well enough, despite doing a lot of odd and wrong things. But mainly it was cheap: $0.10 per 1000 spell suggest queries, according to this cached price sheet from April 24, 2015.

However, I got an email today saying that they are ‘simplifying’ their pricing by charging for all “BOSS Search API” services at $1.80 per 1000 queries, starting June 1.

That’s an 18x increase. Previously we paid about $170 a year for spell suggestions from Yahoo, peanuts, worth it even if it didn’t work perfectly. That’s 1.7 million queries for $170, pretty good. (Honestly, I’m not sure if it’s still making queries it shouldn’t be, in response to something other than user input. For instance, we try to suppress spell check queries when paging through an existing result set, but perhaps don’t do it fully.)

But 18x $170 is $3060.  That’s a pretty different value proposition.
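The arithmetic, as a sanity check (volume and rates as quoted above):

```python
def annual_cost(queries_per_year, dollars_per_1000):
    """Yearly API cost at a flat per-1000-queries rate."""
    return round(queries_per_year / 1000 * dollars_per_1000, 2)

old_cost = annual_cost(1_700_000, 0.10)  # the old YBoss spell suggest rate
new_cost = annual_cost(1_700_000, 1.80)  # the new flat BOSS Search API rate
```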

Anyone know of any decent, cheap spell suggest APIs? It looks like maybe Microsoft Bing has a poorly documented one. Not sure.

Yeah, we could roll our own in-house spell suggestion based on a local dictionary or corpus of some kind: aspell, or Solr’s built-in spell suggest service based on our catalog corpus. But we don’t only use this for searching the catalog, and even for the catalog I previously found that these web-search-based APIs provided better results than a local-corpus-based solution. The local solutions seemed to false-positive (provide a suggestion when the original query was ‘right’) and false-negative (refrain from providing a suggestion when it was needed) more often than the web-based APIs. As well, of course, as being more work for us to set up and maintain.
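For what it's worth, a local-corpus approach can be prototyped in a few lines of standard-library Python. This sketch uses difflib rather than aspell or Solr, the word list is a stand-in for a real dictionary, and the cutoff is exactly the knob behind the false-positive/false-negative tradeoff described above: raise it and you suggest less often, lower it and you suggest more aggressively.

```python
import difflib

# Stand-in for a real dictionary or local corpus vocabulary.
DICTIONARY = ["sense", "sensibility", "monsters", "library", "catalog"]

def suggest(word, dictionary=DICTIONARY, cutoff=0.8):
    """Return the closest dictionary term, or None if nothing clears the cutoff.

    A higher cutoff means fewer false positives (suggesting when the query
    was fine) but more false negatives (staying silent when a fix was needed).
    """
    matches = difflib.get_close_matches(word.lower(), dictionary, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

With this toy vocabulary, `suggest("catalgo")` returns `"catalog"`, while a word with no near neighbor returns `None`, i.e. no suggestion is offered.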



What’s up with this Jane-athon stuff? / Manage Metadata (Diane Hillmann and Jon Phipps)

The RDA Development Team started talking about developing training for the ‘new’ RDA, with a focus on the vocabularies, in the fall of 2014. We had some notion of what we didn’t want to do: we didn’t want yet another ‘sage on the stage’ event, we wanted to re-purpose the ‘hackathon’ model from a software focus to data creation (including a major hands-on aspect), and we wanted to demonstrate what RDA looked like (and could do) in a native RDA environment, without reference to MARC.

This was a tall order. Using RIMMF for the data creation was a no-brainer: the developers had been using the RDA Registry to feed new vocabulary elements into their software (effectively becoming the RDA Registry’s first client), and were fully committed to FRBR. Deborah Fritz had been training librarians and others on RIMMF for years, gathering feedback and building enthusiasm. It was Deborah who came up with the Jane-athon idea, and the RDA Development group took it and ran with it. Using the Jane Austen theme was a brilliant part of Deborah’s idea. Everybody knows about JA, and the number of spin-offs, rip-offs and re-tellings of the novels (in many media formats) made her work a natural for examining why RDA and FRBR make sense.

One goal stated everywhere in the marketing materials for our first Jane outing was that we wanted people to have fun. All of us have been part of the audience and on the dais for many information sessions, for RDA and other issues, and neither position has ever been much fun, useful as the sessions might have been. The same goes for webinars, which, as they’ve developed in library-land, tend to be dry, boring, and completely bereft of human interaction. And there was a lot of fun at that first Jane-athon–I venture to say that 90% of the folks in the room left with smiles and thanks. We got an amazing response to our evaluation survey, and the preponderance of responses were expansive, positive, and clearly designed to help the organizers to do better the next time. The various folks from ALA Publishing who stood at the back and watched the fun were absolutely amazed at the noise, the laughter, and the collaboration in evidence.

No small part of the success of Jane-athon 1 rested with the team leaders at each table, and the coaches going from table to table helping out with puzzling issues, ensuring that participants were able to create data using RIMMF that could be aggregated for examination later in the day.

From the beginning we thought of Jane 1 as the first of many. In the first flush of success as participants signed up and enthusiasm built, we talked publicly about making it possible to do local Jane-athons, but we realized that our small group would have difficulty doing smaller events with less expertise on site to the same standard we set at Jane-athon 1. We had to do a better job in thinking through the local expansion and how to ensure that local participants get the same (or similar) value from the experience before responding to requests.

As a step in that direction, CILIP in the UK is planning an Ag-athon on May 22, 2015, which will add much to the collective experience as well as to the data store that began with the first Jane-athon and will be an increasingly important factor as we work through the issues of sharing data.

The collection and storage of the Jane-athon data was envisioned prior to the first event, and the R-Balls site was designed as a place to store and share RIMMF-based information. Though a valuable step towards shareable RDA data, rballs have their limits. The data itself can be curated by human experts or available with warts, depending on the needs of the user of the data. For the longer term, RIMMF can output RDF statements based on the rball info, and a triple store is in development for experimentation and exploration. There are plans to improve the visualization of this data and demonstrate its use at Jane-athon 2 in San Francisco, which will include more about RDA and linked data, as well as what the created data can be used for, in particular, for new and improved services.

So, what are the implications of the first Jane-athon’s success for libraries interested in linked data? One of the biggest misunderstandings floating around libraryland in linked data conversations is that it’s necessary to make one and only one choice of format, and eschew all others (kind of like saying that everyone has to speak English to participate in LOD). This is not just incorrect, it’s also dangerous. In the MARC era, there was truly no choice for libraries–to participate in record sharing they had to use MARC. But the technology has changed, and rapidly evolving semantic mapping strategies [see: dcpapers.dublincore.org/pubs/article/view/3622] will enable libraries to use the most appropriate schemas and tools for creating data to be used in their local context, and others for distributing that data to partners, collaborators, or the larger world.

Another widely circulated meme is that RDA/FRBR is ‘too complicated’ for what libraries need; we’re encouraged to ‘simplify, simplify’ and assured that we’ll still be able to do what we need. Hmm, well, simplification is an attractive idea, until one remembers that the environment we work in, with evolving carriers, versions, and creative ideas for marketing materials to libraries, is getting more complex than ever. Without the specificity to describe what we have (or have access to), we push the problem out to our users to figure out on their own. Libraries have always tried to be smarter than that, and that requires “smart”, not “dumb”, metadata.

Of course, the corollary to the ‘too complicated’ argument is the notion that a) we’re not smart enough to figure out how to do RDA and FRBR right, and b) complex means more expensive. I refuse to give space to a), but b) is an important consideration. I urge you to take a look at the Jane-athon data and consider the fact that Jane Austen wrote very few novels, but they’ve been re-published with various editions, versions and commentaries for almost two centuries. Once you add the ‘based on’, ‘inspired by’ and the enormous trail created by those trying to use Jane’s popularity to sell stuff (“Sense and Sensibility and Sea Monsters” is a favorite of mine), you can see the problem. Think of a pyramid with a very expansive base and a very sharp point, and consider that in RDA the works at the point, which everything at the bottom wants to link to, don’t require repeating the description of each novel every time. And we’re not adding notes to descriptions that are based on the outdated notion that the only use for information about the relationship between “Sense and Sensibility and Sea Monsters” and Jane’s “Sense and Sensibility” is a human being who looks far enough into the description to read the note.
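The pyramid point can be made concrete in linked-data terms: the derivative work carries a single machine-actionable link to the original instead of a repeated (or note-buried) description. Here is a toy sketch that emits one such link in N-Triples syntax; the URIs and the "basedOn" property are hypothetical placeholders for illustration, not actual RDA Registry elements.

```python
def ntriple(subject, predicate, obj):
    """Serialize one RDF triple in N-Triples syntax (URI terms only)."""
    return f"<{subject}> <{predicate}> <{obj}> ."

# Hypothetical URIs, invented for this example.
WORKS = "http://example.org/work/"
BASED_ON = "http://example.org/rel/basedOn"

link = ntriple(WORKS + "sea-monsters", BASED_ON, WORKS + "sense-and-sensibility")
```

One triple is the entire cost of relating the mash-up to Jane's work; the description of the original lives in exactly one place, no matter how wide the base of the pyramid grows.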

One of the big revelations for most Jane-athon participants was to see how well RIMMF translated legacy MARC records into RDA, with links between the WEM levels and others to the named agents in the record. It’s very slick, and most importantly, not lossy. Consider that RIMMF also outputs in both MARC and RDF, and you see something of a missing link (if not the Golden Gate Bridge :-).

Not to say there aren’t issues to be considered with RDA as with other options. There are certainly those, and they’ll be discussed at the Jane-In in San Francisco as well as at the RDA Forum on the following day, which will focus on current RDA upgrades and the future of RDA and cataloging. (More detailed information on the Forum will be available shortly).

Don’t miss the fun, take a look at the details and then go ahead and register. And catalogers, try your best to entice your developers to come too. We’ll set up a table for them, and you’ll improve the conversation level at home considerably!

Tech Yourself Before You Wreck Yourself – Volume 6 / LITA

What’s new with you TYBYWYers? I’m sure you’ve been setting the world on fire with your freshly acquired tech skills. You’ve been pushing back the boundaries of the semantic web. Maybe the rumors are true and you’re developing a new app to better serve your users. I have no doubt you’re staying busy.

If you’re new to Tech Yourself, let me give you a quick overview. Each installment, produced monthly-ish, offers a curated list of tools and resources for library technologists at all levels of experience. I focus on webinars, MOOCs, and other free/low-cost options for learning, growing, and increasing tech proficiency. Welcome!

Worthwhile Webinars:

Texas State Library and Archives – Tech Tools With Tine – One Hour of Arduino – May 29, 2015 – I’ve talked about this awesome ongoing tech orientation series before, and this installment on Arduino promises to be an exciting time!

TechSoup for Libraries – Excel at Everything! (Or At Least Make Better Spreadsheets) – May 21, 2015 – I will confess I am obsessed with Excel, and so I take every free class I find on the program. Hope to see you at this one!

Massachusetts Library System – Power Searching: Databases and the Hidden Web – May 28, 2015 – Another classic topic, and worth revisiting!

I Made This:

LYRASIS – LYRASIS eGathering – May 20th, 2015

Shameless self-promotion, but I’m going to take three paragraphs to draw your attention to an online conference which I’ve organized. I know! I am proud of me too.

eGathering 2015

But not as proud as I am of the impressive and diverse line-up of speakers and presentations that comprise the 2015 eGathering. The event is free, online, and open to you through the generosity of LYRASIS members. Register online today and see a keynote address by libtech champion Jason Griffey, followed by 6 workshop/breakout sessions, one of which is being hosted by our very own LITA treasure, Brianna Marshall. Do you want to learn about UX from experts Amanda L. Goodman and Michael Schofield? Maybe you’re more interested in political advocacy and the library from EveryLibrary’s John Chrastka? We have a breakout session for you.

Register online today! All registrants will receive an archival copy of the complete eGathering program following the event. Consider it my special gift to you, TYBYWYers.

Tech On!

TYBYWY will return June 19th!

A DPLA of Your Very Own / DPLA

This guest post was written by Benjamin Armintor, Programmer/Analyst at Columbia University Libraries and a 2015 DPLA + DLF Cross-Pollinator Travel Grant awardee.

I work closely with the Hydra and Blacklight platforms in digital library work, and have followed the DPLA project with great interest as a potential source of data to drive Blacklight sites. I think of frameworks like Blacklight as powerful tools for exploring what can be done with GLAM data and resources, but it’s difficult to get started without data and resources to point them at. I had experimented with mashups of OpenLibrary data and public domain MARC cataloging, but the DPLA content was uniquely rich and varied, had a well-designed API, and carried with it a decent chance that an experimenter would be affiliated with some of the entries in the index.

Blacklight was designed to draw its data from Solr, but the DPLA API itself is so close to a NoSQL store that it seemed like a natural fit to the software. Unfortunately, it’s hard to make time for projects like that, and as such the DPLA+DLF Cross-Pollinator travel grant was a true boon.
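To make the “natural fit” concrete: the DPLA Items API returns JSON documents much as a NoSQL document store would. Here is a minimal sketch of querying it, assuming only the public v2 `items` endpoint with its `q` and `api_key` parameters; the helper function names are my own, not part of any DPLA or Blacklight library.

```python
# Minimal sketch of querying the DPLA Items API (v2).
# The endpoint and the q / api_key / page_size parameters are part of the
# public DPLA API; the helper names below are illustrative only.
import json
import urllib.request
from urllib.parse import urlencode

DPLA_ITEMS_ENDPOINT = "https://api.dp.la/v2/items"

def build_items_url(query, api_key, page_size=10):
    """Build a keyword-search URL against the Items endpoint."""
    params = urlencode({"q": query, "api_key": api_key, "page_size": page_size})
    return f"{DPLA_ITEMS_ENDPOINT}?{params}"

def search_items(query, api_key):
    """Fetch matching items; the response body is JSON with a 'docs' array,
    each entry a document describing one DPLA item."""
    with urllib.request.urlopen(build_items_url(query, api_key)) as resp:
        return json.load(resp)["docs"]
```

A Blacklight-style adapter would map each returned document onto the fields Blacklight normally expects from a Solr response, which is essentially what the conference demo described below did.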

Attending DPLAfest afforded me a unique opportunity to work with the DPLA staff on a project to quickly build a Blacklight site against DPLA data, and thanks to their help and advice I was able to push along a Blacklight engine that incorporated keyword and facet searches and thumbnail images of the entire DPLA corpus—an impressive 10 million items!—by the end of the meeting. The progress we made was enthusiastically received by the Blacklight and Hydra communities: I began receiving contributions and installation reports before the meeting was over. I’ve since made progress moving the code along from a conference demonstration to a fledgling project; the community contributions helped find bugs and identify gaps in basic Blacklight functionality, which I’ve slowly been working through. I’m also optimistic that I’ve recruited some of the other DPLAfest attendees to contribute, as an opportunity to learn more about the DPLA API, Blacklight, and Ruby on Rails.
Check the progress on the project that started at DPLAfest on GitHub.

Storify of LITA’s First UX Twitter Chat / LITA

LITA’s UX Interest Group did a fantastic job moderating the first ever UX Twitter Chat on May 15th. Moderators Amanda (@godaisies) and Haley (@hayleym1218) posed some thought-provoking questions, and great conversations grew organically from there. There were over 220 tweets during the one-hour chat.

The next UX Twitter Chat will take place on Friday, May 29th, from 2-3 p.m. EDT, with moderator Bohyun (@bohyunkim). Use #litaux to participate. See this post for more info. Hope you can join us!

Here’s the Storify of the conversation from May 15th.

Brush inking exercise / Patrick Hochstenbach

Filed under: Comics Tagged: art, cartoon, cat, ink, inking, mouse

Brush inking exercise II / Patrick Hochstenbach

Filed under: Doodles Tagged: brush, cartoon, cat, comic, inking, mouse, sketchbook

Supporting the USA FREEDOM Act of 2015: ALA’s Perspective / District Dispatch

Rollercoaster at sunset

by Ronald Repolona

Anyone who’s followed legislative efforts over the past ten-plus years to restore a fraction of the civil liberties lost by Americans to the USA PATRIOT Act and other surveillance laws will understand the photo accompanying this post. With the revelations of the last several years in particular, first by the New York Times and then by Edward Snowden, many believed that real reform might be achieved in the last Congress by passing the USA FREEDOM Act of 2014. They were wrong.

In May 2014, the House passed a version of the USA FREEDOM Act (H.R. 3361) that was dramatically weakened from a civil liberties point of view in the House Judiciary Committee and then stripped of virtually all meaningful privacy-restoring reforms by the full House of Representatives. While strenuous efforts were made to bring a robust version of the bill (S. 2685) to the floor of the Senate, Republican members filibustered that bill and the 113th Congress ended without further action on any form of the USA FREEDOM Act of 2014.

Undeterred, the bill’s bipartisan sponsors in both chambers recently reintroduced the USA FREEDOM Act of 2015, H.R. 2048 and S. 1123, a tenuously calibrated agreement that garnered the support of many civil liberties organizations, including the American Library Association (ALA), as well as congressional “surveillance hawks,” the nation’s intelligence agencies, and the Administration. On May 14, just one week after a federal appeals court ruled the NSA’s use of Section 215 to collect Americans’ telephone call records in bulk illegal, H.R. 2048 passed the House with a strong bipartisan vote (338 yeas – 88 nays). At this writing, with effectively just one week remaining for Congress to consider expiring PATRIOT Act provisions before recessing for the Memorial Day holiday and the June 1 “sunset” of those provisions, the bill’s fate rests with the Senate and is highly uncertain.

Not all civil liberties advocates, however, are pushing for passage of this year’s version of the USA FREEDOM Act. The ACLU, for example, is calling on Congress to simply permit Section 215 and other expiring provisions of the PATRIOT Act to “sunset” as scheduled on June 1. The Electronic Frontier Foundation (EFF) also is urging Members of Congress to strengthen H.R. 2048 (rather than pass it in its current form) because, in EFF’s view, the reforms it makes will not sweep as broadly as the appeals court’s recent ruling could if upheld and broadened in its precedential effect by adoption in other courts (including eventually perhaps the U.S. Supreme Court). Neither group, however, is urging Members of Congress to vote against H.R. 2048.

These views by respected long-time ALA allies have, not unreasonably, caused some to ask (and no doubt many more to wonder) why ALA is actively urging its members and the public to work for passage of H.R. 2048. The answer is distillable to four words: policy, politics, permanence, and perseverance.

Policy

Since January of 2003, the Council of the American Library Association (the Association’s policy-setting body) has adopted at least eight Resolutions addressing the USA PATRIOT Act and the access to library patron reading, researching and internet usage records that it affords the government under Section 215 and through the use of National Security Letters (NSLs) and their associated “gag orders.” While somewhat different in individual focus based upon the legislative environments in which they were written, all make ALA’s position on Section 215 of the PATRIOT Act and related authorities consistently clear. Stated most recently in January of 2014, that position is that ALA “calls upon Congress to pass legislation supporting the reforms embodied in [the USA FREEDOM Act of 2014] (see ALA CD#20-1(A)).”

As detailed in this Open Technology Institute (OTI) section-by-section, side-by-side comparison of the current USA FREEDOM Act (H.R. 2048) with two versions introduced in the last Congress, the current bill is a long way from perfect (just as the “old” ones were). It does, however, achieve the principal objectives of last year’s legislation endorsed by ALA’s Council. Specifically, H.R. 2048:

  • categorically ends the bulk collection not only of telephone call records but also of any “tangible things” (in the language of Section 215), library records included. Henceforth, any request for records must relate to a specific pending investigation and be based upon a narrowly defined “specific selection term” as defined in the law. Accordingly, no longer will the NSA or FBI be able to assert that the search histories of all public access computers are “tangible things” whose production they can lawfully and indefinitely compel as part of an essentially boundless fishing expedition. Nor will agencies be able to continue “bulk collection” under other legal authorities, including National Security Letters, or “PEN register” and “trap and trace” statutes;
  • significantly strengthens judicial review of the non-disclosure (“gag”) orders that generally accompany NSLs by eliminating the current requirement in law that a court effectively accept without challenge mere certification by a high-level government official that disclosure of the order would endanger national security. H.R. 2048 also requires the government to initiate judicial review of nondisclosure orders and to bear the burden of proof in those proceedings that they are statutorily justified;
  • permits more robust public reporting by companies and others who have received Section 215 orders or NSLs from the government of the number of such requests they’ve processed; and
  • requires the secret “FISA Court” that issues surveillance authorities to designate a panel of fully “cleared” expert civil liberties counsel whom the court may appoint to advise it in cases involving significant or precedential legal issues, and to declassify its opinions or summarize them for public access when declassification is not possible. The bill also expands the opportunity for review of FISA Court opinions by federal appellate courts.

As OTI’s “side-by-side” also indicates, H.R. 2048 falls short of last year’s USA FREEDOM Act iteration in several important respects. Most significantly, records collected by the government on persons who ultimately are not relevant to an investigation may still be retained, and reforms effected in last year’s bill to Section 702 of the Foreign Intelligence Surveillance Act Amendments Act are decidedly weaker. The bill also extends expiring portions of the PATRIOT Act, as modified, for five years.

Politics

Determining whether ALA should support a particular piece of almost inevitably imperfect legislation turns not only on the content of the legislation (though that naturally receives disproportionate weight in an assessment), but also on the probability of achieving a better result and when such a result might conceivably be obtained. With the change in control of the Senate in 2014 and very high probability that control of the House will not shift for many elections to come, many groups including ALA believe that H.R. 2048 represents the “high water mark” in reform of Section 215 and related legal authorities achievable in the foreseeable future.

Permanence

The recent landmark ruling by the U.S. Court of Appeals for the Second Circuit noted above was sweeping and clear in some respects, but limited and uncertainty-producing in others. Specifically, the Court firmly ruled that the bulk collection of telephone records under Section 215 is illegal. That ruling, however, addressed only the NSA’s bulk collection of “telephony metadata.” It did not directly speak to the bulk collection of any other information, including library records of any kind.

Further, while binding in the states that make up the Second Judicial Circuit (Connecticut, New York, and Vermont), the court’s decision has no precedential effect in any other part of the country. It is also unclear whether the Second Circuit’s decision will be appealed by the government and, if so, what the outcome will be.

Finally, similar decisions are pending in two other federal Courts of Appeal. Should one or both rulings differ materially from the Second Circuit’s, further uncertainty as to what the law is and should be nationally will result. Resolution of such a “split in the Circuits” can only be accomplished through a multi-year appeal process to the U.S. Supreme Court, which is not required to hear the case.

Enactment of the current version of the USA FREEDOM Act would “lock in” the reforms noted above immediately, permanently and nationwide. Accordingly, on balance, ALA and its many coalition allies are supporting the bill and affirmatively urging Members of Congress to do the same.

Perseverance  

Finally, and crucially, ALA and its allies have long been and remain fully committed to working for the most profound possible reform of the nation’s privacy and surveillance laws. ALA thus regards the USA FREEDOM Act of 2015 as a critical step — the first possible in 14 years — toward that much broader permanent goal, but as only a step.

Work will continue aggressively in this Congress (and beyond) to pass comprehensive reform of the badly outdated Electronic Communications Privacy Act and to restore Americans’ civil liberties still compromised by, for example, other portions of the USA PATRIOT Act, Section 702 of the Foreign Intelligence Surveillance Act, Executive Order 12333, and many other privacy-hostile legal authorities.

With our allies at our side, and librarians and their millions of patrons behind us, the fight goes on.

The post Supporting the USA FREEDOM Act of 2015: ALA’s Perspective appeared first on District Dispatch.