Planet Code4Lib

A Sister Blog is Born / HangingTogether

OCLC has launched a new blog: Next. Focused on what comes next for libraries, librarians, and the communities they serve, it will draw upon OCLC staff with a variety of experiences and perspectives.

First up is Skip Prichard, OCLC CEO, who discusses “Transforming data into impact”. This was also the topic of an OCLC program of the same title at ALA Midwinter, and you can find links to the slides and video of the event in his post.

Second is yours truly on “Getting started with linked data”. In this short piece I try to make linked data understandable and explain why it is important (making data more machine-actionable) and how it will have an impact on libraries (by making many workflows more efficient and enhancing the user discovery experience).

Then there is “Learning isn’t learning until you use it” by my Membership and Research colleague Sharon Streams. In it she provides some sage advice for both students and teachers — and aren’t we both at different times? And any post that ends with a story from comedian Louis CK can’t be all bad, right?

These initial posts will be followed up by other colleagues who have some fascinating things to say. I think you will find this blog will be well worth adding to your blog reader or aggregator. If you use Twitter more than blog aggregators for current awareness as I do, follow @OCLC and you’ll be good.

About Roy Tennant

Roy Tennant works on projects related to improving the technological infrastructure of libraries, museums, and archives.

Libraries celebrate 20th anniversary of telecom act / District Dispatch

Libraries are celebrating the 20th Anniversary of the 1996 Telecommunications Act this week!


When the 1996 Telecommunications Act was signed into law, only 28% of libraries provided public internet access. What a dizzying two decades we’ve experienced since then! It’s hard to imagine how #librariestransform without also considering the innovations enabled by the Act and the E-rate program it created.

Libraries were named one of seven major application areas for the National Information Infrastructure in a 1994 task force report: “For education and for libraries, all teachers and students in K-12 schools and all public libraries—whether in urban, suburban, or rural areas; whether in rich or in poor neighborhoods—need access to the educational and library services carried on the NII. All commercial establishments and all workers must have equal access to the opportunities for electronic commerce and telecommuting provided by the NII. Finally, all citizens must have equal access to government services provided over the NII.”

In his 1997 State of the Union address, President Clinton called for all schools and libraries to be wired by 2000. We came close: 96% of libraries were connected by this time.

Looking back at precursor reports to the Digital Inclusion Survey, we see both how much things have changed—and how some questions and challenges have stubbornly lingered. Fewer and fewer of us likely remember the dial-up dial tone, but in 1997 nearly half of all libraries were connected to the internet at speeds of 28.8kbps. (Thankfully, by 2006 we weren’t even asking about this speed category anymore!) The average number of workstations was 1.9, compared to 19 today.

Then, as now, though, libraries reported that their bandwidth and number of public computers available were unable to meet patron demand at least some of the time. Libraries, like the nation as a whole, also continue to see disparities among urban, suburban and rural library connectivity.

Or how about this quote from the 1997 report under the subheading The Endless Upgrade: “One-shot fixes for IT in public libraries is not a viable policy strategy.”

As exhausted as we may sometimes feel by the speed of change, what has been enabled is truly transformative. From connecting rural library patrons to legal counsel via videoconferencing in Maine to creating and uploading original digital content from library patrons nationwide, “The E’s of Libraries®” are powered by broadband.

According to a 2013 Pew Internet Project report, the availability of computers and internet access now rivals book lending and reference expertise as vital library services. Seventy-seven percent of Americans say free access to computers and the internet is a “very important” service of libraries, compared with 80 percent who say borrowing books and access to reference librarians are “very important” services.

America’s libraries owe a debt to Senators Rockefeller, Snowe and Markey for recognizing and investing in the vital roles libraries and schools play in leveraging the internet to support education and lifelong learning. And we also are grateful to the current FCC for upgrading E-rate for today—setting gigabit goals and creating new opportunities to expand fiber connections to even our most geographically far-flung communities. We invite you to celebrate the 20th anniversary of the Telecom Act (hashtag #96x20) and share how your #librariestransform with high-speed broadband all this week.

The post Libraries celebrate 20th anniversary of telecom act appeared first on District Dispatch.

Amazon Crawl: part en / Open Library Data Additions

Part en of Amazon crawl.

This item belongs to: data/ol_data.

This item has files of the following types: Data, Data, Metadata, Text

Islandora's Long Tail VIII / Islandora

Time for the 8th installment of the Islandora Long Tail (which contains eight modules!), where we take a look at modules outside of the Islandora release that are being developed around the Islandora community.

Islandora Job

Released by discoverygarden last November, this module utilizes Gearman to facilitate asynchronous and parallel processing of Islandora jobs. It allows Drupal modules to register worker functions, and it routes messages received from the job server to the appropriate worker functions.
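
For readers unfamiliar with the pattern, the worker side of Gearman looks roughly like this. A sketch using the third-party Python gearman package (Islandora Job itself is PHP/Drupal; the task name and job-server address here are made up):

import gearman

def handle_job(worker, job):
    # Stand-in worker function: a real worker would derive datastreams,
    # reindex objects, etc., based on the message payload in job.data.
    return job.data.upper()

# Connect to the Gearman job server and register a worker function; the
# server routes any message submitted to "islandora_job_demo" to handle_job.
worker = gearman.GearmanWorker(["localhost:4730"])
worker.register_task("islandora_job_demo", handle_job)
worker.work()  # block, processing jobs as they arrive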

Islandora GSearcher

Another module from discoverygarden, this one a brand new release. Islandora GSearcher sends created and edited objects to be indexed via the Fedora Generic Search Service on page exit, removing the need for ActiveMQ between Fedora and GSearch.

Islandora UIIG Edit Metadata

To address some perceived issues with the interface currently available for editing metadata, the User Interface Interest Group has started work on this standalone feature module to create an "Edit Metadata" tab. It's currently in the early stages of development, so please suggest use cases, improvements, and refinements.

Islandora Ingest Drag'n'Drop

From Brad Spry at the University of North Carolina Charlotte, this ingest module provides a methodology for creating a drag-and-drop batch ingest workflow, powered by a local Linux-based NAS integrated with an Islandora ingest server. Basically, it gives access to the power of islandora_batch without the need to use terminal commands. You can use it with another fun little tool from UNCC, the Islandora Ingest Indicator, which is “designed to communicate Islandora ingest status to Archivists; a methodology for integrating Blink indicator lights with an Islandora ingest server. We have programmed Blink to glow GREEN for indicating ‘ready for ingest’ and RED for ‘ingest currently running.’”

Islandora Usage Stat Callbacks

This offering from the Florida Virtual Campus team and Islandora IR Interest Group convenor Bryan Brown is a helper module that works with Islandora Usage Stats, taking the data it collects and exposing it via URL callbacks.

Barnard Collection View

And finally, a custom content type from Ben Rosner at Barnard College that allows archivists and curators to create a collection view using a Solr query. Basically, it “aims to mimic certain behaviors from the Islandora Solr Views module, but also permit the user to search, sort, facet, and explore the collection without navigating them away from the page.” Ben is looking for feedback and has provided a couple of screenshots of what it looks like in action.

Islandora Mirador Bookreader

This module implements the Mirador open source IIIF image viewer for the Islandora Book Solution Pack. It was developed by the team at the University of Toronto, with support from The Andrew W. Mellon Foundation for development of the French Renaissance Paleography website.

Hack your Calendars? Using them for more than just appointments. / LITA

As librari*s, one thing we know, and usually know well, is how to do more with less, or at least without any increase. With this mindset, even the most mundane tools can take on multiple roles. For example: our calendars.

I had a boss near the beginning of my professional career who leveraged their calendar in ways I’d never thought to: as a log for tracking projects, as a personal ticketing system, and for the usual meeting/appointment scheduling. It stuck with me; a handful of years later, I still use that same process.

When I interviewed for my current job, I was asked how I prioritize and manage what I have to do. My response: with my calendar. I don’t have meetings every hour of every day, but I do have a lot of tasks and projects, and having a running log of them is useful, as is scheduling out blocks of time to actually get my work done.

Calendars were designed to organize days, first for individual use and later for network use (sharing of information). We keep personal calendars separate from work calendars, and use them all for documenting the appointments on our schedules. Why not use them for more than that? Calendar software is designed to take in a reasonable amount of information; customize it as you will.

Things that a Calendar offers that make this easy:

  • Free text Subject/Location fields
  • Start & End times
  • Category options (you decide!) — if you wear multiple hats or are working for multiple teams, this can be incredibly useful
  • Free text Notes field
  • Privacy options

Using a calendar this way allows you to link together an array of information at one point: people associated with a project, a URL to a Google doc, categories based on the hat you’re wearing, and time spent on projects (really helpful for annual reviews). My personal favorite use is noting what you did with a specific project or problem; this works well when you need a ticketing system but only for your personal projects. Things break, and it’s my current job to fix them and keep them from breaking (as often) in the future. When I spend 4 hours fixing something, I note it on my calendar and use the notes portion to log running issues, how they were solved, and so on.
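
If you want to generate such log entries programmatically, a calendar event is just structured text. Here is a minimal sketch that writes one work-log event as an iCalendar (RFC 5545) file; every name, time, and description below is illustrative:

# Write a single work-log event to an .ics file that any calendar
# application can import. All values here are made up for the sketch.
event = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//worklog sketch//EN
BEGIN:VEVENT
UID:worklog-0001@example.org
DTSTAMP:20160207T130000Z
DTSTART:20160207T090000
DTEND:20160207T130000
SUMMARY:Fixed computer reservation system
CATEGORIES:Systems
DESCRIPTION:Root cause: expired license. Fix: installed new license.
 Reminder: renew two weeks early next year.
END:VEVENT
END:VCALENDAR
"""

with open("worklog.ics", "w") as f:
    f.write(event)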

Using my calendar this way accomplished a handful of things, aside from traditional use:

  • Gave me a decent log for time spent on projects
  • Made my annual review 100% easier
  • Forced me to become more aware of what I was spending my time on
  • Helped me set aside the necessary time needed to work on certain tasks
  • Warded off unnecessary meetings (because my calendar was busy)

If you’re concerned about privacy — check here {link to setting Outlook Calendar privacy} and here {link to setting Google Calendar privacy} for how to manage the privacy settings on Outlook and/or Google.

I challenge you to use your calendar in this fashion for a week, as your own personal work log.

Many thanks to @archivalistic @griffey  @timtomch @slmcdanold @collingsruth @metageeky @sharon_bailey @infosecsherpa @gmcharlt @amyrbrown @redgirl13 for sharing their responses.

033 – A UX Shop for One with Stephen Francoeur / LibUX

Stephen Francoeur is among the first user experience librarians and in this episode he shares his insight about thriving as a one-person UX shop. We talk about organizational buy-in, how best to pitch and communicate UX work, as well as a super interesting tear on imposter syndrome.

You have to be careful who you compare yourself to. If you already have bad feelings about what you can do, or about your library’s relative poverty compared to other institutions, it’s easy to say “oh screw it, we’ll never be able to keep up with that.” … Maybe we all should be pointing to the under-resourced libraries who manage to be doing a real bang-up job. (Stephen Francoeur)

Notes

  • 1:15 – The story behind “Advice for UX Shops of One in Libraries”
  • 2:23 – Stephen petitioned administration to create a new UX position
  • 4:43 – On organizational buy-in
  • 6:26 – Setting milestones or benchmarks for determining whether investment in UX work has been successful.
  • 11:00 – How receptive are university IT to user-centric design or development requests made by the library?
  • 13:28 – What if a proposal fails?
  • 15:29 – What kind of advice does Stephen have for folks whose administrations aren’t so receptive to user experience design?
  • 17:31 – Whether there’s a preference toward either quantitative or qualitative data.
  • 23:13 – If somebody is new — let’s say they just read Amanda Etches’ and Aaron Schmidt’s book — where do they start?
  • 24:44 – How persuasive is it to stakeholders to look at what other institutions have done with user experience teams?
  • 27:27 – Lasting thoughts

If you like, you can download the MP3.

As usual, you support us by helping us get the word out: share a link and take a moment to leave a nice review. Thanks!


You can subscribe to LibUX on Stitcher or iTunes, or plug our feed right into your podcatcher of choice. Help us out and say something nice. You can find every podcast on www.libux.co.

The post 033 – A UX Shop for One with Stephen Francoeur appeared first on LibUX.

Omeka - 2.4 / FOSS4Lib Recent Releases

Package: Omeka
Release Date: Thursday, January 21, 2016

Last updated February 7, 2016. Created by David Nind on February 7, 2016.

We are pleased to announce the release of Omeka 2.4. Although most of the changes are behind the scenes, they contribute to a smoother operation overall.

We have increased the required version of PHP, now at a minimum of 5.3.2. Be sure to check which version of PHP you are running before you upgrade, to ensure that you have a supported version. On the opposite end of things, the latest version, PHP 7, is now supported.

AtoM - Access to Memory / FOSS4Lib Updated Packages

Last updated February 7, 2016. Created by David Nind on February 7, 2016.

AtoM stands for Access to Memory. It is a web-based, open source application for standards-based archival description and access in a multilingual, multi-repository environment.

Key features:

  • Web-based: Access your AtoM installation from anywhere you have an internet connection. All core AtoM functions take place via a web browser, with minimal assumptions about end-user requirements for access. No more syncing multiple installations on a per-machine basis – install AtoM once, and access it from anywhere.
  • Open source: All AtoM code is released under a GNU Affero General Public License (A-GPL 3.0) – giving you the freedom to study, modify, improve, and distribute it. We believe that an important part of access is accessibility, and that everyone should have access to the tools they need to preserve cultural heritage materials. AtoM code is always freely available, and our documentation is also released under a Creative Commons Share-alike license.
  • Standards-based: AtoM was originally built with support from the International Council on Archives, to encourage broader international standards adoption. We've built standards-compliance into the core of AtoM, and offer easy-to-use, web-based edit templates that conform to a wide variety of international and national standards.
  • Import/export friendly: Your data will never be locked into AtoM – we implement a number of metadata exchange standards to support easy import and export through the AtoM user interface. Currently AtoM supports the following import/export formats: EAD, EAC-CPF, CSV and SKOS.
  • Multilingual: All user interface elements and database content can be translated into multiple languages, using the built-in translation interface. The translations are all generously provided by volunteer translators from the AtoM User Community.
  • Multirepository: Built for use by a single institution for its own descriptions, or as a multi-repository “union list” (network, portal) accepting descriptions from any number of contributing institutions, AtoM is flexible enough to accommodate your needs.
  • Constantly improving: AtoM is an active, dynamic open-source project with a broad user base. We're constantly working with our community to improve the application, and all enhancements are bundled into our public releases. This means that whenever one person contributes, the entire community benefits.

KohaCon 2016 / FOSS4Lib Upcoming Events

Date: Monday, May 30, 2016 - 08:00 to Saturday, June 4, 2016 - 17:00

Last updated February 7, 2016. Created by David Nind on February 7, 2016.

Join Koha community members for their annual conference from 30 May to 4 June 2016 in Thessaloniki, Greece.

Whether you're just curious about Koha, or have been using it for many years to manage your library, come along and learn more about Koha, the world's first free and open source integrated library management system.

CollectiveAccess - 1.6 / FOSS4Lib Recent Releases

Release Date: Friday, January 29, 2016

Last updated February 6, 2016. Created by David Nind on February 6, 2016.

Version 1.6 of Providence, the CollectiveAccess cataloguing tool, includes many changes: completely rebuilt support for ElasticSearch, a brand new display template parser (faster! better!), lots of bug fixes, and many new user-requested features.

You can learn more by reading the release notes for version 1.6.

NOTE: The 1.4 version of Pawtucket (the public web-access application) is NOT compatible with version 1.6 of Providence. A 1.6-compatible release will be available soon.

Koha - 3.22.2, 3.20.8 / FOSS4Lib Recent Releases

Package: Koha
Release Date: Thursday, January 28, 2016

Last updated February 6, 2016. Created by David Nind on February 6, 2016.

Monthly maintenance releases for Koha.

See the release announcements for the details:

MarcEdit In-Process Work / Terry Reese

Would this be the super bowl edition? Super-duper update? I don’t know – but I am planning an update. Here’s what I’m hoping to accomplish for this update (2/7/2016):

MarcEdit (Windows/Linux)

  • Z39.50/SRU Enhancement: Enable user-defined profiles and schemas within the SRU configuration. Status: Complete
  • Z39.50/SRU Enhancement: Allow SRU searches to be completed as part of the batch tool. Status: ToDo
  • Build Links: Updating rules file and updating components to remove the last hardcoded elements. Status: Complete
  • MarcValidators: Updating rules file. Status: Complete
  • RDA Bug Fix: 260 conversion – on rare occasions when {} are present, you may lose a character. Status: Complete
  • RDA Enhancement: 260 conversion – cleaned up the code. Status: Complete
  • Jump List Enhancement: Selections in the jump list remain highlighted. Status: Complete
  • Script Wizard Bug Fix: Corrected an error in the generator that was adding an extra “=” when using the conditional arguments. Status: Complete

MarcEdit Linux

  • MarcEdit expects /home/[username] to be present…when it’s not, application data is lost, causing problems with the program. Updating this to allow the program to fall back to the application directory/shadow directory. Status: Testing

MarcEdit OSX

  • RDA Fix: Crash error when encountering invalid data. Status: Testing
  • Z39.50 Bug: Raw queries failing. Status: Complete
  • Command-line MarcEdit: Porting the command-line version of MarcEdit (cmarcedit). Status: Testing
  • Installer: The installer needs to be changed to allow individual installation of the GUI MarcEdit and the command-line version of MarcEdit. These two versions share the same configuration data. Status: ToDo

–tr

Identify outliers: Building a user interface feature. / Mark E. Phillips

Background:

At work we are deep in the process of redesigning the user interface of The Portal to Texas History. We have a great team in our User Interfaces Unit that I get to work with on this project; they do the majority of the work, and I have been a data gatherer, identifying problems that come up in our data.

As we get closer to our beta release, there is a new feature we want to add to the collection and partner detail pages. Below is the current mockup of this detail page.

Collection Detail Mockup


Quite long, isn’t it? We are trying something out (more on that later).

The feature we want more data for is the “At a Glance” feature, which displays the number of unique values (cardinality) of a specific field for the collection or partner.

At A Glance Detail


So in the example above we show that there are 132 items, 1 type, 3 titles, 1 contributing partner, 3 decades and so on.

All this is pretty straight forward so far.

The next thing we want to do is highlight a box in a different color if its value is far from the norm. For example, if the average collection has three different languages present, we might want to highlight the language box for a collection that has ten languages represented.

There are several ways we can do this. First off, we just made some guesses and coded in values that we felt would be good thresholds. I wanted to see if we could figure out a way to identify these thresholds based on the data in the collection itself. That’s what this blog post is going to try to do.

Getting the data:

First of all I need to pull out my “I couldn’t even play an extra who stands around befuddled on a show about statistics, let alone play a stats person on TV” card (wow I really tried with that one) so if you notice horribly incorrect assumptions or processes here, 1. you are probably right, and 2. please contact me so I can figure out what I’m doing wrong.

That being said here we go.

We currently have 453 unique collections in The Portal to Texas History. For each of these collections we are interested in calculating the cardinality of the following fields:

  • Number of items
  • Number of languages
  • Number of series titles
  • Number of resource types
  • Number of countries
  • Number of counties
  • Number of states
  • Number of decades
  • Number of partner institutions
  • Number of item uses

To calculate these numbers I pulled data from our trusty Solr index, making use of the stats component and the stats.calcdistinct=true option. Using this I am able to get the number of unique values for each of the fields listed above.
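
For illustration, here is roughly what such a stats request looks like from Python; the Solr URL, collection filter, and field names below are assumptions for the sketch, not the Portal's actual schema.

import requests

# Ask Solr's stats component for distinct-value counts; the core URL,
# fq filter, and field names here are hypothetical.
params = {
    "q": "*:*",
    "rows": 0,
    "fq": "collection:ABC",
    "stats": "true",
    "stats.field": ["language", "county", "decade"],  # repeated parameter
    "stats.calcdistinct": "true",
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/portal/select", params=params)
for field, info in resp.json()["stats"]["stats_fields"].items():
    print(field, info["countDistinct"])  # cardinality of each field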

Now that I have the numbers from Solr I can format them into lists of the unique values and start figuring out how I want to define a threshold.

Defining a threshold:

For this first attempt I decided to try to define the threshold using the Tukey Method, which uses the Interquartile Range (IQR). If you never took any statistics courses (I was a music major, so not much math for me), I found the post Highlighting Outliers in your Data with the Tukey Method extremely helpful.

First off I used the handy st program to get an overview of the data that I was going to be working with.

Field N min q1 median q3 max sum mean stddev stderr
items 453 1 98 303 1,873 315,227 1,229,840 2,714.87 16,270.90 764.47
language 453 1 1 1 2 17 802 1.77 1.77 0.08
titles 453 0 1 1 3 955 5,082 11.22 65.12 3.06
type 453 1 1 1 2 22 1,152 2.54 3.77 0.18
country 453 0 1 1 1 73 1,047 2.31 5.59 0.26
county 453 0 1 1 7 445 8,901 19.65 53.98 2.54
states 453 0 1 1 2 50 1,902 4.20 8.43 0.40
decade 453 0 2 5 9 49 2,759 6.09 5.20 0.24
partner 453 1 1 1 1 103 1,007 2.22 7.22 0.34
uses 453 5 3,960 17,539 61,575 10,899,567 50,751,800 112,035 556,190 26,132.1

With the q1 and q3 values we can calculate the IQR for the field. Then, using the standard 1.5 multiplier (or the extreme multiplier of 3), we add that value back to the q3 value to find our upper threshold.

So for the county field (q1 = 1, q3 = 7):

7 - 1 = 6
6 * 1.5 = 9
7 + 9 = 16
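
The same computation as a minimal Python sketch (numpy assumed; note that numpy's percentile interpolation may differ slightly from the st program's quartiles):

import numpy as np

def tukey_upper_threshold(values, multiplier=1.5):
    # Upper Tukey fence: q3 + multiplier * (q3 - q1)
    q1, q3 = np.percentile(values, [25, 75])
    return q3 + multiplier * (q3 - q1)

# With q1 = 1 and q3 = 7, as for the county field above, the fence is
# 7 + 1.5 * 6 = 16; a multiplier of 3 gives the "extreme" threshold of 25.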

This gives us the threshold values in the table below.

Field Threshold – 1.5 Threshold – 3
items 4,536 7,198
language 4 5
titles 6 9
type 4 5
country 1 1
county 16 25
states 4 5
decade 20 30
partner 1 1
uses 147,997 234,420

Moving forward we can use these thresholds as a way of saying “this field stands out in this collection from other collections”  and make the box in the “At a Glance” feature a different color.

If you have questions or comments about this post,  please let me know via Twitter.

On The Road Again / Equinox Software


It’s a new year, which means it’s time for the Equinox team to hit the road and attend some Spring conferences!  Here’s where we’ll be for the next few months:

  • Code4Lib Conference in Philadelphia, Pennsylvania March 7-10, 2016
    • We love Pennsylvania, this much is true.  Equinox is proud to be co-sponsoring childcare for this event.  Mike Rylander and Mary Jinglewski will be attending the Code4Lib Conference and they’re excited to learn some new things and mingle with the library tech folk.  If you’d like to meet up with either of them, please let us know!
  • Public Library Association (PLA) Conference in Denver, Colorado April 5-9, 2016
    • Equinox is looking forward to exhibiting at PLA this year in beautiful Denver, Colorado.  The team will be ready and waiting in Booth #408.  We can’t wait to meet with you to talk about Open Source solutions for your library!  
  • Evergreen Conference in Raleigh, North Carolina April 20-23, 2016
    • Our very favorite conference of the year!  We love getting together with Evergreen users and sharing our experience and knowledge.  Equinox is not only a Platinum Sponsor for this event; we are also sponsoring the Development Hackfest. The Equinox team will be involved in fourteen separate talks throughout the conference spanning a wide variety of topics.

There are a lot of exciting things in store for 2016 and we can’t wait to share them with you.  Whether in an exhibit booth or over a beer, we love to talk.  Hope to see you all soon!

Quid Pro Quo: Librarians and Vendors / LITA

I joked with a colleague recently that I need to get over my issue with vendors giving me sales pitches during phone calls and meetings. We had a good laugh since a major responsibility of my job as Assistant Director is to meet with vendors and learn about products that will enhance the patron experience at my library. As the point of contact I’m going to be the person the vendor calls and I’m going to be the person to whom the vendor pitches stuff.

The point was that sometimes it would be nice to have a quiet day so you could get back to the other vendors who have contacted you, or maybe actually implement some of the tech you acquired from a vendor—he says as he looks wistfully at a pile of equipment in his office that should be out in the public’s hands.

Just last month my fellow blogger Bill Dueber talked about the importance of negotiating with vendors in his post “There’s a Reason There’s a Specialized Degree.” Because I work hand in hand with vendors on an almost daily basis, there are a number of things I try to do to hold up my end of the bargain. There’s an article from 2010 on LIS Careers that talks about the librarian/vendor relationship. While not everything in it is still relevant, it does have some good information (some of which I’ve pulled into this post).

  • Pay bills on time
  • Reply to calls/emails in a timely manner
  • Be clear about timelines
  • Say no if the answer’s no
  • Be congenial

I find it helps if I think of the vendors as my patrons. How would I treat a member of the public? Would I wait weeks before answering a reference question that came in via email? We’re all busy, so not responding to a vendor the same day is probably OK, but going more than a day or two is not a good idea. If I don’t want the vendor emailing me every other day, I need to communicate. And if things are really busy or something’s come up, I need to be clear with the vendor that I won’t be able to look at a new product until next week or second quarter, whichever the case may be.

I can’t speak for other libraries, but our board approves bills so we basically do a big swath of payments once a month. The more time it takes me to sign off on a bill and hand it over to finance, the longer it’ll take for that bill to get processed. Trust me, the last thing you want is for your computer reservation license to expire so you end up scrambling fifteen minutes before you open the doors trying to get a new license installed.

If I’m doing my part, then there are some things I expect in return from vendors (this list will look similar):

  • Send bills in a timely manner
  • Don’t send email/call every other day
  • Take no for an answer
  • Don’t trash competitors

It’s very frustrating to me when a vendor keeps pushing a product after I’ve said no. I know the vendor’s job is to find customers but sometimes it can be beneficial to lay off the sales pitch and save it for another visit. Only once have I actually had to interrupt a vendor several times during a phone call to tell them that I no longer will be doing business with them and do not want them to call me any more.

It’s one thing to say that your product does something no one else’s does or to claim that your product works better than a competitor. That’s business. But I’ve sat in vendor demos where the person spent so much time trashing another company that I had no idea what their product did. Also, sometimes I use similar products from different companies because they’re different and I can reach more patrons with a wider variety of services. This is particularly true with technology. We provide desktops, laptops, and WiFi for our customers because different people like to use different types of computers. It’s not always economically feasible to provide such a variety for every service, but we try to do it when we can.

I also have a number of things I’ll put on a wish list for vendors.

  • Look over meeting agendas and minutes
  • Check our website for services we’re offering
  • Provide a demo that you can leave behind
  • Try to not show up unannounced; at least call first

It shocks me when vendors ask what our budget is on a project, especially something for which we’ve done an RFP. This might pertain more to public libraries, but everything we do is public record. You can find the budget meetings on the city website and see exactly how much was approved. That attention to detail goes a long way towards showing me how you’ll handle our relationship.

Maybe we use iPads in our programming. Maybe we just replaced our selfchecks. Perhaps we already have a 3D printer. Maybe the head of our children’s department took part in an iLead program with the focus on helping parents pick early literacy apps for their children. Our website is, for all intents and purposes, an ever-changing document. As such, we make every effort to keep our services up to date and tout what our staff is doing. This can help you frame your sales pitch to us. You might not want to downplay iPads when we’ve been having success with them.

Where technology’s concerned, being able to leave a demo device with me is huge. It’s not always possible, but any amount of time I get where I can see how it would fit into our workflow helps us say yes or no. Sometimes I have a question that only comes up because I’ve spent some time using a device.

If you’re seeing a customer in Milwaukee, my library is not that far away, and it makes sense that you can drop in and see how things are going. Totally fine. If you can, call first. The number of times I’ve missed a vendor because I didn’t know they were coming is higher than I’d like. But I can’t be available if I don’t know I should be.

I get it. Companies are getting bigger through acquisitions, people’s sales areas are changing, the volume of customers goes up and up, and there’s still the same number of hours in the day. But there are vendors who do the things I mention above, and they’ll get my attention first.

What are some of the things you would like to see vendors do?

Studies in crosshatching / Patrick Hochstenbach

Filed under: portraits, Sketchbook Tagged: art, crosshatching, hatching, illustration, ink, pen, rotring, sketch, sketchbook

2016 Election Slate / LITA

The LITA Board is pleased to announce the following slate of candidates for the 2016 spring election:

Candidates for Vice-President/President-Elect

Candidates for Director-at-Large, 2 elected for a 3-year term

Candidates for LITA Councilor, 1 elected for a 3-year term

View bios and statements for more information about the candidates. Voting in the 2016 ALA election will begin on March 25 and close on April 22. Election results will be announced on April 29. Note that eligible members will be sent their voting credentials via email over a three-day period, March 15-18. Check the main ALA website for information about the general ALA election.

The slate was recommended by the LITA Nominating Committee: Michelle Frisque (Chair), Galen Charlton, and Dale Poulter. The Board thanks the Nominating Committee for all of their work. Be sure to thank the candidates for agreeing to serve and the Nominating Committee for developing the slate. Best wishes to all.

Fusion plus Solr Suggesters for More Search, Less Typing / SearchHub

The Solr suggester search component was previously discussed on this blog in the post Solr Suggester by Solr committer Erick Erickson. This post shows how to add a Solr suggester component to a Fusion query pipeline in order to provide the kind of auto-complete functionality expected from a modern search app.

By auto-complete we mean the familiar set of drop-downs under a search box which suggest likely words or phrases as you type. This is easy to do using Solr’s FST-based suggesters. FST stands for “Finite-State Transducer”. The underlying mechanics of an FST allow for near-matches on the input, which means that auto-suggest will work even when the inputs contain typos or misspellings. Solr’s suggesters return the entire field for a match, making it possible to suggest whole titles or phrases based on just the first few letters.

The data in this example is derived from data collected by the Movie Tweetings project between 2013 and 2016. A subset of that data has been processed into a CSV file consisting of a row per film, with columns for a unique id, the title, release year, number of tweets found, and average rating across tweets:

id,title,year,ct,rating
...
0076759,Star Wars: Episode IV - A New Hope,1977,252,8.61111111111111
0080684,Star Wars: Episode V - The Empire Strikes Back,1980,197,8.82233502538071
0086190,Star Wars: Episode VI - Return of the Jedi,1983,178,8.404494382022472
1185834,Star Wars: The Clone Wars,2008,11,6.090909090909091
2488496,Star Wars: The Force Awakens,2015,1281,8.555815768930524
...

After loading this data into Fusion, I have a collection named “movies”. The following screenshot shows the result of a search on the term “Star Wars”.


The search results panel shows the results for the search query “Star Wars”, sorted by relevancy (i.e. best-match). Although all of the movie titles contain the words “Star Wars”, they don’t all begin with it. If you’re trying to add auto-complete to a search box, the results should complete the initial query. In the above example, the second best-match isn’t a match at all in an auto-complete scenario. Instead of using the default Solr “select” handler to do the search, we can plug in an FST suggester, which will give us not just auto-complete, but fuzzy autocomplete, through the magic of FSTs.

Fusion collections are Solr collections which are managed by Fusion. To add a Lucene/Solr suggester to the “movies” collection requires editing the Solr config files according to the procedure outlined in the “Solr Suggester” blogpost:

  • define a field with the correct analyzer in file schema.xml
  • define a request handler for auto-complete in file solrconfig.xml

Fusion sends search requests to Solr via the Fusion query pipeline Solr query stage, therefore it’s also necessary to configure a Solr query stage to access the newly configured suggest request handler.

The Fusion UI provides tools for editing Solr configuration files. These are available from the “Configuration” section on the collection “Home” panel, seen on the left-hand side column in the above screenshot. Clicking on the “Solr Config” option shows the set of available configuration files for collection “movies”:


Clicking on file schema.xml opens an edit window. I need to define a field type and specify how the contents of this field will be analyzed when creating the FSTs used by the suggester component. To do this, I copy in the field definition from the very end of the “Solr Suggester” blogpost:

<!-- text field for suggestions, taken from:  https://lucidworks.com/blog/2015/03/04/solr-suggester/ -->
<fieldType name="suggestTypeLc" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="[^a-zA-Z0-9]" replacement=" " />
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>


After clicking the “Save” button, the Fusion UI displays the notification message: “File contents saved and collection reloaded.”

Next I edit the solrconfig.xml file to add in the definition for the suggester search component and the corresponding request handler.


This configuration is based on Solr’s “techproducts” example and on the Suggester configuration docs in the Solr Reference Guide. The suggest search component is configured with parameters for the suggester’s name and implementation type, the field to be analyzed, and the analyzer to use. We also specify the optional parameter weightField which, if present, names a document field whose values are used to weight (and thus order) the suggestions.

For this example, the field parameter is movie_title_txt. The suggestAnalyzerFieldType specifies that the movie title text will be analyzed using the analyzer defined for field type suggestTypeLc (added to the schema.xml file for the “movies” collection in the previous step). Each movie has two kinds of ratings information: average rating and count (total number of ratings from tweets). Here, the average rating value is specified:

<searchComponent name="suggest" class="solr.SuggestComponent">
    <lst name="suggester">
      <str name="name">mySuggester</str>
      <str name="lookupImpl">FuzzyLookupFactory</str>
      <str name="dictionaryImpl">DocumentDictionaryFactory</str>
      <str name="storeDir">suggester_fuzzy_dir</str>
      <str name="field">movie_title_txt</str>
      <str name="weightField">rating_tf</str>
      <str name="suggestAnalyzerFieldType">suggestTypeLc</str>
    </lst>
</searchComponent>

For details, see the Solr wiki Suggester searchComponent section.

The request handler configuration specifies the request path and the search component:

<requestHandler name="/suggest" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="suggest">true</str>
      <str name="suggest.count">10</str>
      <str name="suggest.dictionary">mySuggester</str>
    </lst>
    <arr name="components">
      <str>suggest</str>
    </arr>
</requestHandler>

For details, see Solr wiki Suggester requestHandler section.

After each file edit, the collection configs are saved and the collection is reloaded so that changes take effect immediately.

Finally, I configure a pipeline with a Solr query stage which permits access to the suggest request handler.


Lacking a UI with the proper JS magic to show autocomplete in action, we’ll just send a request to the endpoint, to see how the suggest request handler differs from the default select request handler. Since I’m already logged into the Fusion UI, from the browser location bar, I request the URL:

http://localhost:8764/api/apollo/query-pipelines/movies-default/collections/movies/suggest?q=Star%20Wars


The power of the FST suggester lies in its robustness. Misspelled and/or incomplete queries still produce good results. This search also returns the same results as the above search:

http://localhost:8764/api/apollo/query-pipelines/movies-default/collections/movies/suggest?q=Strr%20Wa
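
The same request can also be scripted outside the browser. A minimal sketch with Python's requests library; the credentials are placeholders, and it assumes the pipeline passes through Solr's standard suggest response shape:

import requests

auth = ("admin", "password123")  # placeholder Fusion credentials
url = ("http://localhost:8764/api/apollo/query-pipelines/movies-default"
       "/collections/movies/suggest")

resp = requests.get(url, params={"q": "Strr Wa"}, auth=auth)
resp.raise_for_status()

# A stock Solr suggest response nests results under
# suggest -> <suggester name> -> <query> -> suggestions.
suggestions = resp.json()["suggest"]["mySuggester"]["Strr Wa"]["suggestions"]
for s in suggestions:
    print(s["term"], s["weight"])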

Under the hood, Lucidworks Fusion is Solr-powered, and under the Solr hood, Solr is Lucene-powered. That’s a lot of power. The autocompletion for “Solr-fu” is “Solr-Fusion”!

The post Fusion plus Solr Suggesters for More Search, Less Typing appeared first on Lucidworks.com.

Lobstometre Rising / Islandora

Our friendly fundraising lobster, the Lobstometre (r before e because we are Canadian like that), has gotten another bump this week, thanks to new Collaborator Florida Virtual Campus, a renewed partnership with LYRASIS, and support from Individual Members totalling more than $1500. We are more than halfway to our minimum fundraising goal and would like to say a very big “THANK YOU!” to the supporters who have gotten us here.

what I’ve been up to / Andromeda Yelton

Wow, it turns out if you have a ton of clients materialize over the fall, you have no time to tell the internet about them!

So here’s what I’m up to:

  1. Running for LITA president! Yup. If you’re a member in good standing of LITA, you’ll get your ballot in March, and I’d really appreciate your vote. Stay tuned for my campaign page and official LITA candidate profile.
  2. Coding for Measure the Future! This consists largely in arguing with Griffey about privacy. And also being, as far as I can tell, the first person on the internet to have gotten a Django app running on an Intel Edison, a tiny adorable computer that fits in the palm of my hand.
  3. Coding for Wikimedia! So…that happened. I’m doing an internal project for The Wikipedia Library, improving the usability of their journal access application system (and creating the kernel of a system that, over time, might be able to open up lots more possibilities for them).
  4. Coding for CustomFit! We’ve debuted straight-shaped sweaters along with our original hourglass (a coding process which was not unlike rebuilding an airplane in flight), so now you can make sweaters for people who may not want normatively-feminine garments. Yay! Also I implemented a complete site redesign last fall (if you’re wondering, “can Andromeda take a 12-page PDF exported from Photoshop, translate it into CSS, and rewrite several hundred templates accordingly”, the answer turns out to be yes). Anyway, if you’d been thinking of taking the CustomFit plunge but not gotten around to it yet, please go check that out – there’s a ton of great new stuff, and more on the way.
  5. Keynoting LibTechConf! My talk will be called “The Architecture of Values”, and it’ll be about how our code does (or, spoiler alert, doesn’t) implement our library values. Also the other keynoter is Safiya Noble and I am fangirling pretty hard about that.

pycounter - 0.11.1 / FOSS4Lib Recent Releases

Package: pycounter
Release Date: Monday, January 25, 2016

Last updated February 4, 2016. Created by wooble on February 4, 2016.

Now includes a bare-bones SUSHI client executable, better support for DB1, BR1, and BR2 reports, and the ability to output COUNTER 4 TSV reports (from programmatically built reports, reports parsed from other formats, or reports fetched with SUSHI).

Call for Proposals, LITA education webinars and web courses / LITA

What library technology topic are you passionate about?
Have something to teach?

The Library Information Technology Association (LITA) Education Committee invites you to share your expertise with a national audience! For years, LITA has offered online learning programs on technology-related topics of interest to LITA members and the wider American Library Association audience.

Submit a proposal by February 29th to teach a webinar, webinar series, or online course for Summer/Fall 2016.

All topics related to the intersection of technology and libraries are welcomed. Possible topics include:

  • Research Data Management
  • Supporting Digital Scholarship
  • Technology and Kids or Teens
  • Managing Technical Projects
  • Creating/Supporting Library Makerspaces, or other Creative/Production Spaces
  • Data-Informed Librarianship
  • Diversity and Technology
  • Accessibility Issues and Library Technology
  • Technology in Special Libraries
  • Ethics of Library Technology (e.g., Privacy Concerns, Social Justice Implications)
  • Library/Learning Management System Integrations
  • Technocentric Library Spaces
  • Social Media Engagement
  • Intro to… GitHub, Productivity Tools, Visualization/Data Analysis, etc.

Instructors receive a $500 honorarium for an online course or $100-150 for webinars, split among instructors. For more information, access the online submission form. Check out our list of current and past course offerings to see what topics have been covered recently. We’re looking forward to a slate of compelling and useful online education programs this year!

LITA Education Committee.

Questions or Comments?

For questions or comments related to teaching for LITA, contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

Jobs in Information Technology: February 3, 2016 / LITA

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week:

City of Sierra Madre, Library Services Director, Sierra Madre, CA

Concordia College, Systems and Web Services Librarian, Moorhead, MN

DePaul University Library, Digital Services Coordinator, Chicago, IL

Loyola / Notre Dame Library, Digital Services Coordinator, Baltimore, MD

The National Academies of Sciences, Engineering, and Medicine, Metadata Librarian, Washington, DC

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Federal Dollars on the Line for State Library Programs / District Dispatch

Ask Your Members of Congress to Help Bring the Bucks Home while They’re at Home

It’s “appropriations” season again in Washington: that time every year when the President submits a budget to Congress and, in theory at least, Congress drafts and votes on bills to federally fund everything from llama farming to, well, libraries. Never mind where llamas get their cash, but libraries in every state in the nation benefit from funds allocated by Congress for the Library Services and Technology Act (LSTA), the only federally funded program specifically dedicated to supporting libraries. Last year, libraries received just under $183 million in LSTA funding, about $156 million of which flowed to states as matching grants.

Stuffed llama stands on a pile of money.

Source: https://twitter.com/ecuadorianllama

Neither llama farmers nor libraries, however, benefit from federal funding without considerable convincing. That’s where you and your Members of Congress come in.

Starting in mid-February, individual Members of Congress will start signing letters addressed to their influential colleagues who sit on the powerful Appropriations Committees in both chambers of Congress. Those letters will ask the Committee to provide specific dollar amounts for specific programs, LSTA included. The math is easy: the more Members of Congress who sign the “Dear Appropriator” letter asking for significant LSTA funding, the better the odds of that money actually being awarded by the Appropriations Committee and eventually flowing to your state. Similarly, the more librarians and library supporters who ask their Members of Congress to sign that LSTA Dear Appropriator letter, the better the odds that LSTA will be funded and funded well.

So, how can you help? That’s easy, too.

We are asking library supporters to reach out and request a meeting with their Representatives and Senators while Members of Congress are home for the Presidents’ Day recess from February 15 – 20. The message to deliver at these meetings couldn’t be more simple or straightforward: “Please add your name to the LSTA Dear Appropriator letter.”

Members of Congress may be considering signing letters in support of other programs, but they will most likely sign the LSTA letter if they hear from constituents back home … or better yet, if they can visit your library and see the positive impact LSTA-funded programs are having on their constituents.

Please take a moment this week to reach out to your Representative’s and Senators’ offices and request a meeting with the Member or his or her “District Director” anytime during the week of February 15 to discuss LSTA and the Dear Appropriator letters. Once you’ve met, please let the Washington Office know how it went and we will follow up on your great work.

Your Representative and Senators work for you and will love hearing about all of the great things that LSTA money does for their constituents. They’ll be happy to hear from you! Please, set that Presidents’ Week meeting today.

The post Federal Dollars on the Line for State Library Programs appeared first on District Dispatch.

VuFind - 2.5.2 / FOSS4Lib Recent Releases

Package: VuFind
Release Date: Wednesday, February 3, 2016

Last updated February 3, 2016. Created by Demian Katz on February 3, 2016.

Minor security release.

STAPLR DisPerSion / William Denton

Next Tuesday STAPLR + a live feed of anonymous desk activity data + Twitter streams will be the basis for a performance by the students in Doug Van Nort’s class DATT 3200, Performing Telepresence, which will take place simultaneously in the DisPerSion Lab and all the branches of York University Libraries. You can watch, listen, participate and help perform from anywhere in the world. If you’re in or near Toronto, you can experience it in person.

Tuesday 9 February 2016, 3:30 – 5:30 pm

William Denton (York University Libraries)

Doug Van Nort (School of the Arts, Media, Performance & Design)

and the

Students of DATT 3200 Performing Telepresence

Reimagine the real-time streams emanating from, to, and about York University Libraries in its physical and virtual homes. Featuring:

STAPLR

William Denton’s sonification of YUL reference desks (listen remotely at staplr.org)

and

Sound, Light and Text Instruments

created by Van Nort and students, that react to YUL reference data and to Twitter feeds (@yorkulibraries, @FrostLibrary, @BronfmanLibrary, @ScottLibrary, @SteacieLibrary, @dispersion_lab).

Performed between all branches of York University Libraries (Bronfman, Frost, Maps, Scott, SMIL, Steacie) and the DisPerSion Lab by DATT students, using Twitter as their interface.

Experience the immersive version at the DisPerSion Lab (334 Centre for Fine Arts),

Watch/Listen to the virtual feed (video, audio, Twitter) at dispersionlab.org

Participate and help perform the piece by tweeting @dispersion_lab

Happy 10th Birthday Apache Solr! / SearchHub

January marked the tenth anniversary of Yonik Seeley’s fateful post on the Apache incubator listserv back in January of 2006:

Hello Incubator PMC folks, I would like to propose a new Apache project named Solr.

http://wiki.apache.org/incubator/SolrProposal

The project is being proposed as a sub-project of Lucene, and the Lucene PMC has agreed to be the sponsor.

-Yonik

Seeley also included the full proposal, which lists cultivating an active open source community as a top priority, with Doug Cutting as the sponsor and three initial committers: Seeley himself, Bill Au, and Chris “Hoss” Hostetter. And here we are, 10 years later, and Apache Solr is the most deployed open source search technology on the planet, with thousands of production instances.

We’ve updated our ‘history of Solr’ infographic with the results of our developer survey from the fall. More survey results on the way.


Learn more about Lucidworks Fusion, our Solr-powered application development platform for building intelligent search-driven apps.

The post Happy 10th Birthday Apache Solr! appeared first on Lucidworks.com.

Always read the fine print / David Rosenthal

When Amazon announced Glacier I took the trouble to read their pricing information carefully and wrote:
Because the cost penalties for peak access to storage and for small requests are so large ..., if Glacier is not to be significantly more expensive than local storage in the long term preservation systems that use it will need to be carefully designed to rate-limit accesses and to request data in large chunks.
Now, 40 months later, Simon Sharwood at The Register reports that people who didn't pay attention are shocked that using Glacier can cost more in a month than enough disk to store the data 60 times over:
Last week, a chap named Mario Karpinnen took to Medium with a tale of how downloading 60GB of data from Amazon Web Services' archive-grade Glacier service cost him a whopping US$158.

Karpinnen went into the fine print of Glacier pricing and found that the service takes your peak download rate, multiplies the number of gigabytes downloaded in your busiest hour for the month and applies it to every hour of the whole month. His peak data retrieval rate of 15.2GB an hour was therefore multiplied by the $0.011 per gigabyte charged for downloads from Glacier. And then multiplied by the 744 hours in January. Once tax and bandwidth charges were added, in came the bill for $158.
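
To make the arithmetic concrete, here is the billing formula from the article worked through in a few lines of Python (numbers from the post; tax and bandwidth excluded):

# Glacier's retrieval pricing at the time took the peak hourly retrieval
# rate and applied it to every hour of the month.
peak_gb_per_hour = 15.2   # busiest hour of the month
price_per_gb = 0.011      # USD per GB retrieved
hours_in_january = 744

retrieval_charge = peak_gb_per_hour * price_per_gb * hours_in_january
print(retrieval_charge)   # ~124.40 USD; tax and bandwidth took the bill to ~$158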
Karpinnen's post is a cautionary tale for Glacier believers, but the real problem is he didn't look the gift horse in the mouth:
But doing the math (and factoring in VAT and the higher prices at AWS’s Irish region), I had the choice of either paying almost $10 a month for the simplicity of S3 or just 87¢/mo for what was essentially the same thing,
He should have asked himself how Amazon could afford to sell "essentially the same thing" for one-tenth the price. Why wouldn't all their customers switch? I asked myself this in my post on the Glacier announcement:
In order to have a competitive product in the long-term storage market Amazon had to develop a new one, with a different pricing model. S3 wasn't competitive.
As Sharwood says:
Karpinnen's post and Oracle's carping about what it says about AWS both suggest a simple moral to this story: cloud looks simple, but isn't, and buyer beware applies every bit as much as it does for any other product or service.
The fine print was written by the vendor's lawyers. They are not your friends.

Self-Publishing, Authorpreneurs & Libraries / LITA

“Self-publishing represents the future of literature. Its willingness to experiment, its greater speed to market, its quicker communication with the audience, its greater rewards and creative control for creators, its increasing popularity all augur for the continued expansion of self-publishing and its place as the likely wellspring for our best new works” (LaRue, 2014, para. 13).

The self-publishing movement is alive and well in public libraries across the nation, especially within the fiction genre. In a recent American Libraries magazine article, “Solving the Self-Published Puzzle,” Landgraf lists several public libraries acquiring self-published books to develop their collections with local authors and works of regional interest.

I think of how this movement will grow among other types of library communities, and most importantly, how self-publishing technology has made it possible for all of us to publish and access high-quality digital and print resources. Will academic librarians assist teaching faculty to publish their own digital textbooks? Will creative writing classes add an eBook publishing component into their curriculum?  Will special library collections, archives, or museums use these online platforms to create wonderful monographs or documents of archived material that will reach a greater audience?  The possibilities are endless.

What was most interesting to me while reading the American Libraries piece is that libraries are including independent publishing advice and guidance workshops in their makerspace areas. The freedom of becoming a self-published author comes with a to-do list: cover illustrations, ebook format conversion (EPUB, MOBI, etc.), online editing, metadata, price and royalties, contracts, and creation of website and social media outlets for marketing purposes. These are a few of the many things to think about. Much needs to be learned, and librarians can become proficient in these areas in order to create their own creative projects or assist patrons in self-publishing. It is refreshing to see that an author can bypass the gatekeepers of publishing to get their project published, and that our profession can make this phenomenon more accessible to our communities.

We can convert writers into authorpreneurs, a term I recently discovered (McCartney, 2015). The speed of publishing is awesome: no waiting. A project can appeal to a particular audience not reachable through traditional routes of publishing. If the author is interested, indie writers have platforms through which they can get picked up by renowned publishing houses and agents. Traditional authors may also take the plunge into self-publishing. The attraction for librarians is that the published books can be distributed through platforms like OverDrive that libraries already use. In addition, eBook publishing sites make it possible for users to view their item on several mobile devices through apps or eReaders. Many of the organizations listed below handle the file conversions needed to make a book readable on all devices.

I have recently become fascinated by the self-publishing movement and plan to write more about the ongoing developments.  I have yet to read my first self-published book and plan to do so soon.  For now, I leave you with some resources that may help you begin thinking about how to use self-publishing to serve your communities and create innovative ways to expand your library services.

Resources

  • BookWorks, The Self Publishers Association: https://www.bookworks.com/
  • 52 Novels: https://www.52novels.com/
  • CreateSpace (Amazon): https://www.createspace.com/ (tools and services that help you complete your book and make it available to millions of potential readers)
  • Kindle Direct Publishing (KDP, Amazon): https://kdp.amazon.com/
  • KDP EDU (textbook publishing): https://kdp.amazon.com/edu
  • KDP Kids (children's books, and many more genres…): https://kdp.amazon.com/kids
  • Apple iBookstore: http://www.apple.com/ibooks/
  • Apple Pages: http://www.apple.com/mac/pages/
  • Barnes & Noble Nook Press: https://www.nookpress.com/
  • BookBaby: https://www.bookbaby.com/
  • The Book Designer: http://www.thebookdesigner.com/
  • Bowker: http://www.bowker.com/
  • Calibre: http://calibre-ebook.com/
  • EBook Architects: http://ebookarchitects.com/
  • Inscribe Digital: http://www.inscribedigital.com
  • Jutoh: http://www.jutoh.com/
  • Kobo Writing Life: https://www.kobo.com/writinglife
  • Ingram Spark: https://www.ingramspark.com/
  • Leanpub: https://leanpub.com/
  • Lulu: https://www.lulu.com/
  • PressBooks: http://pressbooks.com/
  • Project Gutenberg Self-Publishing Press: http://self.gutenberg.org/
  • Scribd: https://www.scribd.com
  • Scrivener: https://www.literatureandlatte.com/scrivener.php
  • Sigil: https://code.google.com/p/sigil/
  • Smashwords: https://www.smashwords.com/
  • Wattpad: https://www.wattpad.com/

Indie Title Reviews

Libraries struggle with collection development for the indie market. Reviews of these titles are not readily available in the usual book review sources heavily used for mainstream titles, so the librarian is left to search blogs and other social media outlets to learn of new worthy titles for purchase. Below is a list of self-publishing collection development resources for libraries and readers.

  • Biblioboard: https://www.biblioboard.com/
  • eBooksAreForever: http://ebooksareforever.com/
  • GoodReads: https://www.goodreads.com/
  • Indie Reader: http://indiereader.com/
  • PW Select: http://www.publishersweekly.com/pw/by-topic/authors/pw-select/
  • Self-e: http://self-e.libraryjournal.com/
  • Self-Publishing Review: http://www.selfpublishingreview.com/

References

Friedman, J. (2015). Helping indie authors succeed: What indie authors need to know about the library market. Publishers Weekly, 262(39), 52.

Gross, A. (2015). Digital winners in the Bay Area. Publishers Weekly, 262(24), 18-20.

Landgraf, G. (2015, October 30). Solving the self-published puzzle. American Libraries Magazine. Retrieved from http://americanlibrariesmagazine.org/2015/10/30/solving-the-self-published-puzzle/

LaRue, J. (2015). From maker to mission. Library Journal, 140(16), 41.

LaRue, J. (2014). The next wave of tech change. Library Journal, 139(16), 47.

McCartney, J. (2015). A look ahead to self-publishing in 2015. Publishers Weekly, 262(3), 36-38.

Palmer, A. (2014). What every indie author needs to know about e-books. Publishers Weekly, 261(7), 52-54.

Peltier-Davis, C. A. (2015). The cybrarian’s web 2: An a-z guide to free social media tools, apps, and other resources. Medford, NJ: Information Today.

Quint, B. (2015). So you want to be published. Information Today, 32(2), 17.

Scardilli, B. (2015). Public libraries embrace self-publishing services. Information Today, 32(5), 1-26.

Staley, L. (2015). Leading self-publishing efforts in communities. American Libraries, 46(1/2), 18-19.

Sleep / William Denton

Saturday night I was passing by Soundscapes, the best music store I know in Toronto, so I went in to see what they had. I usually only buy CDs at gigs now, because I use a streaming music service, but I saw something that isn’t available for streaming and that I would like to have as a real object: the full eight-hour performance of Sleep by Max Richter.

It’s eight CDs plus a Blu-ray that has everything on one disc. I don’t actually have a CD player any more—it broke a long time ago, and then my DVD player broke a couple of years ago—so I needed to rip it (I use FLAC) to listen to it. I put in the first disc and was very surprised: the disc wasn’t part of Sleep!

Not sleep

Rhythmbox recognized it as Toggo Music 41, which is some kind of compilation CD by many different artists. The disc is printed as CD 1 of Sleep and has the Deutsche Grammophon label on it, however. Very strange! What’s going on in the DG factory?

I phoned Soundscapes, and they said I should bring it back for credit or exchange. They only have one copy of the box in at a time, though. I asked how long it would take to get a replacement in, and the fella said he didn’t know; I’d have to bring it in.

I wanted to buy a physical copy so a local store could get some of my money, but now because of this bizarre printing error I’m going to have to make three visits there just to get the right version. I think it’s important to go to some extra effort to support local businesses, but one doesn’t expect Nature to introduce a glitch like this into everyday life. However, one must accept it.

VIVO Updates for January 31, 2016 / DuraSpace News

From Mike Conlon, VIVO Project Director

Jon Corson-Rikert retires from Cornell.  Jon Corson-Rikert, the creator of VIVO, has retired from Cornell.  It is hard for me to imagine a better colleague – thoughtful, considerate, creative, insightful, respectful, productive, and genuinely kind.  I hope you had a chance to meet Jon, to see him present, to work with him, and to share your thoughts with him.  Too often we rush through our days.  You may want to stop for a moment and recall moments you may have had with Jon and what those moments mean to you.  

Color Our Collections! / DPLA

It’s #ColorOurCollections week and, grown-ups, we’re looking at you! Join the adult coloring craze and put your colorful spin on these illustrations from our collection. For lovers of landscapes, puppies, flowers, creepy-crawlies, celestial maps, and transportation innovation – we’ve got you covered!

Color your favorites and share them with us all week at @DPLA or on Facebook using #ColorOurCollections.

Featured images: an illustration from The Dogs of Great Britain, America, and Other Countries (1879); an illustration from The Gardeners' Chronicle (1880); an illustration from The History of the Caribby-Islands (1666); an illustration from Pennsylvania Illustrated (1874); Snowden's locomotive machine (1825); Cetus… (Coelum Stellatum), a celestial map by Johann Elert Bode (1801); a Cinderella illustration from Fairy Realm: A Collection of the Favourite Old Tales (1866); a woodcut of early canyon transportation (c.1890-1920); a print, from a woodcut, of a woman with a bicycle (1905); Malenconico per la terra (1618); "Tobias," a woodcut of a landscape with a view into a castle room; and a woodcut drawing of early Alta (c.1880-1910).

Images selected from HathiTrust, Biodiversity Heritage Library, David Rumsey Map Collection, California Historical Society via University of Southern California Libraries, University of Utah Libraries via Mountain West Digital Library, Perkins School for the Blind via Digital Commonwealth and New York Public Library.

Islandora 7.x-1.7 Release Team ASSEMBLE! / Islandora

Ever thought about joining an Islandora Release Team? If you have worked with Islandora at all, there's a role to suit your skills and we'd like your help. Join Release Manager Dan Aitken and a team of your fellow volunteers to help get Islandora 7.x-1.7 released this April. We are looking for volunteers for the following roles:

Documentation:

Documentation will need to be updated for the next release, and any new components will also need to be documented. If you are interested in working on the documentation for a given component, please add your name to any component here.

Testers:

All components with JIRA issues set to 'Ready for Test' will need to be tested and verified. Additionally, testers test the overall functionality of a given component. If you are interested in being a tester for a given component, please add your name to any component here. Testers will be provided with a release candidate virtual machine to do their testing on.

Auditors:

Each release, we audit our README and LICENSE files. Auditors will be responsible for auditing a given component. If you are interested in being an auditor for a given component, please add your name to any component listed here.

Component Managers:

Component managers are responsible for the code base of their components. If you are interested in being a component manager, please add your name to any component listed here.

More information about contributor roles can be found at http://islandora.ca/resources/contributors. If you'd like to assist with the release but don't know what to do, feel free to drop us a line and we can point you in the right direction so you can help out when the time comes.

The tentative schedule for the release is:

  • Code Freeze: February 18, 2016
  • First Release Candidate: March 3, 2016
  • Release: Mid to late April

Dispatches from the User List: Islandora Scholar, Solr for Chinese Text, and Drag & Drop Ingest / Islandora

Time to shine a spotlight on some great information you may have missed if you're not a subscriber to our listserv, as we've done a few times before.

Florida State University Report

First up is a post that wasn't actually on our main listserv. Instead, we're visiting the IR Interest Group's direct listserv, where Florida State University's Bryan Brown shared a report he wrote about why and how they migrated from Bepress to Islandora Scholar. Although the report is specific to their use case, there's some great stuff in there for anyone who is considering a migration - especially for an institutional repository, since FSU's work with Islandora Scholar is some of the best in the community.

Indexing Chinese text with Solr

Back to the main listserv, where Mark Jordan from Simon Fraser asked the community for advice on how to get Solr to handle "phrase" searches with multiple Chinese characters. Commenters brought up a presentation given by Jeff Liu from the Chinese University of Hong Kong at the Islandora Conference last summer, which showed how they handled this issue. Jeff himself chimed in with the details: a custom Solr config file from discoverygarden, Inc. that can be found on their GitHub.

Drag and Drop Ingest

Finally, a really great solution for "easy" ingest comes from the University of North Carolina Charlotte's Brad Spry, in response to a request from Jennifer Eustis of the University of Connecticut for advice on how other Islandora sites handle the ingest of very large files (Islandora Plupload is another approach). Brad's solution was to create "a 'drag and drop' ingest solution based upon a local NAS system with built in rsync, and server-side incron, PHP CLI, and islandora_batch," allowing UNCC's archivists to have all the power of islandora_batch without the need to use terminal commands. It's a very user-friendly approach that UNCC has shared on their GitHub.

This was followed up on Friday with another tool that works alongside the drag and drop ingest: Islandora Ingest Indicator, which is "designed to communicate Islandora ingest status to Archivists; a methodology for integrating Blink indicator lights with an Islandora ingest server. We have programmed Blink to glow GREEN for indicating 'ready for ingest' and RED for 'ingest currently running.'"

Google Funds Frictionless Data Initiative at Open Knowledge / Open Knowledge Foundation

We are delighted to announce that Open Knowledge has received funding from Google to work on tool integration for Data Packages as part of our broader work on Frictionless Data to support the open data community.

 

What are Data Packages?

The funding will support a growing set of tooling around Data Packages.  Data Packages provide functionality for data similar to “packaging” in software and “containerization” in shipping: a simple wrapper and basic structure for the transportation of data that significantly reduces the “friction” and challenges associated with data sharing and integration.

Data Packages also support better automation in data processing and do so without imposing major changes on the underlying data being packaged.  As an example, comprehensive country codes is a Data Package which joins together standardized country information from various sources into a single CSV file. The Data Package format, at its simplest level, allows its creator to provide information describing the fields, license, and maintainer of the dataset, all in a machine-readable format.
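To make that concrete, here is a hypothetical sketch of a minimal datapackage.json descriptor for a one-CSV dataset, written with Python's standard library. The package name, file path, and fields are invented for the example; the authoritative format lives in the Frictionless Data specs at http://data.okfn.org/.

    import json

    # An illustrative (not authoritative) Data Package descriptor: a plain
    # JSON file that sits next to the data and describes it.
    descriptor = {
        "name": "country-codes-example",           # hypothetical package name
        "title": "Comprehensive country codes",
        "licenses": [{"name": "odc-pddl"}],        # machine-readable license info
        "resources": [
            {
                "path": "data/country-codes.csv",  # where the data lives
                "schema": {                        # describes the CSV's fields
                    "fields": [
                        {"name": "name", "type": "string"},
                        {"name": "iso3166_alpha2", "type": "string"},
                    ]
                },
            }
        ],
    }

    # Writing the descriptor alongside the CSV "packages" the data without
    # changing the underlying file at all.
    with open("datapackage.json", "w") as f:
        json.dump(descriptor, f, indent=2)
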

In addition to the basic Data Package format, which supports any data structure, there are other, more specialised Data Package formats: Tabular Data Package for tabular data based on CSV, and Geo Data Package for geodata based on GeoJSON. You can also extend Data Package with your own schemas and create topic-specific Data Packages like Fiscal Data Package for public financial data.

What will be funded?

The funding supports adding Data Package integration and support to CKAN, BigQuery, and popular open-source SQL relational databases like PostgreSQL and MySQL / MariaDB.

CKAN Integration

CKAN is an open source data management system that is used by many governments and civic organizations to streamline publishing, sharing, finding and using data. This project implements a CKAN extension so that all CKAN datasets are automatically available as Data Packages through the CKAN API. In addition, the extension ensures that the CKAN API natively accepts Tabular Data Package metadata and preserves this information on round-tripping.

BigQuery Integration

This project also creates support for import and export of Tabular Data Packages to BigQuery, Google’s web service for querying massive datasets. This involves scripting and a small online service to map Tabular Data Package to BigQuery data definitions. Because Tabular Data Packages already use CSV as the data format, this work focuses on the transformation of data definitions.
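For a flavor of what such a transformation might involve, here is a hedged Python sketch that maps Table Schema field types onto BigQuery column types. The correspondences below are illustrative guesses, not the project's actual mapping.

    # Hypothetical mapping from Tabular Data Package field types to BigQuery
    # column types. A real mapping would cover more types and edge cases.
    TYPE_MAP = {
        "string": "STRING",
        "integer": "INTEGER",
        "number": "FLOAT",
        "boolean": "BOOLEAN",
        "date": "DATE",
        "datetime": "TIMESTAMP",
    }

    def to_bigquery_schema(fields):
        """Translate Table Schema field descriptors into BigQuery columns."""
        return [
            {"name": f["name"],
             "type": TYPE_MAP.get(f.get("type", "string"), "STRING")}
            for f in fields
        ]

    print(to_bigquery_schema([
        {"name": "country", "type": "string"},
        {"name": "population", "type": "integer"},
    ]))
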

General SQL Integration

Finally, general SQL integration is being funded, covering key open source databases like PostgreSQL and MySQL / MariaDB. This will allow Data Packages to be used natively in an even wider variety of software that depends on these databases, beyond the systems listed above.

 

These integrations move us closer to a world of “frictionless data”. For more information about our vision, visit: http://data.okfn.org/.

If you have any questions, comments or would like more information, please visit this topic in our OKFN Discuss forum.


 

How to Talk About User Experience / LibUX

In 2015, Craig M. MacDonald published interesting research reporting “the results of a qualitative study involving interviews with 16 librarians who have ‘User Experience’ in their official job title.” He was able to demonstrate the quite healthy state of — let’s capitalize it — Library User Experience. Healthy, but emerging. The blossoming of library user experience roles, named and unnamed, the community growing around it (like on Slack and Facebook), and the talks, conferences, and corresponding literature all signal a broad — if shallow — pond, because while we can workshop card sorts and redesign websites, we find it pretty hard to succinctly answer: what is user experience?

Some resist definition in the way others resist referring to people as “users” — you know who you are — but this serves only to conflate an organizational philosophy of being user-centric with the practice of user experience design. These aren’t the same. Muddling the two confuses a service mentality — a bias — with the use of tools and techniques to measure and improve user experience.

Although she is writing about Service Design, Jess Leitch sums up my same concerns about the fluid definition of “user experience design.”

I will argue that the status of Service Design and its impact as a professional field is impacted by the absence of a single consistent definition of the area, the wide spread of professional practices and the varied backgrounds and training of its practitioners. Jess Leitch, What is Service Design?

How we talk about user experience matters.

The User Experience

When we talk about the user experience, we are talking about something that can be measured. It is plottable and predictable.

The user experience is the measure of your end-user’s interaction with your organization: its brand, its product, and its services.

The overall value of the user experience is holistic: a cumulative quality, one we try to understand through models like Peter Morville’s Honeycomb that serve as practical ways to focus our efforts.

Some, like the Kano Model, reflect how new features to a product or service impact — positively or negatively — the customer experience, the work involved implementing them (and whether it is worth it), and predict what impact if any these features will have in the long run. Others, like Coral Sheldon-Hess’s CMMI-based model, simply define how much consideration is afforded to the user experience at organizational levels.

Kano Model

This is to say that the value of the user experience is both qualitative and quantitative, in which the who and why give meaning to the when, what, and how.

In this way, talking about “user experience” as a measurement makes otherwise woo-woo intangible car-salesman bullshit — “this insert-thing-here has a super UX” — into something that can be practically evaluated and improved.

The customer is always right — but the user isn’t

What we choose to call our end-users can betray the underlying assumptions we make about them, about our relationship, and about the business model, and these assumptions shape how we interpret results. “Customer experience” is conceptually different from “user experience.” The role of the patron is different from that of the member.

These distinctions can have real impact in determining which metrics matter, and where to focus.

I like “user” as a default. It gets a little tsked-at for being impersonal, but I suspect “user” is more conducive to data-driven design and development than one where the customer is always right.

What matters is that our user-, patron-, member-, customer-, xenomorph-centric ethic is the same, whether we are motivated by business, bosses, or — er — our humanity.

Usability isn’t the Point

Useful, usable, desirable: like three legs of a stool, if your library is missing the mark on any of these it’s bound to wobble. Amanda Etches and Aaron Schmidt

It is easy to mistake “usability” for “user experience” because a product or service’s ease of use is often so crucial that blogs like mine beat that drum ad nauseam, but we should resist using these terms interchangeably. We otherwise narrow the scope from a nuanced, holistic approach to product and service design and business to one reduced to — I don’t know — bashing hamburger menus on Twitter. When all user experience designers see is usability, like bulls see red, they may forget that hard-to-learn, ugh, inconvenient interfaces, tasks, and services can nevertheless net a positive user experience, succeed, make money, and do work.

One of the reasons I like Peter Morville’s User Experience Honeycomb so darn much is that it is such a useful way to visualize a multifaceted user experience where “usability is necessary but not sufficient.” Where the value of the UX is cumulative, all ships rise with the tide. The look might suffer, but the drag that poor aesthetics creates is tempered by its usefulness.

“The honeycomb hits the sweet spot by … helping people understand the need to define priorities. Is it more important for your [service] to be desirable or accessible? How about usable or credible? The truth is, it depends on your unique balance of context, content, and users, and the required tradeoffs are better made explicitly than unconsciously.” Peter Morville

The added value of this model is that user experience is represented as a hive. We can add, remove, jostle facets as we need. It can not only grow in area, but we can even demonstrate its three-dimensionality by — let’s say — elaborating on the relationship between “usable” and “useful.”

When we talk about “usability,” it should be in relation to something’s “utility”:

  • usable — something is easy to use and intuitive
  • utility — something fulfills a demonstrable need
  • useful — a usable product, service, application, process, etc., that fulfills a demonstrable need

The honeycomb model for user experience

Usability and utility are equally important and together determine whether something is useful: it matters little that something is easy if it’s not what you want. It’s also no good if the system can hypothetically do what you want, but … is too difficult. Jakob Nielsen, Usability 101: Introduction to Usability

The User Experience and Organizational Inertia

Good design is determined by its functional success, its efficacy, how masterfully it serves this-or-that purpose. Its aesthetic has a role. Its emotional impact plays a part. But design is not art. The practical application of design thinking to services or instruction or libraries isn’t just to make an awesome website but to empower decision makers with user-centric strategies to better meet mission or business goals.

Most of the time there are business- or mission-sensitive stakeholders behind user experience design work. In the same way we differentiate design from art, it may be generally more practical to differentiate a user experience design strategy from the desire to make whizzbang emotional experiences.

Often in real-world, business- or mission-driven design work, particularly where design decisions need stakeholder support — sometimes in the form of cold hard cash — “making good experiences” can be nebulous, whereas “demonstrably improving the user experience of such-and-such service in ways that correlate with the success of such-and-such bottom line” is better suited to the kind of buy-in required for organizational user-centricity.

Anyway, in summary: this is how I choose to talk about user experience

As a measurement. Something plottable, predictable.


I write a weekly newsletter called the Web for Libraries, chock-full of data-informed commentary about user experience design, including the bleeding-edge trends and web news I think user-oriented thinkers should know. Take a minute to sign up!

The post How to Talk About User Experience appeared first on LibUX.

February 1-5 is #ColorOurCollections Week / Open Library

There are a lot of neat public domain images in our collections. We’ve highlighted them in the past and continue to encourage people to use, remix and share our content. This week for the #ColorOurCollections event, we’ve pulled out some especially colorable images and made them into PDFs that you can print out and color. We’ve created a few pairs of images we think you’ll like. Here are the images and links to the books where you can find and download even more. If you just want to download a zip file of all eight images, click here.


Technology Awareness Resources / pinboard

The resources below were compiled as part of my research while writing The Neal-Schuman Library Technology Companion: A Basic Guide for Library Staff (forthcoming from ALA Neal-Schuman, 2016). Sections include: Websites and Blogs; Twitter; Electronic Discussion Lists; Periodicals; Continuing Education, Conference, and Trade Show Opportunities; and Find Libraries Near You To Visit.

ALA joins NFCC to serve military and their families through libraries / District Dispatch

ALA member libraries and the NFCC are partnering to deliver financial education and resources to members of the military and their families in libraries across the country.

ALA has joined forces with the National Foundation for Credit Counseling® (NFCC®) and local libraries to deliver financial education and resources to members of the military and their families across the country.

Members of the U.S. armed forces, Coast Guard, veterans, and their families face financial challenges often not adequately addressed by resources designed for the general public. ALA and NFCC will leverage local member agencies and libraries to help improve the financial lives of service members, veterans and their families.

ALA President Sari Feldman commented on the vital new initiative:

The Digital Age has seen libraries transform and be recognized as a critical part of the infrastructure delivering services to communities nationwide. It is a particular honor to be able to serve those who have sacrificed so much on behalf of all Americans – our veterans and their families. We are especially pleased to partner with NFCC, an organization that understands the unique financial needs of military families. Together, our organizations and local members will boost access to relevant and customized resources and learning where it is needed most.

Recent preliminary data from NFCC’s Sharpen Your Financial Focus™ (Sharpen) program reveals military families face unique challenges. For example, military Sharpen participants had higher unsecured debt balances ($400-$500 more) than the average Sharpen participant. Fewer tangible assets and higher debt-related expenses were also more common among these families. Relocation, frequent deployment, and changes in local economic conditions are likely among the factors influencing these impacts.

This new initiative emerged out of conversations related to the National Policy Agenda for Libraries, and how ALA and libraries may partner with others to build capacity and further expand services to meet community needs and/or advance the public interest. One of the identified community focuses in the policy agenda is veterans and military families. Roughly 22 million Americans are veterans of military service, and another 2.2 million currently serve on military active duty or in reserve units.

NFCC and ALA had the opportunity to discuss this unique partnership on the radio show, Home & Family Finance, which regularly provides practical financial information to its listeners across the country. It is nationally syndicated and airs on the American Forces Radio Network and Sirius/XM Satellite Radio.

NFCC member agencies will work with local libraries to offer financial education workshops, access to personalized counseling, and other resources that help families reach their financial goals and contribute to the economic stability of their neighborhoods. The workshops will cover subjects like housing, budgeting, banking, credit, permanent change of station (PCS) & deployment, and career transition into civilian life. Local libraries and certified counselors will select the most relevant and timely topics for their communities.

This collaboration builds on relationships and library services developed to meet the needs of veterans, service members and their families. One example of this work can be found in the Veterans Connect @ the Library initiative with California libraries and the California Department of Veterans Affairs. Close to 40 Veterans Resource Centers have opened or are planned to open this year to connect veterans and their families to benefits and services for which they are eligible.

NFCC and ALA will announce the local communities and libraries where the program will first be launched in the coming weeks. For more information, please email: lclark@alawash.org

The post ALA joins NFCC to serve military and their families through libraries appeared first on District Dispatch.

Emerging Tech: Bluetooth Beacons and the DPLA / Peter Murray

This is the text of a talk that I gave at the NN/LM Greater Midwest Region tech talk on January 29, 2016. It has been lightly edited and annotated with links to articles and other information. The topic was “Emerging Technology” and Trisha Adamus, Research Data Librarian at UW-Madison and Jenny Taylor, Assistant Health Sciences Librarian at UIC LHS in Urbana presented topics as well.

Bluetooth Beacons

Libraries of all types face challenges bridging the physical space with the online space. I'd wager that we've all seen stories of people walking around with their eyes glued to their mobile devices; you and I might have even been the subject of such stories. We want users to know about new services available in our spaces — both the physical and the online — yet it is difficult to connect to users.

Bluetooth Beacons, along with a phone and applications written to make use of beacons, can turn a user's smartphone into a tool for reaching users with information tailored to your library. Some examples:

Facebook Bluetooth Beacons

Facebook is one company experimenting with Bluetooth beacons. In a trial program underway now, Facebook will send you a beacon that you can tie to a Facebook Place. When a patron uses Facebook in range of the beacon, they see a welcome note and posts about the place and are prompted to like the associated Facebook Page and check in at the location. Facebook Bluetooth Beacons are in limited deployment now, and there is a web page available for you to sign up to receive one.

Brooklyn Museum

The Brooklyn Museum experimented with indoor positioning with beacons in 2014 and 2015. They scattered beacons throughout the galleries and added a function to their mobile app to pinpoint where the user is as they ask questions about artwork. They have a blog post on their website where they describe the challenges with positioning the beacons and having the beacons fit into the aesthetics of their gallery spaces.

University of Oklahoma NavApp

As described by the University of Oklahoma Libraries, its NavApp guides users throughout the main library building, including various resources, service desks, and event spaces. The app also includes outdoor geolocation to guide users to the libraries' branches and special collections. When a student is standing in front of a study room, the app shows how to book the room. The library also has about 100 beacons in its museum space to show more information and videos about artworks.

How Bluetooth Beacons Work

The foundation of Bluetooth Beacons is the iBeacon protocol. As with anything that has an 'i' in front of it nowadays, you would rightly guess that this is something created by Apple. Announced in 2013, Apple defined a way for an iPhone to figure out its location in an indoor space. (When outside, a device can receive GPS satellite signals, but those signals do not penetrate into buildings.) The iBeacon technology has been adopted by many companies now; it isn't something limited to Apple. A beacon continuously transmits a globally unique number to any device within range — typically up to about 30 feet, sometimes farther. An app on the device can then take action based on that unique number.
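To make "a globally unique number" concrete: an iBeacon actually advertises a 16-byte proximity UUID plus two 16-bit numbers called major and minor. The following Python sketch decodes those fields from a raw Bluetooth manufacturer-data payload, following the widely documented iBeacon frame layout; a production app on iOS or Android would use the platform's beacon APIs rather than parsing bytes by hand.

    import struct
    import uuid

    def parse_ibeacon(mfr_data: bytes):
        """Pull the iBeacon identifier out of a BLE manufacturer-data payload.

        Layout: Apple's company ID (0x004C, little-endian), type 0x02,
        length 0x15, 16-byte proximity UUID, 2-byte major, 2-byte minor,
        and a 1-byte calibrated TX power.
        """
        if len(mfr_data) < 25 or mfr_data[:4] != b"\x4c\x00\x02\x15":
            return None  # not an iBeacon frame
        proximity_uuid = uuid.UUID(bytes=mfr_data[4:20])
        major, minor = struct.unpack(">HH", mfr_data[20:24])
        tx_power = struct.unpack("b", mfr_data[24:25])[0]  # RSSI at 1 m, in dBm
        return proximity_uuid, major, minor, tx_power
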

Say, for instance, you have your library's app on your phone when you walk into the library. The beacon at the entrance is transmitting its unique number, and your phone wakes up the app when it gets in range of the library's beacon. The app can then decide what to do — maybe it connects to the library's web server to get the time when the building closes and displays an alert with that information. Or the app can retrieve the events calendar and let you know what is happening in the library today. Maybe the app checks your hold queue to see if you have items to pick up.

Once inside the library, the smartphone starts receiving unique identifiers from beacons scattered around the space. The smartphone app has a built-in map of where the devices are located, and based on which identifiers it receives it can figure out where in the space the phone is. As you move with your smartphone around the space, it sees different identifiers and in that way can track your movement. So when you get that notification about an item to pick up from the hold queue, a map in the library app can guide you to the hold pickup location.
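Here is a toy sketch of that app-side logic, with every identifier and location invented for illustration: the app carries a map from beacon identifiers (reduced here to major/minor pairs) to named spots in the building, and treats the beacon with the strongest signal as the phone's location. Real apps smooth signal readings over time rather than trusting a single scan.

    # Hypothetical beacon-to-location map an app might ship with.
    BEACON_LOCATIONS = {
        (1, 1): "entrance",
        (1, 2): "hold pickup shelf",
        (1, 3): "study rooms",
    }

    def locate(sightings):
        """sightings: list of ((major, minor), rssi_dbm) pairs from one scan."""
        known = [s for s in sightings if s[0] in BEACON_LOCATIONS]
        if not known:
            return None
        strongest = max(known, key=lambda s: s[1])  # higher RSSI = closer
        return BEACON_LOCATIONS[strongest[0]]

    print(locate([((1, 1), -82), ((1, 2), -61)]))  # -> hold pickup shelf
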

It is important to note that all the intelligence is in the smartphone app. The beacon itself is just a dumb device that transmits the same unique number over and over. The beacon is not connected to your wired or wireless network, and it doesn't receive any information from the smartphone. It is up to the smartphone and the apps on it to do something with the unique number from the beacon. This means that the beacons themselves can be really cheap — sometimes less than $5 — and can last a really long time on one battery — months or years. That's why Facebook can give them away for free and why retailers are installing dozens of them per store.

Concerns about Beacons

You might think all of this sounds great — a futuristic science fiction world where machines know your exact location and can serve up information tailored specifically to where you are. There are some not so nice aspects, too.

Privacy is one. Your position within a building — whether it be in front of a shelf full of books or a shelf full of flu remedies — can be recorded, along with the exact date and time, by any number of third parties. The vendor that is supplying the Rite Aid pharmacy chain with beacons for its 4,500 stores is also partnering with publishers like Condé Nast and Gannett, so apps from those companies will also be listening for the unique beacon identifiers. Now apps like Epicurious, Coupon Sherpa and ScanLife will know when and where your phone has been in a store.

Security is another. In the basic iBeacon protocol, there is nothing that validates a beacon's signal, so it is possible to fool a smartphone app into thinking it is near a beacon when it isn't. There is a story of how the staff at Make Magazine hacked a scavenger hunt at the 2014 Consumer Electronics Show. They showed how they could win the hunt without ever being in Las Vegas.

If you are interested in hearing more about Bluetooth Beacons, check out the article on my blog that will have links to the things I've talked about and more.

For More Information…

Digital Public Library of America

One of my fondest themes in the evolution of library services is how libraries have dealt with massive waves of information. In fact, I think we are in the third such wave of change. The first wave came with the printing press. It gave rise first to bibles, then to all sorts of commercially published tomes of fact and fiction. Libraries grew out of a desire to make that information more broadly accessible, and that was the first wave — commercially produced physical material. The second wave came just a few decades ago with commercially produced digital material. You know what this looks like: journal articles as standalone PDF files, electronic books downloaded to handheld devices, and indexes first on far away computers with the Thomson Reuters Dialog system — then on CD-ROMs — and then spread all over the world wide web. For a time, libraries tried to collect and curate the wave of commercially produced digital material themselves, but for the most part this has been ceded to commercial providers.

And now we are in the third wave: local, digital materials. Libraries are taking on the responsibility of stewardship for article preprints, reports, datasets, and other materials for our users. This is not necessarily a new thing — through both the first and second waves of commercially produced information, libraries have been a place for local, unique material. What has changed is that libraries have become a publisher of sorts by offering that information to a community broader than could be reached by those who could physically come to the library. In this third wave we are not only taking in born-digital materials, but also reaching back into our collections and archives to digitize and publish material that is unique to our holdings.

This dispersion of library activity was becoming a problem, though. How could users find the relevant material published by the library down the street, across the state, or on the other side of the country? The European Union, faced with this same question last decade, formed Europeana — an internet portal that provides pointers to the collective digital information of Europe. In 2011, libraries in the U.S. took on the task of forming our own solution, and it is the Digital Public Library of America.

DPLA Portal

Perhaps the most well known aspect of the DPLA is its search portal, and the URL to it is very easy to remember: dp.la. If you can remember "Digital Public Library of America", you can remember this web address. The portal has several ways to search for content: you can look at curated exhibitions of content pulled from all the DPLA partners, you can explore by place through an interactive map, and you can look at a timeline of material. There are apps that use the DPLA application programming interface to search for material in innovative ways or to integrate material from the DPLA into other systems.
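For a flavor of the API, here is a minimal Python sketch of a keyword search. It assumes the third-party requests library and a (free) DPLA-issued API key; the endpoint and field names follow DPLA's developer documentation as I recall it, so verify them before relying on this.

    import requests

    API_KEY = "YOUR_DPLA_API_KEY"  # placeholder; request a real key from DPLA

    resp = requests.get(
        "https://api.dp.la/v2/items",
        params={"q": "cookery", "page_size": 5, "api_key": API_KEY},
    )
    resp.raise_for_status()
    # Each "doc" carries harvested metadata; sourceResource holds the
    # descriptive fields in DPLA's metadata application profile.
    for doc in resp.json().get("docs", []):
        print(doc.get("sourceResource", {}).get("title"))
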

The DPLA Portal is just that — it is an aggregation and a view of metadata harvested from hubs across the country. The DPLA Portal doesn't store information; it just points to where the information is stored. A series of content hubs and service hubs provide metadata to DPLA. Content hubs are large standalone units such as ARTstor, the Government Printing Office, and the Internet Archive. Service hubs gather metadata from libraries in a region and provide a single feed of that metadata to DPLA. Service hubs are also a gathering point for professional development, expertise on digitization and metadata creation, and community outreach.

"Hydra-in-a-Box"

The most difficult part of this library-to-hub-to-portal arrangement is at the local library. At this point in time, it is tricky to publish information to the web in a way that can be harvested by a service hub and maintained for the long term. Your average digital asset management system has a lot of moving parts and requires complex server setups. The Hydra-in-a-Box project aims to reduce this complexity so a library won't need developers to install, configure and run the application. The project launched last year and is nearing the completion of the design phase.

E-books

Since the early formation days of the DPLA, one of the most desired streams of activity has been around ebooks. Ebooks have not yet been a good fit for library service offerings. We've seen problems ranging from purchasing and licensing models that don't work well for libraries to electronic book platforms that have limited or no integration with existing library systems. DPLA has a number of ebook initiatives where librarians and publishers are working through ways to smooth the rough edges. One is the Open Ebooks Initiative, a partnership among DPLA, the New York Public Library, and the First Book organization. This initiative is offering public domain and currently popular books for free to low-income students. DPLA is also the host of working groups that aim to develop a national library ebook strategy.

DPLA Community Representatives

If you are interested in getting involved with the DPLA, one of the best ways to do so is to join the community reps program. These volunteers are a two-way conduit of information between users of DPLA services and the DPLA staff. Community reps organize regional activities to promote DPLA and provide feedback with a local perspective to other reps and to the staff. Applications for the next class of community reps are due on February 19th.

National Library Legislative Day 2016 / District Dispatch

It’s that time again! Registration for the 42nd annual National Library Legislative Day is open.

This year, the event will be held May 2-3, 2016, bringing hundreds of librarians, trustees, library supporters, and patrons to Washington, D.C. to meet with their Members of Congress and rally support for library issues and policies. As in previous years, participants will receive advocacy tips and training, along with important issue briefings, prior to their meetings.

Participants at National Library Legislative Day are also able to take advantage of a discounted room rate by booking at the Liaison (for the nights of May 1st and 2nd). To register for the event and find hotel registration information, please visit the website.

Want to see a little more? Check out the video from last year!


We also offer a scholarship opportunity to one first-time participant at National Library Legislative Day. Recipients of the White House Conference on Library and Information Services Taskforce (WHCLIST) Award receive a stipend of $300 and two free nights at a D.C. hotel. For more information about the WHCLIST Award, visit our webpage.

I hope you will consider joining us!

For more information or assistance of any kind, please contact Lisa Lindle, ALA Washington’s Grassroots Communications Specialist, at llindle@alawash.org or 202-628-8140.

The post National Library Legislative Day 2016 appeared first on District Dispatch.

State Government Information and the copyright conundrum (updated information!) / District Dispatch

Cup of coffee with beans.

CopyTalk is back with a new webinar on February 4, 2016. (photo by trophygeek)

Updated webinar registration information! (see below)

Figuring out whether state government documents are copyrighted is a tricky question. Copyright law has a significant impact on the work of libraries, digital repositories, and even state agencies with regard to digitizing and web archiving state government information.

Free State Government Information (FSGI) http://stategov.freegovinfo.info/ has been steadily working to raise awareness and find pathways forward for policy change with regard to the copyright status of state government publications.

Get the scoop from the FSGI at the next CopyTalk on February 4th at 2 pm Eastern/11 am Pacific.

This presentation will cover:

  • who we are and why we are tackling copyright issues with state government
  • specific state government information projects that academic, state, and digital libraries are engaged in that are impacted by copyright
  • a way forward to address copyright policy in the states: Kyle Courtney’s 50 state survey of copyright policies, State Copyright Resource Center http://copyright.lib.harvard.edu/states/

Speakers:

For full bios see: http://stategov.freegovinfo.info/about

  • Bernadette Bartlett, Library of Michigan, Michigan Documents Librarian
  • Kyle Courtney, Copyright Advisor, Harvard University
  • Kristina Eden, Copyright Review Program Manager, HathiTrust
  • Kris Kasianovitz, Stanford University Library, Government Information

If we have more than 100 attendees, we are charged some ridiculous amount that will come out of my paycheck! So we ask that attendees watch the webinar with colleagues when possible. To access the webinar, go here and register as a guest and you’re in!

Yes, it’s FREE because the Office for Information Technology Policy and the Copyright Education Subcommittee want to expand copyright awareness and education opportunities.

An archived copy will be available after the webinar.

The post State Government Information and the copyright conundrum (updated information!) appeared first on District Dispatch.

ALA’s Charlie Wapner promoted / District Dispatch

Will serve as senior information policy analyst in Office for Information Technology Policy (OITP)

Charlie Wapner, senior policy analyst, ALA Office for Information Technology Policy (OITP).

Please join me in congratulating Charlie Wapner on his promotion from Information Policy Analyst to Senior Information Policy Analyst effective in January 2016.

Many of you know Charlie through his leadership on 3D printing. He completed a major report, “Progress in the Making: 3D Printing Policy Considerations Through the Library Lens,” which attracted library and general press coverage (e.g., Charlie contributed to a piece by the Christian Science Monitor), and he was invited to write an article for School Library Journal based on his report. Charlie also produced a more accessible, shorter report on 3D printing in collaboration with United for Libraries and the Public Library Association, and in December 2015 released a report on the merits of 3D printing and libraries targeted to the national policy community as part of our advocacy in conjunction with the Policy Revolution! initiative. Charlie was invited to present at a number of venues, such as the Dupont Summit and a workshop at Virginia Tech, and was invited as an expert to a 3-day workshop hosted by Benetech (under an IMLS grant) in Silicon Valley.

Notwithstanding the import of Charlie’s 3D printing contributions, the large majority of his time is dedicated to the extensive and wide-ranging research and analysis that he provides under the rubric of the Policy Revolution! initiative. With general (or even vague) direction, Charlie clarifies research needs, finds and digests relevant material, and writes syntheses on topics from veterans’ services and entrepreneurship to broadband and youth and technology. In the past few months, Charlie’s research and analysis have extended to informing our work to identify new collaborators (e.g., funders) and specifically to identify new funding opportunities for OITP and for the Association generally. Going forward, Charlie will also be increasing his focus on international policy work.

Charlie came to ALA in March 2014 from the Office of Representative Ron Barber (Ariz.) where he was a legislative fellow. Earlier, he also served as a legislative correspondent for Representative Mark Critz (Penn.). Charlie also interned in the offices of Senator Kirsten Gillibrand (N.Y.) and Governor Edward Rendell (Penn.). After completing his B.A. in diplomatic history at the University of Pennsylvania, Charlie received his M.S. in public policy and management from Carnegie Mellon University.

The post ALA’s Charlie Wapner promoted appeared first on District Dispatch.