Planet Code4Lib

Building a stronger user experience with our community’s help / DPLA

Following several studies about the use and usability of the DPLA website, we’ve just completed a set of small but significant changes.  We believe these changes will create a more pleasant, intuitive experience on our website, connecting people more easily with the cultural heritage materials our partners provide.

During the evaluation phase of the project, we drew insight from multiple sources and benefited greatly from our community network. In consultation with DPLA staff, two volunteers conducted usability studies of our website, interviewing and observing additional volunteers as they interacted with our site. Professional UX researcher Tess Rothstein conducted a pro bono study of users’ experiences searching the DPLA, while DPLA Community Rep Angele Mott Nickerson focused her study on our map, timeline, and bookshelf features. Alongside these interviews, we conducted in-depth analysis of our usage statistics, gathered via Google Analytics, and we also considered informal feedback from our community of users and partners.

Here are a few lessons we learned, and what we’ve done in response:

Highlighting full access

Anyone who has done a usability study is familiar with the shocking moment when your product completely fails to engage a user in its intended way.  For us, that moment came when a first-time user of our website did not realize that they could get all of the digital materials on our website right now, for free.  They were just one click away from total access — and they didn’t click!  

To ensure that future users don’t miss out, we’ve done a few key things to highlight that our contributors provide public access to all the materials users discover through DPLA. For example, a link that used to read “View object” now says something like this:

Get full image

Refining a search

DPLA is a treasure trove of cultural heritage materials – but sometimes it can be hard to find just the right thing amidst millions and millions of items.  Our research gave us a clearer picture of how to help users when their first search attempt returned too much — or too little — of a good thing.

For example, many of our users rely on our “Refine search” filter to narrow their search results and home in on truly relevant materials.  In our usability studies, we paid attention to which filters interviewees used, whether or not the filters helped them achieve their goals, and what interviewees told us about their usefulness.  We corroborated these observations with analytics data, looking at which filters are used most frequently and, when used, which filters are most likely to be followed by a click on a search result.
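DPLA hasn’t published the exact queries behind that analysis, so purely as an illustration of the kind of comparison involved (how often each filter is used, and how often a use is followed by a result click), here is a toy sketch over a made-up event log. The event format and facet names are hypothetical, not DPLA’s actual analytics schema.

    from collections import Counter

    # Hypothetical, simplified event log: (session_id, action, detail) tuples.
    # "filter" events record which Refine-search facet was applied;
    # "result_click" events record that the user clicked a search result.
    events = [
        ("s1", "filter", "subject"), ("s1", "result_click", "item/123"),
        ("s2", "filter", "date"), ("s2", "filter", "subject"),
        ("s2", "result_click", "item/456"),
        ("s3", "filter", "type"),
    ]

    usage = Counter()
    followed_by_click = Counter()

    for i, (session, action, facet) in enumerate(events):
        if action != "filter":
            continue
        usage[facet] += 1
        # Did the same session click a result at any later point?
        if any(s == session and a == "result_click" for s, a, _ in events[i + 1:]):
            followed_by_click[facet] += 1

    for facet, count in usage.most_common():
        rate = followed_by_click[facet] / count
        print(f"{facet}: used {count} times, followed by a click {rate:.0%} of the time")

On real data the same tallies would come from a Google Analytics event export rather than an in-memory list, but the comparison being made is the same.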

As is often the case with user-driven decision-making, our findings surprised us.  We had predicted that filters with the best-quality metadata would prove most useful, but that was not always the case.  Ultimately, we moved the most in-demand filters, like subject and location, to the top of the page, and bumped the lesser-used filters, like type and date, to the bottom.

Making room for new ebooks features

DPLA is actively working toward an innovative future for ebooks.  To make space for this work, we decided to retire the “Bookshelf,” our original interface for browsing the ebooks collection.  Developed by Harvard Innovation Lab, the “Bookshelf” provided a unique search experience that will continue to inform our work with online search and ebooks.

No bugs, please!

Staying on top of bugs and layout issues – especially those that our community members take the time to report – is an essential component of usability.  We identified and fixed many during the project.  Thanks especially to everyone who has chatted with us or contacted us about bugs on the website.

Future work

This round of improvements is but one component of our community’s ongoing efforts to improve usability of and access to digital materials.  While small, low-cost improvements like these will make an immediate positive impact, we are also actively engaged in conversations about improved metadata quality, new technologies, and stronger community relationships.

 

Next CopyTalk on rights statements / District Dispatch

Fallen coffee cup. Photo from Lotus Head.

Avoid the heat and stay inside on August 4 for a free webinar on “Rightsstatements.org: Communicating Copyright Status through Metadata.”

Rightsstatements.org is a collaborative project of DPLA and Europeana to create a standard way of communicating the copyright status of works in digital collections. The project is built on the idea that accurate and clear rights statements are needed to help both organizations and users understand how they can use digital collections. This session, led by members of the working group that developed the statements, will address what Rightsstatements.org does, how it was created, and its first stages of adoption by digital libraries both in the US and abroad.

Speakers:

Emily Gore, Director of Content at DPLA and Rights Statements Working Group Co-Chair

David Hansen, Clinical Assistant Professor & Faculty Research Librarian at UNC School of Law and Rights Statements Working Group Member

Day/Time: Thursday, August 4, at 2pm Eastern/11am Pacific for our hour-long free webinar.

Go to http://ala.adobeconnect.com/copytalk/ and sign in as a guest. You’re in.

This program is brought to you by OITP’s copyright education subcommittee.

The post Next CopyTalk on rights statements appeared first on District Dispatch.

Open Data as a Human Right: the Case of Case-Law / Open Knowledge Foundation

This blog post was written by Simon Matet and Antoine Dusséaux. French version follows the English one

Open data is sometimes considered first as a way to foster economic growth through the development of innovative services built on public data. However, beyond this economic perspective, important though it may be, access to public sector information should be seen first and foremost as an unprecedented opportunity to bridge the gap between the government and its citizens. By providing better access to fundamental public services and promoting transparency and accountability, open data has the potential to guarantee greater respect for fundamental human rights. In this respect, access to case-law (the law developed by judges through court decisions) could become a pioneering application of open data to improve our democratic societies.

According to the European Court of Human Rights (ECHR), publicity of court decisions, “by making the administration of justice transparent”, is a condition for a fair trial, the guarantee of which is one of the fundamental principles of any democratic society. There is no real publicity without free access to court records for every citizen. This is why the ECHR considers that the ability for any citizen to obtain copies of judgments, without the need to show a legitimate interest, “protects litigants against the administration of justice in secret” and “is also one of the means whereby confidence in the courts can be maintained”. Furthermore, according to the European Parliament, “certain aspects of (in)accessibility of Court files cause serious legal problems, and may, arguably, even violate internationally recognised fundamental human rights, such as equality of arms.”

For those reasons, all over the world, the dissemination of case law is a public service task. However, accessing court documents can prove a daunting task for untrained private citizens, reporters, and NGOs. In some countries, corporations or charities have captured the market for access to judicial precedents as governments proved unable or unwilling to fulfill this key mission. For instance, an important part of English judge-made law is owned by a private charity, the Incorporated Council of Law Reporting. In others, decisions are sold by courts to private legal publishers. For example, the Administrative Office of the US Courts collects $145 million in fees every year for access to court records. As a result, citizens usually only have access to a small selection of court decisions.

However, modern communication technologies and digitization now make it possible to provide free online access to millions of public court documents.

Open legal data would guarantee the respect of fundamental rights and also increase legal certainty. Indeed, citizens need to know not only the law as written in codes and statutes, but also how courts concretely apply and interpret it. Free access to court records can therefore help litigants prepare for trial, for instance when weighing whether to negotiate a settlement. In the 21st century, the Internet must be seen as a valuable opportunity to enhance the transparency of the judiciary and improve legal certainty.

Open case-law data shows that, beyond the economic gains, access to and reuse of public sector information is a fundamental instrument for extending the right to knowledge, which is a basic principle of democracy and a matter of human rights in the information age. The judiciary should not be left behind in the ongoing digital transformation of public policies. In this domain, some countries, such as the Netherlands, have already made great efforts to provide citizens with free access to a large number of court decisions, while respecting litigants’ privacy, but most countries still have a long way to go. Although access to legislation is already included in the Open Data Index by Open Knowledge, it only requires all national laws and statutes to be available online, not judge-made law. Since case law is an important source of law, especially in common-law countries, it should be included in the legislation dataset in future versions of the Open Data Index.

 

VIVO Updates for July 17–New Steering and Leadership Group Members, VIVO 1.9 Testing, VIVO and SHARE Synergy / DuraSpace News

From Mike Conlon, VIVO Project Director

Steering Group Updates: The VIVO Leadership Group has nominated new members for the VIVO Steering Group to serve three-year terms.  Mark Fallu of the University of Melbourne, Mark Newton of Columbia University, and Paul Albert of Weill Cornell Medicine have each agreed to serve.  Please join me in welcoming them to the Steering Group!  Short biographies below:

Blacklight Summit / FOSS4Lib Upcoming Events

Date: 
Wednesday, November 2, 2016 - 08:00 to Friday, November 4, 2016 - 17:00
Last updated July 26, 2016. Created by Peter Murray on July 26, 2016.

The Princeton University Library is looking at hosting another Blacklight Summit this fall, with tentative dates of Wednesday, November 2nd through Friday, November 4th. We are thinking the event will be organized similarly to last year’s format, with demonstrations of Blacklight-powered applications, sessions about enhancing Blacklight applications, and ample time for community roadmapping, code exchange and development.

Recommended Formats Statement: Expanding the Use, Expanding the Scope / Library of Congress: The Signal

This is a guest post by Ted Westervelt, head of acquisitions and cataloging for U.S. Serials – Arts, Humanities & Sciences at the Library of Congress.


“Model Photo: Parametric Gridshell.” Photo by James Diewald on Flickr.

As summer has fully arrived now, so too has the revised 2016-2017 version of the Library of Congress’s Recommended Formats Statement.

When the Library of Congress first issued the Recommended Formats Statement, one aim was to provide our staff with guidance on the technical characteristics of formats, which they could consult in the process of recommending and acquiring content. But we were also aware that preservation and long-term access to digital content is an interest shared by a wide variety of stakeholders and not simply a parochial concern of the Library. Nor did we have any mistaken impression that we would get all the right answers on our own or that the characteristics would not change over time. Outreach has therefore been an extremely important aspect of our work with the Recommended Formats, both to share the fruits of our labor with others who might find them useful and to get feedback on ways in which the Recommended Formats could be updated and improved.

We are grateful that the Statement is proving of value to others, as we had hoped. Closest to home, as the Library and the Copyright Office begin work on expanding mandatory deposit of electronic-only works to include eBooks and digital sound recordings, they are using the Recommended Formats as the starting point for the updates to the Best Edition Statement that will result from this. But its value is being recognized outside of our own institution.

The American Library Association’s Association for Library Collections & Technical Services has recommended the Statement as a resource in one of its e-forums. And even farther afield, the UK’s Digital Preservation Coalition included it in its Digital Preservation Handbook this past autumn, bringing the Statement to a wider international audience.

The Statement has even caught the attention of those who fall outside the usual suspects of libraries, creators, publishers and vendors. Earlier this year, we were contacted by a representative from an architectural software firm. He (and others in the architectural field) has been concerned about the potential loss of architectural plans, as architectural files are now primarily created in digital formats with little thought as to their preservation. Though the Library of Congress has a significant Architecture, Design and Engineering collection, this is a community that overlaps little with our own. But he saw the intersection between the Recommended Formats and the needs of his own field and he came to us to see how the Recommended Formats might relate to digital files and data produced within the fields of architecture, design and engineering and how they might help encourage preservation of those creative works as well. This, in turn, led to the addition of Industry Foundation Classes — a data model developed to facilitate interoperability in the building industry — to the Statement. We hope it will lead to future interest, not simply from the architectural community but from any community of creators of digital content who wish their creations to last and to remain useful.

We have committed to an annual review and revision of the Recommended Formats Statement to ensure its usefulness to as wide a spectrum of stakeholders as possible. In doing so, we hope to encourage others to offer their knowledge and to prevent the Statement from falling out of sync with the technical realities of the world of digital creation. As we progress down this path, one of the benefits is that the changes each year to the hierarchies of technical characteristics and metadata become fewer and fewer. More and more stakeholders have provided their input already and, happily, the details of how digital content is created are not so revolutionary as to need to be completely rewritten annually. This allows for a sense of stability in the Statement without a sense of inertia. It also allows us to engage with types of digital creation that we might not previously have addressed as closely or directly as possible. This is proving to be the case with digital architectural plans and it is proving to be even more the case with the biggest change to the Recommended Formats with this new edition: the inclusion of websites as a category of creative content.

At the time of the launch of the first iteration of the Recommended Formats Statement, websites per se were not included as a category of creative content. This omission was the result of various concerns and perspectives held at the time, but there was no gainsaying that it was an omission. Of all the types of digital works, websites are probably the most open to creation and dissemination and probably the most common digital works available to users, but they are also not something that content creators have tended to preserve.

Unsurprisingly, this also tends to make them the type of digital creation that causes the most concern to those interested in digital preservation. So when the Federal Web Archiving Working Group reached out about how the Recommended Formats Statement might be of use in furthering the preservation of websites, we took the opportunity to fill a notable gap in the Statement.

Naturally, the new section of the Statement on websites is not being launched into a vacuum. The prevalence of websites and much of their development is predicated on enhancing the user experience, either in creating them or in using them, which is not the same as encouraging their preservation. The Statement’s section on websites therefore makes very clear that it is focused specifically on the actions and characteristics that will encourage a website’s archivability and thereby its preservation and long-term use.

Nor does the Statement ignore the work that has been done already by other groups and other institutions to inform content creators of best practices for preservation-friendly websites, but instead builds upon them and links to them from the Statement itself. The intention of this section on websites is twofold. One is to provide a clear and simple reminder of the importance of considering the archivability of a website when creating it, not merely the ease of creating it and the ease of using it. The other is to bring together those simple actions along with links to other guidance in order to provide website creators with easy steps that they can take to ensure the works in which they are investing their time and energy can be archived and thereby continue to entertain, educate and inform well into the future.

As always, the completion of the latest version of the Recommended Formats Statement means the beginning of a new cycle, in which we shall work to make it as useful as possible. Having the community of stakeholders involved with digital works share a common commitment to the preservation and long-term access of those works will help ensure we succeed in saving these works for future generations.

So, use and share this version of the Statement and please provide any and all comments and feedback on how the 2016-2017 Recommended Formats Statement might be improved, expanded or used. This is for anyone who can find value in it; and if you think you can, we’d love to help you do so.

Altoona Area Public Library Joins SPARK / Equinox Software

FOR IMMEDIATE RELEASE

Duluth, Georgia–July 26, 2016

Equinox is proud to announce that Altoona Area Public Library was added to SPARK, the Pennsylvania Consortium overseen by PaILS.  Equinox has been providing full hosting, support, and migration to PaILS since 2013.  In that time, SPARK has seen explosive growth.  As of this writing, 105 libraries have migrated or plan to migrate within the next year.  Over 3,000,000 items have circulated in 2016 to over 550,000 patrons.  We are thrilled to be a part of this amazing progress!

Altoona went live on June 16.  Equinox performed the migration and also provided training to Altoona staff.  They are the first of 8 libraries coming together into the Blair County Library System.  This is the first SPARK migration where libraries within the same county, currently on separate databases, will merge patron records and come together to share resources within a unified system.  Altoona serves 46,321 patrons with 137,392 items.

Mary Jinglewski, Equinox Training Services Librarian, had this to say about the move:  “I enjoyed training with Altoona Area Public Library, and I think they will be a great member of the PaILS community moving forward!”

About Equinox Software, Inc.

Equinox was founded by the original developers and designers of the Evergreen ILS. We are wholly devoted to the support and development of open source software in libraries, focusing on Evergreen, Koha, and the FulfILLment ILL system. We wrote over 80% of the Evergreen code base and continue to contribute more new features, bug fixes, and documentation than any other organization. Our team is fanatical about providing exceptional technical support. Over 98% of our support ticket responses are graded as “Excellent” by our customers. At Equinox, we are proud to be librarians. In fact, half of us have our ML(I)S. We understand you because we *are* you. We are Equinox, and we’d like to be awesome for you. For more information on Equinox, please visit http://www.esilibrary.com.

About Pennsylvania Integrated Library System

PaILS is the Pennsylvania Integrated Library System (ILS), a non-profit corporation that oversees SPARK, which is built on the open source Evergreen ILS.  PaILS is governed by a 9-member Board of Directors. The SPARK User Group members make recommendations and inform the Board of Directors.  A growing number of libraries large and small are PaILS members.

For more information about PaILS and SPARK, please visit http://sparkpa.org/.

About Evergreen

Evergreen is an award-winning ILS developed with the intent of providing an open source product able to meet the diverse needs of consortia and high transaction public libraries. However, it has proven to be equally successful in smaller installations including special and academic libraries. Today, over 1400 libraries across the US and Canada are using Evergreen including NC Cardinal, SC Lends, and B.C. Sitka.

For more information about Evergreen, including a list of all known Evergreen installations, see http://evergreen-ils.org.

Library Services for People with Memory Loss, Dementia, and Alzheimers / LibUX

Sarah Houghton (@TheLiB) summarizes what her team has learned about serving older adults with memory issues. We can make accommodations in our design, too. In May, Laurence Ivil and Paul Myles wrote Designing A Dementia-Friendly Website, which makes the point that

An ever-growing number of web users around the world are living with dementia. They have very varied levels of computer literacy and may be experiencing some of the following issues: memory loss, confusion, issues with vision and perception, difficulties sequencing and processing information, reduced problem-solving abilities, or problems with language. Just when we thought we had inclusive design pegged, a completely new dimension emerges.

I think specifically their key lessons about layout and navigation are really good.

What’s more, as patrons these people may be even more vulnerable because, as Sarah says, libraries are trusted entities. So these design decisions demand even greater consideration.

Libraries are uniquely positioned to see changes in our regular users. We have people who come in all the time, and we can see changes in their behavior, mood, and appearance that others who see them less often would never recognize. Likewise, libraries and librarians are trusted entities–you may have people being more open and letting their guard down with you in a way that lets you observe what’s happening to them more directly. Finally, people who work in libraries generally really care a lot about other people–and that in-built sensitivity and care can help when seeing a change in someone’s mental health and abilities. Sarah Houghton

Library Services for People with Memory Loss, Dementia, and Alzheimers

The post Library Services for People with Memory Loss, Dementia, and Alzheimers appeared first on LibUX.

Coding at the library? Join the 2016 Congressional App Challenge / District Dispatch

Congressional App Challenge logo

Last week marked the official start of the 2016 Congressional App Challenge, an annual nationwide event to engage student creativity and encourage participation in STEM (science, technology, engineering, and math) and computer science (CS) education. The Challenge allows high school students from across the country to compete against their peers by creating and exhibiting their software application (or app) for mobile, tablet, or computer devices. Winners in each district will be recognized by their Member of Congress. The Challenge is sponsored by the Internet Education Foundation and supported by ALA.

Why coding at the library? Coding could come across as the latest learning fad, but skills developed through coding align closely with core library activities such as critical thinking, problem solving, collaborative learning, and now connected learning and computational thinking. Coding in libraries is a logical progression in services for youth.

If you’ve never tried coding before, the prospect of teaching it at your library may seem daunting. But even a cursory scan of libraries across the country reveals that library professionals everywhere, at all levels of experience, are either teaching kids how to code or enabling it through the use of community volunteers. Teens and tweens are learning to code using LED lights and basic circuits, creating animated GIFs, and designing games using JavaScript and Python in CodeCombat, while the youngest learners are experiencing digitally enhanced storytime with apps and digital media at the Orlando (FL) Public Library. Kids at the Onondaga (NY) Public Library learn coding skills by developing a Flatverse game over the course of a 4-day camp. Girls at the Gaithersburg (MD) Public Library are learning to code in “Girls Just Want to Compute,” a two-week camp for teen and tween girls. These programs and many others are a prime way to expose kids to coding and inspire them to want to keep learning.

The App Challenge can be another means to engage teens at your library. Libraries can encourage students to participate in the Challenge by hosting an App Challenge event: host an “App-a-thon,” have a game night for teens to work on their apps, or start an app-building club.

At the launch, over 140 Members of Congress from 38 states signed up to participate in the 2016 Congressional App Challenge.  Check to see if your district is participating and if not, you can use a letter template on the Challenge Website to send a request to your Member of Congress.

If you do decide to participate, we encourage you to share what you’re doing using the App Challenge hashtag #HouseofCode and ALA’s hashtag #readytocode @youthandtech. The App Challenge runs through November 2. Look for more information throughout the competition.

The post Coding at the library? Join the 2016 Congressional App Challenge appeared first on District Dispatch.

The Citation Graph / David Rosenthal

An important point raised during the discussions at the recent JISC-CNI meeting is also raised by Larivière et al's A simple proposal for the publication of journal citation distributions:
However, the raw citation data used here are not publicly available but remain the property of Thomson Reuters. A logical step to facilitate scrutiny by independent researchers would therefore be for publishers to make the reference lists of their articles publicly available. Most publishers already provide these lists as part of the metadata they submit to the Crossref metadata database and can easily permit Crossref to make them public, though relatively few have opted to do so. If all Publisher and Society members of Crossref (over 5,300 organisations) were to grant this permission, it would enable more open research into citations in particular and into scholarly communication in general.
In other words, despite the importance of the citation graph for understanding and measuring the output of science, the data are in private hands, and are analyzed by opaque algorithms to produce a metric (journal impact factor) that is easily gamed and is corrupting the entire research ecosystem.

Simply by asking Crossref to flip a bit, publishers already providing their reference lists to Crossref can make them public, but only a few have done so.
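For works whose publisher has flipped that bit, the opened reference lists are visible through Crossref’s public REST API. A minimal sketch in Python; the DOI is only a placeholder, and the "reference" field appears in the response only when the publisher has chosen to make its references public:

    import requests

    # Placeholder DOI; substitute any article whose publisher deposits open references.
    doi = "10.7554/eLife.00000"
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]

    # "reference" is present only when the publisher has opted to open its reference list.
    references = work.get("reference", [])
    print(f"{len(references)} open references deposited for {doi}")
    for ref in references[:10]:
        # Each entry may carry a matched DOI or just an unstructured citation string.
        print(ref.get("DOI") or ref.get("unstructured", "(unmatched citation)"))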

Larivière et al's painstaking research shows that journal publishers and others with access to these private databases (Web of Science and Scopus) can use them to graph the distribution of citations to the articles they publish. Doing so reveals that:
the shape of the distribution is highly skewed to the left, being dominated by papers with lower numbers of citations. Typically, 65-75% of the articles have fewer citations than indicated by the JIF. The distributions are also characterized by long rightward tails; for the set of journals analyzed here, only 15-25% of the articles account for 50% of the citations
Thus, as has been shown many times before, the impact factor of a journal conveys no useful information about the quality of a paper it contains. Further, the data on which it is based is itself suspect:
On a technical point, the many unmatched citations ... that were discovered in the data for eLife, Nature Communications, Proceedings of the Royal Society: Biology Sciences and Scientific Reports raises concerns about the general quality of the data provided by Thomson Reuters. Searches for citations to eLife papers, for example, have revealed that the data in the Web of Science™ are incomplete owing to technical problems that Thomson Reuters is currently working to resolve. ...
Because the citation graph data is not public, audits such as Larivière et al's are difficult and rare. Were the data to be public, both publishers and authors would be able to, and motivated to, improve it. It is perhaps a straw in the wind that Larivière's co-authors include senior figures from PLoS, AAAS, eLife, EMBO, Nature and the Royal Society.

Stop Helping! How to Resist All of Your Librarian Urges and Strategically Moderate a Pain Point in Computer-Based Usability Testing / LITA

Editor’s note: This is a guest post by Jaci Paige Wilkinson.

Librarians are consummate teachers, helpers, and cheerleaders.  We might glow at the reference desk when a patron walks away with that perfect article or a new search strategy.  Or we fist pump when a student e-mails us at 7pm on a Friday to ask for help identifying the composition date of J.S. Bach’s BWV 433.  But when we lead usability testing that urge to be helpful must be resisted for the sake of recording accurate user behavior (Krug, 2000). We won’t be there, after all, to help the user when they’re using our website for their own purposes.

What about when a participant gets something wrong or gets stuck?  What about a nudge? What about a hint?  No matter how much the participant struggles, it’s crucial for both the testing process and the resulting data that we navigate these “pain points” with care and restraint.  This is  particularly tricky in non-lab, lightweight testing scenarios.  If you have only 10-30 minutes with a participant or you’re in an informal setting, you, as the facilitator, are less likely to have the tools or the time to probe an unusual behavior or a pain point (Travis, 2014).  However, pain points, even the non-completion of a task, provide insight.  Librarians moderating usability testing must carefully navigate these moments to maximize the useful data they provide.  

How should we move the test forward without helping but also without hindering a participant’s natural process?  If the test in question is a concurrent think-aloud protocol, you, as the test moderator, are probably used to reminding participants to think out loud while they complete the test.  Those reminders sound like “What are you doing now?”, “What was that you just did?”, or “Why did you do that?”.  Drawing from moderator cues used in think aloud protocols, this article explains four tips to optimize computer-based usability testing in those moments when a participant’s activity slows, or slams, to a halt.

There are two main ways for the tips described below to come into play.  Either the participant specifically asks for help or you intervene because of a lack of progress.  The first case is easy because a participant self-identified as experiencing a pain point.  In the second case, identify indicators that the participant is not moving forward or is stalling: they stay on one page for a period of time or they keep pressing the back button.  One frequently observed behavior that I never interfere with is when a participant repeats a step or click-path even when it didn’t work the first time.  This is a very important observation for two reasons: first, does the participant realize that they have already done this?  And second, if so, why does the participant think it will work the second time?  Observe as many useful behaviors as possible before stepping in.  When you do step in, use these tips in this order:

ASK a participant to reflect on what they’ve done so far!

Get your participant talking about where they started and how they got here.  You can be as blunt as: “OK, tell me what you’re looking at and why you think it is wrong.”  This particular tip has the potential to yield valuable insights.  What did the participant THINK they were going to see on the page, and what do they think this page is now?  When you look at this data later, consider what it says about the architecture and language of the pages this participant used.  For instance, why did she think the library hours would be on the “About” page?

Notice that nowhere have I mentioned using the back button or returning to the start page of the task.  This is usually the ideal course of action; once a user goes backwards through his/her clickpath he/she can make some new decisions.  But this idea should come from the user, not from you.  Avoid using language that hints at a specific direction such as “Why don’t you back up a couple of steps?”  This sort of comment is more of a prompt for action than reflection.         

Read the question or prompt again! Then ask the participant to pick out key words in what you read that might help them think of different ways to conquer the task at hand.

“I see you’re having some trouble thinking of where to go next.  Stop for one moment and listen to me read the question again”.  An immediate diagnosis of this problem is that there was jargon in the script that misdirected the participant.  Could the participant’s confusion about where to find the “religion department library liaison” be partially due to that fact that he had never heard of a “department library liaison” before?  Letting the participant hear the prompt for a second or third time might allow him to connect language on the website with language in the prompt.  If repetition doesn’t help, you can even ask the participant to name some of the important words in the prompt.   

Another way to assist a participant with the prompt is to provide him with his own copy of the script.  You can also ask him to read each task or question out loud: in usability testing, it has been observed that this direction “actually encouraged the ‘think aloud’ process” that is frequently used (Battleson et al., 2001). The think-aloud process adds cognitive activity, and “additional cognitive activity changes the sequence of mediating thoughts.  Instructions to explain and describe the content of thought are reliably associated with changes in ability to solve problems correctly” (Ericsson & Simon, 1993).  Reading the prompt on a piece of paper with his own eyes, especially in combination with hearing you speak the prompt out loud, gives the participant multiple ways to process the information.

Choose a Point of No Return and don’t treat it as a failure.

Don’t let an uncompleted or unsuccessful task tank your overall test.  Wandering off with the participant will make the pace sluggish and reduce the participant’s morale. Choose a point of no return.  Have an encouraging phrase at the ready: “Great!  We can stop here, that was really helpful.  Now let’s move on to the next question.”  There is an honesty to that phrasing: you demonstrate to your participant that what he is doing, even if he doesn’t think it is “right,” is still helpful.  It is an unproductive use of your time, and his, to let him continue if you aren’t collecting any more valuable data in the process.   The attitude cultivated at a non-completed task or pain point will definitely impact performance and morale for subsequent tasks.

Include a question at the end to allow the participant to share comments or feelings felt throughout the test.

This is a tricky and potentially controversial suggestion.  In usability testing and user experience, the distinction between studying use instead of opinion is crucial.  We seek to observe user behavior, not collect their feedback.  That’s why we scoff at market research and regard focus groups suspiciously (Nielsen, 1999).  However, I still recommend ending a usability test with a question like “Is there anything else you’d like to tell us about your experience today?” or “Do you have any questions or further comments or observations about the tasks you just completed?”  I ask it specifically because if there was one or more pain points in the course of a test, a participant will likely remember it.  This gives her the space to give you more interesting data and, like with tip number three, this final question cultivates positive morale between you and the participant.  She will leave your testing location feeling valued and listened to.

As a librarian, I know you were trained to help, empathize, and cultivate knowledge in library users.  But usability testing is not the same as a shift at the research help desk!  Steel your heart for the sake of collecting wonderfully useful data that will improve your library’s resources and services.  Those pain points and unfinished tasks are solid gold.  Remember, too, that you aren’t asking a participant to “go negative” on the interface (Wilson, 2010) or manufacture failure, you are interested in recording the most accurate user experience possible and understanding the behavior behind it.  Use these tips, if not word for word, then at least to meditate on the environment you curate when conducting usability testing and how to optimize data collection.    

 

Bibliography

Battleson, B., Booth, A., & Weintrop, J. (2001). Usability testing of an academic library web site: a case study. The Journal of Academic Librarianship, 27(3), 188-198.

Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis.

Travis, David. “5 Provocative Views on Usability Testing.” User Focus, 12 October 2014. <http://www.userfocus.co.uk/articles/5-provocative-views.html>

Nielsen, Jakob. “Voodoo Usability.” Nielsen Norman Group, 12 December 1999. <https://www.nngroup.com/articles/voodoo-usability/>

Wilson, Michael. “Encouraging Negative Feedback During User Testing.” UX Booth, 25 May 2010. <http://www.uxbooth.com/articles/encouraging-negative-feedback-during-user-testing/>

Dispatches from the User List: new tools for creating and ingesting derivatives outside of a production Islandora / Islandora

This week's blog is another visit to the user listserv to highlight something really great you may have missed if you are not a subscriber. We're bringing you a single entry this time around, from Mark Jordan (of Simon Fraser University, and Chairman of the Islandora Foundation when he's not busy writing new modules):
 
Coming out of the last DevOps Interest Group call, which helped me focus some ideas we've been throwing around for a while here at SFU, and on the heels of an incredible iCamp, which demonstrated once again that our community is extraordinarily collaborative and supportive, I've put together two complementary Islandora modules intended to help address an issue many of us face: how to scale large ingests of content into Islandora. The two modules are:

  • Islandora Dump Datastreams
  • Islandora Batch with Derivatives

The first one writes out each object's datastreams onto the server's filesystem, and the second provides a drush command that allows batch ingests to bypass the standard time-consuming derivative creation process for images, PDFs, and other types of objects.
 
Batch ingests that are run with Islandora's "Defer derivative generation during ingest" configuration option enabled (which means that derivative creation is turned off) are hugely faster than batch ingests run with derivative generation left on. In particular, generating OCR from images can be very time consuming, not to mention generating web-friendly versions of video and audio files. There are a number of ways to generate derivatives independently of Islandora's standard ingestion workflow, such as UPEI's Taverna-based microservices and several approaches taken by DiscoveryGarden. The currently-on-hiatus Digital Preservation Interest Group spent some time thinking about generating derivatives, and part of that activity compelled me to produce a longish "discussion paper" on the topic. Islandora CLAW is being built with horizontal scaling as a top-tier feature, but for Islandora 7.x-1.x, we're stuck for the moment with working around the problem of scaling ingestion.
 
The approach taken by the two new modules I introduce here is based on the ability to generate derivatives outside of a production Islandora instance, and then, ingest the objects with all their datastreams into the production Islandora. This approach raises the question of where to generate those derivatives. The answer is "in additional Islandora instances." The Islandora Vagrant provides an excellent platform for doing this. Capable implementers of Islandora could set up 10 throw-away Vagrants (a good use for out-of-warranty PCs?) running in parallel to generate derivatives for loading into a single production instance. All that would be required is to enable the Islandora Dump Datastreams module on the Vagrants and configure it to save the output from each Vagrant to storage space accessible to the production instance. When all the derivatives have been generated on the Vagrants, running the drush command provided by Islandora Batch with Derivatives on the production instance (with "Defer derivative generation during ingest" enabled of course) would ingest the full objects in a fraction of the time it would take to have the single production Islandora generate all the derivatives by itself.
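As a concrete, if simplified, illustration of that division of labour, the sketch below checks a shared dump directory before the drush ingest is run on production. The directory layout (one subdirectory per object, datastream files named by DSID) and the list of expected datastream IDs are assumptions made for this example, not the documented output of Islandora Dump Datastreams, so adjust both to whatever your throw-away instances actually write out:

    #!/usr/bin/env python3
    """Pre-ingest sanity check for derivative packages written to shared storage
    by throw-away Islandora instances. Layout and DSIDs below are illustrative."""

    import sys
    from pathlib import Path

    DUMP_ROOT = Path("/mnt/shared/derivative_dumps")   # hypothetical shared storage
    EXPECTED_DSIDS = {"OBJ", "TN", "JPG", "OCR"}       # adjust per content model

    def check_packages(root: Path) -> int:
        """Report object packages missing expected derivatives; return the count."""
        incomplete = 0
        for package in sorted(p for p in root.iterdir() if p.is_dir()):
            present = {f.stem.upper() for f in package.iterdir() if f.is_file()}
            missing = EXPECTED_DSIDS - present
            if missing:
                incomplete += 1
                print(f"{package.name}: missing {', '.join(sorted(missing))}")
        return incomplete

    if __name__ == "__main__":
        sys.exit(1 if check_packages(DUMP_ROOT) else 0)

Running a check like this before the batch ingest catches any throw-away instance that died part-way through derivative generation, which is cheaper to fix than re-ingesting objects on the production instance.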
 
Islandora Batch with Derivatives is not the first module to allow the ingestion of pregenerated derivatives. The Islandora Book Batch and Islandora Newspaper Batch modules have had that feature for some time. During SFU's recent migration of around 900,000 pages of newspapers, we saved months of ingestion time because we pulled all our derivatives for the newspaper pages out of CONTENTdm and then, with "Defer derivative generation during ingest" enabled, batch ingested full newspaper issue packages with all derivatives in place. Running OCR on our server for that many pages would not have been practical. All the Islandora Batch with Derivatives module described above does is let you do that with basic images, large images, PDFs, videos, and audio objects too.
 
I've mentioned the Move to Islandora Kit a few times in the past in these user groups; since the Open Repositories conference, we've added functionality to it to support migrating to Islandora from OAI-PMH compliant platforms. At SFU, we are developing workflows that combine MIK with the approach I describe above as we move on to post-migration ingests of content.
 
If you foresee a large migration to Islandora in your short-term future, or are planning to ingest an especially large collection of objects and are looking for ways to speed up the process, introduce your project here on the user groups so that we can share knowledge and tools. If you're waiting for CLAW to migrate to Islandora, help push things along by writing up some migration use cases or by getting involved in the Fedora Performance and Scalability Group.

Sinar Project in Malaysia works to open budget data at all levels of government / Open Knowledge Foundation

“Open Spending Data in Constrained Environments” is a project being led by Sinar Project in Malaysia, aimed at exploring ways of making critical information public and accessible to Malaysian citizens. The project is supported by the Open Data for Development programme and has been run in collaboration with Open Knowledge International & OpenSpending.

In Malaysia, fiscal information exists at all three levels of government: federal, state and municipal. There are complicated relationships and laws that dictate how budget flows through the different levels of government and, as the information is not published as open data by any level of government, it is incredibly challenging for citizens to understand and track how public funds are being spent. This lack of transparency creates an environment for potential mismanagement of funds and facilitates corruption.

Earlier this year, the prime minister of Malaysia, Dato’ Seri Najib Razak, announced the revised budget for 2016 in response to slow economic growth, a result of declining oil and commodity prices coupled with stagnant demand from China. As a result, it was paramount to restructure the 2016 federal budget in order to find savings of US$2.1 billion, which would make it possible for the government to maintain its 2016 fiscal target of 3.1 percent of the country’s GDP. One of the biggest cuts in the revised 2016 budget is to public scholarships for higher education.

“Higher education institutions had their budget slashed by RM2.4 billion (US$573 million), from RM15.78 billion (US$3.8 billion) in 2015 to RM13.37 billion (US$3.2 billion) for the year 2016.” – Murray Hunter, Asian Correspondent

When numbers get this big, it is often difficult for people to understand what the real impact and implications of these cuts are going to be on the services citizens depend on. While it is the role of journalists and civil society to act as infomediaries and relay this information to citizens, without access to comprehensive, reliable budget and spending data it becomes impossible for us to fulfil our civic duty of keeping citizens informed. Open budget and spending data is vital in order to demonstrate to the public the real-life impact large budget cuts will have. Over the past few months, we have worked on a pilot project to try to make this possible.

While the federal budgets that have been presented to Parliament are accessible on the Ministry of Finance website, we were only able to access state and municipal government budgets by directly contacting state assemblymen and local councillors. Given this lack of proactive transparency and the limited mechanisms for reactive transparency, it was necessary to employ alternative mechanisms devised to hold governments accountable. In this case, we decided to conduct a social audit.

Kota Damansara public housing. Credit: Sze Ming

Social audits are mechanisms in which citizens collect evidence to publicly audit, as a community, the provision of services by government. One essential component of a social audit is taking the opportunity to work closely with communities in order to connect with and empower traditionally disenfranchised groups.

Here in Malaysia, we started our social audit work by conducting several meetings with communities living in public housing in Kota Damansara, a town in the district of Petaling Jaya in Selangor State, in order to gain a better understanding of the challenges they were facing and to map these issues against various socio-economic and global development indicators.

Then, we conducted an urban poverty survey in which we collected essential data on 415 residents from 4 blocks of Kota Damansara public housing. This survey covered several indicators that told us more about the poverty rate, the unemployment rate, the child mortality rate and the literacy rate within this community. From the preliminary results, we found that all residents are low-income earners currently living below the poverty line. These findings stand in contrast to a question asked in Parliament last year on the income distribution of the nation’s residents, where it was declared that the share of people living in poverty in Malaysia had decreased by about 0.421%. Moreover, in order for citizens to hold the Selangor state government accountable, civil society could use this data as evidence to demand that allocated budgets be increased in order to give financial and welfare support to disenfranchised communities in Kota Damansara public housing.

What’s next? In order to measure the impact of open data and social audits, we are planning follow-up urban poverty surveys. Since the upcoming general elections will be held in 2018, follow-up surveys will be conducted every 4 months after the first survey, in order to document whether decision makers have made any changes or improvements toward better policies in the respective constituency and better budget priorities that match the proposed or approved public policies.

 

REGISTER for Fedora Camp in NYC / DuraSpace News

Austin, TX  The Fedora Project is pleased to announce that Fedora Camp in NYC, hosted by Columbia University Libraries, will be offered at Columbia University’s Butler Library in New York City November 28-30, 2016.

CALL for Expressions of Interest in Hosting Annual Open Repositories Conference, 2018 and 2019 / DuraSpace News

From William Nixon and Elin Stangeland for the Open Repositories Steering Committee

Glasgow, Scotland  The Open Repositories Steering Committee seeks Expressions of Interest (EoI) from candidate host organizations for the 2018 and 2019 Open Repositories Annual Conference series. The call is issued for two years this time to enable better planning ahead of the conferences and to secure a good geographical distribution over time. Proposals from all geographic areas will be given consideration. 

Call for Nominations: LITA Top Tech Trends Panel at ALA Midwinter 2017 / LITA

It’s that time of year again! We’re asking you to nominate either yourself or someone you know who would be a great addition to the panel of speakers for the 2017 Midwinter Top Tech Trends program in Atlanta, GA.

LITA’s Top Trends Program has traditionally been one of the most popular programs at ALA. Each panelist discusses two trends in technology impacting libraries and engages in a moderated discussion with each other and the audience.

Submit a nomination at: http://bit.ly/lita-toptechtrends-mw2017.  Deadline is Sunday, August 28th.

The LITA Top Tech Trends Committee will review each submission and select panelists based on their proposed trends, experience, and overall balance of the panel.

For more information about past programs, please visit http://www.ala.org/lita/ttt.

Call for Proposals, LITA @ ALA Annual 2017 / LITA

Call for Proposals for the 2017 Annual Conference Programs and Preconferences!

The LITA Program Planning Committee (PPC) is now accepting innovative and creative proposals for the 2017 Annual American Library Association Conference.  We’re looking for full or half day pre-conference ideas as well as 60- and 90-minute conference presentations. The focus should be on technology in libraries, whether that’s use of, new ideas for, trends in, or interesting/innovative projects being explored – it’s all for you to propose.

When and Where is the Conference?

The 2017 Annual ALA Conference will be held  in Chicago, IL, from June 22nd through 27th.

What kind of topics are we looking for?

We’re looking for programs of interest to all library/information agency types that inspire technological change and adoption, and/or generally go above and beyond the everyday.

We regularly receive many more proposals than we can program into the 20 slots available to LITA at the ALA Annual Conference. These great ideas and programs all come from contributions like yours. We look forward to hearing the great ideas you will share with us this year.

This link from the 2016 ALA Annual conference scheduler shows the great LITA programs from this past year.

When are proposals due?

September 9, 2016

How do I submit a proposal?

Fill out this form: bit.ly/litacfpannual2017

Program descriptions should be 150 words or less.

When will I have an answer?

The committee will begin reviewing proposals after the submission deadline; notifications will be sent out on October 3, 2016

Do I have to be a member of ALA/LITA, or of a LITA Interest Group (IG) or committee?

No! We welcome proposals from anyone who feels they have something to offer regarding library technology. Unfortunately, we are not able to provide financial support for speakers. Because of the limited number of programs, LITA IGs and Committees will receive preference where two equally well written programs are submitted. Presenters may be asked to combine programs or work with an IG/Committee where similar topics have been proposed.

Got another question?

Please feel free to email Nicole Sump-Crethar (PPC chair) (sumpcre@okstate.edu)

To LISTSERV or to Not LISTSERV / LITA

Screenshot of Outlook Inbox

Beginning in August 2016, the Special Libraries Association (SLA) discontinued its traditional discussion-based listserv in favor of a new service: SLA Connect. If you click through to the post on Information Today, Inc., you can see the host of services, tools, and enhancements that the move to SLA Connect provides for SLA members. However, change is difficult, and this change caught a number of members by surprise. We all know how difficult it is to communicate change to patrons. It’s no easier with fellow professionals.

The rollout was going to start July 1, 2016, but got pushed back a month because of member feedback. Since this is technology, of course there were compliance issues with the new server, so some services that were scheduled for a slower transition got moved more quickly and old platforms were shut down. The whole enterprise is a complete change to how people were used to communicating with fellow SLA professionals. Small changes are hard, wholesale changes even more so. It looks like the leaders of SLA have a good plan in mind and are listening to member feedback, which is great.

We recently went through a transition here in WI where the state-wide public library listserv was transitioned to Google+. The Department of Public Instruction (DPI) did a good job in getting the message out to people but the decision was not popular. I came to the discussion late because historically I would check in with broader reach listservs (CODE4LIB, LITA, WISPUBLIB, Polaris, etc.) about once a month. Sometimes even less frequently. We have local listservs that I check on a daily basis, but those impact my job directly.

I wasn’t thrilled about the move to Google+ for a few reasons. First, while I had a Google account, I try to keep my personal and work lives separated. This would mean creating a new Google account to use with work, with all the work of setting up a new account and making sure I check it on a regular basis. Second, the thing I like about an email listserv is that I can create a rule to move all the messages into a folder, and then when I scan the folder I can see which subjects had the most discussion. That disappears with Google+. I can get the initial post sent to my inbox, but any follow-up posts or discussion don’t show up there.

This was a problem since, instead of seeing twenty messages on a subject, I’d now see one. I’d have to launch that message in Google+ to see whether or not people were talking about it. It’s also a problem because the new platform was not getting the traffic the traditional email listserv got, so a lot of the state-wide community knowledge was not being shared. It’s getting better, and DPI is doing a great job in leading the initiative for discussions. It doesn’t have the volume it used to, but it’s improving.

I needed to figure out a way to make myself check the Google+ discussions with more regularity. In comes Habitica. Our own inestimable Lindsay Cronk wrote about Habitica back in February. Habitica gamifies your to-do list. You create a small avatar and work your way through leveling him/her up to become a more powerful character. There are three basic categories: habits, dailies, and to-dos. Habits are things to improve yourself. For me it’s things like hitting my step count for the day or not drinking soda. There can be a positive and/or negative effect for your habits. You can lose health. Your little character can die. To-dos are traditional to-do list things. You can add due dates, checklists, all sorts of things. Dailies are things you have to do on a regular basis.

This is where Habitica helps me most. I have weekly reminders to check my big listservs, including DPI’s Google+ feed. I have daily reminders to check in with the new supervisors who report to me. These are all things that I should be doing anyway, but it’s a nice little reminder, when I get bogged down in a task, to take a break and get something checked off my list. I’ve set these simple dailies at the ‘trivial’ difficulty level so I’m not leveling up my character too quickly. I’m currently a 19th level fighter on Habitica, but there are still times when my health gets really low. More importantly, it’s kept me on top of my listservs and communication with fellow professionals in a way that I was not doing of my own volition.

What’s your favorite way to keep on top of communication with fellow professionals?

Digital Displays on a Budget: Hardware / LITA

 

Digital Display at JPL Library

Introduction

At the JPL Library we recently remodeled our collaborative workspace. This process allowed us to repurpose underutilized televisions into digital displays. Digital displays can be an effective way to communicate key events and information to our patrons. However, running displays has usually required either expensive hardware (installing new cables to tap into local media hosts) or software (Movie Maker, 3rd party software), sometimes both. We had the displays ready but needed cost-effective solutions for hosting and creating the content. Enter Raspberry Pi and a movie creator that can be found in any Microsoft Office suite purchased since 2010… Microsoft PowerPoint.

In this post I will cover how to select, set up, and install the hardware. The follow-up post will go over the content creation aspect.

Hardware Requirements

Displays

Luckily for us, this part took care of itself. If you need to obtain a display, I have two recommendations:

  • Verify the display has a convenient HDMI port. You are looking for a port that allows you to discreetly tuck the Raspberry Pi behind the display. Additionally, the port should be easily accessible if the need arises to swap out HDMI cables.
  • Opt for a display that is widescreen capable (16:9 aspect ratio). This will provide you with a greater canvas for your content. Whatever aspect ratio you decide upon, make sure your content settings match. This graphic sums up the difference between the aspect ratios of widescreen and standard (4:3 aspect ratio).
    Wide_v_Standard

Raspberry Pi

Raspberry Pi 2

Description

There are plenty of blog posts and documentation that cover the basics of what the Raspberry Pi is and what it is capable of. In short, you can think of it as a miniature, inexpensive computer. For this project we are interested in its price point, native movie player, and operating system customization prowess.

Selection

Devices

There are three main iterations available for purchase:

Obviously I would recommend the Pi 3, which was just released in late February, over the rest. All three are capable of running HD quality videos, but the Pi 3 will definitely run smoother. Also, the Pi 3 has on-board Wi-Fi and Bluetooth connectivity; on previous versions this required purchasing add-ons that used up USB slots.

However, these prices are only for the computer itself. You would still need, at a minimum, an SD card to store the operating system and files, a power adaptor, a keyboard and mouse, and an HDMI cable. The only advantage of selecting the Pi 2 is that there are several pre-selected bundles created by 3rd party sellers that can lower the costs. Make sure to check the bundle details to confirm it contains the Raspberry Pi iteration that you want.

Bundles

Here are some recommended bundles that contain all you need (minus keyboard and mouse) for this project:

Keyboard & Mouse

Most USB keyboards and mice will work with a Pi but opt for simple ones to avoid drawing too much power from it. If you do not have a spare one consider this Bluetooth Keyboard and Mouse Touchpad. The touchpad is a bit wonky but it’ll get the job done and the portability is worth it.

Physical Setup

Getting the Raspberry Pi ready to boot is fairly easy. We just need to plug in the power supply, insert the Micro SD card with the operating system, and attach a display. Granted, this all just gets you to a basic screen with the Pi awaiting instructions. A mouse, keyboard, and network connection are pretty much required for setting up the Pi software in order to get the device into a usable state.

Software Setup

The program we use is the Raspberry Pi Video Looper. This setup works exactly how it sounds: the Raspberry Pi plays and loops videos. However, before we can install that we need to get the Raspberry Pi up and running with the latest Raspbian operating system.

Installing Raspbian

Using personal SD

If you decided to use your own SD card, see this guide on how to get up and running.

Using NOOBS

noobs

If you bought a bundle, chances are that it came with a Micro SD Card pre-loaded with NOOBS (New Out of Box Software). With NOOBS we can just boot up the Pi and select Raspbian from the first menu. Make sure to also change the Language and Keyboard to your preferred settings, such as English (US) and us.

Once you hit Install, the NOOBS software will do its thing. Grab a cup of coffee or walk the dog as it will take a bit to complete the install. After installation the Pi will reboot and load up Raspi-config to let you adjust settings. There is a wide range of options but the two that should be adjusted right now are:

  1. Change User Password
  2. SSH – If you want remote access, you will need to enable SSH (you can also do this later; see the note after this list). For more information on this option see the Raspberry Pi Documentation.
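
If you skip or want to revisit any of these settings, the same configuration menu can be reopened at any time from a terminal. This is standard Raspbian behaviour rather than anything specific to this project:

  • sudo raspi-config – relaunch the configuration tool to change the password, enable SSH, and adjust other settings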

After adjusting the settings, the Pi will boot the desktop environment. Because the NOOBS version loaded onto the card might be dated, the next step is to update the firmware and packages. To do this, click on the start menu, open the terminal, and type in the following commands:

  • sudo apt-get update – refresh the list of available packages
  • sudo apt-get upgrade – install the latest versions of the packages already on the system
  • sudo rpi-update – update the Raspberry Pi firmware
  • sudo reboot – restart so the new firmware and packages take effect

Once the Pi reboots we can continue to the next phase, installing the video looper.

Installing Video Looper

For a complete guide on installing and adjusting the Video Looper, see Adafruit’s Raspberry Pi Video Looper documentation. In short, the installation process amounts to just three terminal commands:
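
As a rough sketch of what those commands look like (based on Adafruit’s documented process; the repository location and script names may have changed, so treat these as illustrative and follow the current guide):

  • sudo apt-get install git – make sure git is available on the Pi
  • git clone https://github.com/adafruit/pi_video_looper.git – download the Video Looper code
  • cd pi_video_looper && sudo ./install.sh – run the installer script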

After a few minutes the install is complete and the Video Looper is good to go! If you do not have any movies loaded, your Pi will now display “Insert USB drive with compatible movies”. Inserting a USB drive into the Pi will initiate a countdown followed by video playback.

Using Video Looper

Now that the Pi is all set, you can load your videos onto a USB stick and the Looper will take care of the rest. The Video Looper is quite versatile and can display movies in the following formats:

  • AVI
  • MOV
  • MKV
  • MP4
  • M4V

If your Pi fails to read the files on the USB drive, try loading them on another. I had several USB sticks that I went through before it read the files. Sadly, most of the vendor USB stick freebies were incompatible.

Lastly, the Video Looper has a few configuration options that you can adjust to best fit your needs. Of those listed in the documentation, I would recommend adjusting the file location (USB stick vs. on the Pi itself) and the video player, the latter being relevant only if you cannot live with the loop delay between movies.
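
As an illustration of where those options live, the settings are kept in a single configuration file on the boot partition. The option names below are from my reading of the Adafruit guide and may differ in the current release, so treat them as assumptions and check the documentation before editing:

  • sudo nano /boot/video_looper.ini – open the configuration file for editing
  • file_reader = usb_drive (or directory) – read movies from a USB stick or from a folder on the Pi itself
  • video_player = omxplayer (or hello_video) – hello_video trades broad format support for seamless looping between movies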

Unit Install

Raspberry Pi - Unit Install

With the Video Looper setup complete, we can now install the unit behind the display. We opted to attach the device using Velcro tape and a 0.3m flat HDMI cable. Thanks to the Velcro, I can remove and reattach the Pi as needed. The flat HDMI cable reduces the need for cable management. The biggest issue we had was tucking away the extra cable from the power supply, which a few well-placed Velcro ties sorted out. Velcro, is there anything it can’t solve?

Wrap Up

Well if you’ve made it this far I hope you are on your way to creating a digital display for your institution. In my next post I will cover how we used Microsoft PowerPoint to create our videos in a quick and efficient manner.

The Raspberry Pi is a wonderful device, so even if the Video Looper setup fails to live up to your needs, you can easily find another project for it to handle. May I suggest the Game Boy Emulator?

Towards the formation of the Third Greek OGP Action Plan: Open Knowledge Greece makes three commitments / Open Knowledge Foundation

This blog post was written by Olga Kalatzi from OK Greece

On the 5th of July in Athens, the open dialogue on Greece’s Third National Action Plan for the Open Government Partnership commenced, and Open Knowledge Greece presented its three commitments for the plan.

The commitments of OK Greece include School of Data for public servants, the Open Data Index for cities and local administrations, and linked open and participatory budgets. All of them come with implementation resources and timetables, and all satisfy the OGP principles.

The event was supported by the Bodossaki Foundation, and a range of stakeholders participated: OK Greece, Openwise (IRM), Gov2u, GFOSS, Vouliwatch, and diaNEOsis, as well as experts from the OGP Support Unit and Mrs. Nancy Routzouni, advisor on e-Government to the Alternate Minister for Administrative Reform.
OK Greece was represented at the event by its President, Dr. Charalampos Bratsas, and Marinos Papadopoulos, while the OK Greece OGP team in Thessaloniki participated remotely through Skype.

Tonu Basu from OGP Support Unit said that “Staff from the OGP Support Unit had some very productive meetings with representatives from both government and civil society. We were greatly encouraged to see that civil society and government are taking concrete steps to collaborate among themselves and with each other through the development of collaborative networks. Civil society and government collaboration is the key to the strengthening of the OGP process and to establishing a strong culture of a transparent, accountable, and responsive government”.

The discussion focused on improving the third action plan and on the importance of collaboration between civil society and government in promoting and strengthening open governance and transparency in Greece.

“Bodossaki Foundation participates actively in the conformation of the Third Action Plan aiming to develop and act as an intermediary between civil society bodies and this cause. The goal is the conformation of the action plan with the participation of the civil society and its successful implementation through monitoring and evaluation”, comments Fay Koutzoukou, Deputy Program Director.

Among the challenges addressed in the meeting, great attention was given to the limited ownership civil society groups feel over the formation and implementation of the action plan, which holds the process back. The civil society organizations that participated suggested monitoring the process more closely through regular meetings and assigning specific commitments that engage both citizens and government.

According to experts from the OGP Support Unit, some of the potential commitments of the action plan, which cover issues like subnational government, open education, open justice, parliament, and administrative reform, could, if implemented as scheduled, position Greece as a regional and global leader among the 70 OGP countries.

Nancy Routzouni, advisor on e-Government to the Alternate Minister for Administrative Reform, concluded the event by saying: “We are very pleased to work and collaborate with civil society bodies as their ideas, knowledge, and feedback are crucial in the process of forming the national action plan”.

The third National OGP action plan was discussed and approved by the Parliament last week, and the commitments by OK Greece were mentioned there, as Nancy Routzouni noted at the event in Athens.


We’re looking for Access 2017 hosts! / Access Conference

The Access 2016 Planning Committee is now accepting proposals from institutions and groups to host Access 2017! Bring Canada’s leading library tech conference (not to mention one of the best conference audiences to be found ANYWHERE) to your campus or city!

Interested? Submit your proposal to accesslibcon@gmail.com, including:

  • The host organization(s) name
  • Proposed dates
  • The location the event will likely be held (e.g. campus facility, hotel name, etc.)
  • Considerations noted in the hosting guidelines
  • Anything else to convince us that you would put on another fabulous Access conference!

Proposals will be accepted until September 2, 2016. The 2017 hosts will be selected by the 2016 Planning Committee, and notified in early September. The official announcement will be made on October 7th at Access 2016 in Fredericton!

Questions? Let us know at accesslibcon@gmail.com!

Libraries cheerlead for fair copyright / District Dispatch

I’ve written a lot about 3D printing on the District Dispatch. One of the most unlikely topics I’ve discussed in connection with this technology is cheerleading…That’s right. If you’re a loyal DD reader, think back to May. If you’re hearing bells ring, that’s because the first week of that month, I outlined a court case between two manufacturers of cheerleading uniforms. The case pits international supplier Varsity Brands against the much smaller supplier Star Athletica. Varsity Brands is suing Star Athletica on the grounds that the latter’s uniforms infringe on its copyrighted designs. Even though copyright protects creative expression and cheerleading uniforms are fundamentally utilitarian, Varsity Brands’ argument rests on a liberal interpretation of something called “separability.” If a utilitarian item has creative elements that can be clearly separated from its core “usefulness,” it may receive copyright protection. Varsity Brands says that the stripes and squiggles in their uniform designs represent these sorts of elements.

Young cheerleaders.

Photo by Peter Griffin.

The courts are divided on this argument…But the U.S. Supreme Court has agreed to hear the case and give Varsity Brands, Star Athletica and copyright junkies everywhere a final and – hopefully – clarifying ruling. So, what the heck does this have to do with 3D printing? Actually, a lot. If the Supremes were to come down in favor of Varsity Brands’ interpretation of separability, they would set a dangerous precedent: any design that’s not 100 percent functional – i.e., has one or more elements with even a whit of creativity – might be protected by copyright. Imagine the fear of infringement this might instill in an avid “maker.”…It would likely be enough to hamstring his or her creative potential. Thankfully, the 3D printing community thought of this early.

As I mentioned in my last post about this case, industry players Shapeways, Formlabs and Matter and Form already submitted an amicus brief to the Supreme Court warning of the “chill” an overbroad interpretation of separability in the Varsity Brands case might place on 3D innovation. Believing as we do in the importance of creativity inside and beyond library walls, the library community has decided to pick up its pom-poms and stand alongside them. ALA, the Association of Research Libraries (ARL) and the Association of College and Research Libraries (ACRL) have signed onto a similar brief penned by the D.C.-based public policy organization Public Knowledge. The brief argues that: “…copyright in useful articles ought to continue to be highly limited, such that a feature of a useful article may be copyrighted only upon a clear showing that the feature is obviously separable and indisputably independent of the utilitarian aspects of the article.”

Our argument on this case is in keeping with one of the basic tenets of our efforts to promote public access to information: that copyright should be limited and promote progress and innovation. Lucky for us, we have the Constitution on our side.

The post Libraries cheerlead for fair copyright appeared first on District Dispatch.

OITP welcomes new chair, Marc Gartler / District Dispatch

Marc Gartler

Marc Gartler, new chair of OITP’s Advisory Committee

I’m pleased to announce that Marc Gartler is the new chair of the Advisory Committee for ALA’s Office for Information Technology Policy (OITP), as appointed by ALA President Julie Todaro. Marc succeeds Dan Lee of the University of Arizona, who served as OITP chair for two years. We are deeply grateful for Dan’s leadership and service to OITP and ALA.

Marc Gartler recently chaired OITP’s subcommittee on America’s Libraries in the 21st Century, served on the advisory committee for ALA’s Policy Revolution! initiative, and served on OITP’s Copyright Education subcommittee. Over the past few years he has participated in policy discussions with representatives from the FCC, Google, the Gates Foundation, and other organizations whose interests dovetail with those of ALA. OITP, ALA, and libraries have benefited from Marc’s counsel on diverse issues from copyright to maker spaces.

Marc manages two neighborhood libraries for Madison (Wisc.) Public Library, a recipient of the 2016 National Medal for Museum and Library Service. He leads one of the City of Madison’s Neighborhood Resource Teams, which coordinate local government services and develop relationships among City staff, neighborhood residents, and other stakeholders. A former college library director, Marc served as a consultant for the Ohio Board of Regents and Colorado Department of Higher Education. He is a graduate of the PLA Leadership Academy, and holds an MS in Library & Information Science from the University of Illinois at Urbana-Champaign and an MA in Humanities from the University of Chicago.

We look forward to working with Marc.

The post OITP welcomes new chair, Marc Gartler appeared first on District Dispatch.

Announcing IODC Unconference / Open Knowledge Foundation

We all know the feeling at the end of a conference when, after long days full of content, you leave with even more unanswered questions. Conferences are a great place for networking, learning about different topics, and sharing achievements (and sometimes even failures), but by their nature they are organised in a way that is less participatory and more broadcast than exchange.

The organising committee of the International Open Data Conference is aware of this and tries to complement the main event with other ways of sharing ideas that don’t involve stages and slideshows. This is why one of the pre-events to the conference will be an unconference whose outputs will feed into the main event.


Post-its from the unfestival @okfest

An unconference is an open event that allows its members to propose their own topics for discussion. Just like last year, the unconference will enable people to discuss open data issues that are close to their heart with like-minded peers from across the world. We hope that by having an unconference, we can give voice to a broad range of different experiences and points of view.

We believe that this will help us ignite discussions and find new ways to continue the conversation during the conference. So even if you are not part of a panel in the main event, you can influence the IODC’s outcomes by participating in the unconference.

This year, Open Knowledge International will lead the efforts of the unconference for IODC, with the support of the IDRC, The Web Foundation, ILDA and Civica Digital, and we want to share with you every step of the way. The goals that we set are:

  • To offer a safe space that promotes understanding and experience sharing from the open data movement across the world, and to have honest and open reflection on how we create change.
  • To initiate new relationships and build solidarity within the open data community.
  • To create an opportunity to dive deeper into topics and issues that are important to the community.

To do so, we want to invite you to take an active role in the running of the event. Firstly, we need to hear from you and to set the mood for the event. We opened this forum category, and we are looking forward to seeing what kind of topics can be explored during the unconference.

In the next couple of weeks, we will send more information and registration details. In the meanwhile, save the date: Tuesday, October 4th, at 9.30 at IFEMA, North convention centre.

We hope to see you there and share experiences!

 

Accessible, sort of – #a11eh / LibUX

We use a hashtag (#a11y) which makes it convenient to talk about the accessibility of sites, apps, tools, and content. It’s a numeronym.

Accessibility is often abbreviated as the numeronym a11y, where the number 11 refers to the number of letters omitted. This parallels the abbreviations of internationalization and localization as i18n and l10n respectively. Computer accessibility

And what with the increasing risk of liability piquing interest in web accessibility standards, developers look for ways to quantifiably, programmatically validate the accessibility of their work.

But it’s my experience that this isn’t enough. Working on the accessibility of research databases in higher-ed as well as the content and tools made available by libraries, you learn how WCAG 2.0 AA compliance communicates nothing about the ease of its use. These tools — which are necessary to do good work — do not measure usability.

What’s more, our inability to communicate the quality of the experience using “#a11y” — which plays an important part in growing awareness — means, in some small way, we fail to acknowledge the nuance and care required of meaningful, accessible code.

Statements like “we’re constantly working to make GOV.UK as accessible and usable as possible #a11y” and “you can search full-text articles entirely by keyboard #a11y” don’t really distinguish the grade of the former from the slog of the latter. To those of us who don’t rely on screen readers these just affirm that, hey, these sites are accessible. Equal kudos.

The disparity emerges in their use. Compliant, roger. But to navigate to search results in the site referred to in the last example requires first tabbing through 47 options, inputs, facets, all this after the skip link.

We are in the habit of doling out applause for accessibility, and sure, while accessible content after 47 tabs is better than none, I think — in the same vein as WTFMobileWeb — it is okay to wag the finger a little. We can do better. We can conceptually differentiate between measures that check all the boxes and those that remember the person on the other side of the screen reader.

So, I propose a new hashtag: #a11eh.

A new hashtag: #a11eh – describing content or an app that is accessible insofar that it validates, but still hard af to use, lazy #a11y. Twitter

The post Accessible, sort of – #a11eh appeared first on LibUX.

RNC forum focuses on rethinking STEM education / District Dispatch

This week, I was fortunate to attend the Information Technology and Innovation Foundation (ITIF) policy panel at the Republican National Convention (RNC), “How the next administration can foster innovation, boost productivity, and increase U.S. competitiveness.”

Information Technology and Innovation Foundation policy panelists at the Republican National Convention discuss "How the next administration can foster innovation, boost productivity, and increase U.S. competitiveness."

Information Technology and Innovation Foundation policy panelists at the Republican National Convention discuss ways the next administration can encourage innovation

The discussion evinced a key tension in the debate over the role regulation should play in innovation, and the means by which policymakers can foster growth and American competitiveness in the tech space.

The panel, moderated by ITIF President Robert Atkinson, included Congressmen Blake Farenthold (R-TX), Bob Latta (R-OH), and Michael Turner (R-OH), alongside Facebook Chief Privacy Officer Erin Egan, Entertainment Software Association President Michael Gallagher, Senior Vice President of Bayer Corporation Raymond Kerins, Corporate Vice President of Technology and Civic Engagement at Microsoft Dan’l Lewin, and James Greenwood, CEO of the Biotechnology Innovation Organization (BIO).

Atkinson opened the discussion by stating that conventions are about leadership and idea-sharing, and a key area of concern was the dearth of cooperative partnerships between government and private industry. He trumpeted the need for more public-private tech partnerships in the decade ahead.

Congressman Turner emphasized that the innovation economy in the US is critical to economic growth; for every intellectual property-based job created, 2.5 additional jobs are developed to support it. The panelists all agreed on the vital role that innovation industries will play in American growth, and focused in on what role policymakers can play to further develop the tech sphere and to identify current weaknesses.

A key area for improvement that is highly relevant to libraries is in STEM education, and telling the story of how innovation can work for people who may be concerned about new technologies disrupting job markets. The panelists agreed that to make technology work for everyone, decision makers must restructure education and workforce training.

At present, there are 43,000 students graduating in STEM fields each year, but 600,000 available jobs in the science and technology sector. Meaningful reforms are needed to close this gap between training and demand.

What are some specific ways to boost technology training in our education and workforce development systems?

Gallagher, Kerins and Lewin had some specific ideas – many of which have implications for libraries:

  • Start by teaching computational thinking. There is a fine line between what machines can do best, and what humans can do best. We need to educate students early on to think from a mindset of how to work with machines to innovate.
  • Add more computer science education in high schools. We need high-level engagement so students can see the real career possibilities in STEM fields. Libraries can do their part by offering more STEM training and programs to get kids involved in technology, inspiring and motivating youth to pursue technology fields.
  • Use games, especially video games, to get people into STEM. Looking at the popularity of new gaming trends such as Pokémon Go, games are at the forefront of accessible, inexpensive human-machine interaction. They can accelerate tech learning by making it fun – students go from being tech consumers to creators when they interact in gaming worlds.
  • Further develop public-private partnerships such as Bayer Corporation’s initiative, “Making Science Make Sense,” in which employees are given three days per year to visit schools and teach about career opportunities in STEM. This not only builds a network of public-private partnerships, but advances awareness of careers in innovation. Perhaps libraries can tap into this space of partnering with businesses and corporate professionals to teach STEM topics through company programs.

During the Q-and-A, I posed a question for the panel: what are some of the main policy barriers to bringing more Americans into the tech and innovation space, especially at a time when a lot of people are fearing that new technology may disrupt their jobs?

Kerins advised that industry needs to better communicate the skills and jobs it needs to schools and policymakers, so that programs can be designed at the local and state levels to fill new job fields.

Congressman Farenthold believes a key barrier is a one-size-fits-all education system, in which individual high schools don’t have a lot of room for innovation. He emphasized that community colleges are a bright spot, though, with lots of growth and training for people of all ages. Effecting successful education policy depends on educating policymakers on the need for more STEM partnerships and training at the community level, and on guiding solutions.

These perspectives provide unique insight for libraries as institutions of learning: many of the recommendations the panel made for schools can easily apply to libraries as well. They also underscore the importance of engaging in state and local advocacy to effect meaningful education policy change.

Ensuring U.S. education programs keep pace with technology development will be the key to ensuring the innovation economy works for everyone. Libraries can help make this happen by providing access to, and building programming around, cutting-edge digital technologies.

The post RNC forum focuses on rethinking STEM education appeared first on District Dispatch.

Co-Hosting a Datathon at the Library of Congress / Library of Congress: The Signal

Photo of about 20 people sitting at computers in a meeting room.

Archives Unleashed teams at wrap-up, day one. Photo by Jaime Mears.

On June 14 and 15, the Library of Congress hosted Archives Unleashed 2.0, a web archive “datathon” (otherwise known as a “hackathon,” but apparently any term with the word “hack” in it might sound a bit menacing) in which teams of researchers used a variety of analytical tools to query web-archive data sets in the hopes of discovering some intriguing insights before their 48-hour deadline was up. This was the second instance of the event (the University of Toronto hosted the first in March 2016) in what organizers plan to be a regular occurrence.

Why host a datathon?

For organizers Matthew Weber, Ian Milligan and Jimmy Lin, seasoned data scholars and educators, Archives Unleashed is an exercise in balancing discussion and practice — or what Milligan calls yacking and hacking — to help improve web archive research. The text on the Archives Unleashed website states, ”This event presents an opportunity to collaboratively unleash our web collections, exploring cutting-edge research tools while fostering a broad-based consensus on future directions in web archive analysis.”

Photo of writing on a white board about file types.

Team Museum’s URL text analysis of mimetypes found on museum websites. Photo by Jaime Mears.

But what is the value for the host institution – the Library of Congress or any other? There are actually many unique benefits; here are a few:

  • New patterns of information emerged from the web archives.
  • We networked with data scholars and, perhaps even more important, learned what we can do to support sophisticated technological research. (It helps the discovery process if staff members participate alongside researchers.)
  • We collaborated across divisions within the Library of Congress and discovered areas of shared common interests. National Digital Initiatives was the main point of contact for event hosting, Library Services (which makes Library of Congress collections available) provided the data sets and the John L. Kluge Center and the Law Library provided content expertise and additional support.
  • We got our colleagues excited about the potential use of our collections and this emerging research service.

Even if none of these points are relevant for your institution, think of this exposure as a way to begin familiarizing your institution with the future of historical research. As Milligan said in his pre-workshop presentation, you “can’t do a faithful historical study post 1996 without web archives.”

Photo of computer engineers at work.

Team Turtle. Photo by Jaime Mears.

What do you need to host a datathon?

  • Technical experts. Weber, Lin, and Milligan provided support to researchers throughout the process, from feedback on initial proposals to technical support with tools and unruly data sets. If you don’t have anyone on staff to fill this role, look outside your staff for technical experts to partner with you.
  • Data sets. Not to be underestimated, data sets are the heart of the datathon. You don’t need to know everything about the data sets you serve (that’s what the researchers will provide), but the data sets need to be fairly small so they can be moved around easily (ours were no more than 10 GB each). It’s important to prepare effective messaging about the data sets in order to entice attendees to use them. If there are use restrictions associated with the data sets, you  may need to prepare release statements for the researchers to sign.
  • Content experts. This is where your library can shine. Not understanding the context of a data set can make skewed results difficult to untangle. Someone who understands the subject matter can save researchers a lot of time by helping them analyze visualization results. For example, The Law Librarians were able to look at a word cloud from a set of Supreme Court nomination websites and explain some of the dominant words that the researchers were seeing, and they were able to suggest particular buzzwords that researchers could use in their text analysis queries.
Whiteboard with writing.

Team Turtle’s ARCs to WARCs workflow. Photo by Jaime Mears.

  • Infrastructure. Researchers brought their own laptops but reliable and even enhanced broadband was crucial. The bandwidth needed to move or query data sets as large as a terabyte in size, especially when time is an issue, is formidable. Luckily, preventative actions can be taken to mitigate this stress on the network. The Unleashed organizers set size restrictions on our data sets (no more than 10GB each), and pre-loaded applications and all data sets (including those from the Internet Archive and University of Waterloo) onto virtual machines to minimize transfer times and surprises. If that isn’t an option, ask the researchers to download local copies of the data sets they wish to use in advance of the event so time isn’t wasted moving them around. “Infrastructure” also includes tables and chairs, whiteboards and presentation support.
  • Researchers. The datathon participants came from as far as Jakarta, from mixed backgrounds and interests, although the majority are involved in academia with specializations in media studies, history, computer science and political science. Some work for libraries and archives. Although they had varying technical abilities, most of the participants had experience with data-research methodologies and were familiar with the tools. To attract this group of people, organizers used a simple application process for participants and were able to provide some funding for travel and meals, and coordinated the workshop in conjunction with the Saving the Web symposium hosted by Dame Wendy Hall.

This list is scalable and can be tweaked to fit diverse budgets and spaces. Collaboration is essential. Even if you have staff members who are technical experts, even if you have all the money, partnering with other library units and external experts diversifies who might attend, the available data sets they bring and in general raises the potential for creativity and revelation.

Islandora Foundation 2016 Annual General Meeting / Islandora

The Islandora Foundation held its Annual General Meeting today, with representatives from just over half of our member institutions. The draft minutes are available here. Some highlights include:

  • Nick Ruest was elected as Treasurer for 2016 - 2018. He has served as acting Treasurer since Mark Leggott's resignation from the board earlier this year.
  • Amendments to the Islandora Foundation by-laws were approved, defining the Terms of Service for Directors and Officers of the Foundation and the role of the Vice Chair & Secretary as an ex-officio member of the Board of Directors.
  • A set of Strategic Goals for the Islandora Foundation were approved. For 2016 - 2017, the IF is committed to:
    • Grow and sustain the Islandora Community.
      • Retain and expand Islandora Foundation membership.
      • Promote the use of Islandora in larger institutions.
      • Promote and facilitate use of Islandora in library schools and related educational programs.
      • Research and pursue grant opportunities and partnerships.
    • Focus Islandora development resources on Islandora CLAW
      • Produce a clear communication plan and messaging around Islandora CLAW and contributions needed.
      • Hire a Technical Lead to focus on the technical direction of Islandora CLAW development.
      • Continue Islandora CLAW development with Fedora 4 and Drupal 8.
      • Support the Fedora 4 community and development efforts.
      • Support the PCDM community.
      • Continue to pursue open source repository interoperability with projects such as Hydra and Archivematica.
    • Support legacy installations of Islandora
      • Continue support for Islandora 7.x-1.x releases as long as needed by the community.
    • Clarify and communicate Islandora Foundation Governance
      • Review our governance and structure against peer organizations.

 

Johnston Joins NC Cardinal / Equinox Software

FOR IMMEDIATE RELEASE

Duluth, Georgia–July 21, 2016

Equinox is pleased to announce that Johnston County Public Library has been successfully migrated to Evergreen in the NC Cardinal Consortium.  The Equinox team completed the migration in late May.  Johnston County Public Library includes ten branches and serves almost 48,000 patrons with over 174,000 items.

Johnston joins Cumberland, Neuse, Henderson, Rockingham, and Iredell in the use of the Acquisitions module within NC Cardinal.  The addition of Johnston’s 10 branches brings NC Cardinal’s grand total to 153.  Equinox is proud to be a part of NC Cardinal’s continued growth!

Mary Jinglewski, Equinox Training Services Librarian, worked closely with Johnston during the transition, providing training on Evergreen.  She remarked, “It was a lovely experience training with Johnston County Public Libraries. I believe they will be a wonderful addition and community member of NC Cardinal.”

About Equinox Software, Inc.

Equinox was founded by the original developers and designers of the Evergreen ILS. We are wholly devoted to the support and development of open source software in libraries, focusing on Evergreen, Koha, and the FulfILLment ILL system. We wrote over 80% of the Evergreen code base and continue to contribute more new features, bug fixes, and documentation than any other organization. Our team is fanatical about providing exceptional technical support. Over 98% of our support ticket responses are graded as “Excellent” by our customers. At Equinox, we are proud to be librarians. In fact, half of us have our ML(I)S. We understand you because we *are* you. We are Equinox, and we’d like to be awesome for you. For more information on Equinox, please visit http://www.esilibrary.com.

About Evergreen

Evergreen is an award-winning ILS developed with the intent of providing an open source product able to meet the diverse needs of consortia and high transaction public libraries. However, it has proven to be equally successful in smaller installations including special and academic libraries. Today, almost 1400 libraries across the US and Canada are using Evergreen including NC Cardinal, SC Lends, and B.C. Sitka. For more information about Evergreen, including a list of all known Evergreen installations, see http://evergreen-ils.org.

About Sequoia

Sequoia is a cloud-based library solutions platform for Evergreen, Koha, FulfILLment, and more, providing the highest possible uptime, performance, and capabilities of any library automation platform available. Over 27,000,000 items were circulated within the Sequoia platform in the last year.  It was designed by Equinox engineers in order to ensure that our customers are always running the most stable, up to date version of the software they choose.  For more information on Sequoia, please visit http://esilibrary.com/what-we-do/sequoia/.

 

QLC Flash on the horizon / David Rosenthal

Exabytes shipped
Last May in my talk at the Future of Storage workshop I discussed the question of whether flash would displace hard disk as the bulk storage medium. As the graph shows, flash is currently only a small proportion of the total exabytes shipped. How rapidly it could displace hard disk is determined by how rapidly flash manufacturers can increase capacity. Below the fold I revisit this question based on some more recent information about flash technology and the hard disk business.

First, economic stress on the hard disk industry has increased. Seagate plans a 35% reduction in capacity and 14% layoffs. WDC has announced layoffs. Unit shipments for both companies are falling. If disk is in a death spiral, massive increases in flash shipments will be needed.

Flash vs HDD capex
There are a number of ways flash manufacturers could increase capacity. They could build more flash fabs. This is extremely expensive, but as I reported in my talk, flash advocates believe that this is not a problem:
The governments of China, Japan, and other countries are stimulating their economies by encouraging investment, and they regard dominating the market for essential chips as a strategic goal, something that justifies investment. They are thinking long-term, not looking at the next quarter's results. The flash companies can borrow at very low interest rates, so even if they do need to show a return, they only need to show a very low return.
Since then the economic situation has become less clear, and the willingness of the governments involved to subsidize fabs may have decreased, so this argument may be less effective. If there aren't going to be a lot of new flash fabs, what else could the manufacturers do to increase shipments from the fabs they have?

The traditional way of delivering more chip product from the same fab has been to shrink the chip technology. Unfortunately, shrinking the technology from which flash is made has bad effects. The smaller the cells, the less reliable the storage and the fewer times it can be written, as shown by the vertical axis in this table:
Write endurance vs. cell size
Both in logic and in flash, the difficulty in shrinking the technology further has led to 3D, stacking layers on top of each other. Flash is in production with 48 layers, and this has allowed manufacturers to go back to larger cells with better write endurance.

Flash has another way to increase capacity. It can store more bits in each cell, as shown in the horizontal axis of the table. The behavior of flash cells is analog; the bits are the result of signal processing in the flash controller. By improving the analog behavior by tweaking the chip-making process, and improving the signal processing in the flash controller, it has been possible to move from 1 (SLC) to 2 (MLC) to 3 (TLC) bits per cell. Because 3D has allowed increased cell size (moving up the table), TLC SSDs are now suitable for enterprise workloads.

Back in 2009, thanks to their acquisition of M-Systems, SanDisk briefly shipped some 4 (QLC) bits per cell memory (hat tip to Brian Berg). But up to now the practical limit has been 3. As the table shows, storing more bits per cell also reduces the write endurance (and the reliability).

As more and more layers are stacked the difficulty of the process increases, and it is currently expected that 64 layers will be the limit. Beyond that, manufacturers expect to use die-stacking. That involves taking two (or potentially more) complete 64-layer chips and bonding one on top of the other, connecting them via Through Silicon Vias (TSVs). TSVs are holes through the chip substrate containing wires. Although adding 3D layers does add processing steps, and thus some cost, it merely lengthens the processing pipeline. It doesn't slow the rate at which wafers can pass through and, because each wafer contains more storage, it increases the fab's output of storage. Die-stacking, on the other hand, doesn't increase the amount of storage per wafer, only per package. It doesn't increase the fab's output of bytes.

Now, Chris Mellor at The Register reports that Good gravy, Toshiba QLC flash chips are getting closer:
3D TLC flash is now good enough for mainstream enterprise use. ... QLC could become usable for applications needing read access to a lot of fast, relative to disk and tape, flash capacity but low write access. Archive data, on the active end of a spectrum of high-to-low archive access rates, is one such application.

Back in March, Jeff Ohshima, a Toshiba executive, presented ... QLC flash at the Non-Volatile Memory Workshop and suggested 88TB QLC 3D NAND SSDs with a 500 write cycle life could be put into production.
QLC will not have enough write endurance for conventional SSD applications. So will there be enough demand for manufacturers to produce it, and thus double their output relative to TLC?

Exabytes shipped
Cloud systems such as Facebook's use tiered storage architectures in which re-write rates decrease rapidly down the layers. Because most re-writes would be absorbed by higher layers, it is likely that QLC-based SSDs would work well at the bulk storage level despite only a 500 write cycle life. It seems likely that only a few of the 2015 flash exabytes in the graph are 3D TLC; most would be 2D MLC. If we assume that half the flash from existing fabs becomes 3D QLC, flash output might increase 8x. This would still not be enough to completely displace hard disks, but it would reduce disk volumes and thus worsen the economics of building them. Fewer new flash fabs would be needed to displace the rest, which would be more affordable. Both effects would speed up the disk death spiral.

Young Women Rising: The Atlantic at the RNC / District Dispatch

Who are millennial voters, what do they want from government and what does this increasingly powerful demographic mean for public policy? These are questions that The Atlantic’s panelists discussed during their Republican National Convention (RNC) event, “Young Women Rising.”

As a research associate for ALA’s Office for Information Technology Policy (OITP) based in Cleveland, I am attending several of the policy events being held in conjunction with the Republican National Convention. Yesterday, I participated in this “Young Women Rising” event.

panelists at Young Women Rising event

Panelists discuss electoral values of millennials at Republican National Convention

Kicking off the event, Harvard University Polling Institute of Politics Director John Della Volpe highlighted the importance of authenticity to young voters. In this cycle, young voters see Bernie Sanders as an authentic leader, but have clear reservations about Mr. Trump and Secretary Clinton.

The statistics on millennials’ relationship to government are dour: 75% do not trust government; a strong majority doesn’t trust capitalism in its current practice; and over 50% don’t believe the American dream is accessible to them personally. Della Volpe provided context for this when he said, “Millennials are seeking a compassionate capitalism, a little Teddy Roosevelt ‘break up the banks’ and a little Franklin Roosevelt ‘provide a social infrastructure.’”

The panel of female Republican journalists, activists, and leaders expanded on the issues of including young people, especially young women between 18 and 35, in the platform. In a party dominated by male voices and often criticized for its policy views regarding women’s issues, it was interesting to hear from a group of female leaders who support the party and Donald Trump. They emphasized that in engaging millennials, the party message needs to connect with the young voters through original, authentic channels and not the forceful, dated methods typically used in political advertising and rhetoric.

Columnist Kristen Anderson emphasized that studies show that kindness towards people from all walks of life and policies that promote equality are the most important values young voters look for in leadership. In addition, young people are more likely to volunteer and become involved in their communities than previous generations, and due to low trust in traditional government institutions, are seeking to become involved in their communities through non-profit, non-governmental organizations.

Anderson also suggested that young Americans today are delaying traditional transitions to adulthood – marriage, home ownership, having children– until their 30s, allowing many of their early adulthood policy positions and values to solidify. Many 18 year olds will vote in 3 or 4 election cycles before they reach typical milestones of adulthood, suggesting that many of their political inclinations will become a strong part of their generational voting identity.

What can policymakers do now to empower young voters to again trust in the power of government as a force for social good and leadership in America? A question from the audience concluded the discussion on a thought-provoking note: who are we really talking about when we discuss millennial, particularly young female, voters? Where do lower income women, people of color, and young immigrant voters aged 18-35 fall? For a group that is concerned largely with equality and inclusive politics, looking at the issues that affect young Americans across more than partisan lines may be a good place for policymakers and influencers to start in building relationships with millennial voters.

How does this discussion affect technology policy and libraries? Young voters are a key demographic to study and educate about the issues that affect technology policy and libraries because libraries appeal to millennials’ desire for services that promote equal opportunity. Focusing on how the library provides a service for the whole community, is a safe space for people of all walks of life, and provides programs to create equal opportunities would be the key to influencing millennial voters through authentic and compassionate policy proposals.

More to come!

The post Young Women Rising: The Atlantic at the RNC appeared first on District Dispatch.

Jobs in Information Technology: July 20, 2016 / LITA

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Computercraft Corporation, PMC Journal Review Program Coordinator and Journal Selector, Bethesda, MD

University of North Carolina Wilmington, Web and Discovery Services Librarian, Wilmington, NC

Penn State University Libraries, Head, Engineering Library, University Park Campus, University Park, PA

Seton Hall University, Digital Collections Infrastructure Developer, South Orange, NJ

EBSCO Information Services, Software as a Service Specialist, Ipswich, MA

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

BibIt: Choosing a UI Framework / OCLC Dev Network

Learn about the UI frameworks used in Bib It, a simple application that allows non-catalogers to add data to WorldCat.

Wearable Technology Resources / LITA

The world of wearable technology (WT) is fascinating, but a little overwhelming. Last month I attended the Digital Humanities Summer Institute where I completed a week-long course entitled “Palpability and Wearable Computing.” We engaged in movement exercises, experimented with sensors, learned about haptics, and critiqued consumer wearables including the Fitbit, Spire, Leaf, and Athos. I expected to walk away with some light-up sneakers, but instead I left with lots of questions, inspiration, and resources.

What follows is a list of books, videos, and project tutorials that I’ve found most helpful in my exploration of wearable technology.

Textile Messages | Edited by Leah Buechley, Kylie Peppler, Michael Eisenberg, and Yasmin Kafai

  • Textile Messages is a great primer; it includes a little bit of history, lots of project ideas, and ample discussion of working with WT in the classroom. This is the most practical resource I’ve encountered for librarians of all types.

    Textile Messages: Dispatches from the World of E-Textiles and Education

Garments of Paradise | Susan Elizabeth Ryan

  • The history of WT goes back longer than you’d think. Chapter 1 from Garments of Paradise will take you all the way from the pocket watch to the electric dress to Barbarella.

Atsuko Tanaka models the electric dress, 1956.

MAKE Presents

  • If you want to make your own wearables, then you’ll need a basic understanding of electronics. MAKE magazine has a fantastic video series that will introduce you to Ohm’s Law, oscilloscopes, and a whole slew of teeny tiny components.

Wired Magazine

  • If you’re interested in consumer wearables, Wired will keep you up to date on all the latest gadgetry. Recent reviews include a temporary tattoo that measures UV exposure and Will.i.am’s smart watch.

    My UV Patch from L’Oreal is currently in development

Project Tutorials

  • One easy and inexpensive way to get started with WT is to create your own sensors. In class we created a stroke sensor made of felt and conductive thread. If you’re working with a limited budget, Textile Messages has an entire chapter devoted to DIY sensors.  
  • Adafruit is a treasure trove of project tutorials. Most of them are pretty advanced, but it’s interesting to see how far you can go with DIY projects even if you’re not ready to take them on yourself.
  • Sparkfun is a better option if you’re interested in projects for beginners.

My first attempt at making a stroke sensor

What WT resources have you encountered?

The Observer or Seeing What You Mean / Mita Williams

If you are new to my writing, my talks and work tend to resemble an entanglement of ideas. Sometimes it all comes together in the end and sometimes I know that I’ve just overwhelmed my audience.

I’m trying to get better at reducing the sheer amount of information I convey in a single sitting. So for this post, I’m going to tell you briefly what I’m going to say before I tell you what I’m going to say in a more meandering fashion.

In brief, libraries would do better to acknowledge the role of the observer in our work.

Now, true to my meandering style, we need to walk it back a bit before we can move forward. In fact, I’m going to ask you to look back at my last post (“The Library Without a Map“) that was about how traditional libraries have library catalogues that do a poor job of modeling subject relationships and how non-traditional libraries such as The Prelinger Library have tried to improve discovery through their own means of organization.

One of the essays I linked to about The Prelinger was from a zine series called Situated Knowledges, Issue 3: The Prelinger Library.  The zine series is the only one that I know of that’s been named after a journal article:

Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective
Donna Haraway
Feminist Studies
Vol. 14, No. 3 (Autumn, 1988), pp. 575-599
Published by: Feminist Studies, Inc.
DOI: 10.2307/3178066
Stable URL: http://www.jstor.org/stable/3178066
Page Count: 25

I have to admit that I struggled with this paper but in the end I was glad to have worked through the struggle. To sum up the paper in one sentence: we need to resist the idea that there exists a ‘god-like’ vision of objectivity and remember that our vision and our knowledge are limited by location and situation. Or as Haraway puts it:

I want a feminist writing of the body that metaphorically emphasizes vision again, because we need to reclaim that sense to find our way through all the visualizing tricks and powers of modern sciences and technologies that have transformed the objectivity debates. We need to learn in our to name where we are and are not, in dimensions of mental and physical space we hardly know how to name. So, not so perversely, objectivity turns out to be about particular and specific embodiment and definitely not about the false vision promising transcendence of all limits and responsibility. The moral is simple: only partial perspective promises objective vision. All Western cultural narratives about objectivity are allegories of the ideologies governing the relations of what we call mind and body, distance and responsibility. Feminist objectivity is about limited location and situated knowledge, not about transcendence and splitting of subject and object. It allows us to become answerable for what we learn how to see.

 

I’ve been thinking a lot recently about the power of the observer.

On my other blog, The Magnetic North, I wrote about how a world-weariness brought on by watching tragedies unfold on social media has led me to spend more time with art. I go on to suggest that being better versed in observing art without the burden of taste might help us better navigate a world that shows us only what we chose to see and perhaps even bring about a more just world.

But on this blog, I want to direct your attention to a more librarian-focused reason to be concerned with the matter of the observer.

You see, after I published my last post about how our library catalogue and how it poorly handles subject headings, I received a recommended read from Trevor Owens:


 

I found the paper super interesting. But among all the theory, I have to admit my favourite takeaway from the paper was that its model incorporates business rules as a means to capture an institution’s particular point of view, restraints, or reasons for interest. It is as if we are recognizing the constraints and situation of the observer who is describing a work:

Following the scientific community’s lead in striving to describe the physical universe through observations, we adapted the concept of an observation into the bibliographic universe and assert that cataloging is a process of making observations on resources. Human or computational observers following institutional business rules (i.e., the terms, facts, definitions, and action assertions that represent constraints on an enterprise and on the things of interest to the enterprise) create resource descriptions — accounts or representations of a person, object, or event being drawn on by a person, group, institution, and so on, in pursuit of its interests.

Given this definition, a person (or a computation) operating from a business rules–generated institutional or personal point of view, and executing specified procedures (or algorithms) to do so, is an integral component of a resource description process (see figure 1). This process involves identifying a resource’s textual, graphical, acoustic, or other features and then classifying, making quality and fitness for purpose judgments, etc., on the resource. Knowing which institutional or individual points of view are being employed is essential when parties possessing multiple views on those resources describe cultural heritage resources. How multiple resource descriptions derived from multiple points of view are to be related to one another becomes a key theoretical issue with significant practical consequences.

Murray, R. J., & Tillett, B. B. (2011). Cataloging theory in search of graph theory and other ivory towers: Object: Cultural heritage resource description networks. Information Technology and Libraries, 30(4), 170-184.
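To make that idea a little more concrete, here is a minimal sketch of my own in R (not Murray and Tillett’s formal model; the institutions, business rules, and resource below are invented purely for illustration). The same resource, described by observers following different institutional business rules, yields two different, equally situated descriptions:

# A sketch only: two observers, each following different institutional
# business rules, describe the same resource and arrive at different,
# equally "situated" descriptions.
describe_resource <- function(resource, observer, business_rules) {
  # The description records who observed and under which rules,
  # alongside the features those rules told them to assert.
  list(
    resource = resource$id,
    observer = observer,
    rules    = business_rules$name,
    subjects = business_rules$pick_subjects(resource),
    fitness  = business_rules$judge_fitness(resource)
  )
}

resource <- list(
  id     = "prelinger-map-1923",
  topics = c("cartography", "urban history", "ephemera")
)

archive_rules <- list(
  name          = "regional archive",
  pick_subjects = function(r) r$topics[r$topics %in% c("urban history", "ephemera")],
  judge_fitness = function(r) "fits the local-history collection"
)

map_library_rules <- list(
  name          = "map library",
  pick_subjects = function(r) r$topics[r$topics %in% c("cartography")],
  judge_fitness = function(r) "fits cartographic reference"
)

# Same resource, two situated descriptions:
str(describe_resource(resource, "cataloguer A", archive_rules))
str(describe_resource(resource, "cataloguer B", map_library_rules))

The point is only that each description carries its point of view with it; how those descriptions should then be related to one another is exactly the “key theoretical issue” the paper names.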

I’ll end this post with a video of the first episode of Ways of Seeing, a remarkable four-part series about art from the BBC in 1972. It is some of the smartest TV I have ever seen, and it begins with the matter of perspective and the observer:

The first episode is based on the ideas of Walter Benjamin’s The Work of Art in the Age of Mechanical Reproduction, which I must admit, with some shame, I still have not read.


Art takes into account the observer.

I’m not sure that librarianship does.

But perhaps this observation is not sound. Perhaps it is limited by my particular situation and point of view.

There’s a reason why we complain / District Dispatch

"Open" sign

Image from Pixabay

The Social Science Research Network (SSRN) could be called the “academic version” of user-generated content on the web. Scholars and academics generate content in the form of scholarly papers and post them on the SSRN for all to see, read, and comment on.  Often, academics who post their forthcoming papers or “pre-prints” intend to eventually publish them in scholarly journals that research libraries and academic societies acquire. But in the meantime, academics want to quickly share their works in a pre-published form on the SSRN.  It’s a valuable and heavily used resource with over 682,100 scholarly working papers and forthcoming papers freely available.

After the scholarly publisher Elsevier acquired SSRN in May, people thought, what the h***?! Many were inclined to think that Elsevier would develop a way to monetize SSRN, because Elsevier does that sort of thing; they have a history. They sell journal subscriptions to academics at lunatic prices — their current profit margin is more than 40% — by re-selling content produced by scholars who work at publicly funded higher education institutions. Then libraries have to find the money to purchase the journals…you know the story. (If not, see SPARC.) Elsevier assured those concerned that SSRN would remain unchanged – specifically, that “both existing and future SSRN content will be largely unaffected.”

The Authors Alliance, whose members want to facilitate “widespread access to works of authorship” and “disseminate knowledge,” was particularly concerned because SSRN is one of the primary venues for sharing works of social science rapidly and freely. So they asked Elsevier to adopt principles acknowledging the open access preferences of scholars.

Well, they did not. Surprise!

Last week, several authors noted that their papers had been removed from SSRN by Elsevier without notice. Apparently Elsevier wants to remove all the papers whose copyright status is unclear. Ahh…come again? Elsevier is asking authors who have written an unpublished paper and have not transferred their copyright to submit documentation proving that they are the rights holder! What kind of world do we live in?

Now there is a movement by scholars and academics to drop SSRN.  Luckily, a new pre-print archive is under development. It is called SocArXiv. Stay tuned to the District Dispatch for more information.

The post There’s a reason why we complain appeared first on District Dispatch.

The UX of VR / LibUX


Max Glenister has curated a list of resources about the user experience of virtual reality. These range from actual code to conceptual principles and broadly applicable truisms about immersion and design, like

The last 40 years have seen the rise of the digital landscape; a two dimensional plane that abstracts familiar real-world concepts like writing, using a calendar, storing documents in folders into user interface elements (UI). This approach allows for a high level of information density and multitasking. The down-side is that new interaction models need to be learned and there is a higher cognitive load to decision making.

Matt Sundstrom, Immersive Design: Learning to Let Go of the Screen

The UX of VR by Max Glenister

The post The UX of VR appeared first on LibUX.

Code4Lib Journal #33 / Code4Lib


LITA Forum 2016 – Call for Library School Student Volunteers / LITA

2016 LITA Forum
Ft Worth, Texas
November 17-20, 2016

STUDENT REGISTRATION RATE AVAILABLE – 50% OFF REGISTRATION RATE — $180

The Library and Information Technology Association (LITA), a division of the American Library Association, is offering a discounted student registration rate for the 2016 LITA Forum. This offer is limited to graduate students enrolled in ALA-accredited programs. In exchange for the lower registration cost, these graduate students will be asked to assist the LITA organizers and Forum presenters with onsite operations. This is a great way to network and meet librarians active in the field.

The selected students will be expected to attend the full LITA Forum, Friday noon through Sunday noon. Attendance during the preconferences on Thursday afternoon and Friday morning is not required. While you will be assigned a variety of duties, you will be able to attend the Forum programs, which include 3 keynote sessions, over 50 concurrent sessions, and poster presentations, as well as many opportunities for social engagement.

The Forum will be held November 17-20, 2016 at the Omni Hotel in Fort Worth, Texas. The student rate is $180 – half the regular registration rate for LITA members. A real bargain, this rate includes a Friday night reception, continental breakfasts, and Saturday lunch.

For more information about the Forum, visit http://litaforum.org. We anticipate an attendance of 300 decision makers and implementers of new information technologies in libraries.

To apply to be a student volunteer, complete and submit this form by September 30, 2016.

http://goo.gl/forms/e6UeOsfqTW0hhsfu2

You will be asked to provide the following:
1. Contact information, including email address and cell phone number
2. Name of the school you are attending
3. Statement of 150 words (or less) explaining why you want to attend the LITA National Forum

Those selected to be volunteers registered at the student rate will be notified no later than Friday, October 14, 2016.

Additional questions should be sent to Christine Peterson, peterson@amigos.org, or Mary Duffy, mduffy@southalabama.edu

Emflix – Gone Baby Gone / Code4Lib Journal

Enthusiasm is no replacement for experience. This article describes a tool developed at the Emerson College Library by an eager but overzealous cataloger. Attempting to enhance media-discovery in a familiar and intuitive way, he created a browseable and searchable Netflix-style interface. Though it may have been an interesting idea, many of the crucial steps that are involved in this kind of high-concept work were neglected. This article will explore and explain why the tool ultimately has not been maintained or updated, and what should have been done differently to ensure its legacy and continued use.

Introduction to Text Mining with R for Information Professionals / Code4Lib Journal

The 'tm: Text Mining Package' in the open source statistical software R has made text analysis techniques easily accessible to both novice and expert practitioners, providing useful ways of analyzing and understanding large, unstructured datasets. Such an approach can yield many benefits to information professionals, particularly those involved in text-heavy research projects. This article will discuss the functionality and possibilities of text mining, as well as the basic setup necessary for novice R users to employ the RStudio integrated development environment (IDE). Common use cases, such as analyzing a corpus of text documents or spreadsheet text data, will be covered, as well as the text mining tools for calculating term frequency, term correlations, clustering, creating wordclouds, and plotting.
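As a rough sense of what that workflow looks like in practice, here is a minimal sketch using the tm package on a tiny invented corpus; the document texts, the frequency threshold, and the correlation limit below are placeholders of my own, not examples from the article:

# A sketch only: the basic tm workflow on a toy in-memory corpus. A real
# project would more likely read documents from disk (e.g., with DirSource())
# than use this invented vector of strings.
library(tm)

docs <- c(
  "Libraries collect and describe cultural heritage resources.",
  "Text mining helps information professionals analyze large text collections.",
  "R and the tm package make text mining accessible to novice users."
)

# Build a corpus and apply common cleaning transformations.
corpus <- VCorpus(VectorSource(docs))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, stripWhitespace)

# Term-document matrix: rows are terms, columns are documents.
tdm <- TermDocumentMatrix(corpus)

# Overall term frequencies, most frequent first.
freqs <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
head(freqs)

# Terms appearing at least twice, and terms correlated with "text".
findFreqTerms(tdm, lowfreq = 2)
findAssocs(tdm, "text", corlimit = 0.5)

From the same term-document matrix, the clustering and wordclouds the abstract mentions are typically built with base R functions such as hclust() and companion packages such as wordcloud.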