Planet Code4Lib

MarcEdit Linked Data Rules Files Enhancements / Terry Reese

I’m working on a couple of enhancements to the MarcEdit Linked Data rules file to accommodate proposals being generated by the PCC regarding the usage of the $0 and the $1 in MARC21 records. Currently, MarcEdit can generate $0s for any controlled data within a MARC record, so long as a webservice for the vocabulary has been profiled in the MarcEdit rules file. This has allowed folks to begin embedding URIs in their MARC records…but one of the areas of confusion is where services like VIAF fit into this equation.

The $0, as described in MARC, should represent the control number or URI of the controlled vocabulary itself. That will rarely, if ever, be VIAF. VIAF is an aggregator, a switching point, a collection of information and services about a controlled term. It’s also a useful service (as are other aggregators), and this has led folks to start adding VIAF data to the $0. That is problematic, because it dilutes the value of the $0 and makes it impossible to know whether the data being linked is the source vocabulary or an aggregating service.

To that end, the PCC will likely be recommending the $1 for URIs that are linked data adjacent.  This would allow users to embed references to aggregations like VIAF, while still maintaining the $0 and the URI to the actual vocabulary — and I very much prefer this approach.

So, I’m updating the MarcEdit rules file to allow users to denote multiple vocabularies to query against a single MARC field, and to denote the subfield for embedding the retrieved URI data on a vocabulary-by-vocabulary level. Previously, this was set as a global value within the field definition. This means that if a user wanted to query LCNAF for the main entry (in the 100) and then VIAF to embed a link in a $1, they would just need to use:

<field type="bibliographic">
<tag>100</tag>
<subfields>abcdqnpt</subfields>
<uri>0</uri>
<special_instructions>personal_name</special_instructions>
<vocab subfield="0">naf</vocab>
<vocab subfield="1">viaf</vocab>
</field>

The subfield attribute now defines, per vocabulary element, where the retrieved URI should be stored, while the uri element continues to act as the global subfield setting when no value is defined in the vocab element. Practically, this means that if you had the following data in your MARC record:

=100  1\$6880-01$aHu, Zongnan,$d1896-1962,$eauthor.

The tool will now automatically (with the rules file updates) generate both a $0 and a $1:

=100  1\$6880-01$aHu, Zongnan,$d1896-1962,$eauthor.$0http://id.loc.gov/authorities/names/n84029846$1http://viaf.org/viaf/70322743

But users could easily add other data sources if they have interests beyond VIAF.
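For anyone curious how that per-vocabulary subfield resolution reads in practice, here is a rough Python sketch (not MarcEdit’s actual code) that walks a rules entry like the one above; the wrapping <fields> root element is an assumption for illustration:

import xml.etree.ElementTree as ET

# A rules-file fragment like the example above, wrapped in an assumed <fields> root.
rules_xml = """
<fields>
  <field type="bibliographic">
    <tag>100</tag>
    <subfields>abcdqnpt</subfields>
    <uri>0</uri>
    <special_instructions>personal_name</special_instructions>
    <vocab subfield="0">naf</vocab>
    <vocab subfield="1">viaf</vocab>
  </field>
</fields>
"""

root = ET.fromstring(rules_xml)
for field in root.findall("field"):
    tag = field.findtext("tag")
    global_subfield = field.findtext("uri")  # the global fallback subfield
    for vocab in field.findall("vocab"):
        # A per-vocabulary subfield attribute wins; otherwise fall back to <uri>.
        target = vocab.get("subfield", global_subfield)
        print(f"{tag}: query {vocab.text.strip()} and store the URI in ${target}")

Run as-is, this reports that naf URIs belong in $0 and viaf URIs in $1 for the 100 field.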

These changes will be available in all versions of MarcEdit when the next update is posted in a week or so.  This update will also include an updated rules file that will embed viaf elements in the $1 for all main entry content (when available).

 

–tr

 

 

Technarium hackerspace celebrates Open Data Day in Vilnius, Lithuania / Open Knowledge Foundation

This blog is part of the event report series on International Open Data Day 2017. On Saturday 4 March, groups from around the world organised over 300 events to celebrate, promote and spread the use of open data. 44 events received additional support through the Open Knowledge International mini-grants scheme, funded by SPARC, the Open Contracting Program of Hivos, Article 19, Hewlett Foundation and the UK Foreign & Commonwealth Office. This event was supported through the mini-grants scheme under the Open Research theme.

Open Data Day was celebrated for the first time at Technarium hackerspace (Vilnius) this year, with not one but two amazing events!

The morning part – Open Data Hackathon – kicked off with the Vilnius City Council, who brought some pizzas & told us all about the open data work they do. Vilnius has recently become a hotspot of open data in Lithuania, as the council elected in 2015 has made it one of its top priorities.

Council representatives demonstrated the Open Vilnius website, the open data repository on GitHub, as well as the recently created opendata.lt portal, which was “fired up” in one weekend by one of the advisors of the mayor – Povilas Poderskis, an open data & IT expert. The portal has an exciting backstory: it was inspired by a project worth 2.8 million euros for a national open data repository, but was created in a few hours by Povilas, entirely voluntarily, by purchasing a server and installing the open source CKAN software. The portal now serves as an “unofficial national open data repository”, but is used by an increasing number of institutions on their own initiative or with encouragement from the Vilnius City Council.

According to Povilas Poderskis, the main goal of encouraging institutions to open up their data is transparency:

Open data ensures that mistakes, misuse and corruption are revealed sooner rather than later, persons responsible for abuse of power lose their positions, thus bringing positive change that affects everyone. – Povilas Poderskis

The city council representatives shared some success stories of open data use at a national level, including Lithuanian Road Administration, National Health Insurance Fund and others, and Vilnius-specific issues, such as opening up the kindergarten registry so citizens can better plan their applications, and Tvarkau Vilnių – an app that allows you to submit problems you see around the city, which are then displayed on a public map and passed to appropriate institutions.

Then a group of hackers took up the challenge of hacking the open city data for the rest of the afternoon and came up with a couple of creative visualizations & tools, including an application that connects street names with the areas of the city they’re in (very handy!), an analysis of registered cats and dogs (suspiciously, about twice as many dogs are registered as cats, suggesting cat owners might be skipping this responsibility!) and other tools which are still in progress and will be reported on via the Technarium blog!

Participants at the event

In the evening we held a separate event – Café Scientifique: Opening up YOUR research data – aimed at researchers of various disciplines. We had two fantastic speakers: Michael Crusoe, one of the founders of the Common Workflow Language (CWL) project, and Vida Mildažienė, a biochemist at Vytautas Magnus University.

Michael gave an engaging talk about the value of communicating the scientific process clearly and how having a shared specification makes it easier for scientists to share their workflows, especially in data-intensive fields such as bioinformatics, and highlighted the importance of doing so for greater reproducibility & usability of research data.

Michael Crusoe, co-founder of the Common Workflow Language (CWL) project leading a presentation.

Quite a few participants insisted on a demo of CWL after the talk, and that is what they got! :)

Michael Crusoe, demonstrating the Common Workflow Language (CWL) project to some participants

Next, Vida spoke about the state of open science in Lithuania for education & for society. She took us through the different science communication efforts and events that are ongoing or have been organised in the past in Lithuania and highlighted some cultural problems we are still facing with respect to connecting science and society. Vida also shared some local citizen science projects that are brewing, and highlighted hackerspaces as places for open science to organically occur!

Vida Mildažienė, a biochemist at Vytautas Magnus University leading a discussion on open science in Lithuania.

After the talks we engaged in an important discussion regarding open science in Lithuania, trying to answer a set of guiding questions.

It was especially helpful to have Michael in the audience – someone who knows open science as it applies to the international scene, but is also new to Lithuania and can therefore ask insightful questions!

We discussed the general difficulty of getting researchers to share their data – the time it takes, the fear of sharing ideas and results prematurely in case they get “scooped” – which are problems familiar across the world. We also raised the question of whether there is enough information about open science and its methods in Lithuania, as people seem unaware of preprint publishing and other alternative methods of sharing in the scientific process, or even of national open research data repositories. On the other hand, since the concept of open science is reaching us a bit later, we have a chance to do it right the first time, by learning from mistakes already made!

The event was filmed by a grassroots science popularisation show called Mokslo Sriuba (“Science Soup”), and will be reported on Mokslo Sriubos TV on Youtube soon!

 

Steady but Slow – Open Data’s Progress in the Caribbean / Open Knowledge Foundation

Over the last two years, the SlashRoots Foundation has supported the Caribbean’s participation in Open Knowledge International’s Global Open Data Index, an annual survey which measures the state of “open” government across the world. We recently completed the 2016 survey submissions and were asked to share our initial reactions before the full GODI study is released in May.

In the Global Open Data Index, each country is assessed based on the availability of “open data” as defined in the Open Knowledge Foundation’s Open Data Definition across key thematic areas that Governments are expected to publish information on. These include: National Maps, National Laws, Government Budget, Government Spending, National Statistics, Administrative Boundaries, Procurement, Pollutant Emissions, Election Results, Weather Forecast, Water Quality, Locations, Draft Legislation, Company Register, and Land Ownership.

For the 2016 survey, the Caribbean was represented by ten countries—Antigua & Barbuda, Barbados, Bahamas, Dominican Republic, Jamaica, Guyana, Trinidad and Tobago, St. Lucia, St. Kitts & Nevis, and St. Vincent & the Grenadines. As the Caribbean’s Regional Coordinator, we manage and source survey submissions from citizens, open data enthusiasts, and government representatives. These submissions then undergo a quality review process led by global experts. This exercise resulted in 150 surveys for the region and provided both an excellent snapshot of how open data in the Caribbean is progressing and how the region ranks in a global context.

Unfortunately, progress in the Caribbean has been mixed, if not slow. While Caribbean governments were early adopters of Freedom of Information legislation – seven countries (Antigua and Barbuda, Belize, Dominican Republic, Guyana, Jamaica, St. Vincent and the Grenadines, Trinidad and Tobago) have passed FOI laws – the digital channels through which many citizens are increasingly accessing government information remain underdeveloped. Furthermore, the publication of raw and baseline data, beyond references in press releases, remains a challenge across the region.

For example, St. Kitts, which passed FOI legislation in 2006, had only 2 “open” data sets, Government Budget and Legislature, readily published online. By comparison, the governments of Puerto Rico, the Dominican Republic and Jamaica have invested in open data infrastructure and websites to improve the channels through which citizens access information. Impressively, the Dominican Republic’s data portal consisted of 373 data sets from 71 participating Ministries, Departments and Agencies. However, keeping data portals and government websites up to date remains a challenge. Jamaica’s open data portal, which launched in 2016, has received only a handful of updates since its first publication, while St. Lucia and Trinidad & Tobago have published no updates since the first month of their portals’ publication.

Despite these shortcomings, Caribbean governments and civil society organisations continue to make important contributions to the global open data discourse that demonstrate tangible benefits of open data adoption in the lives of Caribbean citizens. These range from research demonstrating the economic impact of open data to community-led initiatives helping to bridge the data gaps that constrain local government planning. In December 2016, Jamaica became the fourth country in the region, after Guyana, the Dominican Republic and Trinidad & Tobago, to indicate its interest in joining the Open Government Partnership, a multilateral initiative consisting of 73 member countries that aims to secure concrete commitments from governments to promote transparency, empower citizens, fight corruption, and harness new technologies to strengthen governance.

Find out how the Caribbean ranks in the full GODI report, to be published on May 2nd.

Michigan Daily - From Newspaper Prints To Digital Archives / Library Tech Talk (U of Michigan)

screenshot of michigan daily digital archive interface

The Michigan Daily Digital Archives is a collaboration between the University of Michigan Library IT division, the Michigan Daily, and the Bentley Historical Library. It provides searchable access to over 300 volumes and 23,000 issues of the digitized student newspaper, from 1891 through 2014. New volumes of the newspaper will be added in the future as they become available. The Library IT team developed a robust discovery interface for the archives, choosing to build a discovery system rather than use an out-of-the-box application or vended solution. The development team followed a Scrum-like Agile approach for website development.

Jobs in Information Technology: April 26, 2017 / LITA

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Yewno, Product Manager, Content, Redwood City, CA

Sonoma County Library, Curator, Wine Library, Healdsburg Regional Library Full-time, Santa Rosa, CA

University of Chicago, Social Sciences Data and Sociology Librarian, Chicago, IL

University of Chicago, Data Research Services and Biomedical Librarian, Chicago, IL

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

OKI Agile: How to create and manage user stories / Open Knowledge Foundation

This is the first in a series of blogs on how we are using the Agile methodology at Open Knowledge International. Originating from software development, the Agile manifesto describes a set of principles that prioritise agility in work processes: for example through continuous development, self-organised teams with frequent interactions and quick responses to change (http://agilemanifesto.org). In this blogging series we go into the different ways Agile can be used to work better in teams and to create more efficiency in how to deliver projects. The first blog is dedicated to user stories, a popular agile technique. 

User stories are a pretty nifty way of gathering requirements in an agile environment, where one of the key values is responding to change over following a plan. They are a good anchor for conversation that can then take place at the right time.

What is a user story?

A user story is a short sentence that encapsulates three things:

  1. Who?
  2. What?
  3. Why?

Notice that this does not include “How?” The “How?” is left to the team delivering the requirement. After all, the team consists of the professionals. They know how to deliver the best solution. 

The most common way to encapsulate a user story is to use the template:

  • As a [WHO] I want [WHAT] so that [WHY]

Be careful not to sneak any Hows into that template. That usually happens in the What, so stay focussed! Words like by, using or with should be avoided like the plague because they usually result in a How. Basically, avoid anything that has to do with the actual implementation.

Bad user stories

    • As a government official I want a Range Rover so that I can get from A to B quickly
      • Problem: A Range Rover is an actual implementation; it might not be what is needed even though it is what’s believed to be desired.
    • As a visitor to a website I want to be able to read a landing page using my spiffy MacBook Air and have the content presented in the Lato typeface, size 14, and with good white space between paragraphs so that I can know what the website is all about
      • Problem: A whole lot! What about GNU/Linux and Windows users? What if there is a better typeface out there? What about the language of the content? The Why isn’t really a why. The list goes on. Don’t go into detail. It’s bad practice and creates more problems than it solves.

    Good user stories

    • As a government official I want means of transportation so that I can get from A to B quickly
    • As a website visitor I want to know what the website is about so that I can see how it can help me

    Why shouldn’t we go into details?

    It’s really quite simple. We expect the requirements to change, and we’d just waste a lot of time going into the details of something that might change or get thrown out. We’re trying to be efficient while still giving the team an understanding of the broader picture. An extreme example: between the start of the project and the time when the team is going to tackle a user story, the world might have moved to virtual governments that don’t need transportation any more (technology moves fast).

    The team also consists of experts so they know what works best (if not, why are they tasked to deliver?). The customers are the domain experts so they know best what is needed. In the website visitor example above, the team would know the best way of showing what a website is about (could be a landing page) but the customer knows what the customer is going to offer through the website and how they help people.

    We also value individuals and interactions over processes and tools. In an environment of ever-changing requirements we want to avoid details, so that the user story can, when the time comes, be the basis for a conversation about the actual implementation. The team familiarises itself with the requirement at the appropriate time. So when starting work on the transportation user story, the team might discuss with the customer and ask questions like:

    • “How fast is quickly?”,
    • “Are A and B in the same city, country, on Earth?”,
    • “Are there any policies we need to be aware of?” etc.

    Acceptance of user stories

    Surely the customer would still want to have a say in how things get implemented. That’s where acceptance criteria come in. When the time comes, the customer creates a checklist for each user story in a joint meeting, based on discussion. That’s the key thing: it comes out of a discussion.

    These criteria tell the team in a bit more detail what they need to fulfill to deliver the requirement (user story). For the government official in need of transport this might be things like:

    • Main area of interest/focus is London area
    • Applicable to/usable in other cities as well
    • Allows preparations for a meeting while in transit
    • Very predictable so travel time can be planned in detail
    • Doesn’t create a distance between me and the people I serve

    Then the implementation team might even pick public transportation to solve this requirement. A Range Rover wasn’t really needed in the end (this would probably go against the “satisfy the customer” principle, but hey! I’m teaching you about user stories here! Stay focussed!).

    How is this managed?

    One key thing we want to get out of user stories is to not scope the requirement in detail until it becomes clear that it’s definitely going to be implemented. How then do you know what you’ll be doing in the future?

    User stories can be of different sizes, from very coarse to detailed. The very coarse ones don’t even need to be written as user stories; they’re often referred to as epics.

    Many break requirements into three stages: the releases, or the projects, or whatever the team works on; each of these can be broken up into features; and each feature can be broken up into tasks. It’s up to the team to decide when it’s best to formulate these as user stories, and that really depends on the team and the project.

    Some might have epics as the big long term project, break that up into user stories, and then break each user story up into tasks. Others might have a single product, with the releases (what you want to achieve in each release: “The geospatial release”) at the top and then have features as sentences (epics) underneath the release and then transform the sentences into user stories you work on.

    Whichever way you do it, this is the general guideline for granularity:

    • Coarsest: Long-term plans of what you’ll be doing
    • Mid-range: Delivery in a given time period (e.g. before deadlines)
    • Finest: What the team will deliver in a day or two

    The reason the finest level is in a day or two is to give the team a sense of progress and avoid getting stuck at: “I’m still doing the guildhall thing” which is very demoralizing and inefficient (and not really helpful for others who might be able to help).

    There is a notion of the requirements iceberg or pyramid which tries to visualise the three stages. The bottom stage holds the larger items (the coarse stuff), the mid-range is what you’re delivering in a time period, and the finest is the smallest blocks of work. That’s what’s going to be “above the surface” for the core team, and it’s still just a fraction of the big picture.


    When should who be involved?

    So the core team has to decide at what stage of the iceberg they want to write the user stories, and that kind of depends on the project, the customer, and the customer’s involvement. So we need to better understand “the team”.

    The core team should always be present and work together. Who is in the core team then? If that’s not clear, there’s a story/joke, about the pig and the chicken, that can guide us:

    A pig and a chicken decided to open up a restaurant. They were discussing what name to give the restaurant when the chicken proposed the name: Ham & Eggs. The pig sneered and said: “That’s unfair, I’d be committed but you’d only be involved!”

    That’s the critical distinction between the core team and others. The core team is the pigs. Everyone else who is only involved to make the project happen is a chicken. The pigs run the show. The chickens are there to make sure the pigs can deliver.

    Chickens come in various sizes and shapes. It can be team managers (planning persons), unit heads, project managers, biz-dev people, and even customers.

    The term customer is pretty vague. You usually don’t have all your customers involved. Usually you only have a single representative. For bespoke/custom development (work done at the request of someone else), that person is usually the contact person for the client you’re working for.

    At other times the single customer representative is an internal person. That internal individual is sometimes referred to as the product owner (comes from Scrum) and is a dedicated role put in place when there is no single customer, e.g. the product is being developed in-house. That person then represents all customers and has in-depth knowledge about all customers or has access to a focus group or something.

    This individual representative is the contact point for the team. The one who’s there for the team to help them deliver the right thing. More specifically this individual:

    • Creates initial user stories (and drives creation of other user stories)
    • Helps the team prioritise requirements (user stories)
    • Accepts stories (or rejects) when the team delivers
    • Is available to answer any questions the team might have

    So the representative’s role is to provide the implementers with enough domain knowledge to proceed and deliver the right thing. This individual should not have any say in how the core team will implement it. That’s why the team was hired/tasked with delivering it, because they know how to do it. That’s also why user stories do not focus on the how.

    The core team, the pigs, needs to decide at which intersections in the iceberg they want to have this representative present (where discussions between the core team and the representative will happen): when they go from coarsest to mid-range, or from mid-range to finest. So in a weird sense, the core team decides when the customer representative decides what will be done.

    As a rule of thumb: the user stories feed into the stage above the intersection where the representative is present.

    So if the representative helps the team go from coarse to mid-range, the user stories are created for the mid-range stage. If the representative is there for mid-range to finest, the user stories are going to be very fine-grained.

    As a side note, because the chickens are there to make sure the pigs can deliver, they will always have to be available to answer questions. Many have picked up the standup activity from the Scrum process to discuss blockers, and in those cases it’s important that everyone involved, both pigs and chickens, is there so the chickens can act quickly to unblock the pigs.

    Now go and have fun with user stories. They shouldn’t be a burden. They should make your life easier… or at least help you talk to chickens.

    The fight for library funding is on in the U.S. Senate / District Dispatch

    The Fight for Libraries! campaign has moved to the United States Senate. Today, two “Dear Appropriator” letters began circulating in the Senate, one seeking $186.6 million for the Library Services and Technology Act (LSTA) and the other $27 million for the Innovative Approaches to Literacy (IAL) program for FY 2018. Senators Jack Reed (D-RI) and Susan Collins (R-ME) are again championing funds for LSTA, while Sens. Reed, Grassley (R-IA) and Stabenow (D-MI) are leading the fight for IAL. For more information about each program and the appropriations process, visit our previous posts on this topic or watch our most recent webinar.

    Fight For Libraries! Tell Congress to save library funding.

    Senators have until May 19 to let our champions know that they will sign the separate LSTA and IAL “Dear Appropriator” letters, so there’s no time to lose. Use ALA’s Legislative Action Center today to contact both of your Senators and ask them to support federal funding for libraries by signing on to both the Reed/Collins LSTA and Reed/Grassley/Stabenow IAL Dear Appropriator letters.

    Many Senators will only sign if their constituents ask them to. Let them know why libraries are important to your community and ask them directly to show their support.

    Last month, library advocates succeeded in convincing a record one-third of all Members of the House to sign the House versions of these LSTA and IAL letters. We need you to keep that momentum going by collectively convincing at least half of all Senators to do the same!

    Given the President’s proposal to eliminate the Institute of Museum and Library Services (IMLS) and virtually all other library funding sources, the support of both your Senators is more important than ever before. Five minutes of your time could help preserve over $210 million in library funding that’s at serious risk.

    To take action, visit the Action Center for additional talking points and easy-to-send email templates. Then keep an eye on our database to see if your Senators have signed.

    Have a few more minutes to invest in the fight for library funding? Here are some fast and enormously helpful things you can do as well:

    1. Share your library’s federal funding story and support for LSTA and IAL on Twitter using the #SaveIMLS hashtag. Tell us how IMLS funding supports your local community through LSTA or other means. (If you aren’t sure which IMLS grants your library has received, you can check the searchable database available on the IMLS website.)
    2. Whether you tweet it or not, tell us your story so we can make sure that your Members of Congress know how federal library funding is working for them and their constituents at home.
    3. Sign up to receive our action alerts so we can let you know when and how to take action, and send you talking points and background information to make that easy, all through the year.
    4. Participate in Virtual Library Legislative Day starting on May 1 and sign up for our Thunderclap.

    Thank you for your indispensable support. Together, we can win the Fight for Libraries!

    The post The fight for library funding is on in the U.S. Senate appeared first on District Dispatch.

    York U job: head of science library / physical sciences librarian / William Denton

    At York University Libraries, where I work, there is a search on right now for Physical Sciences Librarian and Head of Steacie Science and Engineering Library.

    The deadline for applications is 2 June 2017. If you know a librarian with a background in the physical sciences who might be looking for a job, please send them the link.

    I’m on the search committee, so I can’t give any tips, but I’ll point out a few things:

    • York University pays well. For historical pay equity reasons there’s a sort of grid that determines salaries based on the year one got one’s MLIS, so there’s no bargaining that will happen. Someone who got their MLIS in 2007, ten years ago, could expect to make about $120,000.
    • Librarians are in the York University Faculty Association (a union that takes social and progressive issues very seriously) and have academic status.
    • The benefits are good.
    • Americans are welcome to apply. (In Canada health care is publicly funded, etc.)
    • York University is an exciting place to work!
    • The strategic plan mentioned in the ad is a little hard to find on our site, so have a look.
    • There’s an affirmative action plan in place, and in this search we added this to the standard paragraph: “People with disabilities and Aboriginal people are priorities in the York University Libraries Affirmative Action plan and are especially encouraged to apply. Consideration will also be given to those who have followed non-traditional career paths or had career interruptions.” We mean it.

    If you want to find out more about York and what the job would be like, email me at wdenton@yorku.ca and I can put you in touch with someone not on the search committee.

    A Report from the 2017 DuraSpace Member Summit / DuraSpace News

    The annual DuraSpace Member Summit was held in Albuquerque, New Mexico, on April 4-5, following the CNI Spring Member Meeting. DuraSpace Members met to focus on strategy and tactics aimed at broadening and extending the organization’s reach in support of global community ecosystem efforts towards preservation and accessibility of cultural heritage and academic resources.

    Beginning Git and GitHub / LITA

    In a new LITA web course learn how to use the powerful and popular project management and collaboration tools, Git and GitHub. Perfect for anyone who works with code and on projects such as web sites, apps, classes, scripts, and presentations.

    Beginning Git and GitHub
    Instructors: Kate Bronstad, Web Developer, Tufts University Libraries; and Heather Klish, Systems Librarian, Tufts University Libraries.
    May 4 – June 1, 2017
    Register here; courses are listed by date and you need to log in.

    Work smarter, collaborate faster and share code or other files with the library community using the popular version control system Git. Featuring a mix of Git fundamentals and hands-on exercises, the course teaches participants the basics of Git, how to use key commands, and how to use GitHub to their advantage, including sharing their own work and building upon the projects of others.

    View details and Register here.

    This is a blended format web course

    The course will be delivered as separate live webinar lectures, one per week. You do not have to attend the live lectures in order to participate. The webinars will be recorded for later viewing.

    Check the LITA Online Learning web page for additional upcoming LITA continuing education offerings.

    Questions or Comments?

    For all other questions or comments related to the course, contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

    Listen: On design in user stories and user experience departments (20:48) / LibUX

    One new episode of Metric (a user experience podcast) over coffee before a string of interviews to round-out the month of April. In this episode:

    • What role does photoshop play in UX?
    • Should “design” be part of a user story?
    • What are the necessary technical abilities for doing UX?
    • What are your thoughts on UX departments?

    Enjoy.


    You can also download the MP3 or subscribe to Metric: A UX Podcast on Overcast, Stitcher, iTunes, YouTube, Soundcloud, Google Music, or just plug our feed straight into your podcatcher of choice.

    adventures with parsing Django uploaded csv files in python3 / Andromeda Yelton

    Let’s say you’re having problems parsing a csv file, represented as an InMemoryUploadedFile, that you’ve just uploaded through a Django form. There are a bunch of answers on stackoverflow! They all totally work with Python 2! …and lead to hours of frustration if, say, hypothetically, like me, you’re using Python 3.

    If you are getting errors like _csv.Error: iterator should return strings, not bytes (did you open the file in text mode?) — and then getting different errors about DictReader not getting an expected iterator after you use .decode('utf-8') to coerce your file to str — this is the post for you.

    It turns out all you need to do (e.g. in your form_valid) is:

    
    csv_file.seek(0)
    csv.DictReader(io.StringIO(csv_file.read().decode('utf-8')))
    

    What’s going on here?

    The seek statement ensures the pointer is at the beginning of the file. This may or may not be required in your case. In my case, I’d already read the file in my forms.py in order to validate it, so my file pointer was at the end. You’ll be able to tell that you need to seek() if your csv.DictReader() doesn’t throw any errors, but when you try to loop over the lines of the file you don’t even enter the for loop (e.g. print() statements you put in it never print) — there’s nothing left to loop over if you’re at the end of the file.

    read() gives you the file contents as a bytes object, on which you can call decode().

    decode('utf-8') turns your bytes into a string, with known encoding. (Make sure that you know how your CSV is encoded to start with, though! That’s why I was doing validation on it myself. Unicode, Dammit is going to be my friend here. Even if I didn’t want an excuse to use it because of its title alone. Which I do.)

    io.StringIO() gives you the iterator that DictReader needs, while ensuring that your content remains stringy.

    tl;dr I wrote two lines of code (but eight lines of comments) for a problem that took me hours to solve. Hopefully now you can copy these lines, and spend only a few minutes solving this problem!
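    For context, here is a minimal sketch of how those two lines might sit inside a class-based view; the CsvUploadView name and the csv_file form field are illustrative assumptions, not from the original code:

import csv
import io

from django.views.generic.edit import FormView


class CsvUploadView(FormView):
    # form_class, template_name, and success_url omitted for brevity

    def form_valid(self, form):
        csv_file = form.cleaned_data["csv_file"]  # the uploaded InMemoryUploadedFile
        csv_file.seek(0)  # rewind, in case validation already read the file
        reader = csv.DictReader(io.StringIO(csv_file.read().decode("utf-8")))
        for row in reader:
            print(row)  # each row is a dict keyed by the CSV header line
        return super().form_valid(form)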


    LITA Conference Buddy Program / LITA

    Going to ALA Annual this summer?

    Sign up for the LITA Conference Buddy Program

    The LITA Conference Buddy Program, which was inspired by the GLBTRT Buddy Program, is designed to make conference attendance more approachable, foster inclusion, and build connections between new LITA members and members who have attended past conferences. We hope that this program will leave both participants with a new friend and a new perspective on LITA participation.

    To participate in the program as either a new or experienced conference attendee, get details and complete the sign up form by June 4, 2017

    If you have any questions about the program, please contact the Diversity & Inclusion Committee at LITAConferenceBuddy@gmail.com

    Best,

    The LITA Diversity & Inclusion Committee

    Survey on innovation trends and priorities in European research libraries / HangingTogether

    OCLC Research is currently conducting a survey on innovation trends and priorities in European research libraries. The survey was sent to library directors at 238 institutions in the UK, the Netherlands, Germany, Austria, Switzerland, Denmark, Spain, France, and Italy.*

    Our aim is to gain intelligence regarding trends, capacities and priorities within the European research library community. This will inform future joint activities between OCLC and the research library community in Europe, and scope the opportunity space for OCLC Research and the OCLC Research Library Partnership.

    Findings from this survey will reveal trends, capacities and priorities within the research library community. We will produce a report, which can be shared broadly. Additionally, OCLC Research will share the data gathered in this survey, so that others can make their own interpretations.

    survey marker

    Survey marker, by Bidgee (Own work) [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons

    If you or a colleague has received the survey, we would be grateful if you would take some time to fill it out — it should take no more than 25 minutes and the results will be valuable both to us, and to the broader community.

    We are grateful to representatives from OCLC EMEA Regional Council who have helped to guide the development of this survey: Hubert Krekels (Wageningen University), Annette le Roux (University of South Africa Library), and Rupert Schaab (State and University Library Göttingen).

    For questions, please contact Merrilee Proffitt.

    *Institutions were chosen from the Times Higher Education World University Rankings 2016-2017

    Making European Subsidy Data Open / Open Knowledge Foundation

    One month after releasing subsidystories.eu, a joint project of Open Knowledge Germany and Open Knowledge International, we have some great news to share. Due to the extensive outreach of our platform and the data quality report we published, new datasets have been sent to us directly by several administrations. We have recently added new data for Austria, the Netherlands, France and the United Kingdom. Furthermore, the first Romanian data recently arrived and should be available in the near future.

    Now that the platform is up and running, we want to explain how we actually worked on collecting and opening all the beneficiary data. Subsidystories.eu is a tool that enables the user to visualize, analyze and compare subsidy data across the European Union, thereby enhancing transparency and accountability in Europe. To make this happen we first had to collect the datasets from each EU member state and then scrape, clean, map and upload the data. Collecting the data was an incredibly frustrating process, since EU member states publish the beneficiary data in their own country-specific (and regional) portals, which had to be located and often translated.

    A scraper’s nightmare: different websites and formats for every country

    The variety in how data is published throughout the European Union is mind-boggling. Few countries publish information on all three concerned ESIF Funds (ERDF, ESF, CF) in one online portal, while most have separate websites distinguished by fund. Germany provides the most severe case of scatteredness: not only is the data published by its regions (Germany’s 16 federal states), but different websites exist for distinct funds (ERDF vs. ESF), leading to a total of 27 German websites, arguably making the German data collection just as tedious as collecting the data for the entire rest of the EU.

    Once the distinct websites were located through online searches, they often needed to be translated to English to retrieve the data. As mentioned, the data was rarely available in open formats (counting CSV, JSON or XLS(X) as open formats), and we had to deal with a large number of PDFs (51) and webapps (15) out of a total of 122 files. The majority of PDF files were extracted using Tabula, which worked fine some of the time and required substantial work with OpenRefine – cleaning misaligned data – for other files. About a quarter of the PDFs could not be scraped using tools, but required hand-tailored scripts by our developer.

    Data Formats

    However, PDFs were not our worst nightmare: that was reserved for webapps such as this French app illustrating their 2007-2013 ESIF projects. While the idea of depicting the beneficiary data on a map may seem smart, it often makes the data useless. These apps do not allow for any cross-project analysis and make it very difficult to retrieve the underlying information. For this particular case, our developer had to decompile the Flash to locate the multiple datasets and scrape the data.

    Open data: political reluctance or technical ignorance?

    These websites often made us wonder what the public servants who planned them were thinking. They already put in substantial effort (and money) to create such maps, so why didn’t they include a “download data” button? Was it an intentional decision to publish the data but make it difficult to access? Or is the difference between closed and open data formats simply not understood well enough by public servants? Similarly, PDFs always have to be created from an original file, while simply uploading that original CSV or XLSX file could save everyone time and money.

    In our data quality report we recognise that the EU has made progress in this regard with its 2013 regulation mandating that beneficiary data be published in an open format. While publication in open data formats has since increased, PDFs and webapps remain a tiring obstacle. The EU should ensure the member states’ compliance, because open spending data, and a thorough analysis thereof, can lead to substantial efficiency gains in distributing taxpayer money.

    This blog has been reposted from https://okfn.de/blog/2017/04/Making-EU-Data-Open/

    Webinar: Powering Linked Data and Hosted Solutions with Fedora / DuraSpace News

    Fedora is a flexible, extensible, open source repository platform that forms a strong base for digital preservation and supports Linked Data. Fedora is used in a wide variety of institutions around the world, including libraries, museums, archives, and government organizations. Join us for a webinar on Tuesday, May 16* at 9:30am AEST (convert to your timezone) to learn more about Fedora.

    robots.txt / Ed Summers

    The Internet Archive does some amazing work in the Sisyphean task of archiving the web. Of course the web is just too big and changes too often for them to archive it all. But Internet Archive’s crawling of the web and serving it up out of their Wayback Machine, plus their collaboration with librarians and archivists around the world make it a truly public service if there ever was one.

    Recently they announced that they are making (or thinking of making) a significant change to the way they archive the web:

    A few months ago we stopped referring to robots.txt files on U.S. government and military web sites for both crawling and displaying web pages (though we respond to removal requests sent to info@archive.org). As we have moved towards broader access it has not caused problems, which we take as a good sign. We are now looking to do this more broadly.

    The robots.txt was developed to establish a conversation between web publishers and the crawlers, a.k.a. bots, that come to gather and index content. It allows web publishers to provide guidance to automated agents from companies like Google about what parts of the site to index, and to point to a sitemap that lets them do their job more efficiently. It also allows the web publisher to ask a crawler to slow down with the Crawl-delay directive, if their infrastructure doesn’t support rapid crawling.
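    A publisher’s robots.txt might look something like this (a made-up example, not any particular site’s file):

User-agent: *
Disallow: /private/
Crawl-delay: 10

Sitemap: https://example.org/sitemap.xml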

    Up until now the Internet Archive have used the robots.txt in two ways:

    • their ia_archiver web crawler consults a publisher’s robots.txt to determine what parts of a website to archive and how often
    • the Wayback Machine (the view of the archive) consults the robots.txt to determine what to allow people to view from the archived content it has collected.
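    As a rough illustration of the first of these (and certainly not the Internet Archive’s actual crawler code), Python’s standard library can consult a robots.txt the way a polite crawler would:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.org/robots.txt")  # example.org is a placeholder
rp.read()

# May this user agent fetch this page at all?
print(rp.can_fetch("ia_archiver", "https://example.org/some/page"))

# Any Crawl-delay the publisher asked for (None if not set; Python 3.6+).
print(rp.crawl_delay("ia_archiver"))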

    If the Internet Archive’s blog post is read at face value it seems like they are going to stop doing these things altogether, not just for government websites, but for the entire web. While conversation on Twitter makes it seem like this is a great idea whose time has come, I think this would be a step backwards for the web and for its most preeminent archive, and I hope they will reconsider or take this as an opportunity for a wider discussion.

    I think it’s crucial to look at the robots.txt as an imperfect, but much needed part of a conversation between web publishers and archives of the web. The idea that there is a perfect archive that contains all the things is a noble goal, but it has always been a fantasy. Like all archives the Internet Archive represents only a sliver of a sliver of the thing we call the web. They make all kinds of decisions about what to archive and when, which are black boxed and difficult to communicate. While some people view the robots.txt as nothing better than a suicide note that poorly optimized websites rely on, robots.txt is really just a small toehold in providing transparency about decisions about what to archive from the web.

    If a website really wants to block the Internet Archive it can still do so by limiting access by IP addresses or by ignoring any clients named ia_archiver. If the Internet Archive starts to ignore robots.txt it pushes the decisions about who and what to archive down into the unseen parts of web infrastructures. It introduces more uncertainty, and reduces transparency. It starts an arms race between the archive and the sites that do not want their content to be archived. It treats the web as one big public information space, and ignores the more complicated reality that there is a continuum between public and private. The idea that Internet Archive is simply a public good obscures the fact that ia_archiver is run by a subsidiary of Amazon, who sell the data, and also make it available to the Internet Archive through a special arrangement. This is a complicated situation and not about a simple technical fix.

    The work and craft of archives is one that respects the rights of content creators and involves them in the process of preservation. Saving things for the long term is an important task that shapes what we know of the past and by extension our culture, history and future. While this process has historically privileged the powerful in society, the web has lowered the barrier to publishing information, and offers us a real opportunity to transform whose voices are present in the archive. While it makes sense to hold our government to a particular standard, the great thing about the web is that not all web publishers are so powerful. It is important that Internet Archive not abandon the idea of a contract between web publishers and the archive.

    Most importantly we don’t know what the fate of the Internet Archive will be. Perhaps some day it will decide to sell its trove of content to a private company and close its doors. That’s why it’s important that we not throw the rights of content creators under the bus, and hold the Internet Archive accountable as well. We need web archives that are partners with web publishers. We need more nuance, understanding and craft in the way we talk about and enact archiving the web. I think archivists and Archive-It subscribers need to step up and talk more about this. Props to the Internet Archive for starting the conversation.

    Job Opportunity: Developer (Frontend and Web Applications) / DPLA

    The Digital Public Library of America seeks a full-time Developer to support ongoing work on the DPLA public-facing Web applications.

    We are seeking a curious and enthusiastic individual who recognizes both their technical strengths and areas for growth, who can help us work effectively to further DPLA’s mission to bring together the riches of America’s libraries, archives, and museums, and make them freely available to all. A belief in this mission, and the drive to accomplish it over time in a collaborative spirit within and beyond the organization, is essential.

    Responsibilities

    Reporting to the Director for Technology, the Developer:

    • Will participate in a high-impact, upcoming site-wide redesign effort
    • Builds out functionality on the DPLA website and internal tools, including tools related to search, content management, and community engagement
    • Customizes and deploys open source software to suit organizational needs
    • Performs other related duties and participates in special projects as assigned.

    As a member of the DPLA Technology Team, the Developer:

    • Contributes to design, development, testing, integration, support, and documentation of user-facing applications and back-end systems.
    • Participates in software engineering team group activities including sprint rituals, code reviews, and knowledge sharing activities.
    • Supports content management policies, process, and workflows, and contributes to the development of new ones.
    • Collaborates with internal and external stakeholders in planning and implementation of applications supporting DPLA’s mission, strategic plan, and special initiatives.
    • Maintains knowledge of emerging technologies to support the DPLA’s evolving services.
    • Embodies and promotes the philosophy of open source, shared, and community-built software and technologies.
    • Brings creative vision around possibilities for work with data that we haven’t yet imagined.

    Requirements

    • 5+ years professional experience in software development or a related discipline.
    • A proven ability to build websites and web applications that are targeted at the general public and operate at public scale
    • Experience with server and client webapp languages such as Ruby, Python, and Javascript and associated frameworks.
    • Ability to build user-centric, accessible websites that conform to responsive design principles and work across a variety of devices
    • A passion for writing clean, performant, and testable code.
    • Understanding the importance of continuous integration, automated testing and deployments, and static analysis of code quality
    • Demonstrated experience working effectively in a team environment and the ability to interact well with stakeholders.
    • Desire and enthusiasm about learning new toolsets, programming languages, or methods to support software development.

    Preferred Qualifications

    • A history of collaboration with open source projects
    • Knowledge of client-side JS frameworks like Angular/React
    • A history of work with data centric applications and search-oriented architecture
    • A successful history of working effectively in a geographically-distributed organization.
    • Experience working on an agile team using methodologies such as Scrum and Kanban

    Nice to Have

    • Mobile development experience, particularly using tools that promote reuse of web assets
    • Experience building and using REST APIs and distributed architectures
    • Advanced knowledge of modern JavaScript/TypeScript
    • Experience working with or in a library, museum, archive or other cultural heritage organization
    • Experience working with multiple formats such as audio, video, ebooks, and newspapers in the browser environment

    This position is full-time. DPLA is a geographically-distributed organization, with headquarters in Boston, Massachusetts. Ideally, this position would be situated in the Northeast Corridor between Washington, D.C. and Boston, but remote work based in other locations will also be considered.

    Like its collection, DPLA is strongly committed to diversity in all of its forms. We provide a full set of benefits, including health care, life and disability insurance, and a retirement plan. Starting salary is commensurate with experience.

    About DPLA

    DPLA connects people to the riches held within America’s libraries, archives, museums, and other cultural heritage institutions. Since launching in April 2013, it has aggregated more than 16 million items from 2,350 institutions. DPLA is a registered 501(c)(3) non-profit.

    To apply, send a letter of interest detailing your qualifications, a resume, and a list of 3 references in a single PDF to jobs@dp.la. Applications will be considered until the position is filled.

    A decade of blogging / David Rosenthal

    A decade ago today I posted Mass-market scholarly communication to start this blog. Now, 459 posts later, I would like to thank everyone who has read it, and especially those who have commented.

    Blogging is useful to me for several reasons:
    • It forces me to think through issues.
    • It prevents me forgetting what I thought when I thought through an issue.
    • It's a much more effective way to communicate with others in the same field than publishing papers.
    • Since I'm not climbing the academic ladder there's not much incentive for me to publish papers anyway, although I have published quite a few since I started LOCKSS.
    • I've given quite a few talks too. Since I started posting the text of a talk with links to the sources it has become clear that it is much more useful to readers than posting the slides.
    • I use the comments as a handy way to record relevant links, and why I thought they were relevant.
    There weren't a lot of posts until, in 2011, I started to target one post a week. I thought it would be hard to come up with enough topics, but pretty soon afterwards half-completed or note-form drafts started accumulating. My posting rate has accelerated smoothly since, and most weeks now get two posts. Despite this, I have more drafts lying around than ever.

    On the graphic design of rubyland.news / Jonathan Rochkind

    I like to pay attention to design, and enjoy good design in the world, graphic and otherwise. A well-designed printed page, web page, or physical tool is a joy to interact with.

    I’m not really a trained designer, but in my web development career I’ve often effectively been the UI/UX/graphic designer of the apps I work on. I always try to do the best I can (our users deserve good design) and to develop my skills by paying attention to graphic design in the world, reading up (I’d recommend Donald Norman’s The Design of Everyday Things, Robert Bringhurst’s The Elements of Typographic Style, and one free online one, Butterick’s Practical Typography), and trying to improve my practice, and I think my graphic design skills are always improving. (I also learned a lot looking at and working with the productions of the skilled designers at Friends of the Web, where I worked last year.)

    Implementing rubyland.news turned out to be a great opportunity to practice some graphic and web design. Rubyland.news has very few graphical or interactive elements; it’s a simple thing that does just a few things. The relative simplicity of what’s on the page, combined with it being a hobby side project — with no deadlines, no existing branding styles, and no stakeholders saying things like “how about you make that font just a little bit bigger” — made it a really good design exercise for me, where I could really focus on trying to make each element and the pages as a whole as good as I could in both aesthetics and utility, and develop my personal design vocabulary a bit.

    I’m proud of the outcome. While I don’t consider it perfect (I’m not totally happy with the typography of the mixed-case headers in Fira Sans), I think it’s pretty good typography and graphic design, probably my best design work. It’s nothing fancy, but I think it’s pleasing to look at and effective. I think that, probably like much good design, the simplicity of the end result belies the amount of work I put in to make it seem straightforward and unsophisticated. :)

    My favorite element is the page-specific navigation (and sometimes info) “side bar”.

    Screenshot 2017-04-21 11.21.45

    At first I tried to put these links in the site header, but there wasn't quite enough room for them, and I didn't want to make the header two lines — on desktop or wide tablet displays, I think vertical space is a precious resource not to be squandered. I also realized it was probably better for the header to hold only unchanging site-wide links, and to put page-specific links elsewhere.

    Perhaps encouraged by the somewhat hand-written look (especially of all-caps text) in Fira Sans, the free font I was trying out, I got the idea of trying to include these as a sort of ‘margin note’.

    Screenshot: the page-specific links rendered as a "margin note" to the left of the content.

    The CSS got a bit tricky, with screen-size responsiveness (flexbox is a wonderful thing). On wide screens, the main content is centered in the screen, as you can see above, with the links to the left: The ‘like a margin note’ idea.

    On somewhat narrower screens, where there’s not enough room to have margins on both sides big enough for the links, the main content column is no longer centered.

    Screenshot: on a somewhat narrower screen, the main content column is no longer centered.

    And on very narrow screens, where there’s not even room for that, such as most phones, the page-specific nav links switch to being above the content. On narrow screens, which are typically phones that are much higher than they are wide, it’s horizontal space that becomes precious, with some more vertical to spare.

    Screenshot: on a very narrow screen, the page-specific nav links sit above the content.

    Note that on really narrow screens, which is probably most phones held in vertical orientation, the margins on the main content disappear completely: you get the actual content, with its white background, running edge to edge. This seems an obvious thing to do on phone-sized screens: why waste any horizontal real estate with different-colored margins, or provide a visual distraction with even a few pixels of different-colored margin or border jammed up against the edge? I'm surprised it seems to be a relatively rare thing to do in the wild.

    Screenshot: at phone width, the content area runs edge to edge with no colored margins.

    Nothing too fancy, but I quite like how it turned out. I don't remember exactly what CSS tricks I used to make it so. And I still haven't really figured out how to write clear maintainable CSS code; I'm less proud of the actual messy CSS source code than I am of the result. :)
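
    For the curious, here is a minimal sketch of the general approach described above: a flexbox row plus a couple of media queries. It is not the actual rubyland.news stylesheet; the class names and breakpoint values are invented for illustration.

    /* Illustrative only: a "margin note" nav column that collapses on narrow screens. */
    .site-wrap {
      display: flex;                 /* nav and content sit side by side */
      justify-content: center;       /* wide screens: keep the column group centered */
    }
    .page-nav {
      flex: 0 0 10em;                /* narrow fixed column acting as the margin note */
      text-align: right;
      padding-right: 1em;
    }
    .page-content {
      flex: 0 1 36em;                /* readable measure for the main column */
      background: #fff;
      padding: 0 1em;
    }
    /* Not enough room for margins on both sides: give up centering the content. */
    @media (max-width: 56em) {
      .site-wrap { justify-content: flex-start; }
    }
    /* Very narrow screens (most phones): stack the nav above the content
       and let the content area run edge to edge. */
    @media (max-width: 36em) {
      .site-wrap { flex-direction: column; }
      .page-nav { flex: none; text-align: left; padding: 0 0.5em; }
      .page-content { flex: none; padding: 0; }
    }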



    Evergreen 3.0 development update #2 / Evergreen ILS

    Charles the male King Eider duck. Photo courtesy Arlene Schmuland.

    As of this writing, 34 patches have been committed to the master branch since the previous development update. Many of them were bugfixes in support of the 2.10.11, 2.11.4, and 2.12.1 releases.

    The 3.0 road map can be considered complete at this point, although as folks come up with additional feature ideas — and more importantly for the purposes of the road map, working code — entries can and should be added.

    One of the latest road map additions I’d like to highlight in this update is bug 1682923, where Kathy Lussier proposes to add links in the public catalog to allow users to easily share records via social media. This is an example of a case where the expedient way of doing it — putting in whatever JavaScript Twitter or Facebook recommends — would be the wrong way of doing it. Why? Because it’s up to the user to decide what they share; using the stock social media share JavaScript could instead expose users to involuntary surveillance of their habits browsing an Evergreen catalog. Fortunately, we can have our cake and eat it too by building old-fashioned share links.
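
    As a concrete illustration of what such old-fashioned share links can look like: the well-known public share endpoints take the target URL (and optionally a title) as query parameters, so no third-party JavaScript runs until the user actually clicks. The record URL and title below are made up for the example:

    // No tracking scripts load with the page; nothing happens until the user clicks.
    // The record URL and title are hypothetical examples.
    var recordUrl   = 'https://example-catalog.org/eg/opac/record/123';
    var recordTitle = 'An example record title';

    var twitterShare  = 'https://twitter.com/intent/tweet' +
        '?url='  + encodeURIComponent(recordUrl) +
        '&text=' + encodeURIComponent(recordTitle);

    var facebookShare = 'https://www.facebook.com/sharer/sharer.php' +
        '?u=' + encodeURIComponent(recordUrl);

    // These strings can simply be used as the href of ordinary <a> elements.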

    Another highlight for this week is offline mode… using only the web browser. This is something that would have been essentially impossible to implement back when Evergreen was getting off the ground, as, short of writing a bespoke plugin, there was no way to store either the offline transactions or the block list. Nowadays it's much easier; we can put pretty much whatever we like in a browser's IndexedDB. IndexedDB's API is pretty low-level, so Mike Rylander is working on using Google's Lovefield, which offers a "relational database for web apps" that can be serialized to IndexedDB. Here's a snippet of how Mike proposes to wrap Lovefield for use by the offline module:

    /**
     * Core Service - egLovefield
     *
     * Lovefield wrapper factory for low level offline stuff
     *
     */
    angular.module('egCoreMod')
    
    .factory('egLovefield', ['$q','$rootScope','egCore', 
                     function($q , $rootScope , egCore) { 
        
        var osb = lf.schema.create('offline', 1);
    
        osb.createTable('Object').
            addColumn('type', lf.Type.STRING).         // class hint
            addColumn('id', lf.Type.STRING).           // obj id
            addColumn('object', lf.Type.OBJECT).
            addPrimaryKey(['type','id']);
    

    Duck trivia

    The Cornell Lab of Ornithology operates the All About Birds website, a great resource for birders. If you find yourself waiting for the test suite to finish running, you can pass the time solving one of their online jigsaw puzzles and learn how to identify some diving ducks.

    Submissions

    Updates on the progress to Evergreen 3.0 will be published every Friday until general release of 3.0.0. If you have material to contribute to the updates, please get them to Galen Charlton by Thursday morning.

    Library Blog Basics / LITA

    I think we can probably agree that libraries are no longer exclusively geographical locations that our users come to: patrons also visit virtually. Many of their tasks at a library’s website are pragmatic — renewing books, checking their records, searching the online catalog and placing holds — but, increasingly, libraries are beginning to think of their online spaces as destinations for patrons; as communities of web denizens.

    Victoria recently discussed social media planning for libraries. Another way librarians can create community in the library’s virtual space is by designing and sustaining blogs.

    Last year, my library decided to expand our blog, from a repository of new titles lists and the occasional notice of a change in policy, to a content-rich space for library users to get to know staff, learn more about services, find topical book reviews, read about recent developments, and, yes, also to find the new titles lists they love.

    To start the process of revamping our blog space as a virtual living room of ideas, we went through a process that took about a month in total. This was to be an experiment of sorts, something we’d try out and see if it was interesting to our members.

    • A colleague and I brainstormed the logistics of changing the style and format of the blog; one example of the kind of decision we made is how many lines of a post are visible before the reader needs to click through to read the piece (our answer was four).
    • Then we settled on some metrics for measuring engagement — namely, pageviews (how often a post was clicked through to be read in full).
    • I researched blogs that other libraries were hosting, and analyzed how often they posted, recurring topics, and formats for posts (e.g., video, text, image).
    • Colleagues and I discussed the feasibility of posting regularly, considering our existing workloads. In my research, I’d found that most libraries were publishing an average of three or four pieces on their blogs each week, so we set our aim on posting two or three topical articles and a new titles list.
    • After settling on a frequency, we hashed out a list of possible topical categories for posts, based on what I’d seen on other libraries’ blogs: library services, events and programs, reading recommendations, history articles, and personal essays of staff.
    • I created a style guide so that we could develop a consistent tone, while preserving each author’s individual voice. The style guide indicates image guidelines (both size and sourcing), a list of topical categories, desired word count ranges, how to link to web sources and to materials in our catalog, and other technical specs.
    • I set a new posting schedule every six months. Each of sixteen contributors is scheduled to post once every two months, and we’re flexible on this schedule — if any writer is busy with other projects, they are free to skip that post deadline.
    • As we got this project underway, I hosted a peer-to-peer learning session in which I demonstrated all the features of our blog, a step-by-step how-to of posting, and a discussion of topics and categories of articles, followed by Q&A.

    Within a few months of beginning this experiment in institutional blogging, we measured results — blog pageviews had increased by over 300%! Anecdotally, we were hearing about some of the pieces at the reference desk. Members began to request books listed in the posts. Although our blog isn’t open for comments, we began to feel this sense of online community bleeding into the IRL world of the library building.

    Thus far, blogging has been a successful venture for us, allowing our patrons to share in the life of the library more fully by engaging with staff on a regular basis a few times a week. To be sure, members still visit our website to renew their books and check the library hours. But for those who are interested in content — whether they’re reading about our Chess Coordinator’s personal experience as a child coming to Mechanics’ Institute to watch the chess matches of Boris Spassky, or a readers’ advisory article on resistance-themed fiction, or a collection of the writerly quotes of Truman Capote — there’s also something on our site for these patrons to linger over. Our blog has become a virtual leisure space on the website, and, all things being equal, it’s something we plan to sustain over the long haul.

    Does your library have a blog? What tips do you have for developing one?

    Gender inequality on focus in São Paulo Open Data Day / Open Knowledge Foundation

    This blog is part of the event report series on International Open Data Day 2017. On Saturday 4 March, groups from around the world organised over 300 events to celebrate, promote and spread the use of open data. 44 events received additional support through the Open Knowledge International mini-grants scheme, funded by SPARC, the Open Contracting Program of Hivos, Article 19, Hewlett Foundation and the UK Foreign & Commonwealth Office. This event was supported through the mini-grants scheme under the human rights theme.

    This blog has been translated from this Portuguese original post.

    The International Open Data Day was celebrated for the seventh time on March 4th, 2017. It is always a good opportunity to present open data and show its benefits to newcomers. This year, as a joint initiative between PoliGNU, PoliGen, MariaLab and Transparência Hacker under the Human Rights theme, we focused on the discussion about women's participation in public policy development by looking at related open datasets.

    Our open data day activity was designed with the following 4 steps:

    1. Initial presentations and explanations;
    2. Open data initiatives mapping;
    3. Women's fights related initiatives mapping;
    4. Data analysis and visualization made by thematic groups.

    1st Step – Initial presentations and explanations

    We started with a brief introduction from each participant to allow everyone to get to know each other. This showed how diverse a group we were: engineers, developers, business consultants, designers, social assistants, teachers, journalists, students and researchers.

    Some of the participants had been involved with the Brazilian Freedom of Information Act (FOIA – 12.527/2012), so we had a short discussion about how this law was produced, its purposes and limitations. There was also a brief presentation about what open data is, focusing on the eight principles: Complete, Primary, Timely, Accessible, Machine-processable, Non-discriminatory, Non-proprietary, and License-free.

    2nd Step – Open Data initiatives mapping

    We started with a brainstorm in which everybody wrote open data-related solutions onto post-it notes. The solutions were grouped into four macro themes: Macro Politics, Local Politics, Services and Media.



    3rd Step – Women’s fights related initiatives mapping

    Then we had a second brainstorm in which initiatives connected to women's fights, claims and demands were mapped and added onto post-its. These initiatives did not have to be internet-related, as long as they were related to open data. The post-its were grouped into 5 themes: "Empowerment through Entrepreneurship", "Empowerment through Technology", "Visualisations", "Campaigns" and "Apps".

    4th Step – The teams’ work on Data Analysis and DataViz

    Two groups with complementary interests were formed: one focused on the underrepresentation of women in elected public positions, and another sought to address gender inequality from an economic perspective.

    The team that focused on the political perspective sought open data from the Superior Electoral Court on the Brazilian 2016 elections (available here). The group spent considerable time downloading and wrangling the database. But even so, they got interesting statistics such as the average expenditure per candidate: ~R$16,000 for male candidates and ~R$6,000 for female candidates. Although all parties and states have reached the 30% share of women candidates defined by law, women's campaigns receive much less investment. For example, all women's campaigns together did not reach 7% of the total amount of money in the Rio de Janeiro City Hall elections.

    Tables, graphs and maps were generated in Infogr.am, and the code produced is available on PoliGNU's GitHub. With this disparity in women's representation, it is undeniable that decision-making power is concentrated in the hands of rich white men. How is it possible to ensure the human rights of such a diverse society if decisions are taken by such a homogeneous group of rich white men, the majority of whom happen to be old? This and other questions remain, awaiting another hackday to delve into the data again.

    The team that focused on the economic perspective sought open data from the IBGE website on income, employed population, unemployed population, workforce, and individual microentrepreneur profiles, among others. Much of the open data available was structured in a highly aggregated form, which prevented any manipulation or analysis. As a consequence, this team had to redefine their question a few times.

    Some pieces of information deserve to be highlighted:

    • the growth rate of women's participation in the workforce (~40%) is higher than that of men (~20%);
    • the main segments of women’s small business are: (i) hairdressers, (ii) clothing and accessories sales, and (iii) beauty treatment activities;
    • the main segments of men’s small business are: (i) masonry works, (ii) clothing and accessories sales, and (iii) electrical maintenance.

    These facts show an existing sexual division of labour segments – if this happened only due to vocation, it would not be a problem. However, this sexual division of work reveals that some areas impose barriers that prevent women's entrance, even though these areas often provide better pay than those with a female majority.

    Graphs were generated in Infogr.am and the data used for the graphs is available here.

    Build relationships to advance advocacy / District Dispatch

    Rep. Grijalva at the press conference.

    This advocacy guest post was written by Arizona’s Pima County Public Library Director Amber Mathewson, whose member of Congress, Rep. Raul Grijalva (AZ-3), led the recent effort to gather 144 signatures on a “Dear Appropriator” letter in support of LSTA funding. To highlight the important local uses of Federal LSTA funding, Rep. Grijalva held a press conference in front of the library at the El Pueblo Neighborhood Center during Congress’ spring recess.

    A crowd gathered this week outside the El Pueblo Library in South Tucson, where Congressman Raúl Grijalva (D) and other library advocates discussed the possible effects of President Trump's proposed budget cuts — including the elimination of the IMLS — on libraries in Arizona and nationwide.

    A statement by ALA President Julie Todaro was read at the event, in which the American Library Association thanked Rep. Grijalva for his leadership in fighting for library funding.

    Manager of the El Pueblo Library Anna Sanchez was among those who spoke: “Public libraries play a significant role in maintaining and supporting our free democratic society. They are America’s great equalizers, providing everyone the same access to information and opportunities for success.”

    At Pima County Public Library, across 26 locations and 9,200 square miles in Southern Arizona, we passionately embrace that role in all that we do. From innovative programming helping entrepreneurs launch their dreams to high-tech youth centers where young adults engage in life-long learning, the Library gives everyone — regardless of age, gender, ethnicity or economic status — a chance to thrive.

    Sanchez added: “Libraries are truly the one place in America where the doors are open to everyone.”

    Pima County (Ariz.) Public Library Director Amber Mathewson

    While libraries nationwide form the cornerstone of our democratic society, they cannot afford to be complacent. As the current threat to funding demonstrates, it is critical that we dedicate ourselves to building relationships with elected officials. It is their votes that can drastically affect the future of libraries. In Southern Arizona’s 3rd Congressional District, we have a champion and steadfast ally in Congressman Grijalva. He recently secured 144 lawmakers’ signatures, across party lines, on a letter to Congress, urging against the cuts and requesting more than $186 million in funding for library programs. Last year, the letter was signed by 88 Representatives.

    Grijalva has helped to preserve and defend libraries, elevating library service in the local, state and national arenas. We must build upon that support and expand relationships with other policymakers. Like Rep. Grijalva, they are the ones who will help ensure a future in which libraries are valued as pillars not only of our communities but of our nation.

    Last year, as the President of the Arizona Library Association, I attended ALA's 42nd Annual National Library Legislative Day. Alongside State Librarian Holly Henley, citizen advocate Teresa Quale, and Legislative Chair Kathy Husser, I spoke with staff in all 11 of Arizona's House and Senate offices. We highlighted STEM programming and workforce development, answered funding questions, discussed collaborations and made plans for onsite visits.

    In-person meetings are immeasurably meaningful. They are vital if we wish lawmakers to view libraries and librarians as true changemakers. It is in those meetings where we are afforded the space to share the powerful stories of transformation that take place at our libraries every day.

    Pima County Public Library is an active partner in the Arizona State Library Association and the Arizona State Library, Archives and Public Records. These organizations are committed to our success and offer much to help us become our own best advocates.

    Staff training provides tools to communicate effectively, while easy-to-use resources guide us in identifying and securing meetings with elected officials.

    As a county-run system, the relationship we have with our Board of Supervisors is one of paramount importance. To be fully engaged in a library’s vision, one must see for themselves what the library makes possible.

    We regularly invite supervisors to attend events and to visit their district libraries. The location of our Library Board Retreat, held annually, alternates between districts, which helps strengthen those relationships.

    At Pima County Public Library, we believe it is our job to educate others so they can advocate on our behalf. The value we bring to our community is incalculable. Every day, we provide people with pathways to a better future. For many, we are a lifeline.

    “Free and public libraries are a great tradition in this nation,” said Grijalva. Thankfully, he vows to continue fighting on our behalf. But it is up to us to make sure others — from lawmakers to board members, volunteers to citizen advocates — do, too.

    As writer Caitlin Moran once said, “a library in the middle of a community is a cross between an emergency exit, a life raft and a festival.” We have seen it in our libraries and on the faces of our customers whom we serve. Now is the time to make their stories heard and to ensure our future.

    The post Build relationships to advance advocacy appeared first on District Dispatch.

    BCLA 2017: Hot Topic: Never Neutral: Ethics & Digital Collections / Cynthia Ng

    Notes from the hot topic panel. Tara Robertson, CAPER-BC Jarrett M. Drake, Princeton University Archives Michael Wynne, Washington State University How Libraries Can Trump the Trend to Make America Hate Again (Jarrett) I apologize in advance as it was difficult to take notes for this talk. campaign slogan: Make America Great Again; it signals to … Continue reading BCLA 2017: Hot Topic: Never Neutral: Ethics & Digital Collections

    Evergreen 2.10.11, 2.11.4, and 2.12.1 released / Evergreen ILS

    The Evergreen community is pleased to announce three maintenance releases of Evergreen, 2.10.11, 2.11.4, and 2.12.1.

    If you upgraded to 2.12.0 from an earlier version of Evergreen, please note that Evergreen 2.12.1 contains an important fix to the new geographic and chronological term browse indexes and requires that a browse reingest of your bibliographic records be run. If your Evergreen database started at 2.12.0, the browse reingest can be skipped; if you have not yet upgraded to 2.12.x, you need to run the browse reingest only once, after applying the 2.12.0 to 2.12.1 database updates.

    Evergreen 2.10.11 is the final regular release in the 2.10.x series. Further releases in that series will be made only if security bug fixes warrant, and community support will end on 17 June 2017.

    Please visit the downloads page to view the release notes and retrieve the server software and staff clients.

    async is more than await / Alf Eaton, Alf

    If you want to use await in JavaScript, it has to be inside a function marked as async.

    It's not just sugar, though: async means that the function always returns a Promise.

    async function () { return 'foo' }
    is equivalent to
    function () { return Promise.resolve('foo') }
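
    In other words, the caller always gets a Promise back, and await is just a convenient way to unwrap it inside another async function. A tiny illustration (the function names are arbitrary):

    async function getFoo() {
      return 'foo';                  // the caller actually receives a Promise
    }

    // Without await, consume the Promise the usual way:
    getFoo().then(value => console.log(value));  // logs 'foo'

    // With await (inside another async function), the value is unwrapped:
    (async () => {
      const value = await getFoo();
      console.log(value);                        // logs 'foo'
    })();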

    BCLA 2017: Maximizing Library Vendor Relationships: The Inside Scoop / Cynthia Ng

    Notes from the first afternoon session on developing library vendor relationships. Scott Hargrove, Jeff Narver, FVRL How to develop a relationship of trust, mutual respect, and partnership. You’re the customer, you can do whatever you want. Vendors are an integral part of a library’s business. Goal of the Presentation define and enhance optimal vendor/library relationships. … Continue reading BCLA 2017: Maximizing Library Vendor Relationships: The Inside Scoop

    Next CopyTalk webinar: The Durationator / District Dispatch

    Join us for the next CopyTalk webinar: code + copyright = the Durationator.


    Plan ahead! One hour CopyTalk webinars occur on the first Thursday of every month at 11 a.m. Pacific / 2 p.m. Eastern.

    For the last decade, the Copyright Research Lab at Tulane University has been building the Durationator — a tool, helpdesk and resource for solving copyright questions. Designed to be used by libraries, archives, museums, artists and content owners (and everyone else!), the Durationator Copyright System combines complex legal research + code + human experts. The Durationator looks at every kind of cultural work (poems, films, books, photographs, art, sound recordings) in every country and territory of the world. It even covers state sound recordings! Elizabeth Townsend Gard will discuss what was learned during the ten-year development process. She will touch on basic information that is available for determining whether a work is under copyright or in the public domain, and how to think through copyright questions at the help desk.

    Dr. Elizabeth Townsend Gard is an Associate Professor of Law and the Jill H. and Avram A. Glazer Professor of Social Entrepreneurship at Tulane University. She teaches intellectual property, art law, copyright and trademark law, advertising, property and law and entrepreneurship. Her research interests include fan fiction, the role of law in creativity in the content industries, and video games. She also fosters kittens, which makes Elizabeth an even more appealing speaker!

    Details:

    Date: Thursday, May 4, 2017

    Time: 2:00 p.m. (Eastern) / 11:00 a.m. (Pacific)

    Link: Go to http://ala.adobeconnect.com/copytalk/ and sign in as a guest. You’re in!

    This program is brought to you by the Office for Information Technology Policy’s copyright education subcommittee. An archive of previous CopyTalk webinars is available.

    The post Next CopyTalk webinar: The Durationator appeared first on District Dispatch.

    Who Does What? Defining the Roles & Responsibilities for Digital Preservation / Library of Congress: The Signal

    This is a guest post by Andrea Goethals, Manager of Digital Preservation and Repository Services at Harvard Library.

    Harvard Library’s digital preservation program has evolved a great deal since the first incarnation of its digital preservation repository (“the DRS”) was put into production in October 2000. Over the years, we have produced 3GB worth of DRS documentation – everything from security policies to architectural diagrams to format migration plans to user documentation. Some of this documentation helps me to manage this repository; in fact, there are a handful of documents I could not effectively do my job without. This post is about one of them – the “DRS Roles & Responsibilities” document.

    Like many other libraries, Harvard Library has gone through several reorganizations. Back in 2000, the DRS was solely managed by a library IT department called the Office for Information Services (OIS). When the Library's digital preservation program was officially launched in 2008, it was naturally set up within OIS. Then in 2012, digital preservation was integrated with its analog preservation counterpart in a new large department called Preservation, Conservation & Digital Imaging (PCDI). But the IT staff who managed the DRS' technical infrastructure were moved into a new department called Library Technology Services (LTS) within the university's central IT. So essentially the management and maintenance of the DRS would now be distributed across departments. Once the reorganization dust settled, it became clear that there was a lot of confusion throughout the Library, and even within the departments directly involved, over whose responsibility it was to do what, and even which responsibilities belonged to digital preservation vs. IT. For example, who creates the DRS enhancement roadmaps? Is that a responsibility of digital preservation or of the system development manager? And how should decisions be made about preservation storage? Clearly that should be influenced by both digital preservation and IT best practices.

    In response, in 2013, a small group of us met to consider a first draft of what has now come to be known as the DRS Roles & Responsibilities document. It was essential to the eventual buy-in on the division of responsibilities that the group was composed of the heads of the two departments (PCDI and LTS), as well as myself (the manager of the digital preservation program and the DRS) and the manager of the library's system development. Over the course of a few meetings we refined the document into something we all agreed on.

    Since then we have continued to refine it whenever it's clear that we forgot to define who has responsibility for something, or when multiple departments think they are responsible for the same thing. Having this document has proved enormously helpful not only in making day-to-day operations more efficient but also in improving working relationships by removing contention over responsibilities. Most recently we used the document as a guide for deciding which information belongs on websites managed by Digital Preservation Services vs. LTS. It has also proved useful as a communication tool. Now we can better explain to other staff who to go to for what.

    This document has now been used as a model within Harvard Library in other areas, to clarify responsibilities for a functional area that is distributed across departments. My hope in sharing this is that it might serve as a useful tool for other institutions – to clarify digital preservation responsibilities distributed across departments, or possibly even among different cooperating institutions.

    Page one of the DRS Roles & Responsibilities document.

    Version 6 of the DRS Roles & Responsibilities can be found at http://bit.ly/2p3kqgI

    Linked Data is People: Building a Knowledge Graph to Reshape the Library Staff Directory / Code4Lib Journal

    One of our greatest library resources is people. Most libraries have staff directory information published on the web, yet most of this data is trapped in local silos, PDFs, or unstructured HTML markup. With this in mind, the library informatics team at Montana State University (MSU) Library set a goal of remaking our people pages by connecting the local staff database to the Linked Open Data (LOD) cloud. In pursuing linked data integration for library staff profiles, we have realized two primary use cases: improving the search engine optimization (SEO) for people pages and creating network graph visualizations. In this article, we will focus on the code to build this library graph model as well as the linked data workflows and ontology expressions developed to support it. Existing linked data work has largely centered around machine-actionable data and improvements for bots or intelligent software agents. Our work demonstrates that connecting your staff directory to the LOD cloud can reveal relationships among people in dynamic ways, thereby raising staff visibility and bringing an increased level of understanding and collaboration potential for one of our primary assets: the people that make the library happen.

    Recommendations for the application of Schema.org to aggregated Cultural Heritage metadata to increase relevance and visibility to search engines: the case of Europeana / Code4Lib Journal

    Europeana provides access to more than 54 million cultural heritage objects through its portal Europeana Collections. It is crucial for Europeana to be recognized by search engines as a trusted authoritative repository of cultural heritage objects. Indeed, even though its portal is the main entry point, most Europeana users come to it via search engines. Europeana Collections is fuelled by metadata describing cultural objects, represented in the Europeana Data Model (EDM). This paper presents the research and consequent recommendations for publishing Europeana metadata using the Schema.org vocabulary and best practices. Schema.org metadata embedded in HTML is consumed by search engines to power rich services (such as the Google Knowledge Graph). Schema.org is an open and widely adopted initiative (used by over 12 million domains) backed by Google, Bing, Yahoo!, and Yandex for sharing metadata across the web. It underpins the emergence of new web techniques, such as so-called Semantic SEO. Our research addressed the representation of the embedded metadata as part of the Europeana HTML pages and sitemaps so that the re-use of this data can be optimized. The practical objective of our work is to produce a Schema.org representation of Europeana resources described in EDM that is as rich as possible and tailored to Europeana's realities and user needs, as well as to the search engines and their users.

    Autoload: a pipeline for expanding the holdings of an Institutional Repository enabled by ResourceSync / Code4Lib Journal

    Providing local access to locally produced content is a primary goal of the Institutional Repository (IR). Guidelines, requirements, and workflows are among the ways in which institutions attempt to ensure this content is deposited and preserved, but some content is always missed. At Los Alamos National Laboratory, the library implemented a service called LANL Research Online (LARO), to provide public access to a collection of publicly shareable LANL researcher publications authored between 2006 and 2016. LARO exposed the fact that we have full text for only about 10% of eligible publications for this time period, despite a review and release requirement that ought to have resulted in a much higher deposition rate. This discovery motivated a new effort to discover and add more full text content to LARO. Autoload attempts to locate and harvest items that were not deposited locally, but for which archivable copies exist. Here we describe the Autoload pipeline prototype and how it aggregates and utilizes Web services including Crossref, SHERPA/RoMEO, and oaDOI as it attempts to retrieve archivable copies of resources. Autoload employs a bootstrapping mechanism based on the ResourceSync standard, a NISO standard for resource replication and synchronization. We implemented support for ResourceSync atop the LARO Solr index, which exposes metadata contained in the local IR. This allowed us to utilize ResourceSync without modifying our IR. We close with a brief discussion of other uses we envision for our ResourceSync-Solr implementation, and describe how a new effort called Signposting can replace cumbersome screen scraping with a robust autodiscovery path to content which leverages Web protocols.

    Outside The Box: Building a Digital Asset Management Ecosystem for Preservation and Access / Code4Lib Journal

    The University of Houston (UH) Libraries made an institutional commitment in late 2015 to migrate the data for its digitized cultural heritage collections to open source systems for preservation and access: Hydra-in-a-Box, Archivematica, and ArchivesSpace. This article describes the work that the UH Libraries implementation team has completed to date, including open source tools for streamlining digital curation workflows, minting and resolving identifiers, and managing SKOS vocabularies. These systems, workflows, and tools, collectively known as the Bayou City Digital Asset Management System (BCDAMS), represent a novel effort to solve common issues in the digital curation lifecycle and may serve as a model for other institutions seeking to implement flexible and comprehensive systems for digital preservation and access.

    Medici 2: A Scalable Content Management System for Cultural Heritage Datasets / Code4Lib Journal

    Digitizing large collections of Cultural Heritage (CH) resources and providing tools for their management, analysis and visualization is critical to CH research. A key element in achieving the above goal is to provide user-friendly software offering an abstract interface for interaction with a variety of digital content types. To address these needs, the Medici content management system is being developed in a collaborative effort between the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, Bibliotheca Alexandrina (BA) in Egypt, and the Cyprus Institute (CyI). The project is pursued in the framework of European Project “Linking Scientific Computing in Europe and Eastern Mediterranean 2” (LinkSCEEM2) and supported by work funded through the U.S. National Science Foundation (NSF), the U.S. National Archives and Records Administration (NARA), the U.S. National Institutes of Health (NIH), the U.S. National Endowment for the Humanities (NEH), the U.S. Office of Naval Research (ONR), the U.S. Environmental Protection Agency (EPA) as well as other private sector efforts. Medici is a Web 2.0 environment integrating analysis tools for the auto-curation of un-curated digital data, allowing automatic processing of input (CH) datasets, and visualization of both data and collections. It offers a simple user interface for dataset preprocessing, previewing, automatic metadata extraction, user input of metadata and provenance support, storage, archiving and management, representation and reproduction. Building on previous experience (Medici 1), NCSA, and CyI are working towards the improvement of the technical, performance and functionality aspects of the system. The current version of Medici (Medici 2) is the result of these efforts. It is a scalable, flexible, robust distributed framework with wide data format support (including 3D models and Reflectance Transformation Imaging-RTI) and metadata functionality. We provide an overview of Medici 2’s current features supported by representative use cases as well as a discussion of future development directions

    An Interactive Map for Showcasing Repository Impacts / Code4Lib Journal

    Digital repository managers rely on usage metrics such as the number of downloads to demonstrate research visibility and impacts of the repositories. Increasingly, they find that current tools such as spreadsheets and charts are ineffective for revealing important elements of usage, including reader locations, and for attracting the targeted audiences. This article describes the design and development of a readership map that provides an interactive, near-real-time visualization of actual visits to an institutional repository using data from Google Analytics. The readership map exhibits the global impacts of a repository by displaying the city of every view or download together with the title of the scholarship being read and a hyperlink to its page in the repository. We will discuss project motivation and development issues such as authentication with Google API, metadata integration, performance tuning, and data privacy.

    DPLA Celebrates Continued Growth and Plans for the Future at DPLAfest 2017 / DPLA

    Chicago, IL— DPLAfest 2017, the fourth annual event bringing together members of the broad DPLA community, officially kicked off Thursday morning at Chicago Public Library’s Harold Washington Library Center. In addition to Chicago Public Library, DPLAfest 2017 is co-hosted by the Black Metropolis Research Consortium, Chicago Collections, and the Reaching Across Illinois Library System (RAILS). Over the next two days, over 350 participants, representing diverse fields including libraries, archives, museums, technology, education, and more, will come together to learn, converse, and collaborate in a broad range of sessions, workshops, and working sessions. At this morning’s opening plenary, DPLAfest-goers received a warm welcome to the city of Chicago from Chicago Public Library Commissioner and CEO Brian Bannon as well as greetings from Amy Ryan, Chair of DPLA’s Board of Directors, and a report on DPLA’s recent milestones and new initiatives from DPLA Executive Director Dan Cohen.

    Following the welcoming remarks, panelists Luis Herrera, City Librarian of San Francisco, Nell Taylor, Executive Director of the Read/Write Library, and Jennifer Brier, Associate Professor of History and Gender and Women’s Studies at the University of Illinois Chicago, discussed community archives, the future of open access to library, archive, and museum collections, and intersections between local community practice and DPLA’s national network in a panel entitled, “Telling Stories of Who We Are,” moderated by DPLA Board Member Sarah Burnes.

    Selected announcements from the DPLAfest opening plenary include:

    Continued growth of the DPLA network

    DPLA celebrated the continued expansion of its partner network over the past year with the addition of new collections from Service Hubs in Wisconsin, Illinois, and Michigan as well as newly accepted applications from Service Hubs representing Ohio, Florida, Montana, Colorado and Wyoming, and the District of Columbia. In addition to its growing list of Service Hubs, DPLA was proud to officially welcome the Library of Congress as a contributing Content Hub in November 2016. With these new collections and others from established partners, DPLA now makes over 16 million items from 2,350 libraries, archives, and museums freely discoverable for all. With the growth of the collections, use of the site has grown dramatically, with new analytics implemented this year showing the important role of both search and curated projects like the Exhibitions and Primary Source Sets in ensuring discovery of and engagement with partner collections.

    Implementing Rights Statements

    Launched one year ago at DPLAfest 2016, RightsStatements.org has been well received by cultural heritage professionals within the DPLA network and around the world. Partners across the DPLA network have begun working towards implementation of the new statements, which will be the subject of the Turn the Rights On session Thursday at 3:30pm CT. RightsStatements.org partners DPLA and Europeana also look forward to welcoming new international partners to the project over the coming months. Digital libraries in Brazil, Australia, New Zealand, and India will be joining the project, with interest from additional libraries on every continent.

    Reading the Ebooks Landscape

    DPLA celebrated continued success and new initiatives towards its mission of maximizing access to ebooks. Open eBooks, a collaboration between DPLA, The New York Public Library, FirstBook, and Clever, with support from Baker and Taylor, marked its first full year in February  2017, during which children across the country read over 1.5 million ebooks using the app. In addition to Open eBooks, DPLA announced a $1.5 million grant from the Sloan Foundation in January to support the development of DPLA’s mobile-friendly open collection of ebooks and exploration into new ways of facilitating discovery of free, open content; unlocking previously gated content through new licensing and/or access models; and facilitating better purchasing options for libraries.

    Expanding our Education Work

    Since 2015, DPLA has collaborated with an Education Advisory Committee of ten teachers in grades 6-12 and higher education to design and curate 100 Primary Source Sets about topics in history, literature, and culture using DPLA partner content. These educators come from a variety of geographic and institutional settings including public K-12 schools, community colleges, school district administration, and research universities.

    In 2017-2018, DPLA will continue to work with these ten teachers and add six more members from higher education with funding from the Teagle Foundation. With this team, DPLA will continue to develop primary source sets and build and pilot a curriculum for professional development. Professional development workshops with educators in diverse institutional settings will help instructors form next steps for implementing DPLA and the Primary Source Sets into their teaching practices and course syllabi.

    Announcing our Values Statement

    In today’s society, where fake news abounds, funding for arts and humanities programs is at risk, inequality is expanding, and our nation continues to wrestle with questions of belonging and inclusion for many people, we at DPLA believe it is more important than ever to be clear about who we are and what we value as an organization. As such, we are proud to unveil DPLA’s new Values Statement, which outlines the following core commitments of our organization and our staff:

    • A Commitment to our Mission
    • A Commitment to Constructive Collaboration
    • A Commitment to Diversity, Inclusion, and Social Justice
    • A Commitment to the Public and our Community
    • A Commitment to Stewardship of Resources

    The ideas captured in the Values Statement emerged from discussions among our entire staff, with input from our board, about the mission of our institution, the ways we approach our work, and why we as professionals and individuals are committed to the essential goals of DPLA. For each tenet of the statement, we have outlined the core principle to which we aspire as well as specific ways that each value drives our everyday practice. We intend for this document to be a dynamic guide for our practice going forward and a reference against which we can track our progress as we continually strive to embody these values throughout the institution.

    Volunteer Opportunity: Join the DPLA Community Reps

    DPLA is currently accepting applications for the next class of Community Reps, a grassroots network of enthusiastic volunteers who help connect DPLA with members of their local communities through outreach activities. DPLA staff have worked with hundreds of terrific reps from diverse places and professions so far and look forward to welcoming a new cohort this spring. The application will remain open until Monday, April 24, 2017.

    Welcome to DPLAfest Awardees

    Cohen introduced and welcomed the five talented and diverse members of the extended DPLA community who are attending DPLAfest 2017 as recipients of the inaugural DPLA travel awards. After receiving a tremendous response to the call from many excellent candidates, DPLA was pleased to award travel support to Tommy Bui of Los Angeles Public Library, Amanda H. Davis of Charlotte Mecklenburg Library, Raquel Flores-Clemons of Chicago State University, Valerie Hawkins of Prairie State College, and Nicole Umayam of Arizona State Library.

    Thanks to our Hosts and Sponsors

    DPLA would like to acknowledge and thank the gracious DPLAfest 2017 host organizations, Chicago Public Library, Black Metropolis Research Consortium, Chicago Collections, and Reaching Across the Illinois Library System (RAILS) as well as the generous sponsors of DPLAfest 2017, Datalogics, OCLC, Lyrasis, Sony, and an anonymous donor.

    DPLA invites all participants and those interested in joining the conversation from afar to follow and contribute to the conversation on Twitter using #DPLAfest.

    For additional information or media requests, please contact Arielle Perry, DPLA Program Assistant at info@dp.la.

    Right to Education Index 2016 Data Now Live! / Open Knowledge Foundation

    RESULTS Educational Fund and Open Knowledge International are pleased to present the 2016 data from the Right to Education Index (RTEI), a global accountability initiative that aims to ensure that all people, everywhere, enjoy the right to a quality education. RTEI is an action research project using a monitoring tool based on international human rights law and collecting data about the right to education with national civil society organizations. In 2016 RTEI approached us to develop a platform to facilitate an open public dialogue on the right to education across the world, in line with our mission to empower civil society organisations to use open data to improve people's lives. The current release brings a number of improvements to the site, along with the new data. Civil society organizations, advocates, researchers, and policy makers then use the data in national advocacy campaigns and to better understand national satisfaction of the right to education. The resulting data is now available at www.rtei.org.

    RTEI 2016 collected data with civil society partners in 15 countries.

    Civil society partners completed the RTEI Questionnaire. Their findings were peer reviewed by two national independent researchers and provided to government officials for their feedback and comments.

    The Questionnaire consists of five themes (Governance, Availability, Accessibility, Acceptability, and Adaptability, see link). Index scores are derived as the average of theme scores. Theme scores are an average of subtheme scores, which are in turn calculated by averaging representative data points; a small illustrative sketch of this averaging follows the list below. Unique values are also calculated to account for:

    • Missing data;
    • National minimum standards concerning pupil-per-classroom, pupil-per-trained teacher, pupil-per-toilet, and pupil-per-textbook ratios;
    • Disaggregated outcome and enrollment data by gender, rural and urban disparity, income quintiles, and disability status;
    • Progressively realized rights weighted by GDP per capita purchasing power parity (PPP).
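
    To make the averaging concrete, here is a small sketch of the basic calculation, before the adjustments listed above are applied. The themes, subthemes and numbers are invented for illustration; real scores come from the questionnaire data on rtei.org.

    // Illustrative only: invented numbers, plain averaging as described above.
    const average = xs => xs.reduce((a, b) => a + b, 0) / xs.length;

    const themes = {
      Governance:   { 'Education law': [92, 88], 'Financing': [75, 81] },
      Availability: { 'Classrooms':    [60, 72], 'Teachers':  [85, 90] }
      // ...remaining themes (Accessibility, Acceptability, Adaptability) omitted
    };

    // Subtheme score = average of its data points; theme score = average of its subthemes.
    const themeScores = Object.fromEntries(
      Object.entries(themes).map(([name, subthemes]) =>
        [name, average(Object.values(subthemes).map(average))]
      )
    );

    // Index score = average of the theme scores.
    const indexScore = average(Object.values(themeScores));
    console.log(themeScores, indexScore.toFixed(1));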

    Further information about calculations is available on rtei.org and will be detailed in a forthcoming RTEI technical brief.

    The resulting data for 2016 is now available at www.rtei.org. In 2016, RTEI found that Australia, Canada, and the UK had the most robust framework for the right to education across the five themes represented in RTEI; Governance, Availability, Accessibility, Acceptability, and Adaptability. Each theme is made up of subthemes specifically referenced in the international right to education framework. Australia’s, Canada’s, and the UK’s scores were highest on Availability, reflecting the infrastructure and resources of schools, including textbooks, sanitation, classrooms, and pupil-per-trained teacher ratios.

    On the Index's other end, Chile, the DRC, and Zimbabwe struggled to satisfy indicators monitored in RTEI 2016. These countries had low Acceptability or Adaptability scores, signifying weaker education systems and difficulty addressing progressively realized rights, such as the rights of children with disabilities. For all RTEI 2016 participating countries, the lowest scoring theme was Adaptability, focused on education for children with disabilities, out-of-school children, and out-of-school educational opportunities. Outside of Adaptability indicators, the Classrooms subtheme had the lowest average score of all Availability subthemes across all countries because of the lack of infrastructure data available in RTEI 2016 and high pupil-per-classroom ratios in several countries. RTEI 2016 also included an analysis of education financing, given increased attention to equitable resource allocation and access worldwide.

    Research to Action

    In 2017, RTEI enters the advocacy phase of data application. In January 2017, RESULTS Educational Fund invited ten current RTEI partners from the Global South to submit proposals to implement in-country advocacy strategies in 2017 using RTEI 2016 findings.  RESULTS and RTEI Advisory Group members reviewed applications and selected the following five RTEI 2017 Advocacy Partners:

    1. Honduras –  Foro Dakar will use data collected in RTEI 2016 related to SDG 4 to focus on national education sector planning, discrimination, and monitoring progress towards SDG 4.
    2. Indonesia – New Indonesia will use data about teacher quality and education for children with disabilities to implement strategies focused on improving national training programs related to inclusive education to further the right to education.
    3. Palestine – Teacher Creativity Center (TCC) will use data related to SDG 4 to measure progress towards SDG 4 through shadow reporting to UNESCO, the UN Special Rapporteur on the right to education, the Ministry of Education in Palestine, and local media.
    4. Tanzania – HakiElimu will use data specifically about girls’ education and inclusive education to focus advocacy on evidence-based policies that promote girls’ education, inclusive, and quality education.
    5. Zimbabwe – Education Coalition of Zimbabwe (ECOZI) will highlight RTEI 2016 findings about continued use of corporal punishment in schools to develop and disseminate alternative policy on positive discipline in schools, training Parliamentarians on corporal punishment issues, and submitting policy recommendations on corporal punishment and free education.

    RESULTS and other RTEI partners look forward to supporting these advocacy strategies throughout 2017. Be on the lookout for in-country advocacy updates from our partners posted on www.rtei.org.

    Our Values / DPLA

    In today’s society, where fake news abounds, funding for arts and humanities programs is at risk, inequality is expanding, and our nation continues to wrestle with questions of belonging and inclusion for many people, we at DPLA believe it is more important than ever to be clear about who we are and what we value as an organization. As such, we are proud to unveil DPLA’s new Values Statement, which outlines the following core commitments of our organization and our staff:

    • A Commitment to our Mission
    • A Commitment to Constructive Collaboration
    • A Commitment to Diversity, Inclusion, and Social Justice
    • A Commitment to the Public and our Community
    • A Commitment to Stewardship of Resources

    The ideas captured in the Values Statement emerged from discussions among our entire staff, with input from our board, about the mission of our institution, the ways we approach our work, and why we as professionals and individuals are committed to the essential goals of DPLA. For each tenet of the statement, we have outlined the core principle to which we aspire as well as specific ways that each value drives our everyday practice. We intend for this document to be a dynamic guide for our practice going forward and a reference against which we can track our progress as we continually strive to embody these values throughout the institution.

    View the full Values Statement to read more about each of our core commitments and how it shapes our practice, today and in the future.

    Hello world! / LibUX

    Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

    Designing & Building for Ourselves / LITA

    I’m in the throes of designing a new help desk for our department that will serve to triage help tickets for approximately 15,000 employees. This has been a major undertaking, and retaining the confidence that I can get it done has been a major challenge. However, it’s also been a really great exercise in forcing me to be introspective about how I design my own ethics and culture into the system.

    When we design and build systems for ourselves, we design for what we need, and if you’re like me, you also aim to design for simplicity and the least work possible that still accomplishes your end goal. When I’m designing for myself, I find that I am more willing to let go of a feature I thought I needed because another one will do the job okay, and okay was enough, especially if it means less work for me.

    Designing for ourselves is, in a way, easier than designing for someone else. We essentially know what we need; there's no guesswork or communication gap. Yes, we can get caught up in semantics about how we may not actually understand what we need, and thus may build something that doesn't achieve the end goal we had. But hopefully, in the process, we evolve and learn to design and build what we really need.

    Also, designing for ourselves forces us to let go on the complex and unnecessary features and build a more simple product that will hopefully be easier to maintain over time. I do not know a time while working in libraries where we (library folk) were not hooting and hollering about the awfulness of the library technology ecosystem. As I mentioned, I’m in the depths of designing a new service desk for my team (in JIRA Service Desk), and I find myself asking “Do we REALLY need this? Can this complex setup be accomplished through a different, simpler method? Can we maximize the use of this setup and use it in more than just one functional way?” When I have to do all the legwork, I think more carefully about essentials and nice-to-haves than when we hired someone else and I was the “ideas person” – and probably much less flexible on the tedious items.

    If the load that I carry and my intimate connection to the build force me to think differently about what we do and don’t need, this suggests that maybe we have the wrong people designing library systems. Or at least maybe we don’t have the right people involved throughout the design and build process. Vendors need to include librarians who work in the trenches in the design process. There needs to be representation from the academic, public, corporate, museum, medical, special, etc. communities,  at a level that is more than just “We’re looking for feedback we might incorporate in the future!”  I don’t yet have an answer to how we can accomplish that, but I have ideas on where to start. Stay tuned for “Why you should leave your library and work for the ‘Dark Side.’”

    The flip side to this is that maybe my intimate connection with the workload also encourages me to overlook problems and take shortcuts that seem fine but really ought to be examined carefully. What comes to mind is a presentation I refer to frequently: Andreas Orphanides’ Code4Lib 2016 talk, Architecture is politics: The power and the perils of systems design [1]. Design persuades; system design reflects the designer’s values and the cultural context [Lesson 2 in Andreas’ talk].

    Fortunately for me, this came to light while I’m still in the middle of the design process. It’s not an ideal time, because I’ve already done a lot of work, but the opportunity to step back, adjust and try again is within easy reach. I’ve started reexamining our workflows, frontend and backend. It’s going to take more time; had I thought sooner about the shortcuts I was taking and their impact on the user experience, maybe I’d have less reexamining to do.

    When we design for ourselves, how often do we compromise on something because it makes the build easier? Does our desire to just get the job done cause us to drop features that might have made the design stronger, because leaving them out means less work in the end? If someone else were building your design, would you demand that the feature be included, even though it’s difficult to do? Does our intimate connection with the system design encourage us to continue to build in poor values? Can we learn to be more empathetic [2] in our design process when we’re designing for ourselves?

    I hope I’ve encouraged you to consider what you may be missing when you design a system for yourself, and what habits you’re creating that will influence how you design a system for someone else.
    Cheers, Whitni


    [1] Slide deck: http://bit.ly/dre_code4lib2016  Video of Talk: https://youtu.be/P03kD_Q5qcU?t=38m36s

    [2] Empathy on the Edge http://bit.ly/erl17_empathyontheedge

    This is how Guatemala joined the worldwide celebration of Open Data / Open Knowledge Foundation

    This blog is part of the event report series on International Open Data Day 2017. On Saturday 4 March, groups from around the world organised over 300 events to celebrate, promote and spread the use of open data. 44 events received additional support through the Open Knowledge International mini-grants scheme, funded by SPARC, the Open Contracting Program of Hivos, Article 19, Hewlett Foundation and the UK Foreign & Commonwealth Office. This event was supported through the mini-grants scheme under the human rights theme.

    This blog has been translated from this Spanish blog at Medium.

    In parallel, across five continents, activists, public officials and researchers gathered for 345 different activities on #OpenDataDay 2017. This is what we did in Guatemala.

    It was a Saturday, and it was early, but that didn’t prevent us from gathering to talk about data. The morning of March 4 – Open Data Day – started with two presentations by civil society researchers who reminded us that the conversation about open data isn’t only a matter for government.

    To start, Ronal Ochaeta from Open Knowledge in Guatemala reminded us that information can contribute to the Sustainable Development Goals. He spoke about the need to close the gap between what technology can create and what users need: “it’s useless having a really good open data portal that people don’t use”. Ochaeta emphasized the power of data literacy and how it should be adapted for a broad population, so that people can turn data into knowledge that matters to them.

    Silvio Gramajo, an experienced researcher of the public sector, gave us a list of ideas on how to generate data that the open government initiatives aren’t producing. We also need to develop indicators to measure their performance. Gramajo also called on us to push not only government but other sectors that can join the wave, like universities, think tanks, colleges and companies.

    After these presentations we changed direction and turned from civil society to government, with three institutions sharing their progress on the matter.

    Zaira Mejía, in charge of the Open Government Partnership in Guatemala, emphasized that the portal gobiernoabierto.gob.gt shows how the Third Action Plan – a document created by the government and civil society organizations to promote transparency, accountability and citizen participation – is advancing. On the website, users can browse the 5 core lines of work (access to information, citizen participation, innovation, fiscal transparency and accountability) and the 22 commitments that were made, follow how these goals are progressing, and hold this government initiative accountable.

    Later, Carlos Dubón, the director of the access to information unit of the Ministry of Finance, mentioned that they have managed to change their information delivery policy. As a result, they can respond with editable files instead of PDFs to approximately 80% of the requests they receive. He specified that even though they are making progress, they not only have access and availability gaps but also need to let citizens know what they can request and what that information means. In one word: understanding.

    Last but not least, Edgar Sabán from the National Secretariat of Science and Technology mentioned that they are working on a unified open data portal (one of the Open Government Partnership commitments) and noted that it will use open source code.

    Carlos Dubón, the director of the access to information unit of the Ministry of Finance; Zaira Mejía, in charge of the Open Government Partnership in Guatemala; and Edgar Sabán from the National Secretariat of Science and Technology

    Attendees included journalists, communications and political science students, and officials in charge of processing information requests, as well as other people interested in the subject. Along with Red Ciudadana and Escuela de Datos we managed to bring together a community to meet and learn.

    And so, between chatting, drinking coffee and having some pastries, the morning went by. What’s next is working to build a culture of access and transparency from our own positions and pushing for the commitments to be fulfilled. Hopefully, by Open Data Day 2018 we’ll have more progress and more projects to show.

    We also hope that next year’s group photo will have more people in it. The more, the merrier ;)

    The DSpace 7 Project–A Simple Summary / DuraSpace News

    From Tim Donohue, DSpace Tech Lead, and the DSpace 7 UI Outreach Group. DSpace 7 development will be highlighted at OR2017 next month, including demonstrations. Recordings and slides from the recent Hot Topics webinar series, "Introducing DSpace 7: Next Generation UI", are available here.

    Background to the DSpace 7 project