Blogs and feeds of interest to the Code4Lib community, aggregated.
March 11, 2014
H.R. 4186, the Frontiers in Innovation, Research, Science and Technology (FIRST) Act, was introduced yesterday in the House of Representatives by Chairman Lamar Smith (R-TX) and Rep. Larry Bucshon (R-IN) and referred to the Committee on Science, Space, and Technology, and to the Committee on Small Business. The ALA stands with SPARC in opposing Section 303 of this bill, a provision that would create unnecessary obstacles to the public’s ability to access research funded by taxpayers.
Now is not the time to create unwarranted challenges to accessing taxpayer-funded research. After years of effort, the open access community celebrated the White House Directive on Public Access to the Results of Federally Funded Research and the provision in the FY14 Omnibus Appropriations Act that expands the National Institutes of Health’s public access program to include the Departments of Labor, Education, and Health and Human Services. These programs would more rapidly make the results of this research available to the public, while Section 303 would create obstacles for federal agencies as they endeavor to participate.
Among other things, Section 303 would:
- Establish a minimum allowed embargo period of 24 months, and allow its further extension to 36 months. No provisions to reduce embargo periods are included in this legislation.
- Sanction simply linking to the full text of articles on publishers’ websites, without ensuring that federal agencies retain a copy of the text of the articles reporting on their funded research.
- Require federal agencies to repeat the work that they have already done in drafting plans for policies as required by the White House Directive on public access, and introduce an additional 18-month minimum delay while this work is duplicated.
Please take a moment to contact your Representative to express your dismay at a bill that would delay the public’s right to information.
The post Road blocks to federally funded research appeared first on District Dispatch.
by jmcgilvray at March 11, 2014 05:01 PM
Today, the American Library Association and the Internet Archive joined forces to file a “friend of the court” brief in David Leon Riley v. State of California and United States v. Brima Wurie, two Supreme Court cases examining the constitutionality of cell phone searches after police arrests. In the amicus brief, both nonprofit organizations argue that warrantless cell phone searches violate privacy principles protected by the Fourth Amendment.
Both cases began when police officers searched the cell phones of defendants Riley and Wurie without obtaining a warrant. The searches recovered texts, videos, photos, and telephone numbers that were later used as evidence. The Supreme Court of California found the cell phone search lawful in Riley’s case, but the U.S. Court of Appeals for the First Circuit, in Boston, reached the opposite conclusion and reversed Wurie’s conviction.
In the brief, the Internet Archive and the American Library Association argued that reading choices are at the heart of the expectation of personal privacy guaranteed by the Fourth Amendment. Allowing police officers to rummage through the smartphones of arrestees is akin to giving government officials permission to search a person’s entire library and reading history.
Today, ALA and Internet Archive leaders weighed in on the court case. Barbara Stripling, president of the American Library Association:
Today’s cell phones are much more than simple dialing systems—they are mobile libraries, holding our books, photos, banking information, favorite websites and private conversations. The Constitution does not give law enforcement free rein to search unlawfully through our private records.
Brewster Kahle, founder and digital librarian of Internet Archive:
The fact that technology has made it easy to carry voluminous sensitive and personal information in our pockets does not suddenly grant law enforcement unchecked availability to it in the case of an arrest. Constitutional checks are placed on the search of, for instance, a personal physical library and these checks should also apply to the comparably vast and personally sensitive stores of data held on our phones.
William Jay, Goodwin Procter partner and counsel of record on the amicus brief, added:
The Supreme Court has recognized that people don’t lose all privacy under the Fourth Amendment when they’re arrested. And one of the strongest privacy interests is the right not to have the government peer at what you’re reading, without a good reason and a warrant. We are pleased to have the chance to represent both traditional and Internet libraries, which have a unique ability to show the Supreme Court why our electronic bookshelves deserve the same protection as our home bookshelves.
“In my experience as a former federal prosecutor, a person’s smartphone is one of the things law enforcement are most eager to search after an arrest,” said Goodwin Procter partner Grant Fondo, a co-author of the brief. “This is because it holds so many different types of important personal information, telling law enforcement what the arrested person has been doing over the past few weeks, months, and even years—who they have been in contact with, what they read, and where they have been. Simply because this information is now all contained in a small smartphone we carry with us, rather than at home, should not take the search of this information outside the scope of one of our most important Constitutional protections—the right to protection from warrantless searches.”
The post Supreme Court: “Stop warrantless cell phone searches” appeared first on District Dispatch.
by Jazzy Wright at March 11, 2014 04:57 PM
Yes, more of this please. From Dave Thomas, one of the originators of the ‘agile manifesto’, whom I have a newfound respect for after reading this essay.
Agile Is Dead (Long Live Agility)
However, since the Snowbird meeting, I haven’t participated in any Agile events, I haven’t affiliated with the Agile Alliance, and I haven’t done any “agile” consultancy. I didn’t attend the 10th anniversary celebrations.
Why? Because I didn’t think that any of these things were in the spirit of the manifesto we produced…
Let’s look again at the four values:
Individuals and Interactions over Processes and Tools
Working Software over Comprehensive Documentation
Customer Collaboration over Contract Negotiation, and
Responding to Change over Following a Plan
The phrases on the left represent an ideal—given the choice between left and right, those who develop software with agility will favor the left.
Now look at the consultants and vendors who say they’ll get you started with “Agile.” Ask yourself where they are positioned on the left-right axis. My guess is that you’ll find them process and tool heavy, with many suggested work products (consultant-speak for documents to keep managers happy) and considerably more planning than the contents of a whiteboard and some sticky notes…
Back to the Basics
Here is how to do something in an agile fashion:
What to do:
- Find out where you are
- Take a small step towards your goal
- Adjust your understanding based on what you learned
- Repeat
How to do it:
When faced with two or more alternatives that deliver roughly the same value, take the path that makes future change easier.
And that’s it. Those four lines and one practice encompass everything there is to know about effective software development. Of course, this involves a fair amount of thinking, and the basic loop is nested fractally inside itself many times as you focus on everything from variable naming to long-term delivery, but anyone who comes up with something bigger or more complex is just trying to sell you something.
I think people being tricked by others trying to sell them something isn’t actually the only, or even the main, reason people get distracted from actual agility by lots of ‘agile’ rigamarole which is anything but.
I think there are intrinsic distracting motivations and interests in many organizations too: The need for people in certain positions to feel in control; the need for blame to be assigned when something goes wrong; just plain laziness and desire for shortcuts and magic bullets; prioritizing all of these things (whether you realize it or not) over actual product quality.
Producing good software is hard, for both technical and social/organizational reasons. But my ~18 years of software engineering (and life!) experience lead me to believe that there are no ‘tool’ shortcuts or magic bullets; you do it just the way Thomas says you do it: you just do it, always in small iterative steps, always re-evaluating next steps, and always in continual contact with ‘stakeholders’ (who need to put time and psychic energy in too). Anything else is distraction at best, and more likely something even worse: misdirection.
And there’s a whole lot of distraction and misdirection labelled ‘agile’.
Filed under: General
by jrochkind at March 11, 2014 12:33 PM
March 10, 2014
In what ways are Washington issues affecting teen library users? How can librarians support technology policies that support teenagers? Ask these questions and more this Thursday when technology policy leaders from the American Library Association’s Office for Information Technology Policy (OITP) discuss digital learning via the Young Adult Library Services Association’s @yalsa Twitter account.
As part of Teen Tech Week, OITP will join several businesses, nonprofits, library organizations and Internet companies in highlighting the digital tools, resources and services that libraries offer to teens and their families. OITP will cover a variety of topics all day Thursday, including current technology policies, internet filtering, copyright fair use, internet access and net neutrality.
Ask questions and follow the Twitter discussion using the #TTW14 hashtag.
The post Teen issues and tech policies intersect this Thursday appeared first on District Dispatch.
by Jazzy Wright at March 10, 2014 09:40 PM
The Code4Lib Journal (C4LJ) exists to foster community and share information among those interested in the intersection of libraries, technology, and the future.
We are now accepting proposals for publication in our 25th issue. Don't miss out on this opportunity to share your ideas and experiences. To be included in the 25th issue, which is scheduled for publication in mid-July 2014, please submit articles, abstracts, or proposals via web form or by email to firstname.lastname@example.org by Friday, April 11, 2014. When submitting, please include the title or subject of the proposal in the subject line of the email message.
C4LJ encourages creativity and flexibility, and the editors welcome submissions across a broad variety of topics that support the mission of the journal. Possible topics include, but are not limited to:
by dbs at March 10, 2014 06:23 PM
I am pleased to announce that Charles P. (Charlie) Wapner begins work today as an information policy analyst. Charlie will work on a broad range of topics that includes copyright, licensing, telecommunications and E-rate, and provide support for our new Policy Revolution! initiative sponsored by the Bill & Melinda Gates Foundation.
Charlie comes to the American Library Association from the Office of Representative Ron Barber (D-AZ) where he was a legislative fellow. Earlier, Charlie also served as a legislative correspondent for Representative Mark Critz (D-PA). Charlie also interned in the offices of Senator Kirsten Gillibrand (D-NY) and Pennsylvania Governor Edward Rendell. After completing his B.A. in Diplomatic History at the University of Pennsylvania, Charlie received his M.S. in public policy and management from Carnegie Mellon University.
We look forward to Charlie’s help in advancing our efforts on many different fronts.
The post OITP expands policy staff appeared first on District Dispatch.
by Alan Inouye at March 10, 2014 05:42 PM
(This is the English version of the Danish blog post originally posted on the Open Knowledge Foundation Danish site and translated from Danish by Christian Villum, “Openwashing” – Forskellen mellem åbne data og tilgængelige data)
Last week, the Danish IT magazine Computerworld, in an article entitled “Check-list for digital innovation: These are the things you must know”, emphasised how more and more companies are discovering that giving your users access to your data is a good business strategy. Among other things, they wrote:
(Translation from Danish) According to Accenture it is becoming clear to many progressive businesses that their data should be treated as any other supply chain: It should flow easily and unhindered through the whole organisation and perhaps even out into the whole eco-system – for instance through fully open API’s.
They then use Google Maps as an example, which firstly isn’t entirely correct, as pointed out by Neogeografen, a geodata blogger, who explains that Google Maps isn’t offering raw data, but merely an image of the data. You are not allowed to download and manipulate the data – or run it off your own server.
But secondly I don’t think it’s very appropriate to highlight Google and their Maps project as a golden example of a business that lets its data flow unhindered to the public. It’s true that they are offering some data, but only in a very limited way – and definitely not as open data – and thereby not as progressively as the article suggests.
Surely it’s hard to accuse Google of not being progressive in general. The article states that Google Maps’ data are used by over 800,000 apps and businesses across the globe. So yes, Google has opened its silo a little bit, but only in a very controlled and limited way, which leaves these 800,000 businesses dependent on the continual flow of data from Google, without letting them control the very commodity they’re basing their businesses on. This particular way of releasing data brings me to the problem that we’re facing: knowing the difference between making data available and making them open.
Open data is characterized by not only being available, but being both legally open (released under an open license that allows full and free reuse, conditioned at most on giving credit to its source and on sharing under the same license) and technically available in bulk and in machine-readable formats – contrary to the case of Google Maps. It may be that their data are available, but they’re not open. This – among other reasons – is why the global community around the 100% open alternative OpenStreetMap is growing rapidly and an increasing number of businesses choose to base their services on this open initiative instead.
But why is it important that data are open and not just available? Open data strengthens society and builds a shared resource, where all users, citizens and businesses are enriched and empowered, not just the data collectors and publishers. “But why would businesses spend money on collecting data and then give them away?” you ask. Opening your data and making a profit are not mutually exclusive. A quick Google search reveals many businesses that both offer open data and run a business on them – and I believe these are the ones that should be highlighted as particularly progressive in articles such as the one from Computerworld.
One example is the British company OpenCorporates, which offers its growing repository of corporate register data as open data, and thereby cleverly positions itself as a go-to resource in that field. This approach strengthens its opportunity to offer consultancy services, data analysis and other custom services for both businesses and the public sector. Other businesses are welcome to use the data, even for competitive use or to create other services, but only under the same data license – thereby providing a derivative resource useful to OpenCorporates. Therein lies the real innovation and sustainability – effectively removing the silos and creating value for society, not just the involved businesses. Open data creates growth and innovation in our society – while Google’s way of offering data probably mostly creates growth for…Google.
We are seeing a rising trend of what can be termed “open-washing” (inspired by “greenwashing”) – meaning data publishers that claim their data is open, even when it’s not – but rather just available under limiting terms. If we – at this critical time in the formative period of the data-driven society – aren’t critically aware of the difference, we’ll end up putting our vital data streams in siloed infrastructure built and owned by international corporations – and giving our praise and support to the wrong kind of unsustainable technological development.
To learn more about open data, visit the Open Definition and this introduction to the topic by the Open Knowledge Foundation. To voice your opinion, join the mailing list for the Open Knowledge Foundation.
by Christian Villum at March 10, 2014 03:55 PM
chosen-rails already existed as a gem to package chosen.js assets for the Rails asset pipeline.
But I was having trouble getting it to work right, not sure why, but it appeared to be related to the compass dependency.
The compass dependency is actually in the original chosen.js source too — chosen.js is originally written in Sass. And chosen-rails is trying to use the original chosen.js source.
I made a fork which instead uses the post-compiled pure JS and CSS from the chosen.js release, rather than its source. (Well, it has to customize the CSS a bit to change referenced url()s to Rails asset pipeline asset-url() calls.)
I’ve called it chosen_assets. (rubygems; github). Seems to be working well for me.
Filed under: General
by jrochkind at March 10, 2014 02:59 PM
Anyone in the digital archivist community want to weigh in on this, or provide citations to reviews or evaluations?
I’m not sure exactly who the market actually is for these “Archival Discs.” If it were actually those professionally concerned with long-term reliable storage, I would think the press release would include some information on what leads them to believe the media will be especially reliable long-term, compared to other optical media. Which they don’t seem to.
Which makes me wonder how much of the ‘archival’ is purely marketing. I guess the main novelty here is just the larger capacity?
Press Release: ”Archival Disc” standard formulated for professional-use next-generation optical discs
Tokyo, Japan – March 10, 2014 – Sony Corporation (“Sony”) and Panasonic Corporation (“Panasonic”) today announced that they have formulated “Archival Disc”, a new standard for professional-use, next-generation optical discs, with the objective of expanding the market for long-term digital data storage*.
Optical discs have excellent properties to protect themselves against the environment, such as dust-resistance and water-resistance, and can also withstand changes in temperature and humidity when stored. They also allow inter-generational compatibility between different formats, ensuring that data can continue to be read even as formats evolve. This makes them robust media for long-term storage of content. Recognizing that optical discs will need to accommodate much larger volumes of storage going forward, particularly given the anticipated future growth in the archive market, Sony and Panasonic have been engaged in the joint development of a standard for professional-use next-generation optical discs.
Filed under: General
by jrochkind at March 10, 2014 12:34 PM
Yes, GettyImages have decided to encourage people to embed their images. Despite opinions to the contrary, I think this is A Good Thing. So what happens when you embed a Getty image into your HTML? To get something like this in your page, you need to include a little snippet of HTML:
<iframe src="//embed.gettyimages.com/embed/81901686?et=4td6Xm2f0k6pMgQVX7pNFA&sig=fhRom4eoepnZbyWjZ0_2N3SdVG1dxQTC2GUAK4XrPjg=" width="462" height="440" frameborder="0" scrolling="no"></iframe>
which in turn embeds this HTML into your page:
<base target="_parent" />
<title>20 - 30 year old female worker pulls box off of warehouse shelf [Getty Images]</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
<!--[if lt IE 10]>
<link rel="stylesheet" type="text/css" href="//embed.gettyimages.com/css/style.css" />
<section id="embed-body" data-asset-id="81901686" data-collection-id="41">
<a href="http://gty.im/81901686" target="_blank"><img src="http://d2v0gs5b86mjil.cloudfront.net/xc/81901686.jpg?v=1&c=IWSAsset&k=2&d=F5B5107058D53DF50D8BA2399504758256BF753C679B89B417A38C0E9F1FBB9F&Expires=1394499600&Key-Pair-Id=APKAJZZHJ4LGWQENK3OQ&Signature=UC1YXxhGwSAY0BduwMZqnFQ7fcAQTdCksDvYu4WVmNWlTou7NktH7rZ8uk7BLbupJ4sp0ijiDaA93Yi2XijnC-TtcUO1Kylcew4nZpM~Al9jD0OSfx5yNe7jcIalweGpLGOdMLTXn0wRs6XfEh3~1fc~csMrAesHJkUayhBqNxo6Xja-35XQLx98d5fg6UXazOsCRT-UzebWA4dFURz~BSxXgq0RtU~LhKVKRZvkUTvl2RrsqBcN4bW3i~dbNMwHKn~7s9dMy5CxH-7k4ELyJaBClWEO2Jgr5WV9cXy~WGBQnNd-5Lb7CMcZclzn88-LbmDnFcO~BVLgtSU5x-KTpw__" /></a>
<li class="gi-logo icon icon-logo"></li>
<li>Bob O'Connor / Stone</li>
<a href="//twitter.com/share" title="Share on Twitter" class="twitter-share-button" data-lang="en" data-count="none" data-url="http://gty.im/81901686"></a>
<a class="icon-tumblr" target="_self" title="Share on Tumblr" href="//www.tumblr.com/share/video?embed=%3Ciframe%20src%3D%22%2f%2fembed.gettyimages.com%2fembed%2f81901686%3fet%3d4td6Xm2f0k6pMgQVX7pNFA%26sig%3dfhRom4eoepnZbyWjZ0_2N3SdVG1dxQTC2GUAK4XrPjg%3d%22%20width%3D%22462%22%20height%3D%22440%22%20frameborder%3D%220%22%20%3E%3C%2Fiframe%3E"></a>
<aside class='modal embed-modal' style='display: none;'>
<a class="icon modal-close icon-close" href="#close" title="Close"></a>
<h3>Embed this image</h3>
<p>Copy this code to your website or blog. <a href="http://www.gettyimages.com/helpcenter" target="_blank" id="learn-more">Learn more</a></p>
<p>Note: Embedded images may not be used for commercial purposes.</p>
By embedding this image, you agree to Getty Images
You can see Amazon’s CloudFront is being used as a CDN for the images, and that Getty are using CloudFront’s Signed URLs to expire the images…it looks like after 24 hours? This isn’t a problem because Getty are serving the page up, but anyone who’s tried to snag the image URL for reuse (Google Images?) will end up getting a 400 error.
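Out of curiosity, you can read the expiry straight out of the signed URL. Here is a quick sketch in Python, using the Expires value from the CloudFront URL above (the query string is abbreviated here to the one parameter that matters):

from datetime import datetime, timezone
from urllib.parse import urlparse, parse_qs

# the signed image URL from the embed above, abbreviated
url = "http://d2v0gs5b86mjil.cloudfront.net/xc/81901686.jpg?v=1&Expires=1394499600"

# CloudFront's Expires parameter is a Unix timestamp
expires = int(parse_qs(urlparse(url).query)["Expires"][0])
print(datetime.fromtimestamp(expires, tz=timezone.utc))
# => 2014-03-11 01:00:00+00:00, after which requests for the URL fail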
I thought it was interesting that the embedded iframe gives you not only the image, author and collection, but also links to re-share the image on Twitter and Tumblr. I guess this is Viral Marketing 101, but it’s smart I think, since it encourages reuse, and the recycling of content on the Web. Conspicuously absent from the reshare buttons is Facebook — maybe there’s a story there? Also, as we’ll see in a second, the description of the image is missing from the embedded view:
20 – 30 year old female worker pulls box off of warehouse shelf
Of course the other big thing the iframe does is give Getty an idea of where their content is being used. Anyone who uses this one-line embed iframe will trigger an HTTP request to an embed.gettyimages.com URL (hosted on Amazon EC2, incidentally). These requests, and their referral information, can be stashed away and analyzed, so that Getty can get a picture of who is using their content, and how. Embedded images and the Twitter and Tumblr reshares are automatically linked to Getty’s specific short URLs, such as http://gty.im/81901686.
The number used in the short URL is also used in the expanded URL:
But the title text is just there for SEO; it can be changed to anything:
Ordinarily I’d be down on the use of a short URL, but in this case its role is more of a permalink. Of course these short URLs have the same problem as Handles and PURLs in that people won’t ordinarily bookmark them. But, Que Sera Sera. As the Verge pointed out, these embedded iframes could end up depriving Web content of lead images if GettyImages decides to pull the plug on the embeds and they suddenly 404. But their credibility would suffer quite a bit from a decision like that. I think it’s important that they are encouraging the Web to rely on these URLs, and that they are putting their reputation on the line.
Of course lots of inbound links to those pages should do wonders for their PageRank. Plus, following that link allows you to purchase the image, explore other images by the photographer and related images in the GettyImages collection, as well as see some additional metadata about the photo: item number, rights, license type, original file dimensions, size, dots-per-inch. Some of this metadata is even expressed using RDFa (Facebook’s OpenGraph metadata) … which makes the lack of a Facebook share button even more interesting. In addition there is also some minimal use of schema.org HTML microdata for the search engines to nibble on. If you are curious, Google’s Structured Data Testing Tool provides a view on this metadata.
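If you want to poke at the OpenGraph metadata yourself, a few lines of Python will pull out the og: properties. This is just a sketch using only the standard library, assuming the short URL from the embed above still resolves to the landing page:

from html.parser import HTMLParser
from urllib.request import urlopen

class OGParser(HTMLParser):
    """Collect <meta property="og:..." content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("property", "").startswith("og:"):
                self.og[d["property"]] = d.get("content")

html = urlopen("http://gty.im/81901686").read().decode("utf-8", "ignore")
parser = OGParser()
parser.feed(html)
print(parser.og)  # e.g. og:title, og:image, og:url, ...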
It seems like there’s an opportunity to express more information in RDFa or microdata, specifically the details about the original, as well as licensing/rights metadata. Oddly, the RDFa doesn’t even mark up the author of the image – I suppose because Facebook’s OpenGraph doesn’t give a way of expressing it. They could start by marking up the author of the image. But what if Getty established photographer pages, so that instead of Bob O’Connor linking to:
What if it linked to a vanity URL like:
This would be a perfect place to share links to author’s other social media accounts, a bio, their photographer friends, etc. I’m thinking of the sort of work that National Geographic are doing with their YourShot application, for example this Profile page for Bahareh Mohamadian.
The licensing restrictions and iframes around these images would have ordinarily turned me off. But given Getty’s market position in this space it’s completely understandable, and seems like a useful compromise for now. These landing pages are a perfect place to make more structured metadata available that could be used by integrating applications. Getty should invest in this real estate, not only for the Web, but also for data reuse across their enterprise. The landing pages are an example of just how influential Facebook and Google have been in promoting the use of metadata on the Web. Without them, I think it is safe to assume we wouldn’t have seen any structured metadata on these pages at all.
by ed at March 10, 2014 09:23 AM
A belated congratulations to the Memento team on the publication of their RFC and Google Chrome plugin for the Memento WWW time travel protocol. A fan of the Internet Archive Wayback Machine? Ever look at the history of a Wikipedia page? Curious to know about changes to a particular web page? The first is now easier to access…the second is a work in progress…and the third may come to a website near you. See what I mean through this demonstration video.
If you want to see more of the details, check out the guided introduction. If you are a hardcore techie, take a look at the text of RFC 7089. If you’d like to try it out yourself, load up Chrome and install the Memento extension. Because the Chrome Web Store won’t let you see the details of an extension unless you are actually using Chrome, I’ve reproduced the description here:
Travel to the past of the web by right-clicking pages and links.
Memento for Chrome allows you to seamlessly navigate between the present web and the web of the past. It turns your browser into a web time travel machine that is activated by means of a Memento sub-menu that is available on right-click.
First, select a date for time travel by clicking the black Memento extension icon. Now right-click on a web page, and click the “Get near …” option from the Memento sub-menu to see what the page looked like around the selected date. Do the same for any link in a page to see what the linked page looked like. If you hit one of those nasty “Page not Found” errors, right-click and select the “Get near current time” option to see what the page looked like before it vanished from the web. When on a past version of a page – the Memento extension icon is now red – right-click the page and select the “Get current time” option to see what it looks like now.
Memento for Chrome obtains prior versions of pages from web archives around the world, including the massive web-wide Internet Archive, national archives such as the British Library and UK National Archives web archives, and on-demand web archives such as archive.is. It also allows time travel in all language versions of Wikipedia. There’s two things Memento for Chrome can not do for you: obtain a prior version of a page when none have been archived and time travel into the future. Our sincere apologies for that.
Technically, the Memento for Chrome extension is a client-side implementation of the Memento protocol that extends HTTP with content negotiation in the date time dimension. Many web archives have implemented server-side support for the Memento protocol, and, in essence, every content management system that supports time-based versioning can implement it. Technical details are in the Memento Internet Draft at http://www.mementoweb.org/guide/rfc/ID/. General information about the protocol, including a quick introduction, is available at http://mementoweb.org.
For queries about the Memento for Chrome extension and the Memento protocol, get in touch at email@example.com.
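To make the datetime negotiation concrete, here is a minimal sketch in Python. The Internet Archive’s Wayback Machine is used as an assumed example of a Memento-compliant TimeGate, and the requests library is assumed to be installed:

import requests

# A TimeGate URI prefix; the Wayback Machine is one Memento-compliant
# endpoint (an assumption for this sketch)
timegate = "http://web.archive.org/web/"
target = "http://mementoweb.org/"

# Negotiate in the datetime dimension (RFC 7089): ask for the Memento
# closest to a chosen date
resp = requests.get(
    timegate + target,
    headers={"Accept-Datetime": "Mon, 10 Mar 2014 00:00:00 GMT"},
)

# The archive redirects to a Memento, which reports its actual capture
# time in the Memento-Datetime response header
print(resp.url)
print(resp.headers.get("Memento-Datetime"))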
The Memento team is also developing a plugin for MediaWiki that speaks the Memento protocol. The effort to get it into the English Wikipedia has stalled at the moment, but I expect the developers will give it another go at some point. Congratulations to Herbert Van de Sompel, Michael Nelson, Rob Sanderson and the rest of the team at Los Alamos National Lab and Old Dominion University.
by Peter Murray at March 10, 2014 12:19 AM
In many of the talks that I have given over the years I have taken pains to point out a key fact about library budgets: the vast majority of the budget for most libraries goes to staff. Usually I use this as a way to put investment in computer hardware in perspective. That is, should your most expensive resource (staff, duh) be forced to waste time dealing with inferior equipment? No, I would assert. It’s just stupid. [correction made to correct an overstatement]
But that, of course, is merely the tip of the iceberg. It’s also one of the easiest problems to fix, since all it requires is better equipment. A much more difficult way to squeeze the most out of your most expensive investment is to build additional skills. And yet that is exactly what nearly all libraries should be doing.
Why? Because hardly any job in a library is the same as it was even just a few years ago. The kinds of tasks we are doing may be quite different than they were when we were hired. Doing these new things effectively often requires building new skills.
Therefore every library manager needs to have a plan for constant staff retooling. What makes this difficult is that people can have a variety of ways in which they learn best. Some learn best in a formal class. Others need only a few good books and some time to experiment. One of your first steps, then, is to help your staff determine how they learn best and find avenues for learning based on those preferences.
Sure, I’m talking about a lot of work. But isn’t your single largest investment worth it? Of course it is. Now get out there and start using your license to skill.
by Roy Tennant at March 10, 2014 12:11 AM
March 09, 2014
A colleague e-mailed me the other day, in part expressing appreciation for the DLTJ blog, and in part describing a mystery that she is running in her library:
Because I am staring out the window, at yet another snow-storm-in-the-works, having just learned that school is called off AGAIN (waiting for the library urchins to pour in), I am trying to get caught up on life outside of a small prairie town.
Adrian (MN) Police Chief Shawn Langseth gathering evidence in the library “crime”.
To combat some serious winter blues (and who doesn’t have them this year?), we have decided to have a just-for-fun “crime spree” at our library. Thus far, the local Chief of Police has no leads (he has graciously agreed to participate and has been kept in the dark as to the identities of the perpetrators). We decided that having a crime spree might be a more interesting way to get people to talk about the library.
If you find yourself looking for something to take your mind off the weather, feel free to take a look at our crime spree: http://adrianbranchlibrary.blogspot.com/
Take a look at the posts created by Meredith Vaselaar, Librarian at the Adrian Branch Library. She even has the police chief involved with the story. The articles are posted on Blogspot and in the local newspaper. This sounds like a great way to bring the community into the local branch. Congratulations, Meredith! I’ll be watching from afar to see how this turns out.
by Peter Murray at March 09, 2014 10:13 PM
Come on in and take a stroll around our new site! Check out the new Web services documentation, try the API Explorer, and request a WSKey just for fun (Steve says, "not really" to that last idea): We invite you to spend some time, poke around, and let us know what you think of our new digs.
by Shelley Hostetler at March 09, 2014 02:30 PM
I’ve been working with the Solarized color theme in my Emacs for a while. The homebrew recipe for Emacs has an option to pull in a patch which corrects the Cocoa port of Emacs to handle sRGB colors correctly. But for the longest time I couldn’t get the colors to exactly line up with the references.
But I finally figured out that the theme was expecting a variable to be set:
(setq solarized-broken-srgb nil)
From the customize information:
Emacs bug #8402 results in incorrect color handling on Macs. If this is t (the default on Macs), Solarized works around it with alternative colors. However, these colors are not totally portable, so you may be able to edit the “Gen RGB” column in solarized-definitions.el to improve them further.
The gotcha is that with a lightly managed Emacs, the default custom.el generally loads after init.el. So if you thought you were setting the variable in customize and it would work, you are wrong, since normally themes are loaded through your init.el, either through a separate library or directly, as in mine.
So for me to load Solarized with correct sRGB support:
;; in init.el: tell Solarized that this Emacs handles sRGB correctly,
;; then load the theme
(setq solarized-broken-srgb nil)
(load-theme 'solarized-dark t)
by Sean Chen at March 09, 2014 04:13 AM
March 08, 2014
So tonight we had a get-together for library people (many SLAIS students in attendance) who want to learn how to code in a more informal manner, without having to take a full course. Trying to start: it is difficult to get the kind of necessary learning in many formal settings, including library schools. So […]
by Cynthia at March 08, 2014 04:41 AM
March 07, 2014
This posting outlines the implementation of a Semantic Web application.
Many people seem to think the ideas behind the Semantic Web (and linked data) are interesting, but many are also waiting to see some of the benefits before committing resources to the effort. This is what I call the “chicken & egg problem of linked data”.
While I have not created the application outlined below, I think it is more than feasible. It is a sort of inference engine fed with a URI and an integer, both supplied by a person. Its ultimate goal is to find relationships between URIs that were not immediately or readily apparent.* It is a sort of “find more like this one” application. Here’s the algorithm (a code sketch of the harvesting loop follows the list):
- Allow the reader to select an actionable URI of personal interest, ideally a URI from the set of URIs you curate
- Submit the URI to an HTTP server or SPARQL endpoint and request RDF as output
- Save the output to a local store
- For each subject and object URI found in the output, go to Step #2
- Go to Step #2 n times for each newly harvested URI in the store, where n is a reader-defined integer greater than 1; in other words, harvest more and more URIs, predicates, and literals based on the previously harvested URIs
- Create a set of human-readable services/reports against the content of the store, and think of these services/reports as akin to a type of finding aid, reference material, or museum exhibit of the future. Example services/reports might include:
- hierarchical lists of all classes and properties – This would be a sort of semantic map. Each item on the map would be clickable, allowing the reader to read more and drill down.
- text mining reports – collect into a single “bag of words” all the literals saved in the store and create: word clouds, alphabetical lists, concordances, bibliographies, directories, gazetteers, tabulations of parts of speech, named entities, sentiment analyses, topic models, etc.
- maps – use place names and geographic coordinates to implement a geographic information service
- audio-visual mash-ups – bring together all the media information and create things like slideshows, movies, analyses of colors, shapes, patterns, etc.
- search interfaces – implement a search interface against the result, SPARQL or otherwise
- facts – remember, SPARQL queries can return more than just lists. They can return mathematical results such as sums, ratios, standard deviations, etc. They can also return Boolean values helpful in answering yes/no questions. You could have a set of canned fact queries, such as: how many ontologies are represented in the store? Is the number of ontologies greater than 3? Are there more than 100 names represented in this set? The count of languages used in the set, etc.
- Allow the reader to identify a new URI of personal interest, specifically one garnered from the reports generated in Step #6.
- Go to Step #2, but this time have the inference engine be more selective by having it try to crawl back to your namespace and set of locally curated URIs.
- Return to the reader the URIs identified in Step #8, and by consequence, these URIs ought to share some of the same characteristics as the very first URI; you have implemented a “find more like this one” tool. You, as curator of the collection of URIs, might have thought the relations between the first URI and the set of final URIs were obvious, but those relationships would not necessarily be obvious to the reader, and therefore new knowledge would have been created or brought to light.
- If there are no new URIs from Step #7, then go to Step #6 using the newly harvested content.
- Done. If a system such as the one above were created, then the reader would quite likely have acquired some new knowledge, and this would be especially true the greater the size of n in Step #5.
- Repeat. Optionally, have a computer program repeat the process with every URI in your curated collection, and have the program save the results for your inspection. You may find relationships you did not perceive previously.
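As a rough illustration of the harvesting loop (Steps #2 through #5) plus a Step #6-style “fact” query, here is a minimal sketch in Python using rdflib; the starting URI and the depth are illustrative assumptions, not part of the design above:

from rdflib import Graph, URIRef

def harvest(start_uri, n=2):
    """Crawl RDF descriptions outward from start_uri, n hops deep."""
    store = Graph()                  # the local store (Step #3)
    frontier = {URIRef(start_uri)}
    seen = set()
    for _ in range(n):               # repeat n times (Step #5)
        for uri in frontier - seen:
            seen.add(uri)
            try:
                store.parse(uri)     # request RDF via content negotiation (Step #2)
            except Exception:
                pass                 # skip URIs that do not dereference to RDF
        # queue every subject and object URI found so far (Step #4)
        frontier = {node for s, p, o in store
                    for node in (s, o)
                    if isinstance(node, URIRef)} - seen
    return store

# A canned "fact" (Step #6): how many distinct subjects are in the store?
store = harvest("http://dbpedia.org/resource/Paris", n=2)  # illustrative URI
for row in store.query("SELECT (COUNT(DISTINCT ?s) AS ?n) WHERE { ?s ?p ?o }"):
    print(row.n)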
I believe many people perceive the ideas behind the Semantic Web to be akin to investigations in artificial intelligence. To some degree this is true, and investigations into artificial intelligence seem to come and go in waves. “Expert systems” and “neural networks” were incarnations of artificial intelligence more than twenty years ago. Maybe the Semantic Web is just another in a long wave of forays.
On the other hand, Semantic Web applications do not need to be so sublime. They can be as simple as discovery systems, browsable interfaces, or even word clouds. The ideas behind the Semantic Web and linked data are implementable. It is just a shame that nothing is catching the attention of wider audiences.
* Remember, URIs are identifiers intended to represent real world objects and/or descriptions of real-world objects. URIs are perfect for cultural heritage institutions because cultural heritage institutions maintain both.
by LiAM: Linked Archival Metadata at March 07, 2014 09:47 PM
We’re not sure how you best characterize waiting with your finger poised over the refresh key anticipating the release of an FCC Public Notice. But, nonetheless, we at ALA were not the only ones who impatiently awaited the latest installment of the E-rate modernization proceeding that began last June with the President’s ConnectED initiative announcement (if not before, with the 2010 National Broadband Plan).
Since the summer release of the Notice of Proposed Rulemaking (NPRM), the Commission has logged over 1500 comments and ex parte filings. Some of the issues raised in the NPRM warrant further public input to help the Commission determine the best path forward. To that end the Commission is seeking detailed input on three specific issues:
- “How best to focus E-rate funds on high-capacity broadband, especially high-speed Wi-Fi and internal connections;
- Whether and how the Commission should begin to phase down or phase out support for traditional voice services in order to focus more funding on broadband; and
- Whether there are demonstration projects or experiments that the Commission should authorize as part of the E-rate program that would help the Commission test new, innovative ways to maximize cost-effective purchasing in the E-rate program.” (Paragraph 4)
Within these issues there are a number of critical questions that are important to libraries, and decisions made through the public record will certainly influence library broadband capacity and the ability of libraries to deliver key community services. The opportunity to shape the future direction of the E-rate program is immense and therefore somewhat daunting as we begin to fine-tune our own proposals. In our initial comments and throughout the process we have sought input and feedback from a wide range of librarians, and expect to do so again through the guidance of the ALA E-rate Task Force, other ALA member leaders, and expert consultants.
Once a Notice is released, the most common thing to do is count the pages (phew, “only” 20 pages of questions) and search for your key interest – in our case “librar.” It is noteworthy that a number of comments by libraries are cited, as well as those from ALA. Moreover, some of our ideas are discussed explicitly (paragraph 59). This reflects the Commission’s dedication to capturing the important role that broadband-enabled libraries play in their communities, the great need libraries have to boost broadband capacity, and the potential differences between the needs of libraries and those of our school counterparts.
We are gratified to see that the Commission remains open and even aggressive in soliciting new ideas about how to make sure the program is efficient and effective – that funds are targeted to the most critical services that build library and school broadband capacity. As stated in the Notice, the targeted issues are not the sole issues that could be included in a final order. They are simply the ones for which the record to date was either unclear or commenters were equally split, or where the Commission needs more detail to fully understand how to address some of the stickiest challenges raised in the NPRM.
Many of the questions posed ask commenters to make difficult choices – such as what is the most equitable means to ensure applicants receive funding for internal wiring and Wi-Fi networks, and how to handle voice services in a program gearing toward a broadband capacity focus. Though not mentioned in the Notice, we do understand that increasing the overall size of the fund is still somewhere on the table – while we feast on this Public Notice, we expect the overall funding question to be the next course.
So what’s the next step?
Comments are due April 7th, with reply comments due April 21st. For the next few days we will be nose down reading and parsing the Notice, which will be a break of sorts from the multiple meetings we have had with FCC commissioners and staff – as ALA, and as part of inside-the-beltway coalitions – in person and by phone. These meetings were a combination of advocacy for the role libraries play in education, employment and entrepreneurship, and in empowering people through providing access to e-government, health information, digital literacy training, and similar services; and of providing the Commission with library data on the current state of broadband capacity and network configuration, and on projected trends in library services that call for a scalable approach to building capacity for libraries.
We are appreciative of the careful review the Commission has given to the public record. In reality, the process, though sometimes murky and arguably long, is successful. Concepts are being analyzed, issues debated, and solutions weighed. There will be some changes in the future that will have to be worked through when implemented, and we expect some discomfort. However, we support the direction of the Commission as we think about library needs five, ten, and twenty years from now. We view the Notice as another important step in the future of the E-rate program and thus the future of broadband capacity for all libraries.
Alan Inouye, OITP director, did the hard part of this post by getting the first thoughts down, and contributed thoughtful suggestions.
The post Up Next? E-rate and it’s worth the wait appeared first on District Dispatch.
by Marijke Visser at March 07, 2014 08:51 PM
The new Developer Network Website is only 3 days away! While we're busy doing the final touches over the weekend, we wanted to give you a small taste of what's coming on Monday. We're pleased with the new design and structure, of course--but the 3 things below are ones most likely to help make it easier to work with OCLC's Web services.
by Shelley Hostetler at March 07, 2014 04:45 PM
LITA is offering a webinar, All Aboard – The Party’s Starting! Setting a Course for Social Media Success, presented by Mary Anne Hansen, Doralyn Rossmann, Angela Tate, and Scott Young of Montana State University Library, from 2:00 to 4:00 p.m. CDT on April 2, 2014.
Social media is more than a way to inform users; social media is a powerful way to build community online. Presenters will go beyond the basics by demonstrating how to create a social media guide for developing communities on Facebook, Twitter, Tumblr, and Pinterest. We will explore data tracking and assessment tools such as ThinkUp, HootSuite, Google Analytics, focus group data, and survey methods. We will also discuss strategies for integrating social media efforts into your organization’s strategic plan and educating peer organizations about best practices.
Participants will take home a template for creating a comprehensive plan for social media usage and assessment, with an emphasis on creating a meaningful voice and a compelling personality.
For registration and additional information, visit the course page.
by mprentice at March 07, 2014 03:35 PM
March 06, 2014
In an American Libraries article published today, Alan S. Inouye, director of the American Library Association’s Office for Information Technology Policy, reported on his participation in the Connecticut State Library’s Ebook Symposium, a one-day event where library and publishing experts explored the current state of ebook affairs and the future of ebook lending for libraries, publishers, and readers.
As a presenter at the statewide ebook conference, Inouye discussed the large number of challenges faced by libraries working to meet patron demands for ebooks, including concerns related to fair pricing, equitable access to ebook titles, digital preservation, privacy, digital rights management and accommodations for readers with limited vision.
My presentation provided a national view of library ebook challenges through the lens, naturally, of ALA’s work during the past several years. While we saw good progress in 2013 (having come from the depths of despair in 2012), the present state of library ebook lending is nonetheless not good. I talked about high prices as the paramount problem, though there are many other ones, including lack of availability to libraries of the full range of ebook titles and lack of full access by library consortia. Additionally, we also have concerns relating to archiving and preservation, privacy, accommodations for people with disabilities, among others.
It is essential that we think bigger. The publishing model itself is evolving from a simple linear progression of author to reader to a complex set of relationships in which nearly any entity could relate to another directly. For example, authors can work with libraries directly, or publishers can take on distribution or retailing operations. The library community needs to be creative and innovative in contemplating the models that will work best for us and our users.
Also on hand to comprise the publisher panel were Skip Dye, vice president of library and academic sales at Random House and Adam Silverman, director of digital business development at HarperCollins. This session produced the most heat for the symposium, as a couple of Connecticut librarians pointedly criticized high prices for library ebooks. Through subsequent informal discussion, I got the sense that this dissatisfaction resonated with the other attendees.
Read the full article
The post Thinking bigger about ebooks appeared first on District Dispatch.
by Jazzy Wright at March 06, 2014 10:02 PM
All critical services have been restored. Please contact us at firstname.lastname@example.org if you continue to experience problems.
We apologize again for the inconvenience.
by hostetls at March 06, 2014 09:21 PM
OCLC is currently experiencing network issues that are affecting most of our Web services. We are working on the issue and will update you when service has been restored.
We apologize for the inconvenience.
by hostetls at March 06, 2014 05:48 PM
At the Open Knowledge Foundation, we aspire to create environments that connect diverse audiences, enabling diverse groups of thinkers, makers and activists to come together and collaborate to effect change. This year, the Open Knowledge Festival is fuelled by our theory that change happens when you bring together knowledge – which informs change – tools – which enable change – and society – which effects change. Whether you’re building better, cooler tech, creating stronger ideas for the open movement or aiming to shift the gears of society, this year’s OKFestival is the place for you; a place of diverse interests and learning experiences, highlighted by this year’s emphasis on collaboration across the three streams.
In the past, Open Knowledge Foundation events have been organised around topical streams. This has enabled us to grow the movement across communities as diverse as science, transparency, development and linguistics.
However, topical streams have a tendency to further entrench topical silos. Researchers working to open up academia, for example, could almost certainly benefit from learning about the experiences of their colleagues in other fields and from teaching others about their area of expertise. Everyone could benefit from some facetime with a maker who builds cool, useful technology in their sleep! At OKFestival 2014 we want to ensure this type of knowledge sharing in order to offer everyone the chance to cross-collaborate in meaningful, impactful ways. We can all recognise that issues such as privacy, data protection and net neutrality affect all domains within the open space, and we want to ensure that these issues are addressed and worked through from a diversity of perspectives to produce truly global solutions. In order to build an impactful open coalition which can effect change around the world, we need to draw on and incorporate the experiences and knowledge of multiple local communities. Only by avoiding such topical silos and building a cross-topic network of understanding and collaboration can we inform inclusive and context-appropriate open practices.
This year, we are mixing things up to achieve all of this and more! We are promoting cross-domain collaborations and urging you to collectively work through the complex problems that keep resurfacing. The individual sessions which are being submitted and proposed as we speak are pieces of this global puzzle, and this year’s Programme Team is responsible for putting that puzzle together. It’s a tough job, and we don’t want to do it alone, so if you want to start piecing it together before you submit your proposal, then collaborating with your colleagues who work in different spaces is a sure-fire way to create an interesting and attention-worthy session. We fully encourage you to reach out to those colleagues who you believe may hold a piece of your puzzle, and we’ve set up this mailing list for you to do just that.
We understand that by mixing things up, questions are sure to arise. That is why we have put together this handy page with tips and tricks for organising your session, booted up that aforementioned mailing list for session organisers to discuss their proposals and foster new collaborations, and even organised two hangouts (Friday and Monday – pick one!) to give you the opportunity to ask questions and be inspired.
Finally, we need your help. We believe that at the heart of the open movement are values such as diversity and inclusivity. We need you to make sure that your OKFestival is as diverse and inclusive as possible, because as we all know, there’s so much more to learn that way. If you know awesome people who have something key to say about sharing knowledge, building amazing tools and stirring up society to make an impact, then send them our way. If that’s you, then what are you waiting for?! Start thinking about a collaborative, interactive and powerful session for OKFestival!
by Katelyn Rogers at March 06, 2014 04:17 PM
LITA is offering three full-day preconferences at the ALA Annual Conference in Las Vegas, all held Friday, June 27, from 8:30 a.m. to 4:00 p.m.
Managing Data: Tools for Plans and Data Scrubbing with Abigail Goben, University of Illinois, Chicago; Sarah Sheehan, George Mason University; and Nathan B. Putnam, University of Maryland. As data continues to come to the fore, new tools are becoming available for librarians to assist faculty and to use with their own data. This preconference will focus on the DMPTool and OpenRefine. The DMPTool will be presented to demonstrate customization features, review data management plans and best and worst practices, and write a data plan for a data set a library may collect. OpenRefine will be demonstrated with sample data to show potential use with library data sets and more of the data lifecycle process; metadata will also be covered.
Practical Linked Data with Open Source with Galen Charlton, Equinox Software; Jodi Schneider, DERI, NUI Galway; Dan Scott, Laurentian University; Richard Urban, Florida State University. Linked Data can improve how libraries share their metadata, harvest it from non-library sources, and build better applications to connect patrons with library resources. However, what does this mean for the daily work of catalogers? This preconference will narrow the gap between theory and practice by presenting the state of the art for Linked Data management in open source integrated library systems and giving participants the chance to try it out.
Web Therapy with Nina McHale, ninermac.net; Christopher Evjy, Jefferson County Library. Having trouble managing your library’s web site? Content in chaos? Platform the pits? Statistics staggering? The doctors are in! In this full-day preconference, we will tackle a number of tough topics to help cure the ills that are keeping your library site from achieving total wellness. Specific topics will be determined by a survey sent in advance to attendees. Enjoy networking and problem solving with fellow web-minded library folks.
How to Register
To register for any of these events, you can include them with your initial conference registration or add them later using the unique link in your email confirmation. If you don’t have your registration confirmation handy, you can request a copy by emailing firstname.lastname@example.org. You also have the option of registering for a preconference only.
- Register online through June 20
- Call ALA Registration at 1-800-974-3084
- Onsite registration will also be accepted in Las Vegas.
by mprentice at March 06, 2014 06:18 AM
March 05, 2014
A new program written into President Obama’s 2015 budget request includes professional development funding for school librarians, teachers and leaders who provide high-speed internet access to students. The Obama Administration requested that $200 million be allocated to ConnectEDucators, a new initiative that will ensure that school professionals are well-prepared to use high-speed internet resources in a way that improves classroom instruction and student learning. The ConnectEDucators program is an extension of the Administration’s ConnectED initiative.
Roberto Rodriguez, special assistant to the President for Education Policy and the Domestic Policy Council, confirmed today that the Administration considers school libraries a vital component to student achievement. Rodriguez said that school librarians would qualify for professional development funds available from the ConnectEDucators program.
On a related note, President Obama’s budget requests funding support for school librarians through the Department of Education’s Race to the Top program. The Equity and Opportunity Program tasks the states and school districts in high poverty areas with providing ways to keep the best educators in their schools, and this would include school librarians.
The post School librarians supported in ConnectEDucators program appeared first on District Dispatch.
by Emily Sheketoff at March 05, 2014 10:59 PM
Code4Lib is a unique place. I don’t know of another space like it in the library world. It has inside jokes all over the place, from the love of bacon, to the poking of fun at OCLC as an organization and me as an individual. Both I and my employer (OCLC) are good for it, and we both engage with and support this community with what I hope is friendly good humor.
From the perspective of the organization, OCLC has been the single largest and most consistent sponsor of the Code4Lib Conference since the beginning. I like to think I had something to do with that. I’ve been an active participant in the Code4Lib community for many years, thanks to Dan Chudnov, who first turned me on to what was at the time a nascent group of library coders. What it has grown into has astonished me and likely others who were early participants.
So recently, when a Code4Lib meme about “Roy4Lib” began (believe me, you had to be there), I wasn’t surprised. But for me, the apex of the inside joke was this post by my friend Ross Singer:
When you’re alone and you think you hear the tinkling of ice cubes in a glass and the faint smell of Scotch, that was Roy.
That person building a treehouse as you drive past, that was Roy.
Out of the corner of your eye, there was a mustached man, that was Roy.
When you delete a MARC record, you are the Roy.
Clearly all of this had precedent, from my love of single malts to my legacy of building treehouses, to my ever-present mustache (which my daughters have forbidden me from shaving), to my throwing down the gauntlet that “MARC Must Die” way back in 2002. And Ross had it almost, completely, thoroughly, right.
But I have one small quibble. I don’t want you to delete a MARC record, I want you to free the data from MARC. And thankfully, this is exactly what we are doing at OCLC, where I work.
I wrote about just one of our most recent efforts here. But we have been doing this for a while. And we will continue to do so, while at the same time supporting MARC as the current foundational standard for library data.
But feel free to go forth and crowbar the data out of MARC. Be the Roy.
by Roy Tennant at March 05, 2014 06:51 PM
New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.
Circulation and Systems Librarian, Kalamazoo College Library, Kalamazoo, MI
Electronic Services Librarian/Information Technology Liaison, Pittsburgh Theological Seminary, Pittsburgh, PA
Software Engineer, Johns Hopkins University, Baltimore, MD
Visit the LITA Job Site for more available jobs and for information on submitting a job posting.
by vedmonds at March 05, 2014 06:21 PM
The Open Knowledge Festival call for session proposals is now open!
The better the proposals, the better the festival, so we’re inviting you to put on your thinking caps and come up with revolutionarily brilliant ideas for sessions at OKFestival 2014.
We know you can do it, and we know you’ll make this festival a huge success by bringing your input to it. To help you fine-tune your ideas – and ask any burning questions that you may have – the Festival Programme Team are going to be on hand via online hangouts over the next week to give you some pointers.
In fact, we’re happy to announce three new tools to help make the magic happen:
- we’ve created a public mailing list which you can use to connect and team up with other session planners, to share ideas, plans and tips for OKFestival sessions
- we’ve created a brand-new webpage on our festival site with tips to help you build and facilitate the best sessions possible for/at OKFestival
- we’re hosting two live hangouts (links below) where you can ask for advice or input on your ideas from us, and exchange tips with each other to help make your proposal shine
Hangouts will be held on Friday, March 7 at 21:00 GMT (22:00 CET/ 13:00 PST/ 16:00 EST) and on Monday, March 10, at 10:00 GMT (11:00 CET/ 13:00 EAT/ 18:00 HKT). We’ll be interacting with you live via etherpad and Twitter – #okfestsessions – as well as via the Google+ Hangouts Q&A App where you can post your questions on the day. The hangouts will be streamed direct to our YouTube channel and G+ page.
If you can’t join us for whatever reason, don’t worry - the resultant YouTube videos will be archived so you can watch them later and you can also continue to read and contribute to the etherpad after the hangouts.
We’re looking forward to building this year’s programme with you!
by Beatrice Martini at March 05, 2014 05:45 PM
Photo by mikebaudio via flickr.
The American Library Association’s Office for Information Technology Policy is accepting nominations for two prestigious awards. The first is the L. Ray Patterson Award: In Support of Users’ Rights. The Patterson Copyright Award recognizes contributions of an individual or group that pursues and supports the Constitutional purpose of the U.S. Copyright Law, fair use and the public domain. Professor Patterson was a copyright scholar and historian who argued that the statutory copyright monopoly had grown well out of proportion, to the extent that the purpose of the copyright law—to advance learning—was hindered.
Patterson co-authored (with Stanley W. Lindberg) The Nature of Copyright: A Law of Users’ Rights and was particularly interested in libraries and their role in advancing users’ rights. He served as expert counsel to Representative Bob Kastenmeier throughout the drafting of the Copyright Law of 1976. Previous winners of the Patterson Award include Kenneth D. Crews, Peter Jaszi, and Fred von Lohmann. The Patterson Award is a crystal vase trophy.
The second award is the Robert Oakley Memorial Scholarship Fund, sponsored in collaboration with the Library Copyright Alliance (LCA). This award is granted to an early-to-mid-career librarian who is pursuing copyright scholarship and public policy. Professor Oakley was a member of the LCA representing the American Association of Law Libraries, and a long-time member of the International Federation of Library Associations and Institutions (IFLA), advocating for libraries at the World Intellectual Property Organization and UNESCO.
Oakley was a recognized leader in law librarianship and library management who also maintained a profound commitment to public policy and the rights of library users, and was a mentor to many librarians interested in copyright policy. The $1,000 scholarship award may be used for travel necessary to conduct research, conference attendance, release from library duties, or other reasonable and appropriate research expenses.
The deadline for nominations has been extended to March 31, 2014. For more information on nomination details, see the links above. If you have additional questions, contact Carrie Russell, OITP Director of the Program on Public Access to Information, at email@example.com.
The post It may not be the Academy Award but there’s still time… appeared first on District Dispatch.
by Carrie Russell at March 05, 2014 04:46 PM
Developer Network will be going live with a whole new website this Monday, March 10th!
by hostetls at March 05, 2014 02:56 PM
This is a project we’ve been working on for a while now and it’s exciting to finally be able to share it with all of you. While we can't wait to tell you about all of the bells and whistles, it’s most important that you know that the new site will be right here at the same address and all of the critical information you’ve been using will still be available. Our URL patterns will be changing somewhat and we’ve put re-directs in place to save as many of your bookmarks as possible. Still, you’ll probably want to take some time to explore the new site and make sure you can locate your favorites.
by Shelley Hostetler at March 05, 2014 02:56 PM
The idea that format migration is integral to digital preservation was for a long time reinforced by people's experience of format incompatibility in Microsoft's Office suite. Microsoft's business model used to depend on driving the upgrade cycle by introducing gratuitous forward incompatibility, new versions of the software being set up to write formats that older versions could not render. But what matters for digital preservation is backwards incompatibility; newer versions of the software being unable to render content written by older versions. Six years ago the limits of Microsoft's ability to introduce backwards incompatibility were dramatically illustrated when they tried to remove support for some really old formats.
The reason for this fiasco was that Microsoft greatly over-estimated its ability to impose the costs of migrating old content on their customers, and under-estimated the customers’ ability to resist. Old habits die hard. Microsoft is trying to end support of Windows XP and Office 2003 on April 8 but it isn’t providing cost-effective upgrade paths for what is now Microsoft’s fastest-growing installed base. Joel Hruska writes:
Microsoft has come under serious fire for some significant missteps in this process, including a total lack of actual upgrade options. What Microsoft calls an upgrade involves completely wiping the PC and reinstalling a fresh OS copy on it — or ideally, buying a new device. Microsoft has misjudged how strong its relationship is with consumers and failed to acknowledge its own shortcomings. Not providing an upgrade utility is one example — but so is the general lack of attractive upgrade prices or even the most basic understanding of why users haven't upgraded.
This resistance to change has obvious implications for digital preservation.
by David. (firstname.lastname@example.org) at March 05, 2014 02:00 PM
I had the enormous pleasure on Saturday and Monday of seeing the three plays that make up The Norman Conquests by Alan Ayckbourn, put on by the Soulpepper company here in Toronto. This review of the October 2013 production explains well how well done they all were and what great plays they are. It was more excellent work by Soulpepper; even more enjoyable than usual because seeing three plays in such a short time—two Saturday and one Monday—concentrates and intensifies everything.
Here I note two especially interesting things about the trilogy: the chronology and the fact that Norman is a librarian. I admit that second fact is of limited interest to non-librarians, but after all I myself am a librarian.
The books on stage were perfect. These are in the sitting room; there were Agatha Christies in the dining room.
The three plays all take place over the same weekend with the same six characters, but Table Manners is set in the dining room, Living Together in the sitting room, and Round and Round the Garden in the garden. Each has two acts with two scenes, but the times are staggered, so as you see them—I saw them in that order—the pieces all lock together, and when someone enters a room in one play you realize you saw them leave from another room in another play, or when someone says something offhand in one play you realize they’re covering up an intense experience from another play.
Table Manners
- I.i: The dining room. Saturday evening, 6 pm
- I.ii: The dining room. Sunday morning, 9 am
- II.i: The dining room. Sunday evening, 8 pm
- II.ii: The dining room. Monday morning, 8 am
Living Together
- I.i: The sitting room. Saturday, 6:30 pm
- I.ii: The sitting room. Saturday, 8 pm
- II.i: The sitting room. Sunday, 9 pm
- II.ii: The sitting room. Monday, 8 am
Round and Round the Garden
- I.i: The garden. Saturday, 5:30 pm
- I.ii: The garden. Saturday, 9 pm
- II.i: The garden. Sunday, 11 am
- II.ii: The garden. Monday, 9 am
Round and Round the Garden comes third in the sequence but contains the weekend in time: it begins first, Saturday at 5:30 pm, and ends last, in the garden on Monday morning at 9 am when people are leaving.
Seeing all three, and spending over six hours with the six actors—while sitting in the front row of an arena theatre!—was a marvellous experience.
The Norman Conquests was first produced at the Library Theatre, which at the time was inside the library in Scarborough in Yorkshire.
Ayckbourn’s official web site has a huge amount of material about The Norman Conquests.
One of the characters is a librarian: Norman, played by Albert Schultz. He does it as a great hairy shambling kind of a man, as many male librarians are, and suitably dressed in a cardigan, as all librarians are. There are a few good library-related lines:
From Table Manners:
Norman: The trouble is, I was born in the wrong damn body. Look at me. A gigolo trapped in a haystack. The tragedy of my life. Norman Dewers—gigolo and assistant librarian.
Ruth: Forget it. You couldn’t possibly take Norman away from me. That assumes I own him in the first place. I’ve never done that. I always feel with Norman that I have him on loan from somewhere. Like one of his library books. I’ll get a card one day informing me he’s overdue and there’s a fine to pay on him.
From Living Together:
Sarah: I thought you were in a hurry to go somewhere, Norman.
Norman: Not at all.
Reg: Yes, I thought you said you had a—librarian’s conference.
Norman: It’s been cancelled.
Norman: About ten seconds ago. Due to lack of interest.
Reg: Funny lot these librarians.
Sarah: It’s a bit late to consider his feelings now, isn’t it? Having tried to steal Annie from under his nose.
Norman: I wasn’t stealing her, I was borrowing her. For the weekend.
Sarah: Makes her sound like one of your library books.
Annie: What are you going to tell Ruth?
Norman: What I was going to tell her anyway. I’ve been on a conference.
Annie: Which finished early?
Norman: Something like that. We ran out of things to talk about. What does it matter? She won’t care. She probably thinks I’m in the attic mending the roof.
Annie: I didn’t know Assistant Librarians had conferences.
Norman: Everybody has conferences.
Ruth: You’re supposed to be at work too.
Norman: I was taken ill, haven’t you heard?
Ruth: I’m amazed they keep you on.
Norman: I’m a very good librarian, that’s why. I know where all the dirty bits are in all the books.
From Round and Round the Garden:
Tom: Oh. I thought you said you were staying.
Norman: No, I’m just passing through on my way to East Grinstead.
Tom: Really? Business?
Norman: Yes. International Association of Assistant Librarians Annual Conference.
Tom: Jolly good.
Norman: I was brought up to believe it was very insulting to sleep with your wife or any lady. A gentleman stays eagerly awake. He sleeps at his work. That’s what work’s for. Why do you think they have SILENCE notices in the library? So as not to disturb me in my little nook behind the biography shelves. L–P.
Ruth: They’ll sack you.
Norman: They daren’t. I reorganized the Main Index. When I die, the secret dies with me.
March 05, 2014 03:33 AM
The Institute of Museum and Library Services recently nominated 15 exemplary libraries for National Medals for their service to their communities. In its 20th year, the National Medal is the nation’s highest honor conferred on libraries and museums, and celebrates institutions that make a difference for individuals, families, and communities.
This year’s honorees will come from a variety of library and museum institutions, including public libraries, cultural library centers and multiple county library systems.
The Institute of Museum and Library Services is encouraging those who have visited finalist libraries and museums to share their story on their Facebook page: www.facebook.com/USIMLS.
National Medal nominees include:
- Pima County Public Library (Tucson, Ariz.)
- Los Angeles Public Library (Los Angeles, Calif.)
- Sacramento Public Library (Sacramento, Calif.)
- Hartford Public Library (Hartford, Conn.)
- Otis Library (Norwich, Conn.)
- Athens-Clarke County Library (Athens, Ga.)
- Chicago Public Library (Chicago, Ill.)
- Booth Library (Eastern Illinois University) (Charleston, Ill.)
- Cecil County Public Library (Elkton, Md.)
- Yiddish Book Center (Amherst, Mass.)
- Mid-Continent Public Library (Independence, Mo.)
- Las Vegas-Clark County Library District (Las Vegas, Nev.)
- Octavia Fellin Public Library (Gallup, N.M.)
- Schomburg Center for Research in Black Culture, New York Public Library (New York, N.Y.)
- Bertha Voyer Memorial Library (Honey Grove, Texas)
The post 15 Museums, Libraries Nominated for National Medals appeared first on District Dispatch.
by Jazzy Wright at March 05, 2014 12:22 AM
March 04, 2014
Today, President Barack Obama released his budget request for the 2015 fiscal year. The proposed budget for the Library Services and Technology Act falls $2 million short of the $180.9 million enacted by the U.S. Congress for the 2014 fiscal year. The big hit came to the state program, with slight increases to the set aside for Native Americans and Hawaiians and the National Leadership grants.
(Table: FY 2014 request, FY 2014 enacted, and FY 2015 request figures for Grants to States, Native American/Hawaiian Libraries, National Leadership Grants for Libraries, and the Laura Bush 21st Century Librarian program. View the full chart on the budget cuts from IMLS.)
On a conference call with stakeholders, Institute of Museum and Library Services Director Susan Hildreth discussed the Laura Bush 21st Century grants programs, saying that her agency is working on a National Continuing Education Platform so library employees can continue their education around new services and technologies.
On a disappointing note, the President did not include any resources for school libraries.
Please be on the lookout for an action alert from the Washington Office regarding several “Dear Appropriator” letters that your legislators can sign in support of these programs.
The post Federal library funding cut in proposed budget appeared first on District Dispatch.
by Emily Sheketoff at March 04, 2014 08:21 PM
Since announcing the preview release of 194 Million Open Linked Data Bibliographic Work descriptions from OCLC’s WorldCat last week at the excellent OCLC EMEA Regional Council event in Cape Town, my in-box and Twitter stream have been a little busy with questions about what the team at OCLC are doing.
Instead of keeping the answers within individual email threads, I thought they may be of interest to a wider audience:
Q I don’t see anything that describes the criteria for “workness.”
The definition of “workness” is more the result of several interdependent algorithmic decision processes than of a simple set of criteria. To a certain extent, publishing the results as linked data was the easy (huh!) bit. These definitions and their relationships are the ongoing results of a research process by OCLC Research, in motion for several years, to investigate and benefit from FRBR. You can find more detail behind this research here: http://www.oclc.org/research/activities/frbr.html?urlm=159763
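To give a flavour of what such algorithmic grouping involves, here is a deliberately crude Python sketch of clustering records by a normalised author/title key. It illustrates the general idea only; the sample records are invented, and OCLC's actual FRBR work-set algorithms are far more sophisticated than this.

```python
import re
import unicodedata

def normalize(text):
    """Strip accents, case, punctuation, and spacing so trivial variations collapse."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"[^a-z0-9]", "", text.lower())

def work_key(record):
    """Build a crude author/title clustering key for one bibliographic record."""
    return (normalize(record.get("author", "")), normalize(record.get("title", "")))

# Invented sample records: two manifestations of the same work.
records = [
    {"author": "Tolkien, J. R. R.", "title": "The Hobbit"},
    {"author": "TOLKIEN, J.R.R.", "title": "The hobbit!"},
]

clusters = {}
for rec in records:
    clusters.setdefault(work_key(rec), []).append(rec)

for key, recs in clusters.items():
    print(key, "->", len(recs), "record(s)")
```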
Q Defining what a “work” is has proven next to impossible in the commercial world, how will this be more successful?
Very true; for commercial and/or political reasons, previous initiatives in this direction have often not been very successful. OCLC make no broader claim to the definition of a WorldCat Work, other than that it is the result of applying the FRBR and associated algorithms developed by OCLC Research to the vast collection of bibliographic data contributed, maintained, and shared by the OCLC member libraries and partners.
Q Will there be links to individual ISBN/ISNI records?
- ISBN – ISBNs are attributes of manifestation [in FRBR terms] entities, and as such can be found in the already released WorldCat Linked Data. As each work is linked to its related manifestation entities [by schema:workExample] they are therefore already linked to ISBNs.
- ISNI – ISNI is an identifier for a person, and as such an ISNI URI is a candidate for use in linking Works to other entity types. VIAF URIs are another candidate for Person/Organisation entities and, as we have the data, we will be using them. No final decisions have been made as to which URIs we use, or whether to use multiple URIs for the same relationship. Whether we use ISNI, VIAF, and DBpedia URIs for the same person, or just one and rely on interconnection between the authoritative hubs, is a question still to be settled. (A sketch of following such links from a Work appears below.)
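As an illustration of following those links, the sketch below dereferences a Work URI with content negotiation and lists its workExample targets. The numeric identifier is a placeholder, and the JSON-LD key names are assumptions about how the preview data is shaped, not a documented contract.

```python
import json
import urllib.request

# Illustrative Work URI; the numeric identifier is a made-up placeholder.
work_uri = "http://worldcat.org/entity/work/id/12345"

# Ask for a JSON-LD serialisation via content negotiation.
req = urllib.request.Request(work_uri, headers={"Accept": "application/ld+json"})
with urllib.request.urlopen(req) as resp:
    doc = json.load(resp)

# The exact key depends on the published @context; "workExample" is assumed here.
for node in doc.get("@graph", [doc]):
    examples = node.get("workExample", [])
    if not isinstance(examples, list):
        examples = [examples]
    for ex in examples:
        # Each target is a manifestation entity whose own description carries ISBNs.
        print(ex.get("@id") if isinstance(ex, dict) else ex)
```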
Q Can you say more about how the stable identifiers will be managed as the grouping of records that create a work change?
You correctly identify the issue of maintaining identifiers as work groups split & merge. This is one of the tasks the development team are currently working on as they move towards full release of this data over the coming weeks. As I indicated in my blog post, there is a significant data refresh due and from that point onwards any changes will be handled correctly.
Q Is there a bulk download available?
No, there is no bulk download available. This is a deliberate decision, for several reasons.
Firstly this is Linked Data – its main benefits accrue from its canonical persistent identifiers and the relationships it maintains between other identified entities within a stable, yet changing, web of data. WorldCat.org is a live data set actively maintained and updated by the thousands of member libraries, data partners, and OCLC staff and processes. I would discourage reliance on local storage of this data, as it will rapidly evolve and become out of synchronisation with the source. The whole point and value of persistent identifiers, which you would reference locally, is that they will always dereference to the current version of the data.
Q Where should bugs be reported?
Today, you can either use the comment link from the Linked Data Explorer or report them to email@example.com. We will be building on this as we move towards full release.
Q There appears to be something funky with the way non-existent IDs are handled.
You have spotted a defect! The result of accessing a non-established URI should be no triples returned with that URI as subject. How this is represented will differ between serialisations. You would also expect to receive an HTTP status of 404.
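Given that expected behaviour, a client can treat a 404 as “entity not established”. A minimal sketch, with the same placeholder-URI caveat as above:

```python
import urllib.error
import urllib.request

def entity_exists(uri):
    """Dereference a URI; a non-established entity should come back as a 404."""
    req = urllib.request.Request(uri, headers={"Accept": "text/turtle"})
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # any other status is a real error, not "not found"

# Placeholder identifier, for illustration only.
print(entity_exists("http://worldcat.org/entity/work/id/12345"))
```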
Q It’s wonderful to see that the data is being licensed ODC-BY, but maybe assertions to that effect should be there in the data as well?
The next release of data will be linked to a VoID document providing information, including licensing, for the dataset.
Q How might WorldCat Works intersect with the BIBFRAME model? – these work descriptions could be very useful as a bf:hasAuthority for a bf:Work.
The OCLC team monitor, participate in, and take account of many discussions – BIBFRAME, Schema.org, SchemaBibEx, WikiData, etc. – where there are some obvious synergies in objectives, and differences in approach and/or levels of detail for different audiences. The potential for interconnection of datasets using sameAs, and other authoritative relationships such as you describe, is significant. As the WorldCat data matures and other datasets are published, one would expect initiatives from many quarters to start interlinking bibliographic resources from many sources.
Q Will your team be making use of ISTC?
Again, it is still early for decisions in this area. However, we would not expect to store the ISTC code as a property of Work. ISTC is one of many work-based data sets, from national libraries and others, for which it would be interesting to investigate processes for identifying sameAs relationships.
The answer to the above question stimulated a follow-on question based upon the fact that ISTC Codes are allocated on a language basis. In FRBR terms language of publication is associated with the Expression, not the Work level description. As such therefore you would not expect to find ISTC on a ‘Work’ – My response to this was:
Note that the Works published from WorldCat.org are defined as instances of schema:CreativeWork.
What you say may well be correct for FRBR, but the WorldCat data may not adhere strictly to the FRBR rules and levels. I say ‘may not’ as we are still working on the modelling behind this, and a language-specific Work may become just an example of a more general Work – there again, it may become more Expression-like. There is a balance to be struck between FRBR rules and a wider, non-library, understanding.
Q Which triplestore are you using?
We are not using a triplestore. Already, in this early stage of the journey to publish linked data about the resources within WorldCat, the descriptions of hundreds of millions of entities have been published. There is obvious potential for this to grow to many billions. The initial objective is to reliably publish this data in ways that it is easily consumed, linked to, and available in the de facto linked data serialisations. To achieve this we have put in place a simple, very scalable, flexible infrastructure currently based upon Apache Tomcat serving up individual RDF descriptions stored in Apache HBase (built on top of Apache Hadoop HDFS). No doubt future use cases will emerge, which will build upon this basic yet very valuable publishing of data, and will require additional tools, techniques, and technologies to become part of that infrastructure over time. I know the development team are looking forward to the challenges that the quantity, variety, and always changing nature of data within WorldCat will pose for some of the traditional [for smaller data sets] answers to such needs.
As an aside, you may be interested to know that significant use is made of the map/reduce capabilities of Apache Hadoop in the processing of data extracted from bibliographic records, the identification of entities within that data, and the creation of the RDF descriptions. I think it is safe to say that the creation and publication of this data would not have been feasible without Hadoop being part of the OCLC architecture.
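To make the shape of such a job concrete, here is a toy map/reduce-style pipeline in plain Python. This is not OCLC's code: the records are invented, the “shuffle” is simulated with a dictionary, and the output is only a loosely RDF-shaped description per work.

```python
from collections import defaultdict

def map_phase(record):
    """Mapper: emit a (work key, record) pair for each bibliographic record."""
    yield (record["author"].lower(), record["title"].lower()), record

def reduce_phase(key, group):
    """Reducer: fold all records sharing a key into one work description."""
    author, title = key
    isbns = sorted({r["isbn"] for r in group if "isbn" in r})
    return {"@type": "CreativeWork", "creator": author, "name": title,
            "workExample": ["urn:isbn:" + i for i in isbns]}

# Invented input: two manifestations of one work.
records = [
    {"author": "Feynman, R.", "title": "Lectures on Physics", "isbn": "0465023827"},
    {"author": "feynman, r.", "title": "lectures on physics", "isbn": "0465023828"},
]

shuffled = defaultdict(list)  # stands in for Hadoop's shuffle/sort step
for rec in records:
    for key, value in map_phase(rec):
        shuffled[key].append(value)

for key, group in shuffled.items():
    print(reduce_phase(key, group))
```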
Hopefully this background will help those interested in the process. When we move from preview to a fuller release I expect to see associated documentation and background information appear.
by Richard Wallis at March 04, 2014 02:45 PM
This is a guest post from Andreas Von Gunten, founder of the Creative Commons-based publishing house Buch & Netz and editor of the brand new “The 2013 Open Reader – Stories and articles inspired by OKCon2013”.
We all remember very well the fantastic OKCon / Open Knowledge Conference in Geneva last year. There were so many interesting and inspiring workshops from open data enthusiasts from all over the world, and it was a great honor for me to be able to publish an eBook and an online book about the themes and issues from the OKCon 2013.
Now «The 2013 Open Reader – Stories and articles inspired by OKCon2013: Open Data – Broad, Deep, Connected» is available for free until 16th March 2014. It includes blogposts, white papers, slides, journal articles and other types of texts from 45 speakers, workshop coordinators of this event and other contributors. Grab your copy now or read the content online at: http://books.buchundnetz.com/the2013openreader/
The eBook and its content is licensed under a CC-BY 3.0 license, so feel free to distribute the files and the links as you like.
by Guest at March 04, 2014 12:48 PM
March 03, 2014
This semester, I’m teaching a new course I developed for San Jose State’s MLIS program entitled “Embedded Librarians/Embedded Libraries: Embedding the Library into the Fabric of Higher Education.” It’s been a pleasure so far because the students are so ridiculously smart, insightful, and engaged that I can’t help but be excited about the future of our profession. One of my students, who interviewed a disciplinary faculty member and subject librarian for a project, wrote about how students in a certain science discipline had difficulty getting used to research and information literacy since in the first few years their coursework is so “procedural.” That really resonated with me.
I see students all the time asking us to basically make research like procedural coursework and more black-and-white than it is. And sometimes we indulge them. We show them how to click to limit to peer-reviewed journals, doing them no favors, because the world doesn’t have a button you can click to filter out the not-so-good. We sometimes focus too much on finding sources and not enough on what value sources actually provide (or should provide) in research. We provide students with rules for judging sources, when sometimes, the sources that get judged as poor using something like the C.R.A.P. Test are the exact ones they should be using. We (well, maybe instructors more than librarians) focus on scaring the crap out of students about plagiarism and, as a result, students don’t understand why they should provide attribution other than to not get thrown out of school.
I feel like the current Information Literacy Competency Standards for Higher Education were an attempt to simplify and proceduralize something that is so much more complex (and I don’t blame anyone for that — it’s in our nature to try to make things simpler and more concrete). Start at A and get to Z, and you’re good to go, friend! But there’s so much secret sauce of information literacy success that simply isn’t a part of the current Standards. How much of being good at research is about being persistent? Tolerating frustration? Asking for help? Being curious? Looking at things with a critical eye? And then there are the things that are so hard to learn, but once you’ve internalized them, they seem the most obvious things in the world and improve your approach to research immeasurably. The idea that scholarship is a conversation and when you write a research paper, you are engaging in a conversation with those scholars who came before you. That the idea of “good” and “bad” sources is totally contextual, and what is good for answering one research question may not be good for answering another. Or the idea that information can be misrepresented in any format (from the blog post to the peer-reviewed journal article) and we need to be critical consumers of everything we read/see/hear. Or, even more disturbing, that what we know as true is constantly changing as it is held up to scrutiny and experimentation.
But teaching these things? So much more difficult, more time consuming and less satisfying for the student in the short-term. On the other hand, without getting over the hump of a threshold concept, can we say someone is truly information literate? And once they get over the hump, their perspective is irrevocably changed for the better. It’s like when I internalized the notion that assessment was about learning and not accountability. My cynicism around assessment melted away and I was able to design assessment tools that meaningfully informed my teaching. The shift in my thinking and awareness was incredible.
I got very excited reading the partial draft Framework for Information Literacy for Higher Education, because it embraced so much of what I and many librarians I know have been thinking about around instruction. They are built for the increasingly complex information environment we live in:
Greater need for sense-making and metacognition in a fragmented, complex information environment requires the ability to understand and navigate this environment holistically, focusing upon intersections. These intersections may be between disciplines, between academic major and employment, between sets of projects, or between academic pursuits and community engagement, to name just a few. All of these intersections are underpinned by the need to engage with information and the communication of information. To do so effectively, students must understand the intricate connections between knowledge, abilities, and critical dispositions that will allow them to thrive.
This makes it so clear that our current standards are woefully inaccurate as a model for informing information literacy instruction and for defining the information literate individual (as if that person could even be defined). Here is the proposed definition for information literacy in the new framework:
Information literacy combines a repertoire of abilities, practices, and dispositions focused on expanding one’s understanding of the information ecosystem, with the proficiencies of finding, using and analyzing information, scholarship, and data to answer questions, develop new ones, and create new knowledge, through ethical participation in communities of learning and scholarship.
While it is a bit less approachable in the way it’s written, I do appreciate the recognition in the proposed new definition that we’re talking about more than just skills. I also really like how it talks about using and analyzing information to answer questions (like “where should I go to college?” or “what cell phone should I buy?” or “who should I vote for?”) and that sometimes this happens through participating in communities (and learning from human sources of information in our social networks). That dovetails nicely with the idea of connectivism, which is a theory I’ve really embraced since I read about it in 2005-2006. The recognition of collaboration, participation, creation, and more than just contributing to research papers is very welcome.
I have the great fortune of working within shouting distance of two people whose work had a huge impact on the new draft. Amy Hofer’s fingerprints (along with Lori Townsend and Kori Brunetti) are all over the standards. Their research on threshold concepts in information literacy has made an indelible mark on the profession and our thinking about teaching information literacy (see “Troublesome concepts and information literacy: investigating threshold concepts for IL instruction” and “Threshold Concepts and Information Literacy”). Bob Schroeder has written, with Elyssa Stern Cahoy, on affective learning outcomes in information literacy, getting us all thinking about how information literacy is not just about skills, but about dispositions and feelings (“Valuing Information Literacy: Affective Learning and the ACRL Standards” and “Embedding Affective Learning Outcomes in Library Instruction”). Bob turned me on to the AASL standards, which included a lot of those great dispositions that now are part of the draft framework. Both Amy and Bob have had such an impact on my thinking about instruction and it’s nice to see that their ideas are also impacting thinking about information literacy nationally!
I think Threshold Concepts will force conversations with disciplinary faculty because no threshold concept can be taught in a one-shot. It requires re-emphasis, practice, and reflection. This has to happen in a partnership, much more so than when the focus is on teaching something that feels like our sole domain. (As an aside, I think the boundedness of threshold concepts as they were originally conceived of doesn’t really work in information literacy as it is inherently interdisciplinary.) Of course the problem is that those faculty who are still asking us to “teach JSTOR” or “teach APA” or “teach Boolean operators” will probably not be open to a switch to focusing on “research as inquiry” and “format as process.” However, there are plenty of faculty — those with whom we have strong relationships, who trust us and see us as partners — who will be willing to go down the rabbit hole of threshold concepts with us. However, I do question this statement:
A vital benefit in using threshold concepts as one of the underpinnings for the new Framework is the potential for collaboration among disciplinary faculty, librarians, teaching and learning center staff, and others. Creating a community of conversations about this enlarged understanding should create conditions for more collaboration, more innovative course designs, more action research focused on information literacy, and a more inclusive consideration of learning within and beyond the classroom.
Do I think the new framework will have an impact on disciplinary faculty? No. Just as the current Standards didn’t at most institutions. The librarians who read and believed in the Standards did that, but it wasn’t the Standards. I don’t think the new framework will create more collaboration unless librarians work towards greater collaboration and disciplinary faculty are game for it. I feel like we’re as likely to have good collaboration with disciplinary faculty with the new framework and standards as with the old, unless they inspire us (librarians) to pursue deeper partnerships. The framework simply frames and guides the conversation on our side of things.
But I do really love that this framework emphasizes the fact that information literacy instruction is not (and cannot be) the sole domain of librarians. I have always resisted the notion that we are the only people who can and do teach this, and I think embracing this idea and focusing more of our energies on supporting disciplinary faculty in teaching these skills, dispositions, etc. is vital in the current environment.
When I think about assessing these things, yikes! It’s easy to see whether or not a student correctly provided attribution or used quality sources. How do you measure metacognition? How do you know when a student has made it over the hill of a threshold concept? Even looking at authentic student work — their research papers and other products of research — may not tell you this. A student can do a beautiful job on a paper by mimicking good papers s/he has seen before without ever actually internalizing any of the larger lessons. I do like that this draft framework provides ideas for self-assessment and assignments, but I feel like those are actually activities for teaching/learning the threshold concepts. If a student can successfully “conduct an investigation of a particular topic from its treatment in the popular media, and then trace its origin in conversations among scholars and researchers” does that really mean that they understand that scholarship is a conversation? They might, and they might not. But it’s a great tool to try and teach that particular threshold concept.
A small gripe I have with the Framework: I have never been a big fan of transliteracy or metaliteracy because I believe that all of the things covered under those tents fit into information literacy already. I’ve never understood why information literacy itself doesn’t include “new roles and responsibilities brought about by emerging technologies and collaborative communities” or how information literacy doesn’t empower “learners to participate in interactive information environments, equipped with the ability to continuously reflect, change, and contribute as critical thinkers.” In fact, I think the latter statement is exactly the goal of information literacy: to empower people to create, make decisions, etc. If information literacy isn’t about helping people to see themselves as producers of knowledge, then I don’t know why we do what we do. A lot of the stuff listed under metaliteracy learning objectives in the draft, such as “demonstrate the ability to think critically in context” and “compare the unique attributes of different information formats… and have the ability to use effectively and to cite information for the development of original content” seems like it was already part of information literacy in the first place. I agree with Donna Witek that “metaliteracy should [not] be elevated by name to the extent that it is in the new draft ACRL Framework for Information Literacy for Higher Education.” Threshold concepts — totally new and different way of looking at infolit. Dispositions — totally new to ACRL at least. Metaliteracy — kinda what we’ve already been doing.
I think the framework on the whole is a major change, and while a welcome change, it may be a lot for people to swallow. Plenty of librarians have never even heard of threshold concepts. But I love that we, as a profession, are learning and growing and improving our own teaching skills and approaches and ways of thinking. Looking at the line from the sort of “BI model” to what we see here — from a focus on tools to skills to dispositions and sense-making — it’s a beautiful thing. And, like Troy Swanson, I hope it’s never seen as completed, but is constantly improved (annually? don’t hate me people on the committee!) based on feedback and new research. Our information environment is changing rapidly. Our understanding of our user’s needs is changing. Our thinking about learning is changing. Maybe incremental changes would make more sense than such jarring alterations every 14 years.
Image credit: 北京颐和园的高梁桥。 Gaoliang Bridge of The Summer Palace. by Hennessy
by Meredith Farkas at March 03, 2014 10:33 PM
Sign up here for monthly updates to your inbox.
What a month! February may be the shortest month (at least, for those using the Gregorian calendar), but we’ve sure made the most of it. It seems to be the month of “the launch”: the campaign to Stop Secret Contracts; OKFestival’s website, ticket sales and Call for Proposals; Open Data Day 2014; Brazil and Spain as the two newest Chapters; a revamped Public Domain Review; a local City Census to complement the Country Census and resulting Index; and the Impact Stories competition for the Partnership for Open Data! Also, Open Knowledge Central published the results of the Community Survey taken at the end of 2013 (huge thanks to all of you who contributed) and we’re digesting them to learn how we can support the amazing Open communities better.
February is also known for St Valentine’s Day… If you are craving some romance in March, have a look at the ‘little book of love’, celebrated by the Public Domain Review.
Like the sound of what we’re doing? The Open Knowledge Foundation is a not-for-profit organisation – all our community services are provided openly and for free. We rely on the generosity of our institutional and individual supporters. Please visit okfn.org/support to find out more about becoming an Open Knowledge Foundation supporter.
OKFestival 2014 launched
We’re very excited to announce the Open Knowledge Festival 2014! This global, inclusive, and participatory event is taking place July 15th – 17th in Berlin, Germany.
With the main themes of Knowledge, Tools and Society – the three main levers of change – this will be a platform for the change Open Knowledge is making around the world. As for what the content will be, that is up to you! This will be a crowd-sourced event, built by the community. Visit okfestival.org/programme/ to see what sort of proposals we are looking for.
Early bird tickets are now available – get yours at okfestival.org/tickets/
Stop Secret Contracts
This month saw the launch of our new global campaign, Stop Secret Contracts. Together with over 30 civil society groups around the world, we are calling on world leaders to end secrecy in contracting.
Millions of dollars of public money are lost every year to fraud, corruption and lining the pockets of unaccountable corporations. Citizens have the right to know who is doing business with their governments and on what terms. Transparency in government contracting is crucial to democracy.
Sign up to the campaign to Stop Secret Contracts now.
Open Data Day 2014
This year’s Open Data Day was the biggest yet, with over 190 events taking place around the world. The global network gathered in person and remotely, with events from Nepal to Egypt, looking at everything from local government spending, to flood data, to mashing up public domain content into cool videos.
Lots of stories are reported on the Open Knowledge Foundation Community Stories Tumblr, and there’s a round up post on the blog.
We are proud to support Open Data Day, which has fast become a key date in the information activist calendar. The diversity of events produced across the world is a fantastic expression of the vibrant international movement which is building for open data.
New Chapters welcomed
The Open Knowledge ‘official’ network continues to grow, welcoming both Brazil and Spain as Chapters. They join Austria, Belgium, Finland, Germany, Greece and Switzerland in the group of organisations under the umbrella of the Open Knowledge Foundation, making a real difference for Open Data and Open Knowledge in their areas and world-wide.
You can read more about the formalisation of Spain and Brazil, as well as what a Chapter is and how to become one, on our blog and Local Groups pages.
How do Open Data and Open Knowledge affect you? This month, the Partnership for Open Data launched the Impact Stories Competition.
Prizes are available – 1000 USD for the winner, and 500 USD each for the two runners-up – submit your story now (before the 24th March) to be in with a chance of winning.
Check out the blog post and website for more details, and share your stories with us.
by Theodora Middleton at March 03, 2014 10:24 AM
March 02, 2014
I’m happy to report that I finally completed work on the latest MarcEdit update. This change provides updates specifically to the RDA Helper and MarcEdit’s implementation/interaction with the OCLC WorldCat Metadata API. The full list of changes can be found below.
- RDA Helper: Modified the way the 260/264 conversion occurs, to bring it more in line with current practice and examples. This includes retaining the 264$c and creating a second 264\4 when the program can determine that the data defined in the 260$c is a copyright statement. (A rough sketch of this idea follows this list.)
- RDA Helper: Added a qualifier option for the 015/020/024/027. You can turn this on or off (default is off)
- RDA Helper: I’ve updated some of the abbreviations
- RDA Helper: Fixed a bug in the 260/264 processing that didn’t always clean up superfluous punctuation during the translation process.
- OCLC Integration: OCLC Search – I’ve added the ability to search by the OCLC number, ISSN, and ISBN indexes, as a single or Boolean search.
- OCLC Integration: In the configuration area, I’ve added some additional error checking.
- OCLC Integration: In the configuration and batch records update section, I’ve added an option to Lookup your Holdings Codes. This will show all your institution’s valid holdings codes (something that folks have sometimes had trouble finding). This makes use of the OCLC Metadata API, specifically the Holdings retrieval code.
- OCLC Integration: I’ve added the holdings retrieval code to the OCLC API Library. This code has been synced to github.
- OCLC Integration: Local Bibliographic Records Support. This is what I’m still working on. The program will allow you, on search, to choose to download a local bib record (rather than the OCLC bib record) if one is available. This gives users who are using OCLC WMS or have Local Bibliographic Records the ability to create, update, and delete these records.
- OCLC Integration: I’ve added the local bibliographic data records code to the OCLC API Library. This code has been updated to github.
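To make the first item above concrete, here is a toy Python sketch of the general 260-to-264 idea (invented data structures, not MarcEdit's actual implementation). It keeps the 264 $c as-is and adds a second 264 with second indicator 4 when $c looks like a copyright statement.

```python
import re

def convert_260_to_264(subfields):
    """Toy 260 -> 264 conversion: emit a publication 264 (second indicator 1),
    plus a copyright 264 (second indicator 4) when $c holds a copyright date."""
    fields = [("264", " 1", dict(subfields))]  # retain $c as-is
    match = re.search(r"(?:\u00a9|\bc)\s*(\d{4})", subfields.get("c", ""))
    if match:
        fields.append(("264", " 4", {"c": "\u00a9" + match.group(1)}))
    return fields

for field in convert_260_to_264({"a": "New York :", "b": "Basic Books,", "c": "c2011."}):
    print(field)
```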
If you have MarcEdit, you can download the program via the automated update tool, or you can download directly from:
by reeset at March 02, 2014 09:28 PM
Exactly 20 years ago, this was my first comic strip in the ANS, Algemeen Nijmeegs Studentenblad. My brother knew the paper’s editor-in-chief, Arjan Broers. Apparently a cartoonist had just left and they needed a new one. While drawing I was
by hochstenbach at March 02, 2014 11:01 AM
March 01, 2014
First there was the bad engagement photos tumblr, but now it’s been one-upped by this crazy Russian wedding photos LiveJournal.
by Casey Bisson at March 01, 2014 07:23 PM
The Feynman Lectures on Physics was one of my favorite textbooks in college. It wasn't the assigned textbook, it was recommended reading. I think the reason it doesn't work as a textbook is that every chapter is so deep that students would get sucked so far into every topic that they would never finish the course. It's the sort of book that transforms your life and way of thinking about the physical world. When I started Unglue.it, The Feynman Lectures was one of the first books I investigated for ungluing.
My friends at Caltech informed me that the rights situation with the Feynman Lectures was exceedingly complicated, and it would be a cold day in hell before the Feynman Lectures would be free to the world in digital form. It seems that Caltech and the book publishing world had made an awful hash of the rights, with print rights being owned by Pearson, and the audiovisual rights being owned by competing publisher Perseus. Heroic efforts by Caltech lawyer Adam Cochrane and some dedicated physicists and educators resulted in the untangling of rights, leading to a revised edition available through Perseus imprint Basic Books.
And last year, a miracle happened. An authorized free digital version of the lectures appeared on the web! There is sanity in the world! The Feynman Lectures had been unglued!
Vikram Verma, a software developer in Singapore, wanted to be able to read the lectures on his kindle. Although PDF versions can be purchased at $40 per volume, no versions are yet available in Kindle or EPUB formats. Since the digital format used by kindle is just a simplified version of html, the transformation of web pages to an ebook file is purely mechanical. So Verma proceeded to write a script to do the mechanical transformation – he accomplished the transformation in only 136 lines of ruby code, and published the script as a repository on Github.
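The mechanics really are that simple. Here is a toy sketch, unrelated to Verma's actual script, of the kind of markup simplification such a transformation performs:

```python
import re

def simplify_html(html):
    """Crude illustration: drop scripts, styles, and tag attributes, leaving
    the simplified markup that reflowable ebook formats are built from."""
    html = re.sub(r"(?is)<(script|style).*?</\1>", "", html)
    html = re.sub(r"(?s)<(\w+)[^>]*>", r"<\1>", html)
    return html

page = '<html><head><style>p{}</style></head><body><p class="x">Hello</p></body></html>'
print(simplify_html(page))  # -> <html><head></head><body><p>Hello</p></body></html>
```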
Despite the fact that nothing remotely belonging to Perseus or Caltech had been published in Verma's repository, it seems that Perseus and/or Caltech was not happy that people could use Verma's code to easily make ebook files from the website. So they hauled out the favorite weapon of copyright trolls everywhere: a DMCA takedown.
Luckily, Github has a policy of publishing every DMCA takedown notice it receives, which is how I found out about Perseus' action, and Verma's counternotice. Perseus had 10 days to respond to the counter-notice and since they failed to do so, Github has re-opened the repository.
In the meantime, the Feynman Lectures website has taken some steps to break Verma's script. For example, instead of a link to
http://www.feynmanlectures.caltech.edu/II_28.html (my favorite chapter), the table of contents now has a link to
Michael Gottlieb, the editor of The Feynman Lectures on Physics New Millennium Edition added this issue to the repo:
The online edition of The Feynman Lectures Website posted at www.feynmanlectures.caltech.edu and www.feynmanlectures.info is free-to-read online. However, it is under copyright. The copyright notice can be found on every page: it is in the footer that your script strips out! The online edition of FLP can not be downloaded, copied or transferred for any purpose (other than reading online) without the written consent of the copyright holders (The California Institute of Technology, Michael A. Gottlieb, and Rudolf Pfeiffer), or their licensees (Basic Books). Every one of you is violating my copyright by running the flp.mobi script. Furthermore Github is committing contributory infringement by hosting your activities on their website. A lot of hard work and money and time went into making the online edition of FLP. It is a gift to the world - one that I personally put a great deal of effort into, and I feel you are abusing it. We posted it to benefit the many bright young people around the world who previously had no access to FLP for economic or other reasons. It isn't there to provide a source of personal copies for a bunch of programmers who can easily afford to buy the books and ebooks!! Let me tell you something: Rudi Pfeiffer and I, who have worked on FLP as unpaid volunteers for about a decade, make no money from the sale of the printed books. We earn something only on the electronic editions (though, of course, not the HTML edition you are raping, to which we give anyone access for free!), and we are planning to make MOBI editions of FLP - we are working on one right now. By publishing the flp.mobi script you are essentially taking bread out of my mouth and Rudi's, a retired guy, and a schoolteacher. Proud of yourselves? That's all I have to say personally. Github has received DMCA takedown notices and if this script doesn't come down pretty soon they (and very possibly you) might be hearing from some lawyers. As of Monday, this matter is in the hands of Perseus's Domestic Rights Department and Caltech's Office of The General Counsel.
Michael A. Gottlieb
Editor, The Feynman Lectures on Physics New Millennium Edition
(Note: Gottlieb's description of the website copyright notice is inaccurate; it says nothing about "downloaded, copied or transferred for any purpose".)
This is kind of sad. Here Caltech did the right and noble thing and made the Feynman Lectures free as a website. That they can make money from the work via sales of print and other versions is great. But having done that, trying to control what people do with the free digital version (other than sell it) is a hopeless endeavor, and they should just stop.
I was wrong. The Feynman Lectures hasn't been unglued.
Update, March 3: Verma made a one-line change to the script to un-break it. But it's not a polite script, so don't all go and run it. Better to ask Caltech to use the script to make epubs and mobi's for sale; I would certainly pay for my DRM-free copy!
Update, March 4: Gottlieb e-mailed me to say that Perseus didn't respond to the counter-notice because Github's email notice went to a spam filter, and that more takedowns would be coming. He seemed to think that I am one of the flp.mobi developers and warned that I have put myself "in a precarious legal position". To be clear, I am not involved in the development or publication of flp.mobi. I hope its existence is not used as a pretext to take down or lock down the FLP website. Also, high-quality epub and mobi are on the way!
Update, March 7: Verma e-mailed me to say he is voluntarily taking down his repo:
I'm taking down my copy of the repository on Monday morning, in worry its continued availability will lead Caltech to discontinue free online access to FLP. You're each welcome to adopt maintainership if you prefer, though I would rather if you did not.
Techdirt has a post and commentary.
Update, March 10: Verma's repo is now history, but forks of it remain in 15 places, including, bizarrely, Gottlieb's own Github page.
by Eric (firstname.lastname@example.org) at March 01, 2014 01:01 PM
February 28, 2014
I've spent a large part of February becoming acquainted with Open Access ebook publishers. And the one thing that troubles me is that too many of them are not putting honesty first, because existing distribution channels do not reward forthrightness in Open Access publishers; in fact, the channels actively discourage it.
Let's take Amazon, for example. They don't like free ebooks, because there's no money in it for them. If you're a publisher and you want your ebook to be free for people to load onto their kindles, Amazon will charge you for the privilege. They rationalize that they're paying for a separate wireless network, "Whispernet", so it's only fair to assess "delivery charges" to free ebook publishers. If you use their 70% royalty option, the delivery charge is 15 cents per MB of data, and the minimum price you can set is 99 cents. The only way to get Amazon to deliver your ebook for free is to select their 35% royalty option, and then invoke this "matching Competitor Pricing" clause:
From time to time your book may be made available through other sales channels as part of a free promotion. It is important that Digital Books made available through the Program have promotions that are on par with free promotions of the same book in another sales channel. Therefore, if your Digital Book is available through another sales channel for free, we may also make it available for free. If we match a free promotion of your Digital Book somewhere else, your Royalty during that promotion will be zero. (Unlike under the 70% Royalty Option, if we match a price for your Digital Book that is above zero, it won't change the calculation of your Royalties indicated in C. above.)
Apple, Kobo, and Google are much happier to set prices to zero, because they make some money on hardware sales or advertising, so you can get Amazon to give your ebook away for free by getting people to report your zero price on Apple, Kobo, and Google.
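To put rough numbers on Amazon's arithmetic above, here is a quick sketch; the 15-cents-per-MB delivery charge and 99-cent minimum come from the Kindle terms described above, while the 5 MB file size is a made-up example:

    # Back-of-the-envelope Kindle royalty math, using the figures quoted above.
    size_mb = 5.0                       # hypothetical ebook file size
    list_price = 0.99                   # minimum price under the 70% royalty option
    delivery_charge = 0.15 * size_mb    # Amazon's per-MB "delivery" fee
    royalty_70 = 0.70 * (list_price - delivery_charge)
    print(f"70% option at $0.99: royalty = ${royalty_70:.2f}")   # about $0.17 per sale

    # The 70% option never allows a $0.00 price; the only path to "free" is the
    # 35% option plus the price-matching clause, under which the royalty is zero:
    royalty_35_matched = 0.35 * 0.00
    print(f"35% option, matched to free: royalty = ${royalty_35_matched:.2f}")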
So just to get your ebook to be free on Kindle, you're forced to be incompletely honest with your customers and distributors.
But Amazon creates a great temptation. Why not use the suckers paying on Kindle to subsidize the free availability for those smart users who come to your website? Isn't it convenience that these people are happily paying for?
And libraries are another temptation. They'll pay for the convenience of getting your ebook through their preferred platform, OverDrive or whatever, even as you offer the book for free to users at all the libraries that don't pay for your ebook. But would they still buy if they knew they could get the ebook for free? Maybe you shouldn't ask questions when you don't want to know the answer.
So here's my simple, unproven postulate: in the long run, full disclosure about pricing and an honest relationship with readers will be in the best, mutual interests of authors, publishers, readers, and libraries. And customers will prefer a distribution channel that enables that honesty.
by Eric (email@example.com) at February 28, 2014 11:14 PM
In a desire to make our APIs faster, easier to use, and more flexible, we will be adjusting the Invoice and Budget schemas for the WMS Acquisitions API in the upcoming release, currently scheduled for 6 April.
by Shelley Hostetler at February 28, 2014 09:58 PM
On March 6, the American Association of School Librarians will offer a free webinar on student loans. This session has limited space, so please register quickly if interested.
Have questions about loan forgiveness for school librarians? Register for “Federal Student Loan Forgiveness and Cancellation Benefits,” a webinar that will provide information on loan forgiveness and cancellation benefits for school librarians with federal student loans. Webinar leaders will discuss Perkins Loan cancellations and Direct Loan Public Service Loan Forgiveness. Attendees will have an opportunity to ask questions at the end of the presentation.
Presenters include Ian Foss, federal student aid program specialist for the U.S. Department of Education, and Brian Smith, post-secondary education program specialist for the Federal Family Education Loan Program.
Date: Thursday, March 6, 2014
Time: 4:00 p.m. EST (1:00 p.m. PST)
Register for the free event
A seat in the live webinar is guaranteed to the first 100 attendees. All advance registrants will receive a link to the webinar archive after the event.
The post Free informative webinar for school librarians on loan forgiveness appeared first on District Dispatch.
by Emily Sheketoff at February 28, 2014 07:27 PM
I wanted to let everyone know that we have just open sourced the Drupal-based Library DIY infrastructure. You can find it at
We have documentation on how to get it installed on a server, how to get started using the system, and how to add and organize content. Unfortunately, we do not have the staff time to provide tech support for Library DIY, but I'd be happy to answer questions about how we developed our information architecture and content.
Here is a list of all of the PSU Library’s awesome open source projects. We’ve got some really talented and driven people working here at the library, and the proof is in all of these projects.
by Meredith Farkas at February 28, 2014 04:57 PM
Pen and ink on Bristol ATC.
by John at February 28, 2014 03:03 PM
This goes out to all you paranoid netizens out there, and if you're not one, you should be…
As a follow-up to my last post on moving off Chrome and back to Firefox for privacy and security reasons, I wanted to document that I gave Firefox Sync a closer look.
Mozilla, the folks who develop Firefox, have a very detailed information page on Firefox Sync, but to sum up, this feature allows one to share add-ons, bookmarks, passwords, preferences, history and tabs across all your computers and other devices.
Double-plus-good: you can decide what to sync and what not to. Because I'm trying to be extra careful with my data, I opted for syncing only my add-ons, bookmarks and preferences. One important note on syncing add-ons: it will install your add-ons across your devices, but not necessarily configure them, so you might have to do that part manually.
If you opt to sync your history, it will only go back 60 days.
Reading over the security details of Firefox Sync, it seems like you're in pretty good hands, since sync uses an encryption key. Passwords and history go beyond my personal tolerance threshold, but they are likely secure enough for most folks. My rule is to assume that hackers will access my sync data and ask: what can I live with leaking out to the public?
Bookmarks? I guess so.
History? Not really.
Passwords? Are you kidding?
When I set up sync, I also made Firefox my default browser on my phone. I've found no problems with it yet, and it's nice to know that I'm surfing as privately on Android as on OS X.
by mryanhess at February 28, 2014 03:00 PM
This past Saturday was Open Data Day across the world. More than 190 events took place around the globe, and many of these were organized by Local Groups of the Open Knowledge Foundation. In this summary we will be highlighting some of these great events (see also our blog post leading up to Open Data Day and our dedicated Open Data Day overview page).
In Ireland they worked on 4 open data and civic projects. Around 70 people – data wranglers, coders, activists, civil society representatives and interested citizens – volunteered their time and participated actively in the different projects, as well as networking, sharing ideas and enjoying great food! In Egypt lots of participants joined from around the country and collaborated online and offline, and the event garnered lots of attention, including in the national media and across the social media space. The highlight was a couple of supporting tweets from Egypt's Minister of Communications and Information Technology, Atef Helmy.
Our friends in India had great Open Data Day success as well. Day-long events included hackathons, webinars, the opening up of datasets, the making of data visualizations and many other such activities.
The Nepal Local Group also organized a series of activities, including talks and various sprints – among them a wiki school and Mozilla webmaking – many of which are summarised in this blog post, this photo gallery and this video.
In Russia and Belarus they had several events in different cities, among others Minsk, Perm and Moscow – collaborating with, among others, the local OpenStreetMap community and the open access project CyberLeninka. In Belgium they focused on making a pre-Open Data Day event. The result was a full topic stream at the Data Days conference in Belgium, titled 'Open Belgium', and it was a great success. They gathered over 180 data experts (the maximum capacity of the venue), including local and national policy makers and even visitors from other countries.
In Scandinavia several activities took place. In Sweden they used the occasion to officially launch the Open Knowledge Foundation Local Group and put out a press release. Iceland did a hackathon, Finland hosted an Open Data Brunch, and in Denmark they held a grand event with 4 different hackathons and workshops, where around 60 people – scientists, artists, data wranglers, coders, activists, data providers and interested citizens – participated. Some worked on making videos from openly licensed cultural heritage content and showcased the results in the evening at a big Bring Your Own Beamer event in downtown Copenhagen. During the day the 4 countries even had a video hangout to share stories and connect over shared enthusiasm about open knowledge!
OpenSpending and Local Open Data Census sprints
Some groups engaged in some of the global Open Data Day topics of the Open Knowledge Foundation network. One of those was the OpenSpending project, where for instance our group in Burkina Faso dug into public expenses. Open Knowledge Foundation Japan passed another milestone and have now added more than 250 datasets on OpenSpending. In London, transactional spending data from the London borough of Lewisham was published by participants – and in Spain they visualised the city of Vigo.
Many groups participated in the global Local Open Data Day sprint. Among the most active was the United States, where the sprint was organized in collaboration with CodeAcross and the Sunlight Foundation – and the result was the data mapping of over 20 cities across the nation. Our Greek and German groups also did an amazing job, mapping an impressive 10 and 11 cities respectively – see the photo gallery from Greece here. Germany additionally worked on all kinds of other projects and even shot a little video.
The twittersphere was also highly active all around the globe. Congregating around hashtags such as #ODD14 and #ODD2014, thousands joined in, either to mention what they were doing or to comment on the great work of others. We've highlighted some of the best tweets here.
All in all an amazing day that truly highlighted the breadth and depth of the global open data community. We can hardly wait for Open Data Day 2015!
by Christian Villum at February 28, 2014 02:46 PM
This is a guest post by Ranjit Goswami, Dean (Academics) and (Officiating) Director of Institute of Management Technology (IMT), Nagpur, India. Ranjit also volunteers as one of the Indian Country Editors for the Open Data Census.
Developing nations, India more than most, increasingly face a challenge in prioritizing their goals. One thing that becomes increasingly relevant in this context, in the present age of open knowledge, is the role of subscription journals in the dissemination and diffusion of knowledge in a developing society. Young Aaron Swartz from Harvard made an effort to change this, an effort that cost him his life; most developed nations have realized that research funded by taxpayers' money should be made freely available to those taxpayers, but awareness of these issues is at a quite pathetic level in India – both at the policy level and among members of the academic community.
Before one looks at the problem, a contextual understanding is needed. Today, a lot of research is done globally, including some of it in India, and its importance in transforming nations and societies is increasingly getting its due recognition. The quantum of original, application-oriented research applicable specifically to the developing world is a small part of overall global research. Some of it is done locally in India too, in spite of two obvious constraints developing nations face: (1) lack of funds, and (2) lack of capability and/or capacity.
Tax-funded research should be freely available
This article argues that the outcomes of research done in India with Indian taxpayers' money should be freely available to all Indians, for better diffusion. Unfortunately, the present practice is quite the opposite.
The lack of diffusion of knowledge becomes evident in the absence of any planned effort to make research done in the local context available on open platforms. Within the Indian academic community, thanks to an older mindset in which research score and importance are awarded only for publishing papers in journals – often even journals of questionable quality – faculty members are encouraged to publish in subscription journals. Open access journals are considered untouchables. Faculty members mostly do not keep a version of the publication freely accessible – be it on their own institute's website or in other formats online. More than 99% of Indian higher educational institutes have no open-access research content on their websites.
Simultaneously, a lot of academic scams get reported, many of them from India, as measuring research contribution is a difficult task. Faculty members often fall prey to the short-cuts of their institute's research policy in this age of mushrooming journals.
Facing academic challenges
India, in its journey to an open knowledge society, faces diverse academic challenges. Experienced faculty members feel that making their course outlines available in the public domain would lead to others copying from them, whereas younger faculty members see subscription-journal publishing as the only way to build a CV. The common ill-founded perception is that top journals will not accept your paper if you make a version of it freely available. All of the above is counterproductive to knowledge diffusion in a poor country like India. The Government of India has often talked about open course materials, but in most government-funded higher educational institutes one seldom sees even a course outline in the public domain, let alone research output.
The question therefore is: for publicly funded universities and institutes, why should any Indian user have to cough up large sums of money again to access their research output? And it is an open truth that – barring a very few universities and institutes – most Indian colleges, universities and research organizations, and even practitioners, cannot afford the money required to subscribe to most well-known journal databases, or to buy the individual articles therein.
It would not be wrong to say that of the thirty-thousand-plus higher educational institutes, not even one per cent has library access comparable to institutes in developed nations. And academic research output, especially in the social sciences, need not be used only for academic purposes. Practitioners – farmers, practicing doctors, would-be entrepreneurs, professional managers and many others – may benefit from access to this research, but unfortunately almost none of them would be ready or able to shell out $20+ for a few pages after viewing only the abstract, in a country where around 70% of people live on less than $2 a day.
Ranking is given higher priority than societal benefit
Academic contribution to the public domain through open and useful knowledge is therefore a neglected area in India. Although over the last few years we have seen OECD nations, as well as China, increasingly encouraging open-access publishing by the academic community, India – in its obsession with university rankings, where most institutes fare poorly – is in reverse gear. The director of one of India's best institutes has suggested why such obsessions are ill-founded, but perceptions and practices remain quite the opposite.
It is therefore not rare to see a researcher getting additional monetary rewards for publishing in top-category subscription journals, with no attempt whatsoever – be it from the researcher, the institute or policy-makers – to make a copy of that research available online, free of cost. The irony is that the additional reward money again comes from taxpayers.
Unfortunately, these age-old policies and practices are appreciated by media and policy-makers alike, as the nation desperately wants to show the world that it publishes in subscription journals. The point here is: there is nothing wrong with publishing in journals – encourage it even more for top journals – but also make a copy freely available online to any of the billion-plus Indians who may need that paper.
Incentives to produce usable research
In India, particularly in its publicly funded academic and research institutes, we have neither been able to produce many top-category subscription-journal papers, nor have we been able to make whatever research output we do generate freely available online. On the quality of management research, The Economist, in a recent article, stated that faculty members worldwide 'have too little incentive to produce usable research. Oceans of papers with little genuine insight are published in obscure periodicals that no manager would ever dream of reading.' This fits India perfectly too. It is high time we looked at the real impact of management and social science research, rather than journal impact factors. Real impact is bigger when papers are openly accessible.
Developing and resource-deficit nations like India, which need open access the most, thereby lose out further in the present knowledge economy. It is time that government and the academic community recognize the problem, and ensure that locally done research is not merely published for academic referencing, but made available for use by any other researcher or practitioner in India, free of cost.
Knowledge creation is important. Equally important is the diffusion of that knowledge. In India, efforts and resources have been deployed on knowledge creation, without integrative thinking about its diffusion. In the age of the Internet and open access, this needs to change.
Prof. Ranjit Goswami is Dean (Academics) and (Officiating) Director of the Institute of Management Technology (IMT), Nagpur – a leading private B-School in India. IMT also has campuses in Ghaziabad, Dubai and Hyderabad. He is on Twitter as @RanjiGoswami
by Guest at February 28, 2014 12:10 PM
This is the third guest blog post from Open Steps, an initiative by two young Berliners, Alex (a software developer from Spain) and Margo (a graduate in European politics from France), who decided to leave their daily lives and travel around the world for one year to meet people and organizations working actively on open knowledge related projects, documenting them on their website. Read also the first blog post and the second one.
After the first 6 months in Eastern Europe and India, we continued across the Asian continent and had two and a half months to explore South-East Asia, Hong Kong and Japan. Starting with planning meetings and workshops in the Mekong Region, we rapidly understood that, compared to the countries we had previously visited, not many organisations there are working on Open Knowledge.
The Mekong Basin Region and its lack of Open Data momentum
In none of the countries we passed through in South-East Asia (Thailand, Cambodia & Laos) could we find a strong will from the public administration to promote Open Data (OD) or Open Government (OG) initiatives. However, each government has its own story. Let's take a look at this in detail:
In Thailand, we got in contact with Opendream, a company focused on developing web and mobile apps around social issues, mostly using Open Source software and releasing its own work as Open Source. Organising our workshop in their offices brought us closer to the singular Thai Open Data story. A plan for releasing data to the public domain through an Open Data platform (which was built by Opendream members) had been initiated under the mandate of the previous Prime Minister, but was surprisingly dismissed a few months afterwards when power changed hands. At the time of our visit, this first attempt was no longer available on the web and there was no plan for a second one. Looking beyond the public sector, we discovered the Thai Netizen Network, a small group of advocates working on intellectual property. We met Arthit Suriyawongkul, its founder, who is also one of the activists working on the Thai adaptation of the Creative Commons license. According to him, the Open movement in Thailand comes down to a few individuals who might be connected via social networks but in no way constitute an active, regularly meeting group.
In Phnom Penh, the capital of Cambodia and our next stop, Open Data is not a government priority either. But there we could meet several organisations, mostly NGOs, and we gathered numerous students, journalists and human rights advocates as attendees at our events. This reflects a big interest in both data visualisation and data journalism. Our 3 workshops were organised, respectively, at the national high school for media practitioners (the DMC of the Royal University), with the German GIZ (the public agency for international cooperation) and with Transparency International Cambodia. One organisation we consider particularly relevant to mention is Open Development Cambodia (ODC), which manages the only online platform in Cambodia where local data is being aggregated and shared. The elaborate map visualisations of this NGO are proof that civil society is active and that making use of data is already a known tool for raising awareness and addressing the specific issues Cambodia has to face. ODC's team is working hard on it, and together with the newly created OKFN local group they are the ones leading the efforts. Not to be forgotten is the great event they organised for international Open Data Day this year.
What about Laos? The neighbouring country is in an even more difficult situation than Thailand, and we could not discover any initiative there which can be categorized as open, neither from the public administration nor from civil society. In Vientiane, we met the IT team behind the data portal of the Mekong River Commission (MRC), an intergovernmental agency between Laos, Thailand, Cambodia and Vietnam created to preserve the Mekong basin region and improve its water management. The data portal is a platform gathering and analysing data on (among other things) water quality through various maps and reports. Sadly, due to national policies and the strict rules defined by the collaboration between these four countries, the data is not open: fees and copyright apply for download and re-use.
A different story: Hong Kong and Japan
South-East Asia can definitely not stand for all of Asia, and what we discovered during the rest of our journey was the antipode of the first three countries mentioned. We headed further east and arrived in Hong Kong, where we had already been in contact, since leaving Berlin, with two active organisations: DimSumLabs (a hackerspace) and Open Data Hong Kong (ODHK). DimSumLabs offered us its space and ODHK its warm support to run our session in the big metropole. As they have both built a great community of activists and enthusiasts, the topic of Open Data and open cultures in general is widely known there, and there was no need to present our usual beginner-targeted workshop. Instead, we prepared new content and did a recap of the most exciting projects we had discovered so far. It resulted in a very interesting discussion about the status of Hong Kong as a "Special Administrative Region" of China. The city still remains under China's rules (it has no Freedom of Information Act), but its autonomy allows a "healthy" environment for OD/OG initiatives. The existence of the Open Data platform and Open Data challenges is proof of it.
Along the same lines, Japan was a productive stop for our research. First, we visited the Mozilla Factory, created last year in the centre of Tokyo: a fantastic open space for everyone to learn and work on the web, equipped with tools such as 3D printers and beautifully designed with Open Source furniture available for download and re-use. At our meeting, we also learned about their new project called MozBus, a refurbished camping van turned into a nomadic web factory that can provide internet infrastructure in remote areas after natural disasters. International Open Data Day (22nd February 2014) happened during our stay in this last stop in Asia, and we participated in the event organised in Kyoto. There, volunteers from the public and private sectors and members of the OpenStreetMap Foundation scheduled a one-day workshop to teach citizens of different backgrounds and ages how to use OpenStreetMap and Wikipedia, with the main purpose of documenting and reporting on the historical buildings of the city. In addition, this event was also a good place to research the status of OD/OG initiatives in Japan. While the government has worked on a strategy for many years (with a focus on how OD/OG can make disaster management more efficient) and seems to be on the list of the more advanced countries, the national Open Data platform, launched in beta, only dates from last December, and improvements are, generally speaking, still needed, particularly regarding the licenses applied to spending and budgeting data.
But that is not all that is happening in Asia
Although we would really have loved to, it was not possible for us to travel all over the continent and discover all the projects and initiatives currently going on. Countries such as Indonesia, the Philippines and Taiwan have an advanced status regarding Open Data, and we would definitely have had a lot to document had our route passed there. We invite you to read this sum-up about the Open Data situation in Asia (put together by someone on the OKFN-Discuss mailing list after last year's OKCon) to get a more detailed idea of the different contexts the Asian continent shows. It's a very good read!
After Asia, still heading east, we are now reaching South America, and this is where the last part of our one-year research begins. We have four months to go through Chile, Argentina, Uruguay, Brazil and Peru, and the first contacts we have established are really promising… Follow us to get updated!
by Guest at February 28, 2014 09:21 AM
Since OCLC made their Metadata APIs available, I've spent a good deal of time putting them through their paces, looking for use cases that these particular APIs might solve, and asking if and how MarcEdit might be an appropriate platform to take advantage of some of this new functionality. Looking over the new Metadata API, it's pretty easy to see where the focus was: providing some capacity for read/write access to the WorldCat database, primarily the global and local bibliographic data. In addition, the API provides the ability to set institutional holdings on records, though not to work with Local Holdings Records.
For that reason, the first round of MarcEdit development utilizing these APIs focused primarily on working with the master bibliographic and institutional holdings records. These are two areas where I tend to receive regular queries from users in the midst of reclamation or weeding projects who need to make changes to hundreds or thousands of records in the WorldCat database. The new APIs allowed me to provide an answer to the two most common questions: 1) can I batch update/delete holdings on particular OCLC records, and 2) can I batch upload records to OCLC? While I'd definitely argue (and I think OCLC would probably agree) that these APIs are not designed with batch operations in mind (i.e., they are slow), they finally offer a way to communicate with the WorldCat database in real time… and regardless of performance, this feels like, and is, a big win for libraries that choose to work with OCLC. If you are interested in how this initial work was implemented, you can read about it here: http://blog.reeset.net/archives/1245
After releasing the first MarcEdit builds and making subsequent refinements to the integration work, I’ve had the opportunity to speak with more OCLC WMS users and individuals that make use of the Local Bibliographic Data files and decided that it was time to start working on adding support for this type of data.
The challenge with working with Local Bibliographic Data records is that there is no way (at least through the API) to know if a record has a local bibliographic record attached without making multiple queries and a set of special calls to the API about a specific OCLC number. This means that, for all intents and purposes, the process of working with local bibliographic data in MarcEdit assumes one of two things:
- That the user knows which of their records have local bibliographic data attached
- That the user is likely creating new ones.
Secondly, I've tried to integrate this new functionality into the existing tools developed for use with the other OCLC Metadata APIs. So, for example, users looking to query a set of records and retrieve the attached local bibliographic records will now see a new option on the OCLC Search box that denotes whether the master or local bibliographic record should be extracted.
The problem is that, at this point, MarcEdit doesn't know whether a local bibliographic record actually exists. Certainly, the tool could pre-query every OCLC number returned as part of a search, but that approach greatly increases the back-and-forth communication between MarcEdit and OCLC via the API, and remember, this isn't an API that feels designed for batch operations. So, rather than query, MarcEdit provides an option to download the local bibliographic record if it is present. If this checkbox isn't selected, the program will download the master bibliographic record for edit. For users looking to create a local bibliographic record, the process is the same. Local bibliographic records must be attached to a master OCLC number – so a user would query a record, check the option to download the local bibliographic record, and then attempt the download.
When downloading a local bibliographic file, MarcEdit will prompt users, asking if the tool should automatically generate a local bibliographic file for edit in MarcEdit if one doesn’t exist on a master record.
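Sketched as code, the decision logic just described looks roughly like the following; the function names here are hypothetical stand-ins of my own, not MarcEdit internals or OCLC API methods:

    # Hypothetical stand-ins for the API-backed operations MarcEdit performs:
    def fetch_master_bib(oclc_number):
        return "<master bibliographic record>"

    def fetch_local_bib(oclc_number):
        return None   # None when no LBD is attached; there is no way to pre-check

    def new_lbd_from_template(oclc_number):
        return "<new local bibliographic record built from the template below>"

    def retrieve_record(oclc_number, want_local_bib=False, generate_if_missing=False):
        # Default behavior: download the master bibliographic record for edit.
        if not want_local_bib:
            return fetch_master_bib(oclc_number)
        # Otherwise try the attached local bibliographic record, if one exists.
        lbd = fetch_local_bib(oclc_number)
        if lbd is not None:
            return lbd
        # Prompt-equivalent: optionally generate a fresh LBD from the template.
        return new_lbd_from_template(oclc_number) if generate_if_missing else None

    print(retrieve_record("987654321", want_local_bib=True, generate_if_missing=True))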
To generate new records, MarcEdit utilizes a new template within the application – local_bib_record_template.mot. The template contains the following data:
=LDR 00000n a2200000 4500
This template looks slightly different from a normal MarcEdit template, in that it includes a number of data points that MarcEdit will automatically generate as part of the record generation process. This is necessary because specific data must be present in order for a local bibliographic record to be valid. For example, a local bibliographic record must have the OCLC number that it's attached to – data that is found in the 004. The 935 represents a local system-generated number, a number MarcEdit generates as a timestamp indicating record creation time, and finally, the 940 includes the organizational code, or the code that would normally appear in the 040 of a bibliographic record. This information is stored and used as part of the API profile, so MarcEdit includes that data in the record generation process.
A local bibliographic record is in many ways like the master bibliographic record, just with data only applicable to an institution. OCLC has a set of documentation around what kinds of data can be stored in these records. Using a test record in WorldCat, I extracted the following example:
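A plausible shape for such a record, shown in MarcEdit's mnemonic format with invented values of my own (not actual WorldCat data), matching the field-by-field breakdown that follows:

    =LDR  00000n a2200000 4500
    =001  1234567
    =004  987654321
    =005  20140227120000.0
    =500  \\$aLocal note visible only to our own users.
    =790  \\$aExample, Local Author.
    =935  \\$a20140227120000
    =940  \\$aXXX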
In this record, the data breaks down in the following way:
- 001 – this is the unique lhd control number
- 004 – this is the oclc number for the master bibliographic record that this local record is attached to
- 005 – this is the transaction date and time; this value must match when updating records, as OCLC uses it as a point of validation.
- 500, 790 – in this record, these are the local data fields…data that is separate from the master bibliographic record and visible only to my users who see our local bibliographic data.
- 935 – this is a locally defined system number – when MarcEdit generates this number, it is a system timestamp.
- 940 – this is the institution code.
As one can see, a local bibliographic record can be fairly brief (or verbose depending on the notes added to the record).
To update/create/delete a local bibliographic holdings file (single or batch), a user would start with a local bibliographic record. Even delete operations require the record – you cannot just pass the LBD's control number or a list of OCLC numbers, at least at this point. The API requires the record to be sent through the service as a type of validation.
Updating this data requires using the Add/Delete/Update Local Bibliographic Data option:
Selecting this option will open the following window:
When dealing with Local Bibliographic data, the option will be selected, and the option to delete those records is also presented. Users not working with local bibliographic data will see the same dialog, but the Process as Local Bibliographic Records option will be left unchecked and the Delete Records option will not be visible. At this point, users can process their records, and MarcEdit will return the response codes provided by the API to determine if the updates were successful or not.
My guess is that, like the first pass through the API, the use of these methods and the tools that make use of them will be refined with time and use, but I think they provide a good start. Of course, at this point, I've reached the limit of the functionality that the Metadata API provides. In looking at this toolset, it's pretty clear that these APIs were primarily envisioned for individual, real-time editing of the WorldCat database. I have a feeling that the batch holdings tools, and now the ability to upload bibliographic and local bibliographic data in batches, probably fall outside the use cases OCLC identified when they first released these to the public. But they work, though a little slowly, and provide some capacity to work directly with the WorldCat database. At the same time, the API is limited and missing key features that folks are currently asking for – most notably the ability to work with local holdings data, the ability to validate records, and a better process for search and discovery (as the present Search APIs are woefully inadequate for nearly anything but the most vanilla uses). Hopefully, by doing some simple integration work in MarcEdit and providing some useful tools around the Metadata API, it can provide a catalyst for additional work, additional functionality, and additional innovation on the side of OCLC – and continue to push the cooperative to provide more transparent and deeper access to WorldCat resources and holdings.
Finally, the functions discussed here will be made available for download on March 2.
by reeset at February 28, 2014 07:13 AM
This is the simplest of SPARQL tutorials. Its purpose is two-fold: 1) to introduce the reader, through a set of examples, to the syntax of SPARQL queries, and 2) to enable the reader to begin exploring any RDF triple store which is exposed as a SPARQL endpoint.
SPARQL (SPARQL Protocol and RDF Query Language) is a set of commands used to search RDF triple stores. It is modeled after SQL (Structured Query Language), the set of commands used to search relational databases. If you are familiar with SQL, then SPARQL will be familiar. If not, then think of SPARQL queries as formalized sentences used to ask a question and get back a list of answers.
Also, remember, RDF is a data structure of triples: 1) subjects, 2) predicates, and 3) objects. The subjects of the triples are always URIs – identifiers of "things". Predicates are also URIs, but these URIs are intended to denote relationships between subjects and objects. Objects are preferably URIs, but they can also be literals (words or numbers). Finally, RDF objects and predicates are defined in human-created ontologies as sets of classes and properties, where classes are abstract concepts and properties are characteristics of those concepts.
Try the following steps with just about any SPARQL endpoint. (Example queries for several of these steps are sketched at the end of this tutorial.)
1. Get an overview - A good way to begin is to get a list of all the ontological classes in the triple store. In essence, the query says, "Find all the unique objects in the triple store where any subject is a type of object, and sort the result by object."
2. Learn about the employed ontologies - Ideally, each of the items in the result will be an actionable URI in the form of a "cool URL". Using your Web browser, you ought to be able to go to the URL and read a thorough description of the given class, but the URLs are not always actionable.
3. Learn more about the employed ontologies - With a second query you can create a list of all the properties in the triple store as well as infer some of the characteristics of each class. Unfortunately, this particular query is intense. It may require a long time to process or may not return at all. In English, the query says, "Find all the unique predicates where the RDF triple has any subject, any predicate, or any object, and sort the result by predicate."
4. Guess - Steps #2 and #3 are time intensive, and consequently it is sometimes easier to just browse the triple store by selecting one of the "cool URLs" returned in Step #1. You can submit a modified version of Step #1's query; it says, "Find all the subjects where any RDF subject (URI) is a type of object (class)". Using the LiAM triple store, such a query can try to find all the things that are EAD finding aids.
5. Learn about a specific thing - The result of Step #4 ought to be a list of (hopefully actionable) URIs. You can learn everything about one of those URIs with a query that says, "Find all the predicates and objects in the triple store where the RDF triple's subject is a given value and the predicate and object are of any value, and sort the result by predicate". In this case, the given value is one of the items returned from Step #4.
6. Repeat a few times - If the results from Step #5 returned seemingly meaningful and complete information about your selected URI, then repeat Step #5 a few times to get a better feel for some of the "things" in the triple store. If the results were not meaningful, then go back to Step #4 to browse another class.
7. Take these hints - The first of two hint queries generates a list of ten URIs pointing to things that came from MARC records. The second query is used to return everything about a specific URI whose data came from a MARC record.
8. Read the manual - At this point, it is a good idea to go back to Step #2 and read the more formal descriptions of the underlying ontologies.
9. Browse some more - If the results of Step #3 returned successfully, then browse the objects in the triple store by selecting a predicate of interest, listing things like titles, creators, names, and notes.
10. Read about SPARQL - This was the tiniest of SPARQL tutorials. Using the LiAM data set as an example, it demonstrated how to do the simplest of queries against an RDF triple store. There is a whole lot more to SPARQL than the SELECT, DISTINCT, WHERE, ORDER BY, and LIMIT commands. SPARQL supports a short-hand way of denoting classes and properties called PREFIX. It supports Boolean operations, limiting results based on "regular expressions", and a few mathematical functions. SPARQL can also be used to do inserts and deletes against the triple store. The next step is to read more about SPARQL. Consider reading the canonical documentation from the W3C, "SPARQL by Example", and the Jena project's "SPARQL Tutorial". [1, 2, 3]
Finally, don't be too intimidated by SPARQL. Yes, it is possible to submit SPARQL queries by hand, but in reality person-friendly front ends are expected to be created, making search much easier.
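For reference, here are minimal sketches, in generic form, of the queries described in Steps #1, #3, #4, and #5. The class and subject URIs are placeholders of my own, not actual LiAM identifiers:

    # Step #1 - list the ontological classes (any subject that is a type of some object):
    SELECT DISTINCT ?o WHERE { ?s a ?o } ORDER BY ?o

    # Step #3 - list all the predicates used in the triple store (an intense query):
    SELECT DISTINCT ?p WHERE { ?s ?p ?o } ORDER BY ?p

    # Step #4 - list things that are instances of a chosen class; substitute a
    # class URI returned by Step #1, such as one denoting EAD finding aids:
    SELECT ?s WHERE { ?s a <http://example.org/placeholder-class> } LIMIT 10

    # Step #5 - learn everything about one of the URIs returned by Step #4:
    SELECT ?p ?o WHERE { <http://example.org/placeholder-thing> ?p ?o } ORDER BY ?p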
by LiAM: Linked Archival Metadata at February 28, 2014 03:13 AM
February 27, 2014
Today we at the Open Knowledge Foundation are launching a new global campaign, Stop Secret Contracts. Secret contracting leads to fraud, corruption, and unaccountability. It means the loss of millions of dollars of public money every year. Join our call to world leaders to end secrecy in public contracting.
Secrecy in contracting is leading to the loss of millions of dollars to corruption, mismanagement, and lining the pockets of unaccountable corporations. The global value of government contracts is estimated at $9.5 trillion, but even in countries with strong government transparency laws the contracting process is often opaque and unaccountable. In both Africa and the EU, estimates suggest that around $150 billion is lost annually to corruption and mismanagement.
While these numbers are staggering, the real cost is counted in the teachers who can’t be paid, the hospitals which have no medicines, and the roads which can’t be built. In the Niger Delta, over 2 million barrels of oil are extracted every day, and yet not a single new road has been built in the region for over ten years. In post-invasion Iraq, an estimated $60 billion was lost in defence and reconstruction contracts – money which could have enabled Iraq to build enough hospitals for the entire country to have a first-class health service. Across the world, the public is losing out to private interests.
Secrecy in contracting means a breakdown in public control over public money, which in its extreme forms endangers the health, futures, and lives of citizens. We must stop secret contracting now to restore trust and accountability between governments and the people.
The campaign already has over 30 organisational signatories including Global Witness, Integrity Action, the International Budget Partnership, the Sunlight Foundation and Transparency International, and we’re expecting many more to join. With local organisations in countries from Hungary to Nepal to South Sudan, we will be targeting governments at both national and international levels to secure reforms. We need your support to show governments the importance of this issue.
Rufus Pollock, Founder of the Open Knowledge Foundation said:
“Every year, millions of dollars of public money are lost to fraud, corruption, and payments to contractors that don’t deliver. Openness of key contracting information is essential to allow us to hold governments to account, and ensure that public money is used for public good.”
Gavin Hayman, Executive Director of Global Witness, said:
“One set of secret deals signed by the DRC government with obscure companies may have cost that state twice its annual education and health budget. Secrecy in how contracts are handed out and what they say robs citizens of the ability to know who got the contract, how they won and whether it was a good deal for their country.”
Rueben Lifuka, board member of Transparency International, said:
“Secret contracts are never about public interest and only serve as conduits to satisfy the selfish interests of a few. Giving relevant information about public contracts to government entities, parliaments and civil society contributes to a more stable investment environment, and allows good governance and the rule of law to prevail.”
If you support the aims of the campaign please sign the petition at StopSecretContracts.org.
Help us make some noise about the campaign by tweeting on #SecretContracts or blogging about the issues.
If you’d like to be more involved with the campaign, get in touch with contact [at] stopsecretcontracts [dot] org
For more quotes and details, see our press release.
by Theodora Middleton at February 27, 2014 02:00 PM
I'm currently using Gravity Forms and struggling with its accessibility. It sounds like there are no plans to make it accessible, but I know a lot of people use this plugin, so let's make the best of it. Why Gravity Forms? I recognize that Gravity Forms is not fully accessible, so why use […]
by Cynthia at February 27, 2014 12:34 AM
February 26, 2014
The news about OCLC’s Linked Data service circulated widely on Twitter yesterday. I’ve never been a big OCLC cheerleader, but the news really hit home for me. I’ve been writing in my rambling way about Linked Data here for about 6 years. Of course there are many others who’ve been at it much longer than I have … and in a way I think librarians and archivists feel a kinship with the effort because it is cooked into the DNA of how we think about the Web as an information space.
This new OCLC service struck me as an excellent development for the library Web community for a few reasons that I thought I would quickly jot down:
- it's evolutionary: OCLC didn't let the perfect be the enemy of the good. It's great to hear links to VIAF, FAST, LCSH, etc. are planned. But you have to start somewhere, and there is already significant value in expressing the FRBR workset data they have as Linked Data on the Web for others to use. Also, the domain experiment.worldcat.org clearly reflects that this is an experiment… but they didn't let anxiety about changing URLs prevent them from publishing what they can now. The future is longer than the past.
- it’s snappy: I don’t know if they’ve written about the technical architecture they are using, but the views are quite responsive. Of course I have no idea what kind of load it is under, but so far so good. Update: Ron Buckley of OCLC let me know the service is built on top of a shared Apache HBase Hadoop cluster.
- schema.org: OCLC has the brains and the market position to create their own vocabulary for bibliographic data. But they worked hard at engaging openly with the Web community to help clarify and adapt the Schema.org vocabulary so that it can be used by our community. There is lots of thrashing going on in this space at the moment, and OCLC is being a great model in trying to work with the Web we have, and iterating to make it better, instead of trying to take a quantum leap forward.
- json-ld: JSON-LD has been cooking for a while, but it’s a brand new W3C standard for representing RDF as idiomatic JSON. RDF has been somewhat plagued in the past by esoteric and/or hard to understand representations. JSON-LD really seems to have hit a sweet-spot between the expressivity of RDF and the usability of the Web. It’s refreshing to see OCLC kicking JSON-LD’s tires.
Rubber Meet Road
So how do you discover these Work URIs? Richard's post led me to believe I could get them directly from the xID service using an ISBN. But I found it to be a two-step process: first get any OCLC Number associated with an ISBN from xID, and then use that OCLC Number to get the Work Identifier, also from the xID service:
So for example, to discover the Work URI for Tim Berners-Lee’s Weaving the Web you first look up the ISBN:
which should yield:
"author": "Tim Berners-Lee with Mark Fischetti.",
"city": "San Francisco",
"ed": "1st ed.",
"title": "Weaving the Web : the original design and ultimate destiny of the World Wide Web by its inventor",
Then pick one of the OCLC Numbers (oclcnum) at random and use it to do an xID call:
Which should return:
You can then dig out the Work Identifier (owi), trim off the owi prefix, and put it on the end of a URL like:
or, if you want the JSON-LD without doing content negotiation:
This returns a chunk of JSON data that I won’t reproduce here, but do check it out.
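For anyone who wants to follow along, here is a minimal sketch of that two-step lookup. The endpoint paths and parameters (on xisbn.worldcat.org), the response shapes, and the experiment.worldcat.org Work URL pattern are my assumptions about the xID service of the era, so treat them as illustrative rather than authoritative:

    import requests  # third-party HTTP client

    XID = "http://xisbn.worldcat.org/webservices/xid"   # assumed xID base URL

    def work_url(isbn):
        # Step 1: ISBN -> OCLC Number(s), via the xISBN service.
        r = requests.get(f"{XID}/isbn/{isbn}",
                         params={"method": "getMetadata", "format": "json", "fl": "*"})
        oclcnum = r.json()["list"][0]["oclcnum"][0]   # pick one at random, as above

        # Step 2: OCLC Number -> OCLC Work Identifier (owi), also via xID.
        r = requests.get(f"{XID}/oclcnum/{oclcnum}",
                         params={"method": "getMetadata", "format": "json", "fl": "*"})
        owi = r.json()["list"][0]["owi"][0]           # e.g. "owi12345" (assumed shape)

        # Trim the "owi" prefix and append the rest to the experimental Work URL;
        # adding ".jsonld" skips content negotiation.
        return f"http://experiment.worldcat.org/entity/work/data/{owi[3:]}.jsonld"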
Update: After hitting publish on this blog post I’ve corresponded a bit with Stephan Schindehette at OCLC and Alf Eaton about some inconsistencies in my blog post (which I’ve fixed), and uncertainty about what the xID API should be returning. Hopefully xID can be updated to return the OCLC Work Identifier when you lookup by ISBN. I’ll update this blog post if I am notified of a change.
One bit of advice that I was given by Dave Longley on the #json-ld IRC channel, which I will pass along to OCLC, is that it might be better to use CURIE-less properties, e.g. name instead of schema:name. Those plain keys need to be mapped in the @context, but I think it might make sense to reference an external context document and cut down on the size of the JSON-LD document even more.
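For illustration only – this is a toy document of my own, with a placeholder context URL rather than OCLC's actual output – the CURIE-less style looks something like:

    {
      "@context": "http://example.org/oclc-work-context.jsonld",
      "@id": "http://experiment.worldcat.org/entity/work/data/12345",
      "name": "Weaving the Web"
    }

The external context maps plain keys like name onto their full schema.org URIs, so each response can stay small.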
It's wonderful to see that the data is being licensed ODC-BY, but maybe assertions to that effect should be in the data as well? I think schema.org has steered clear of licensing properties, but cc:license seems like a reasonable property to use, assuming it's used with the right subject URI.
And one last tiny suggestion I have is that it would be nice to see the service mainstreamed into other parts of OCLC’s website. But I understand all too well the divides between R&D and production … and how challenging it can be to integrate them sometimes, even in the simplest of ways.
by ed at February 26, 2014 05:44 PM
The following is the text of a piece originally published in my column "Libraries in Computers" for Computers in Libraries 28(9), October 2008.
The Emperor's New Repository
I don't know the first thing about building digital repositories. Maybe that's a strange thing to say, given that I work in a repository development group now, and worked on the original DSpace project years ago, and worked on a few repository research projects in between. If that qualifies me to say anything about repositories, though, it's just that I don't know much about what I'm doing, and I don't think many other people do, either.
I'd better qualify that some. It's not that there aren't smart people working on repositories -- there are plenty. It's not that few repository projects have good, important objectives -- many do. And it's not that we haven't learned anything in the past 10-15 years since "digital repositories" grew from a buzzword into a strategic program in many libraries -- we've learned a lot. But I don't know what this smart group of people with solid goals and lessons learned add up to yet.
Does that still sound strange? If it sounds strange to you, try this thought experiment. Say you get a new job as a director of a small library. In a new town. Without a library. So it's your job to build a library where one doesn't exist. What do you do first?
I've never run or built a library before, but I'd guess that you'd need to start with siting property and working with a city to plan infrastructure. Then come architectural design and construction bids, and when you're far enough along to plan what goes in the building itself you split out budget lines for staffing different departments, and furniture and shelving of several different kinds, and a floor plan, and meeting rooms and utility functions. Maybe you only can hire a few people, but you know you need to cover collection development, public and technical services, computing, accounting and payroll, and maintenance, whether with three people on staff or 30.
See? I don't know the first thing about running a library, but I know you'd better plan for at least all these things. You're probably thinking of other things I didn't mention, whether you've ever run a library or not.
Now suppose you were hired to be a digital repository project director. For a new repository, which doesn't exist yet. What do you do first?
I don't know, either. And that's what I mean!
Collect what you know
Given how long I've been around people and projects aiming to "build repositories", and how little confidence I have that we know what it really means to build a repository, I'd guess there are plenty of you who don't know, either. On one hand, sometimes it's okay not to know -- if you have a good goal in mind, like collecting faculty research, or making rare local materials available, the details of how you achieve your goal are less important than regularly measuring against the yardstick itself. I'd bet, though, that there are plenty of projects that don't even have this much clarity driving them. Absent such clarity, there are a lot of mistakes you can make along the way to achieving clarity in determining why you want a repository.
The first mistake to avoid is fetishizing software products or projects. Over the years I've had a lot of conversations with friends and colleagues who've asked "do you think Greenstone/Fedora/DSpace/EPrints/ContentDM/etc. is what we should use?" My answer these days is almost always the same: "it depends on what you're doing, but you can always start with one or another and decide after you have some experience, and they're all a good place to start, so just pick one and get started." There isn't any single answer and there isn't any clear winner. In an era when we're still trying to figure out what it means to build a repository, it's great that there are so many options, and it's wonderful that there are so many free software options. If you think you need specialized software to build your repository, then the key thing is to get started with a tool that looks like a roughly good fit. You're going to learn so much along the way that the details of whether that tool's the best long term fit or not are going to become obvious to you as you build up experience loading your content and making it available.
The flip side of avoiding agonizing over which tool to pick is that you shouldn't hesitate, once a project takes some turns you don't like, to acknowledge that maybe you've made a bad choice with a particular toolkit, or that maybe you just approached the project the wrong way, with the wrong materials at first, or with the wrong staff, or even just at the wrong time. There's a software development axiom from Frederick Brooks, author of _The Mythical Man Month_, that applies well here: "Plan to throw one away; you will, anyhow." To plan for mistakes means to be ready to learn from them when they happen, and to minimize the cost in energy and expense when things go wrong. Start with a small collection, minimal staff, and a short timetable, and see what you can learn by building something quickly. All the feature comparison spreadsheets and RFPs in the world won't help you make a good decision if you haven't already started up the learning curve for yourself.
There's a broader point to remember here, too, that software isn't usually part of our collection development plans. We librarians know approval plans, cataloging standards, and search strategies, but selecting and implementing software isn't our strength. There, I said it: we're not good at this. But that's okay! Like a reference librarian assigned to select materials in a new collection area, we can learn as we go. The key steps are to get started, and to expect to make mistakes, but to be ready to learn from your mistakes.
I'll let you in on another secret of repository building. Adding new software isn't always the best approach to building a repository. Sometimes it's not only a mistake to introduce new layers of software between content and users or between content and staff, it's just a bad idea, period. You're probably comfortable, by now, with putting a web page online, and setting up a directory on a web server with some new files, or if you're not comfortable performing these tasks, you probably have colleagues or staff who are capable and experienced with basic web publishing. If you have a small collection of digitized or born digital items, and your primary goal is to get it online, the easiest thing to do might be to just put it online in a simple directory or two with some web pages listing and describing it all. Make a backup, too, or two, of course. But most of us with web sites can get some new files linked from a library's home page pretty easily these days. If you can do that, your users can find it from your home page, and even if they never visit your home page, the major search engines can find your items and crawl and index them, so maybe your users can find things that way.
If you think this sounds defeatist or desperate, don't think that -- I mean just the opposite. Sometimes the shortest path between users and content is simply putting the content where users have a chance at finding it. The web does this very well for us. I've administered enough web sites and odd software packages over the years to know well that the content that lasts online the longest is almost always the content with the least amount of stuff around it that can go wrong. Whether it's an old programming language or a bad script or an odd toolkit few people ever used, if the only way to get to your stuff is through some oddly-shaped software that's out of date and unreliable, someday nobody's going to be able to get to your stuff anymore. If, on the other hand, the same content is simply available over the web like any other web content, just a bunch of files in a directory on a web server, then that content can be indexed and reindexed by Google et al., copied and recopied onto different servers by you and your staff, and migrated across changes in web server and operating system software choices over time.
We are the repository
The second mistake to avoid is forgetting that we are the repository. Every software repository I've helped to build has faced complex issues of planning and policy which had little to do with technology and everything to do with how to build a sustainable program for ensuring access over time. Remove any special software from the equation completely -- like in the case of "just a few directories on a web server" -- and planning and policy issues still come into play. Like with any other materials in a library, it's ultimately up to those of us working in the library to set, maintain, and uphold policies for collection development, access, maintenance, and retention. If repository technology gets in the way of making policy choices that fit in with your broader institutional mission, something might be wrong.
Optimize for access
A third mistake that's easy to make is to over-think what a "digital object" might be. I fall into this trap all the time, even with a few rounds of experience under my belt. If you focus on making some content available, using one specialized tool or another or none at all, and that content is useful to your community, your users will tell you how they want to use it. This is another concept echoed both in the "release early and release often" mantra of free software development and the "don't make me think" school of usability testing. Real feedback from real users will tell you the most about features your repository should add or improve, or when one degree more or less of descriptive metadata will make items easier to find or will cost you a ton without really helping people.
If, before "just giving it to people", you spend months or years designing specifications for "complex objects" and hammering structural metadata and content files into shape to match before ever giving your users a chance to see that content, you might just find that you've spent a lot of time and energy without knowing a bit about what people want to do with your stuff. It's possible that you'll guess correctly about what to call files and how to relate files and metadata to each other and store all of that on disk, but that might not help your users at all. Give three programmers a pile of files and ask them to arrange it and you'll get five system designs for how to do it in four different programming languages and database models, but none of that tells you whether your users will use any of it.
The best thing about letting users drive what you do, when it works (I know how hard it can be just to get feedback sometimes), is that it lets you build incrementally. Maybe you start with a simple set of collections published in a few directories and not a lot more. If your users are happy with that, maybe you can stop there. But if they ask for the ability to search across it all, or to browse it all by subjects, or for specialized functions like being able to zoom in on large images, for instance, maybe that's a reason to look deeper at a specialized software package to augment or replace your "files on disk" setup. If you try a new package, look to your user community to tell you whether it does the job better.
I know this advice isn't going to "solve your repository problem" for you. But if you avoid over-thinking both your software choices and the complexity of your content, you might find that there are immediate, cost-effective choices available to you that can help your community soon and teach you a lot along the way. And if you focus on optimizing access to meet your community's needs, you might find that the policies you need to sustain digital materials over time match how you do everything else already. In five, ten, and twenty years, after all, any software you use today is likely to be obsolete, but it'll still be your responsibility to make and keep your content available and useful to your community.
by dchud at February 26, 2014 05:33 PM
Teamwork Will Get You There CC BY-NC 2.0 by Dr. Case
In Brief As librarians, we claim to uphold the principles of open access, equitable and unbiased service, intellectual freedom, and lifelong learning. How can we better integrate these principles into our workplaces? This article is an exploration of information behaviors and structures in library workplaces, particularly the behaviors of withholding and sharing information, and the effect they have on service to patrons and overall quality of the work environment.
Introduction: Definitions and Questions
As librarians, we are familiar with information as the currency of our work. Information studies scholar Marcia Bates proposes that the word “information” covers “all instances where people interact with their environment in any such way that leaves some impression on them – that is, adds to or changes their knowledge store” (2010). Every day, we see information adding to or changing patrons’ knowledge stores as they discover a new author, narrow a database search, or use company information to prepare for a job interview. We may not think in the same way about the information that makes up our workplaces and workplace behaviors, whether that means cataloging a film, teaching a workshop, or creating a schedule. While we are aware that information is organized, used, and sought in the workplace, we do not always take the same care with it as we do with outward-facing collections of information.
Throughout this article, I will apply different theories of information behavior (both individual and organizational) to library workplaces, whether they are made up of 5 or 500 people. The outcomes of these behaviors are often at cross-purposes with a library’s mission, particularly when it comes to populations with more limited access to information, like new librarians and paraprofessionals. I will describe some models and approaches that actively promote information sharing and clarity that can be applied in library workplaces.
I’d like to start with Donald Case’s definition of information behavior (from an information science perspective) as not just active information seeking but also “the totality of unintentional or passive behaviors (such as glimpsing or encountering information), as well as purposive behaviors that do not involve seeking, such as actively avoiding information” (2002). The vast majority of information behavior studies, if they apply to libraries, have been done on users, not on library staff. But we, too, engage in information behaviors, both individual and institutional. The latter, at its most successful, is expressed by social anthropologist Jean Lave and educational theorist Étienne Wenger as a “community of practice.”
Marcia Bates points out that information scientists are interested not inherently in a social hierarchy (as sociologists are), but in the way that hierarchy “impedes or promotes the transfer of information” (2010). What are we doing in our library workplaces, among ourselves as staff, to facilitate the successful transfer of information? What are we doing to block it? A number of researchers in information sharing have concluded that information does not “‘speak for itself’ but requires negotiation concerning its meaning and context” (Talja and Hansen, 2006). What are a workplace and a workday, if not a set of negotiations of the meaning and context of information?
The information cultures of library workplaces do not always follow a principle we espouse as a profession: easy and democratic access to reliable, stable, and clear sources of information. It’s an ideal we strive for more with users than with each other. Like our users, we must derive meaning and purpose from a vast sea of information surrounding us. Some systematic filtering of information is necessary, of course, for us to be able to do our daily work. But surely that can exist within an environment where information is accessible to those who wish to gain access to it. (This may be more of a challenge in privately-funded libraries than in publicly-funded libraries, where more documentation is legally required.) Librarians Martha Mautino and Michael Lorenzen characterize communication and information as forms of power, equating restricted access to information to a “loss of status.” Whether it’s election-related information, consumer information, or the mechanics of database searching, one of the most gratifying aspects of librarianship is empowering users with information. Our colleagues deserve the same.
How information is constructed, documented, and disseminated is crucial to how functional a library workplace is. One way researchers define an environment where information behavior takes place is as an “information culture.” Chun Wei Choo, et al., in their case study of the use of information by employees at a Canadian law firm, define it this way: “By information culture we mean the socially transmitted patterns of behaviors and values about the significance and use of information in an organization” (2006). The key words in this definition are “socially transmitted.” Rules and resources may be organizationally articulated, or reside in unconscious social and other power structures. In her ethnographic studies of information-seeking behavior, Elfreda Chatman introduced the concept of the information “small world” where “insiders see their codes of behavior as normative, routine, and as fitting shared meanings, [but] outsiders to the group cannot relate, because they do not share the same social meanings” (Fulton, 2005). For example, it may be common for departments within a library to share the minutes of their meetings, or to keep them private. A technical services department may have no idea what a reference department’s priorities are, and vice versa, though their processes and priorities have direct effects on each other – because the social code of behavior is to keep information within the small world of the department.
Choo, et al. use knowledge management research to identify two different organizational strategies: codification, in which knowledge is codified, stored, and disseminated through formal channels, and personalization, in which knowledge is shared through social networks, conversations, and other informal means (2006). I posit that in libraries, the first strategy is usually true for collections of outward-facing information, and the second for internal workplace knowledge, which may reside in silos so sturdily built that they resist even the most sensible demolishing. The distinction between outward- and inward-facing knowledge is, however, eroding a little more quickly, as open access, accountability, and social media engagement grow, which has forced some information cultures to become more open.
Paula Singer and Jeri Hurley (2005), writing to librarians in the context of professional advice, divide valuable knowledge into two categories: explicit and tacit. Explicit information is able to be “documented, archived, and codified” – though it is important to note that not all explicit information undergoes these processes. Tacit knowledge, on the other hand, is defined as “know-how contained in employees’ heads.” Tacit knowledge is more subjective. Take, for example, a librarian who finds a mistake on a library web page. Different librarians might approach this problem differently, depending upon their relationships with individual staff members, and their understandings of who wields power, who is in charge of what, and who has the knowledge to get something done. In some libraries, explicit knowledge has become tacit. What may seem like a codifiable piece of explicit knowledge is intimately wrapped up in social networks and relationships, as well as perceptions of others’ willingness to both share and accept information. Singer and Hurley acknowledge that the very value of knowledge may prevent individuals from sharing it: “in many cases employees are being asked to surrender their knowledge and experience – the very traits that make them valuable as individuals” (2005). The word surrender is emotionally charged. There is an element of surrender and trust that comes with transparency – we must trust that the others in our workplace are sharing what they know as well.
Parts and Sums
Much of the research combining information behavior and library or information science has focused on systems. In information scientist Pauline Atherton’s view, this inhibited understanding of “the more substantive and more difficult aspects of our world of information science, namely the human being who is processing information” (quoted in Garvey, 1979). There is some more recent research, however, about the factors (both systematic and individual) that influence individual information behavior. For example, Bates identified the frequently-demonstrated dominance of the “principle of least effort” in information seeking (2010). And Sanna Talja argues that researchers in most fields prefer informal sources and channels if available (2002). In many cases, the principle of least effort may cause people to avoid information seeking altogether, especially if the source of that information is closed off, hostile, or made inaccessible by other human or technological means. People may make do with what they have at hand, can Google, or find out from those they trust, rather than risk vulnerability or alienation with a source known to be difficult in one way or another. Emotion is inextricably linked to information behavior, and, more obviously, to social behavior. An array of information behaviors (seeking, withholding, sharing) are related to emotional behaviors such as stress and self-concept. Even the solo librarian is part of a professional network, and a larger organization, and must rely upon others and other sources of information in order to do her job.
Christina Courtright, writing about Thomas Wilson’s model of information behavior, refers to what he calls the “feedback loop” of “learning over time” (2007). This learning, according to Courtright, always takes place in relation to an individual’s perception of both risk and reward, and of self-efficacy. Imagine a library employee faced with a required task, a low sense of self-efficacy, and a high risk for information-seeking; for instance, a student employee at an academic library working at the desk late at night, with a supervisor who has in the past refused to answer this student’s questions because she thinks he should remember what she verbally told him during training a month ago. A patron comes to the desk wanting to extend a loan on a reserve item until morning; the student is unsure of the permissions and process. Were there adequate documentation (an online document, for example, of policies and procedures), or were the supervisor more willing to share information, the “risk” element would be taken out of the equation, as well as, perhaps, the student’s low sense of self-efficacy. The thinking and actions this student might go through in such a situation have been described by Elfreda Chatman as “self-protective behavior” (Hersberger, 2005). Chatman identified four characteristics of such behaviors: secrecy, deception, risk-taking, and situational relevance. In this example, the student employee must choose between the risk of asking his secrecy-wielding supervisor what to do, or deception of both supervisor and patron by bluffing and risking a solution which may be incorrect. Either choice ultimately has a negative effect both on service to patrons and on the student worker himself.
Thomas Davenport, in his book Information Ecology, discusses what happens when a system lets down individuals from the system’s very inception: if employees “don’t feel their interests have been adequately represented in deliberations over information, they’ll develop their own sources of information and subvert the…structure” (1997). When employees don’t trust their own system, they create workarounds, back doors, and “go-to” people they ask when they are afraid to approach those who may actually be more knowledgeable on the subject. Davenport found, in his studies of organizations, that the many reasons individuals engage in non-sharing behavior boil down to distrust: of either the individual’s own ability, or of what others would do with the information. Above all, Davenport found that information is often “hoarded to preserve the importance and unique contribution of its creator or current owner.” Individuals may perceive that their value to an organization is based solely on their knowledge, and if that knowledge is shared, there is no need to keep the individual around. People must trust that their value also resides in their abilities to grow and adapt, and to acquire new knowledge.
At many libraries, categories of information are associated with people rather than departments, locations, or workflows. This can be embodied when a person takes on, or is assigned, the role of gatekeeper of information. Take, for example, a library that has undergone an ILS migration, where some data about lost and overdue books did not migrate correctly. This data is maintained by the supervisor at the main branch in the form of printouts. The supervisor considers himself the only person who can consult and understand the information. Not only do the staff at the other branches have to call the main branch to resolve problems with patron accounts, but if the supervisor is not there, the patron must return when he is in. This supervisor displays distrust of the abilities of his colleagues. Perhaps he also feels that exclusive ownership of this knowledge and how to interpret it makes him a valuable employee. This person is acting as a gatekeeper. While there are of course advantages to funneling specialized requests or questions through one person, there are distinct disadvantages. When one person controls a cache of information – whether procedures, passwords, policies, or even the names of other gatekeepers – so much more rests upon the relationship between the gatekeeper and the information seeker. And that knowledge may be lost if the gatekeeper leaves. Elfreda Chatman found that such self-protective behavior ultimately results in a negative effect on individuals’ “access to useful or helpful information.”
One concept I’ve only touched on is power, and how it fits into concepts of information behavior. Marcia Bates and many others point out that in most studies of information behavior, people prefer to get their information from other human beings if possible (2010). However, power structures can stymie this preference. Just as those with more social capital get ahead in the larger world, the same is true in the library workplace; they are, as articulated in sociologist Nan Lin’s theory of social capital, “more likely to be in a position to encounter useful information either directly or by proxy” (Johnson, 2005). In particular, the formation of in-groups in library workplaces that privilege or withhold information works against the free flow of information. (While in-groups and out-groups based on larger societal categories such as race and gender are critically important factors, that is a subject for a whole other article.) These groups may be demarcated by departmental divisions, the length of time employees have been working at a library, social groups formed around interests, or “professional” versus “paraprofessional.”
This last divide is a sore point at many libraries, and many have written and spoken about it. Some libraries have deliberately blurred these lines as they blend services across departments. What we call ourselves may seem a meaningless distinction, particularly when patrons are generally unaware of titles, and just want help from the person at the desk or on the other end of the phone. But Chatman found that “[h]ow you are classified determines both your access to information and your ability to use it” (2000). This is not just true for those of us with clearance classifications in government jobs. The titles we give individual library staff members and their departments affect how information is shared and accessed. A special collections “paraprofessional” with an interest in the theory behind archival arrangement may not have the time or encouragement built into her job to learn and advance. Paraprofessionals are often not invited to meetings where policies that will affect them are crafted. The MLS and other advanced degrees are keys that unlock information. I am personally grateful for everything I learned in my master’s program, and I think professional library science education has value. I think, however, a more nuanced progression in professional development, a blend of on-the-job learning and formal education, would open conduits and allow practical and theoretical information to flow more freely in all directions. We can all learn from each other, but we must all be willing to teach and learn. Communication researcher J. David Johnson writes that individuals’ own perceptions of information politics can affect their behavior: “For many individuals it does not make much sense to learn more about things over which they have no control, so the powerless tend not to seek information” (2009). Active information sharing by those with power can counteract this tendency.
Davenport, writing from a corporate perspective, identifies three types of information behaviors that improve an information environment: “sharing, handling overload, and dealing with multiple meanings” (1997). The first of these behaviors, sharing, is part of what information scientists Madhu Reddy and B.J. Jansen describe as “collaborative information behavior,” or CIB (2008). People are more likely to move from individual information behavior (including withholding, selectively disseminating, or using secrecy or deception) to CIB when certain triggers occur. These include fragmented information resources, lack of domain expertise, and complexity of information need. In other words, when the situation is pressing enough, people will share rather than hoard. In theory, for example, enough database problems during a weekend or vacation will force a systems librarian who has kept problem-solving processes to herself to share them with other employees.
While that is an example of an individual, one-time behavior conducted under duress, in an ideal world, similar situations would trigger the creation of more open, transparent, and flexible information environments. Lisa Lister, writing specifically about library workplaces, notes that “workplace structure itself can foster collegiality or its antithesis, competition and turf guarding” (2003). She observes that library workplaces, in theory, should lend themselves to collegiality and open sharing of information, because of the profession’s more “circular and participatory” and less “pyramidal and autocratic” nature. Libraries tend to have, and are trending toward, flat structures. It is more crucial than ever to use these structures to create more transparent, open, and flexible information environments. Such models not only improve the flow of information, but also embody the principles and values of the library profession.
Open Access Means Both
We don’t have to look far for models of more open information environments. The impact of the open access movement on the library universe – its implications for publishing, copyright, and access – is well-documented. Many librarians have enthusiastically embraced the principles of open access when it comes to collections decisions, or working with faculty on publishing agreements. How many of us, however, have applied these principles to our own workplaces? The Budapest Open Access Initiative includes this key principle of open access: “Removing access barriers to this literature will accelerate research, enrich education, share the learning of the rich with the poor and the poor with the rich, make this literature as useful as it can be, and lay the foundation for uniting humanity in a common intellectual conversation and quest for knowledge” (2002). Replace “rich” and “poor” with “information rich” and “information poor,” and “humanity” with “library staff,” and this sounds to me like an ideal directive for information sharing in the library workplace.
Library and information science scholar Kevin Rioux describes a set of behaviors he refers to as “information acquiring-and-sharing,” which focuses not on information seeking but on how available an individual makes his or her own information base to others with information needs – a concept directly in line with the principles of open access (2005). When undertaking information acquiring-and-sharing, an individual actively stores and recalls others’ existing and potential information needs, makes associations with information she has acquired, and shares the information. In other words, she removes barriers to access. In order to be successful at both seeking and sharing information, individuals must be aware of other people’s information needs and sharing behaviors. This crucial act of sharing can happen in either direction. Librarians Maria Anna Jankowska and Linnea Marshall (2003) suggest sharing information via joint meetings between departments whose information behaviors might clash. In a very specific example, Lisa Lister suggests that what she calls “fugitive” information useful to public services librarians (e.g., phone numbers for referrals) be clearly documented, rather than relying on individual librarians’ memory or informal sharing (2003), which privileges particular librarians and their social networks.
In Choo et al.’s study of a Canadian law firm (2006), employees were surveyed about the information environment in their workplace. Some of the statements with which employees were asked to indicate their agreement were:
- Knowledge and information in my organization is available and organized to make it easy to find what I need.
- Information about good work practices, lessons learned, and knowledgeable persons is easy to find in my organization.
- My organization makes use of information technology to facilitate knowledge and information sharing.
These are all statements with which librarians might readily agree if we were launching an online, open-access journal, but perhaps not when it comes to our own workplaces’ internal organization of information. This applies particularly to the last statement. How many of us are using paper files or outdated computer programs to store information about instruction strategies, acquisition processes, or community contacts? Libraries should take advantage of more inexpensive, open technologies and invest in training existing and new employees (where, of course, they are able to do so under staffing and financial constraints).
One of the goals of open access is to make research and other scholarly work more accessible in pre-publication stages, in order to benefit from the collaborative nature of the Internet. A number of barriers exist to implementing this approach in library workplaces. Communication researcher William Garvey identified that scientists participate in a public culture of communication, but a private culture of research (1979). While the scientific research environment may have changed, libraries have been slow to break down the “private culture” of our own workplaces, instead privileging information to make ourselves as individuals seem more valuable. Cross-functionality and collaboration can begin to clear the logjam of what sociologists Marc Smith and Howard T. Welser call the “collective action dilemma” – when “actors seek a collective outcome, yet each actor’s narrow self-interest rewards her or him for not contributing to that group goal” (2005). For example, working alone, a reference librarian’s knowledge of an arcane trick to produce good catalog results is an asset to him. Working in a cross-functional catalog team with a technical services librarian could force the librarian to explain how he uses the catalog and spur improvements to the system. Though it may rob the reference librarian of some “special” knowledge, the user has been served better through the pressure of others on a cross-functional team.
Open access thrives on the idea of the community of practice, a model enacted in some library organizations, but certainly not all. In true communities of practice, people share goals, interests, and a common language; they work with the same information, tools, and technologies. While the latter half of that description may be a tall order for specialized library functions and libraries with shrinking budgets, the former should be feasible in library workplaces. Goals, interests, and a common language: all of these can be summarized in a mission and accomplished by attendant goals, directives, and processes. How can we get disparate groups within library workplaces to agree upon a common language and to share information using it? Martha Mautino and Michael Lorenzen, quoting business professor Phillip Clampitt, offer concrete suggestions, both structural — writing interdepartmental agreements, tracking organizational processes, creating cross-functional teams — and behavioral – inclusive brainstorming sessions, show and tell at all-staff meetings (2003). All of these efforts can go a long way toward increasing access to information at all stages of creation and implementation, and to creating a common language and goals among library staff. It’s already happening to some extent – sharing among libraries is strong at conferences and on social media – but robust, open-access-style repositories of knowledge in library workplaces would be powerful.
The New Librarian and the Principles of the Profession
In a study of janitors with information needs, Elfreda Chatman found that they “believed that, if their supervisors or even neighbours or friends knew some problems that they were having, they would take advantage of them by using this information against them” (2000). In other studies, people did not want to be viewed as less capable than others and therefore did not seek information. This can be a particularly prevalent problem for new librarians in their first professional positions. They may be expected to jump in and learn as they go along —and without a supportive or clear structure of both human and documented information sources, they may revert to self-protective behavior.
Those new to the profession or to a particular workplace are singularly positioned to benefit the most from an open and well-structured information environment, or to improve a closed and poorly structured one. Library literature abounds with advice to new librarians (whether to the profession or a workplace). Both Julie Todaro (2007) and Natalie Baur (2012), writing separately in the ALA-APA newsletter Library Worklife, suggest responsibilities for the new employee, including: learning the library’s hierarchy, culture, and expectations, seeking out materials and documents, and introducing oneself to everyone (not just to those who may seem strategically advantageous). Rebecca K. Miller brings the responsibilities of both sides together: “Through accurate job descriptions and well-developed communications, a library organization can…communicate realistic expectations, making sure that new librarians come into an organization with a clear idea of what the organization expects and how the new librarian can work to meet those expectations” (2013).
A new person coming into a library workplace may have ideas about workplace information culture from a previous position or from library school, but she must also learn the ways information is socially transmitted in her new workplace. If those ways are unnecessarily complicated (whether intentionally or unintentionally), it is more difficult for the new person to do her job. Perhaps members of a department have “always” taken vacation on a seniority basis, and when a new person is granted vacation on a first-come, first-served basis, there may be unspoken resentment. The new person is unaware of both the custom and the senior employees’ resentment; the senior employees and manager have not shared their custom with the new person. Down the line, when that new person needs information, that resentment may affect the senior employees’ willingness to share it. And no one will know why because it has not been communicated. Had the policy been documented in the first place, it would have been less of a problem. Todaro places responsibility equally on the new person and the organization to seek out and to provide information, respectively. As she points out, however, “much ‘common knowledge’ is known to all but new employees” (2007). This common knowledge includes methods of communication, and the accepted processes of retrieving and using content from common sources of information.
One common source of information, as I previously discussed, is an established mission. Maria Anna Jankowska and Linnea Marshall describe an organization without a mission this way: “beliefs may be promulgated among the members through their own personal communications among themselves….The quantity, quality, and inclusiveness of these personal communications contribute to, or detract from, a unified organizational vision” (2003). A poorly conceived or written mission statement is, of course, just as harmful as no mission at all. But a mission constructed carefully from both the top down (the larger institutional mission) and the bottom up (employees’ tasks and services) can inform everything in a workplace, including procedures and policies governing information behavior. Clearly-written missions and goals can address three important, positive types of information behavior identified by Davenport: sharing information, handling information overload, and dealing with multiple meanings. A collaboratively written and agreed-upon set of goals and directions for a library makes information public (sharing), distills it (overload), and asks everyone to agree on a common language (multiple meanings). This may all sound obvious, but there are plenty of libraries that do not address these three behaviors, that do not have unified goals or even a mission statement. And in those libraries, as Jankowska and Marshall point out, lateral communications – which often occur in the context of social relationships and not in an open community of practice – govern the day-to-day tasks and, ultimately, long-term direction of that library.
As Lisa Lister writes, “Our library culture and organizational structure can either foster or hinder the participatory ideals that contribute to our collegiality.” The ALA’s Code of Ethics (2008) provides principles to accomplish the former – to foster information sharing and clear, open channels of communication, through library organizational and information culture. Three of the eight principles under the code of ethics are:
- We provide the highest level of service to all library users through appropriate and usefully organized resources; equitable service policies; equitable access; and accurate, unbiased, and courteous responses to all requests.
- We distinguish between our personal convictions and professional duties and do not allow our personal beliefs to interfere with fair representation of the aims of our institutions or the provision of access to their information resources.
- We strive for excellence in the profession by maintaining and enhancing our own knowledge and skills, by encouraging the professional development of co-workers, and by fostering the aspirations of potential members of the profession.
All three of these principles can be applied when interacting with fellow library staff as well as when serving users; employees should have equitable access to accurate information that affects their jobs. Reddy and Jansen argue that collaborative information behavior can only take place where there is “trust, awareness, and coordination” (2008). All three of these factors are reflected in the ALA’s Code of Ethics: we must trust that personal beliefs will not hinder coworkers from sharing information, maintain awareness of our own knowledge, and employ coordination through actively sharing information to foster others’ professional development. When information is shared among all individuals in a library workplace – especially from those with power to those with less power – we ultimately provide better service, and the principles of our profession are enacted.
Many thanks to Ellie Collier as my In the Library with the Lead Pipe editor for excellent help in shaping this article, and to Caro Pinto as both external editor and stellar colleague. Thanks are also due to Katy Aronoff, Macee Damon, Hope Houston, and Matt van Sleet, for thought-provoking conversations and for encouraging me to write.
American Library Association. (2008, January 22). Code of ethics of the American Library Association. Retrieved from http://www.ala.org/aboutala/governance/policymanual/updatedpolicymanual/section2/40corevalues
Bates, M. J. (2010). Information behavior. In M.J. Bates & M. N. Maack (Eds), Encyclopedia of library and information sciences (3rd ed.). Retrieved from http://pages.gseis.ucla.edu/faculty/bates/articles/information-behavior.html
Baur, N. (2012, July). The ten commandments of the new professional. Library Worklife. Retrieved from http://ala-apa.org/newsletter/2007/08/16/ten-dos-and-donts-for-your-first-ten-days-of-work/
Budapest Open Access Initiative. (2002, 14 February). Retrieved from http://www.budapestopenaccessinitiative.org/read
Case, D.O. (2002). Looking for information: A survey of research on information seeking, needs, and behavior. Boston: Academic Press.
Chatman, E. A. (2000). Framing social life in theory and research. The New Review of Information Behaviour Research, 1, 3–17.
Choo, C. W., Furness, C. F., Paquette, S., van den Berg, H., Detlor, B., Bergeron, P., & Heaton, L. (2006). Working with information: Information management and culture in a professional services organization. Journal of Information Science 32(6), 491-510.
Courtright, C. (2007). Context in information behavior research. Annual Review of Information Science and Technology, 41, 273-306.
Davenport, T.H., with L. Prusak. (1997). Information ecology: Mastering the information and knowledge environment. Oxford: Oxford University Press.
Fulton, C. (2005). Chatman’s life in the round. In K. E. Fisher, S. Erdelez, & L. McKechnie (Eds.), Theories of information behavior (pp. 79-82). Medford, NJ: Information Today, Inc.
Garvey, W.D. (1979). Communication, the essence of science: Facilitating information exchange among librarians, scientists, engineers, and students. Elmsford, NY: Pergamon Press.
Hersberger, J. (2005). Chatman’s information poverty. In K. E. Fisher, S. Erdelez, & L. McKechnie (Eds.), Theories of information behavior (pp. 75-78). Medford, NJ: Information Today, Inc.
Jankowska, M. A., & Marshall, L. (2003). In Mabry, C.H. (Ed.), Cooperative reference: Social interaction in the workplace (pp. 131-144). New York: The Haworth Press.
Johnson, C.A. (2005). Nan Lin’s theory of social capital. In K. E. Fisher, S. Erdelez, & L. McKechnie (Eds.), Theories of information behavior (pp. 323-327). Medford, NJ: Information Today, Inc.
Johnson, J.D. (2009). Information regulation in work-life: Applying the comprehensive model of information seeking to organizational networks. In T. Afifi & W. Afifi (Eds.), Uncertainty, information management, and disclosure decisions: Theories and applications (pp. 182-200). New York: Routledge.
Lister, L. F. (2003). Reference service in the context of library culture and collegiality: Tools for keeping librarians on the same (fast flipping) pages. In Mabry, C.H. (Ed.), Cooperative reference: Social interaction in the workplace (pp. 33-39). New York: The Haworth Press.
Mautino, M., & Lorenzen, M. (2013). Interdepartmental communication in academic libraries. In K. Blessinger & P. Hrycaj (Eds.), Workplace culture in academic libraries: The early 21st century (pp. 203-217). Oxford: Chandos Publishing.
Miller, R. K. (2013). Helping new librarians find success and satisfaction in the academic library. In K. Blessinger & P. Hrycaj (Eds.), Workplace culture in academic libraries: The early 21st century (pp. 81-95). Oxford: Chandos Publishing.
Reddy, M.C. & Jansen, B.J. (2008). A model for understanding collaborative information behavior in context: A study of two healthcare teams. Information Processing & Management 44, 256-273.
Rioux, K. (2005). Information acquiring-and-sharing. In K. E. Fisher, S. Erdelez, & L. McKechnie (Eds.), Theories of information behavior (pp. 169-173). Medford, NJ: Information Today, Inc.
Singer, P. M., & Hurley, J. E. (2005, June). The importance of knowledge management today. Library Worklife. Retrieved from http://ala-apa.org/newsletter/2005/06/17/the-importance-of-knowledge- management-today/
Smith, M., & Welser, H. T. (2005). Collective action dilemma. In K. E. Fisher, S. Erdelez, & L. McKechnie (Eds.), Theories of information behavior (pp. 95-98). Medford, NJ: Information Today, Inc.
Talja, S. (2002). Information sharing in academic communities: Types and levels of collaboration in information seeking and use. New Review of Information Behavior Research, 3(1), 143-159.
Talja, S., & Hansen, P. (2006). Information sharing. In A. Spink & C. Cole (Eds.), New directions in human information behavior (pp. 113-134). New York: Springer.
Todaro, J. (2007, August). Ten dos and don’ts for your first ten days of work. Library Worklife. Retrieved from http://ala-apa.org/newsletter/2007/08/16/ten-dos-and-donts-for-your-first-ten-days-of-work/
Wilson, T. D. (1999). Models in information behaviour research. Journal of Documentation 55(3), 249-270.
by Elizabeth Galoozis at February 26, 2014 01:00 PM
- One object per line, suitable for parsing as a stream; \n for newline; no data types. Example: PLOS Search API.
- \r\n for newline; array data types, no Date data type; allows nested objects. Example: Twitter Streaming API.
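The second entry describes newline-delimited JSON of the kind streaming APIs emit. As a minimal sketch, assuming the stream has been captured to a hypothetical local file named stream.jsonl, records like the examples below could be consumed one line at a time:

import json

# Parse a stream of newline-delimited JSON, one record per line.
# Python's text mode normalizes \r\n to \n, and strip() removes either;
# blank lines (such as keep-alive newlines) are skipped.
with open("stream.jsonl", encoding="utf-8") as stream:
    for line in stream:
        record = line.strip()
        if not record:
            continue
        item = json.loads(record)
        print(item.get("title"), item.get("date"))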
<item id="1" title="Example One" date="2014-02-26"/>
<item id="2" title="Example Two" date="2014-02-27"/>
Data types for each field can be specified in an external XML Schema file.
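For instance, a minimal schema fragment typing those attributes might look like this (an illustrative sketch, not drawn from the original example):

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Declares the item element and gives each attribute an explicit type -->
  <xs:element name="item">
    <xs:complexType>
      <xs:attribute name="id" type="xs:integer"/>
      <xs:attribute name="title" type="xs:string"/>
      <xs:attribute name="date" type="xs:date"/>
    </xs:complexType>
  </xs:element>
</xs:schema>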
@prefix dc: <http://purl.org/dc/elements/1.1/> .
_:1 dc:title "Example One"; dc:date "2014-02-26" .
_:2 dc:title "Example Two"; dc:date "2014-02-27" .
_:1 <http://purl.org/dc/elements/1.1/title> "Example One"; <http://purl.org/dc/elements/1.1/date> "2014-02-26" .
_:2 <http://purl.org/dc/elements/1.1/title> "Example Two"; <http://purl.org/dc/elements/1.1/date> "2014-02-27" .
All fields have data types implied by the predicate, but to be explicit you can add the language tag @en to the title field and the datatype ^^<http://www.w3.org/2001/XMLSchema#date> to the date field.
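Applied to the records above, the fully explicit statements would look like this (reusing the dc: prefix declared earlier):
_:1 dc:title "Example One"@en ; dc:date "2014-02-26"^^<http://www.w3.org/2001/XMLSchema#date> .
_:2 dc:title "Example Two"@en ; dc:date "2014-02-27"^^<http://www.w3.org/2001/XMLSchema#date> .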
February 26, 2014 12:20 PM