Planet Code4Lib

DSpace User Group at The Repository Fringe Conference, July 2-3 / DuraSpace News

From Bram Luyten, Atmire

The annual Repository Fringe Conference is taking place on the 2nd and 3rd of July in Edinburgh.

The organizers of the Repository Fringe are kindly making the Kelvin Room at RSE Edinburgh available from 2.00 to 3.30pm on Tuesday, July 3rd for a DSpace User Group meeting.

If you are interested in joining this event, please join the ongoing communication about the program on the DSpace UK mailing list.

When available, more information will also be added to:
https://wiki.duraspace.org/display/DSPACE/2018-07-03+Repository+Fringe+DSpace+User+Group+Meeting


I find your lack of faith disturbing: Islandora 7.x-1.11 is out, and yes, those are the VMs you're looking for! / Islandora

Ta da! (Sound) 

Islandora 7.x-1.11 is here, not born from Midi-chlorians on some forgotten and dangerous (desert) planet, but from your own pursuit of intergalactic repository awesomeness through old-fashioned work and effort. Well done, Gold Leaders!

In your quest to perfect the ancient art of code review mind tricks (where the subject finally addresses your concerns, defeated but willing, because it can’t remember where or when everything started!), or your need to find the perfect wording that brings that fallen README.md back into the light (with matching screenshots, of course), you all have managed to beat the odds, save the day, tame the killer alien bugs, close some Jira tickets and hide others behind impenetrable blast doors, defeat the dark empire of closed repository systems (or at least push it back for another 6 months!) and make it out alive for another sequel. You are all amazing open source heroes, and you make us all feel like squishy bear-like characters from some forgotten moon full of trees, ready to party all night long now that our rebellion, the rebellion against closed code, paywalls, opaque and impenetrable roadmaps, forced migrations and bad taste in UI, has succeeded once more!

For those of you asking yourselves “what is all this nerdy who-cares-film-reference thing release managers are giving us”, live with it (no offence intended). Because what matters, the code and the docs, is good, so good, and the release was delivered! Also, we do our best to match the release narrative with the T-shirt themes. Everyone else, enjoy the T-shirts and the VM ASCII art too! We are so proud of, and grateful for, what you have accomplished here. There is no release, no moving forward and no community (or light-speed travel!) without you. You are the pilots and the crew of this ship, and even if difficult to see, always in motion is the future; looks like we are still heading in the opposite direction of the Sun(s)!

To upgrade or simply test this release:

 - Read the Release Notes (really, now!); they cover changes like Islandora Badges deprecating islandora_oaDoi and replacing it with unpaywall. So go read them!
 - Check out the 7.x-1.11 branches of all our Git repos* (or just the ones you actually use). Remember, mixing and matching is not a good idea and leads to fear and hate, etc.
 - For Tuque, check out the 1.11 branch (Tuque loves to be the special one).
 - Spin up one of our Virtual Machines (the Cantina is also serving an IIIF Cantaloupe-flavoured one!)
 - Or take the old-fashioned path to enlightenment and download individual release packages from the Release Notes and Downloads page or from each Git repo*'s release tab.

Note: the Islandora Drupal Filter (version 7.1.11) has its own Java-like (not the hooded scavengers... those are spelled with a “w”) semantic versioning. Feel free to upgrade if you like completeness, but nothing has changed there since our last release, so keeping your 7.1.10 in place is safe.

All 5 new features and 32 improvements are amazing and game-changing!

From “manage your orphaned Objects”, crafted by Brandon, to much improved IIIF Image API integration for our viewers from Jon, to shiny things like a new cyborg/techno upgrade for the Creative Commons XML Forms elements coded by d-r-p (now with many more options, and actually working!), Access Control for inactive Objects by Rosie, the cool redirect-to-first-child (it is hard to be the oldest child!) for compounds by Lucas from Leiden, and Solr for newspaper retrieval from Jared, there is something for everyone! Please give the improvements and new features lists a look, then print, frame and hang them over your desk.

On the loud space explosion side of things (yep... sound in a vacuum)

We had a blocker (the Islandora OAI Importer was messing with your MODS) and that blocker was defeated by a group of courageous Islandora fleet members and Blue Leader Bryan Brown. To be honest, I have never ever seen so many pull request comments, 139, in my long, long life. Our respects to all the people involved. There is a Drush script that you will have to run if you ever triggered a DOI import that contained HTML: see
drush -u 1 islandora-scholar-fix-doi-html-in-mods and, of course, the release notes.

Gracias, Muchas Gracias!

Well, you know. Thanks for making Islandora, over and over again. Thanks for using it and breaking it, thanks for not leaving us for other bad and expensive systems, thanks for sharing your use cases and asking questions, for being explicit about your needs, and thanks for being around. Special thanks to all those who spent hours over the last months trying to fix, document, test and understand what this system, this community and these releases are all about. This is your show; enjoy it.

If you could not be part of our release team this time, it is OK. We know who you are and where you live, and we will be waiting for 7.x-1.12. In the meantime, please take a moment and say "thank you" to one (or more than one) of our volunteers who worked on 7.x-1.11. They deserve a hug too.

• Adam Vessey
• Alan Stanley
• Andrija Sagic
• Bayard Miller
• Ben Companjen
• Brandon Weigel
• Brian Harrington
• Bryan Brown
• Caleb Derven
• Carolyn Moritz
• d-r-p
• Devin Soper
• Diego Pino
• Don Richards
• Giancarlo Birello
• Janice Banser
• Jared Whiklo
• Jon Green
• Jordan Dukart
• Keila Zayas-Ruiz
• Kim Pham
• Marcus Barnes
• Mark Jordan
• Martha Tenneyn
• Matthew Miguez
• Nat Kanthan
• Paul Cummins
• Peter MacDonald
• Rachel Smart
• Robert Waltz
• Robin Naughton
• Rosie Le Faive
• Will Panting
• Yamil Suarez

And done. See you next time. Enjoy; rest I must.

PS: Oh, final quotes: “Always pass on what you have learned.”  and “Do or do not. There is no try.”  Pretty sure Yoda was into open source.
 

D & R & M 

LITA @ ALA Annual 2018 – Executive Perspectives / LITA

Attending the 2018 ALA Annual Conference in New Orleans? Please consider attending the 2018 edition of:

Executive Perspectives: A Conversation on the Future of the Library Technology Industry
Saturday, June 23, 2018, 10:30 – 11:30 am
Morial Convention Center Room 293

The panelists include:

  • Beth Jefferson, Co-founder and CEO, BiblioCommons
  • Berit Nelson, Chief Product Officer, SirsiDynix
  • Jane Burke, VP Strategic Initiatives, Ex Libris
  • Mary Sauer-Games, Vice President, Product Management and Product Marketing, OCLC


See more details at the Executive Perspectives web page.

Marshall Breeding, author of the annual Library Systems Report published in American Libraries, will assemble and moderate this panel of senior executives representing organizations that produce software or services for libraries. Breeding will give a brief introduction and will then lead a lively discussion to probe the technology and business trends. This year’s panel discussion will center on technologies that support the expanding role of libraries in their communities and parent institutions. The panelists will speak on how their organizations are providing or developing technologies beyond traditional resource management and discovery products.

Discover the more than 20 other LITA programs and discussions to make your ALA Annual experience complete.

Questions or Comments?

Contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

QOTD: Sally Jackson on how disagreement makes arguments more explicit / Jodi Schneider

Sally Jackson explicates the notion of the “disagreement space” in a new Topoi article:

“a position that remains in doubt remains in need of defense”1

 

“The most important theoretical consequence of seeing argumentation as a system for management of disagreement is a reversal of perspective on what arguments accomplish. Are arguments the means by which conclusions are built up from established premises? Or are they the means by which participants drill down from disagreements to locate how it is that they and others have arrived at incompatible positions? A view of argumentation as a process of drilling down from disagreements suggests that arguers themselves do not simply point to the reasons they hold for a particular standpoint, but sometimes discover where their own beliefs come from, under questioning by others who do not share their beliefs. A logical analysis of another’s argument nearly always involves first making the argument more explicit, attributing more to the author than was actually said. This is a familiar enough problem for analysts; my point is that it is also a pervasive problem for participants, who may feel intuitively that something is seriously wrong in what someone else has said but need a way to pinpoint exactly what. Getting beliefs externalized is not a precondition for argument, but one of its possible outcomes.”2

From Sally Jackson’s Reason-Giving and the Natural Normativity of Argumentation.3

The original treatment of disagreement space is cited to a book chapter revising an ISSA 1992 paper4, which is somewhat harder to get one’s hands on.

  1. p 12, Sally Jackson. Reason-Giving and the Natural Normativity of Argumentation. Topoi. 2018 Online First. http://doi.org/10.1007/s11245-018-9553-5
  2. p 10, Sally Jackson. Reason-Giving and the Natural Normativity of Argumentation. Topoi. 2018 Online First. http://doi.org/10.1007/s11245-018-9553-5
  3. Sally Jackson. Reason-Giving and the Natural Normativity of Argumentation. Topoi. 2018 Online First. http://doi.org/10.1007/s11245-018-9553-5
  4. Jackson S (1992) “Virtual standpoints” and the pragmatics of conversational argument. In: van Eemeren FH, Grootendorst R, Blair JA, Willard CA (eds) Argument illuminated. International Centre for the Study of Argumentation, Amsterdam, pp. 260–226

The Four Most Expensive Words in the English Language / David Rosenthal

There are currently a number of attempts to deploy a cryptocurrency-based decentralized storage network, including MaidSafe, FileCoin, Sia and others. Distributed storage networks have a long history, and decentralized, peer-to-peer storage networks a somewhat shorter one. None have succeeded; Amazon's S3 and all other successful network storage systems are centralized.

Despite this history, initial coin offerings for these nascent systems have raised incredible amounts of "money", if you believe the heavily manipulated "markets". According to Sir John Templeton, the four words are "this time is different". Below the fold I summarize the history, then ask what is different this time and how expensive it is likely to be.

The idea that the edge of the Internet has vast numbers of permanently connected, mostly-empty hard disks that could be corralled into a peer-to-peer storage system that was free, or at least cheap, while offering high reliability and availability has a long history. The story starts:

A long, long time ago in a computer far, far away....

The realization that networked personal computers need a shared, remote file system in addition to their local disk, like many things, starts with the Xerox Alto and its Interim File Server, designed and implemented by David R. Boggs and Ed Taft in the late 70s. As IP networking started to spread in the early 80s, CMU's Andrew project started work in 1983 on the Andrew file system, followed in 1984 by Sun's work on NFS (RFC1094). Both aimed to provide a Unix-like file system API to processes on client computers, implemented by a set of servers. This API was later standardized by POSIX.

Mi Disco Es Su Disco

Both the Andrew File System and NFS started from the idea that workstation disks were small and expensive, so the servers would be larger computers with big disks, but NFS rapidly became a way that even workstations could share their file systems with each other over a local area network. In the early 90s people noticed that workstation CPUs were idle a lot of the time, and together with the shared file space this spawned the idea of distributing computation across the local network:
The workstations were available more than 75% of the time observed. Large capacities were steadily available on an hour to hour, day to day, and month to month basis. These capacities were available not only during the evening hours and on weekends, but during the busiest times of normal working hours.
By the late 90s the size of workstation and PC disks had increased and research, for example at Microsoft, showed these disks were also under-utilized:
We found that only half of all disk space is in use, and by eliminating duplicate files, this usage can be significantly reduced, depending on the population size. Half of all machines are up and accessible over 95% of the time, and machine uptimes are randomly correlated. Machines that are down for less than 72 hours have a high probability of coming back up soon. Machine lifetimes are deterministic, with an expected lifetime of around 300 days. Most machines are idle most of the time, and CPU loads are not correlated with the fraction of time a machine is up and are weakly correlated with disk loads.
This gave rise to the idea that the free space in workstation disks could be aggregated, first into a local network file system, and then into a network file system that spanned the Internet. Intermemory (also here), from NEC's Princeton lab in 1998, was one of the first, but there have been many others, such as Berkeley's Oceanstore (project papers) from 2000.

All peers are created equal

A true peer-to-peer architecture would eliminate the central organization and was thought to have many other advantages. In the early 2000s this led to a number of prototypes, including FARSITE, PAST/Pastiche and CFS, based on the idea of symmetry; peers contributed as much storage to the network as they consumed at other peers:
In a symmetric storage system, node A stores data on node B if and only if B also stores data on A. In such a system, B can periodically check to see if its data is still held by A, and vice versa. Collectively, these pairwise checks ensure that each node contributes as it consumes, and some systems require symmetry for exactly this reason [6, 18].
(NB: replication meant that the amount of storage consumed was greater than the amount of data stored. Peers wanting reliability had to build their own replication strategy by symmetrically storing data at multiple peers.)
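To make the pairwise check concrete, here is a minimal sketch of the idea (hypothetical code, not taken from FARSITE, Pastiche, CFS or any other system named above): peer B keeps only hashes of the blocks it stored on peer A, and periodically challenges A to return one of them.

import hashlib
import random

class Peer:
    """Toy peer that holds blocks for a partner and answers audits."""
    def __init__(self):
        self.stored = {}                      # block_id -> bytes held for the partner

    def store(self, block_id, data):
        self.stored[block_id] = data

    def answer_challenge(self, block_id):
        return self.stored.get(block_id)      # an honest peer still has the block

class Auditor:
    """Peer B's view: remembers digests, not copies, of what it stored on A."""
    def __init__(self, partner):
        self.partner = partner
        self.digests = {}                     # block_id -> expected SHA-256 digest

    def put(self, block_id, data):
        self.digests[block_id] = hashlib.sha256(data).hexdigest()
        self.partner.store(block_id, data)

    def spot_check(self):
        block_id = random.choice(list(self.digests))
        answer = self.partner.answer_challenge(block_id)
        return answer is not None and hashlib.sha256(answer).hexdigest() == self.digests[block_id]

# B stores two blocks on A, then audits one at random; A fails once it discards them.
a = Peer()
b = Auditor(a)
b.put("block-1", b"replica of B's data")
b.put("block-2", b"more of B's data")
print(b.spot_check())                         # True while A is honest and available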

I am shocked, shocked to find that
cheating is going on in here

These systems were vulnerable to the problem that afflicted Gnutella, Napster and other file-sharing networks: peers were reluctant to contribute, and lied about their resources. The Samsara authors wrote:
Several mechanisms to compel storage fairness have been proposed, but all of them rely on one or more features that run counter to the goals of peer-to-peer storage systems. Trusted third parties can enforce quotas and certify the rights to consume storage [23] but require centralized administration and a common domain of control. One can use currency to track the provision and consumption of storage space [16], but this requires a trusted clearance infrastructure. Finally, certified identities and public keys can be used to provide evidence of storage consumption [16, 21, 23], but require a trusted means of certification. All of these mechanisms require some notion of centralized, administrative overhead—precisely the costs that peer-to-peer systems are meant to avoid.
Samsara from 2003 was a true peer-to-peer system which:
enforces fairness in peer-to-peer storage systems without requiring trusted third parties, symmetric storage relationships, monetary payment, or certified identities. Each peer that requests storage of another must agree to hold a claim in return---a placeholder that accounts for available space. After an exchange, each partner checks the other to ensure faithfulness. Samsara punishes unresponsive nodes probabilistically. Because objects are replicated, nodes with transient failures are unlikely to suffer data loss, unlike those that are dishonest or chronically unavailable.
As far as I know Samsara never got into production use.

From each according to his ability,
to each according to his needs

At the same time Brian Cooper and Hector Garcia-Molina proposed an asymmetric system of "bid trading":
a mechanism where sites conduct auctions to determine who to trade with. A local site wishing to make a copy of a collection announces how much remote space is needed, and accepts bids for how much of its own space the local site must "pay" to acquire that remote space. We examine the best policies for determining when to call auctions and how much to bid, as well as the effects of "maverick" sites that attempt to subvert the bidding system. Simulations of auction and trading sessions indicate that bid trading can allow sites to achieve higher reliability than the alternative: a system where sites trade equal amounts of space without bidding.
The mechanisms these systems developed to enforce symmetry or trading were complex, and it was never really clear that they were proof against attack, because they were never deployed at enough scale to get attacked.

It's 10pm, do you know where your bytes are?

The API exported by services like these falls into one of two classes:
  • The "file system and object store" model, in which the client sees a single service provider. The service decides which peer stores what; the client has no visibility into where the data lives.
  • The "storage marketplace" model, in which the client sees offers from peers to store data at various prices, whether in space or cash. The client chooses where to store what.
There is a significant advantage of the "file and object store" model. Because the client transfers data to and from the service, the service can divide the data into shards and use erasure coding to deliver reliability at a low replication factor. In the "storage marketplace" model the client transfers data to and from the peer from which it decides to buy; the client needing reliability has to buy service from multiple peers and shard the data across them itself, greatly increasing the complexity of using the service. In principle, in the "file and object store" model the service can run an internal market, purchasing the storage from the most competitive peers.
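A rough illustration of why the sharding matters for cost (illustrative, assumed numbers; no particular service implied): plain replication pays for every full copy, while k-of-n erasure coding pays only an n/k overhead for comparable durability, and in the marketplace model the client has to manage those n shards across n peers itself.

# Back-of-envelope storage overhead comparison (assumed, illustrative figures).
data_tb = 10                        # logical data the client wants stored

replicas = 3                        # plain replication: three full copies
replication_raw = data_tb * replicas

k, n = 10, 14                       # erasure coding: any 10 of 14 shards rebuild the data
erasure_raw = data_tb * n / k

print(f"replication: {replication_raw} TB raw ({replicas}x overhead)")
print(f"erasure {k}-of-{n}: {erasure_raw} TB raw ({n / k:.1f}x overhead)")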

If at first you don't succeed ...

Why didn't Intermemory, Oceanstore, FARSITE, Pastiche, CFS, Samsara and all the others succeed? Four years ago I identified a number of reasons:
  • Their model of the edge of the Internet was that there were a lot of desktop computers, continuously connected and powered-up, with low latency and no bandwidth charges, and with 3.5" hard disks that were mostly empty. Since then, the proportion of the edge with these characteristics has become vanishingly small.
  • In many cases, for example Samsara, the idea was that participants would contribute disk space and, in return, be entitled to store data in the network. Mechanisms were needed to enforce this trade, to ensure that peers actually did store the data they claimed to, and these mechanisms turned out to be hard to make attack-resistant.
  • Even if erasure coding were used to reduce the overall replication factor, it would still be necessary for participants to contribute significantly more space than they could receive in return. And the replication factor would need to be higher than in a centrally managed storage network.
  • I don't remember any of the peer-to-peer systems in which participants could expect a monetary reward. In the days when storage was thought to be effectively free, why would participants need to be paid? Alas, storage is a lot less free than it used to be.
Now I can add two more:
  • The centralized systems such as Intermemory and Oceanstore never managed to set up the administrative and business mechanisms to channel funds from users to storage service suppliers, let alone the marketing and sales needed to get users to pay.
  • The idea that peer-to-peer technology could construct a reliable long-term storage infrastructure from a self-organizing set of unreliable, marginally motivated desktops wasn't persuasive. And in practice it is really hard to pull off.

Show me the money!

Bandwidth and hard disk space may be cheap, but they aren't free.

Both Intermemory and Oceanstore were proposed as subscription services; users paid a monthly fee to a central organization that paid for the network of servers. In practice the business of handling these payments never emerged. The symmetric systems used a "payment in kind" model to avoid the need for a business of this kind.

The idea that the Internet would enable automated "micro-payments" has a history as long as that of distributed storage, but I won't recount it. Had there been a functional micro-payment system it is possible that a distributed or even decentralized storage network could have used it and succeeded. Of course, Clay Shirky had pointed out the reason there wasn't a functional Internet micro-payment system back in 2000:
The Short Answer for Why Micropayments Fail

Users hate them.

The Long Answer for Why Micropayments Fail

Why does it matter that users hate micropayments? Because users are the ones with the money, and micropayments do not take user preferences into account.
One of Satoshi Nakamoto's critiques of existing payment systems when he proposed Bitcoin was that they were incapable of micro-payments. Alas, Bitcoin has turned out to be incapable of micro-payments as well. But as Bitcoin became popular, in 2014 a team at Microsoft and U. Maryland proposed:
a modification to Bitcoin that repurposes its mining resources to achieve a more broadly useful goal: distributed storage of archival data. We call our new scheme Permacoin. Unlike Bitcoin and its proposed alternatives, Permacoin requires clients to invest not just computational resources, but also storage. Our scheme involves an alternative scratch-off puzzle for Bitcoin based on Proofs-of-Retrievability (PORs). Successfully minting money with this SOP requires local, random access to a copy of a file. Given the competition among mining clients in Bitcoin, this modified SOP gives rise to highly decentralized file storage, thus reducing the overall waste of Bitcoin.
This wasn't clients directly paying for storage, the funds for storage came from the mining rewards and transaction fees. And, of course, the team were behind the times. Already by 2014 the Bitcoin mining network wasn't really decentralized.
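For readers who have not met Proofs-of-Retrievability, here is a toy sketch of the basic idea (a simplification of my own, not Permacoin's actual scratch-off puzzle): the verifier keeps keyed MACs of a few randomly chosen blocks and later challenges the prover to return those blocks, so passing the audit requires actually retaining the file.

import hashlib
import hmac
import os
import random

BLOCK = 4096

def split_blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

# Setup: before handing the file over, the verifier MACs a random sample of blocks.
key = os.urandom(32)
file_data = os.urandom(BLOCK * 100)            # stand-in for an archival file
blocks = split_blocks(file_data)
sampled = random.sample(range(len(blocks)), 10)
tags = {i: hmac.new(key, blocks[i], hashlib.sha256).digest() for i in sampled}

def prove(stored_data, indices):
    """Prover returns the challenged blocks from whatever it actually stored."""
    stored_blocks = split_blocks(stored_data)
    return {i: stored_blocks[i] for i in indices}

def verify(response):
    """Verifier recomputes the MACs over the returned blocks."""
    return all(hmac.compare_digest(hmac.new(key, response[i], hashlib.sha256).digest(), tags[i])
               for i in tags)

print(verify(prove(file_data, list(tags))))                   # True: the file was kept
print(verify(prove(os.urandom(len(file_data)), list(tags))))  # False: the file was discarded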

This time is different

After that long preamble, we can get to the question: What is different about the current rash of cryptocurrency-based storage services from the long line of failed predecessors? There are two big differences.

The first is the technology, not so much the underlying storage technology but more the business and accounting technology that is intended to implement a flourishing market of storage providers. The fact that these services are addressing the problem of a viable business model for providers in a decentralized storage market is a good thing. The lack of a viable business model is a big reason why none of the predecessors succeeded.

Since the key property of a cryptocurrency-based storage service is a lack of trust in the storage providers, Proofs of Space and Time are required. As Bram Cohen has pointed out, this is an extraordinarily difficult problem at the very frontier of research. No viable system has been deployed at scale for long enough for reasonable assurance of its security. Thus the technology difference between these systems and their predecessors is at best currently a maybe.

However:

It Isn't About The Technology

Remember It Isn't About The Technology? It started with a quote from Why Is The Web "Centralized"?:
What is the centralization that decentralized Web advocates are reacting against? Clearly, it is the domination of the Web by the FANG (Facebook, Amazon, Netflix, Google) and a few other large companies such as the cable oligopoly.

These companies came to dominate the Web for economic not technological reasons.
The second thing that is different now is that the predecessors never faced an entrenched incumbent in their market. Suppose we have a cryptocurrency-based peer-to-peer storage service. Let's call it P2, to emphasize that the following is generic to all cryptocurrency-based storage services.

To succeed, P2 has to take market share from the centralized storage services that dominate the market for Internet-based storage. In practice that means taking market share from Amazon's S3, which has dominated the market since it was launched in 2006. How do they stack up against each other?
  • P2 will be slower than S3, because the network between the clients and the peers will be slower than S3's infrastructure, and because S3 doesn't need the overhead of enforcement.
  • P2 will lack access controls, so clients will need to encrypt everything they store.
  • P2 will be less reliable, since a peer stores a single copy where S3 stores 3 with geographic diversity. P2 clients will be a lot more complex than S3 clients, since they need to implement their own erasure coding to compensate for the lack of redundancy at the service.
  • P2's pricing will be volatile, where S3's is relatively stable.
  • P2's user interface and API will be a lot more complex than S3's, because clients need to bid for services in a marketplace using coins, and bid for coins in an exchange using "fiat currency". 
Clearly, P2 cannot charge as much per gigabyte per month as S3, since it is an inferior product. P2's pricing is capped at somewhat less than S3's. But the cost base for a P2 peer will be much higher than S3's cost base, because of Amazon's massive economies of scale, and its extraordinarily low cost of capital. So the business of running a P2 peer will have margins much lower than Amazon's notoriously low margins.

Despite this, these services have been raising extraordinary amounts of capital. For example, on September 7th last year Filecoin, one of the more credible efforts at a cryptocurrency-based storage service, closed a record-setting Initial Coin Offering:
Blockchain data storage network Filecoin has officially completed its initial coin offering (ICO), raising more than $257 million over a month of activity.

Filecoin's ICO, which began on August 10, quickly garnered millions in investment via CoinList, a joint project between Filecoin developer Protocol Labs and startup investment platform AngelList. That launch day was notable both for the large influx of purchases of Simple Agreements for Future Tokens, or SAFTs (effectively claims on tokens once the Filecoin network goes live), as well as the technology issues that quickly sprouted as accredited investors swamped the CoinList website.

Today, the ICO ended with approximately $205.8 million raised, a figure that adds to the $52 million collected in a presale that included Sequoia Capital, Andreessen Horowitz and Union Square Ventures, among others.
Let's believe these USD amounts for now (much of the ICO involved cryptocurrencies), and that the $257M is the capital for the business. Actually, only "a significant portion" is:
a significant portion of the amount raised under the SAFTs will be used to fund the Company’s development of a decentralized storage network that enables entities to earn Filecoin (the “Filecoin Network”).
Investors want a return on their investment, let's say 10%/yr. Ignoring the fact that:
The tokens being bought in this sale won’t be delivered until the Filecoin Network launches. Currently, there is no launch date set.
Filecoin needs to generate $25.7M/yr over and above what it pays the providers. But it can't charge the customers more than S3, or $0.276/GB/yr. If it didn't pay the providers anything it would need to be storing over 93PB right away to generate a 10% return. That's a lot of storage to expect providers to donate to the system.
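A quick back-of-envelope check of that figure, using only the numbers quoted above (my arithmetic, not Filecoin's):

# Assumed inputs from the text: $257M raised, a 10%/yr return target,
# and S3's cheapest tier at $0.023/GB/month as the price ceiling.
raised_usd = 257e6
target_return = 0.10 * raised_usd              # $25.7M/yr owed to investors
s3_price_per_gb_yr = 0.023 * 12                # $0.276/GB/yr cap on what P2 can charge

breakeven_gb = target_return / s3_price_per_gb_yr
print(f"~{breakeven_gb / 1e6:.0f} PB under management")   # ~93 PB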

Using 2016 data from Robert Fontana and Gary Decad of IBM, and ignoring the costs of space, power, bandwidth, servers, system administration, etc., the media alone represent $3.6M in capital. Let's assume a 5-year straight-line depreciation ($720K/yr) and a 10% return on capital ($360K/yr); that is $1.08M/yr to the providers just for the disks. If we assume the media are 1/3 of the total cost of storage provision, the system needs to be storing 107PB.
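Extending the same arithmetic to include the media costs (again my own calculation on the figures above; the one-third share of media in total cost is the text's assumption) lands in the same ballpark:

media_capital = 3.6e6                  # disks alone, per the 2016 Fontana/Decad figures
depreciation = media_capital / 5       # $720K/yr, 5-year straight-line
capital_return = 0.10 * media_capital  # $360K/yr at a 10% return on capital
media_cost_yr = depreciation + capital_return       # $1.08M/yr for the disks
provider_cost_yr = media_cost_yr * 3   # media assumed to be 1/3 of total provision cost

investor_return = 25.7e6               # from the previous calculation
s3_price_per_gb_yr = 0.276
needed_gb = (investor_return + provider_cost_yr) / s3_price_per_gb_yr
print(f"~{needed_gb / 1e6:.0f} PB")    # ~105 PB, roughly the 107PB cited above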

Another way of looking at these numbers is that Amazon's margins on S3 are awesome, something I first wrote about 5 years ago.

Running a P2 peer doesn't look like a good business, even ignoring the fact that only 70% of the Filecoin are available to be mined by storage suppliers. But wait! The cryptocurrency part adds the prospect of speculative gain! Oh no, it doesn't:
When Amazon launched S3 in March 2006 they charged $0.15 per GB per month. Nearly 5 years later, S3 charges $0.14 per GB per month for the first TB.
As I write, their most expensive tier charges $0.023 per GB per month. In twelve years the price has declined by a factor of 6.5, or about 15%/yr. In the last seven years it has dropped about 23%/yr. Since its price is capped by S3's, one can expect that P2's cryptocurrency will decline by at least 15%/yr and probably 23%/yr. Not a great speculation once it gets up and running!
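The annualized decline rates quoted there can be checked directly from the prices in the text (my arithmetic):

# S3 price history from the text: $0.15/GB/month (2006), $0.14 (2011), $0.023 (2018).
def annual_decline(p_start, p_end, years):
    return 1 - (p_end / p_start) ** (1 / years)

print(f"2006-2018: {annual_decline(0.15, 0.023, 12):.0%}/yr")   # ~14-15%/yr
print(f"2011-2018: {annual_decline(0.14, 0.023, 7):.0%}/yr")    # ~23%/yr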

Like almost all cryptocurrencies, Filecoin is a way to transfer wealth from later to earlier participants. This was reflected in the pricing of the ICO; the price went up linearly with the total amount purchased, creating a frenzy that crashed the ICO website. The venture funds who put up the initial $50M, including Union Square Ventures, Andreessen Horowitz and Sequoia, paid less than even the early buyers in the ICO. The VCs paid about $0.80, the earliest buyer paid $1.30.

Filecoin futures 6/16/18
Filecoin is currently trading in the futures market at $7.26, down from a peak of $29.59. The VCs are happy, having found many "greater fools" to whom their investment can, even now, be unloaded at nine times their cost. So are the early buyers in the ICO. The greater fools who bought at the peak have lost more than 70% of their money.

For whosoever hath, to him shall be given,
and he shall have more abundance:
but whosoever hath not,
from him shall be taken away even that he hath.

Ignoring for now the fact that running P2 peers won't be a profitable business in competition with S3, let's look at the effects of competition between P2 peers. As I wrote more than three years ago in Economies of Scale in Peer-to-Peer Networks:
The simplistic version of the problem is this:
  • The income to a participant in a P2P network of this kind should be linear in their contribution of resources to the network.
  • The costs a participant incurs by contributing resources to the network will be less than linear in their resource contribution, because of the economies of scale.
  • Thus the proportional profit margin a participant obtains will increase with increasing resource contribution.
  • Thus the effects described in Brian Arthur's Increasing Returns and Path Dependence in the Economy will apply, and the network will be dominated by a few, perhaps just one, large participant.
The advantages of P2P networks arise from a diverse network of small, roughly equal resource contributors. Thus it seems that P2P networks which have the characteristics needed to succeed (by being widely adopted) also inevitably carry the seeds of their own failure (by becoming effectively centralized).
Thus, as we see with Bitcoin, if the business of running P2 peers becomes profitable, the network will become centralized.
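A toy model of that argument (the sub-linear cost exponent is an assumption of mine, chosen only to illustrate the shape of the effect): income grows linearly with contributed resources while cost grows sub-linearly, so margins rise with size and the largest participants can undercut everyone else.

# Illustrative economies-of-scale model (assumed numbers, not measured data).
def margin(resources, price=1.0, unit_cost=0.5, scale_exponent=0.8):
    income = price * resources
    cost = unit_cost * resources ** scale_exponent   # sub-linear: bigger is cheaper per unit
    return 1 - cost / income

for r in (1, 10, 100, 1000):
    print(f"{r:>5} units -> margin {margin(r):.0%}")
# Margins of roughly 50%, 68%, 80% and 87%: the network drifts toward its largest peers.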

But that's not the worst of it. Suppose P2 storage became profitable and started to take business from S3. Amazon's slow AI has an obvious response, it can run P2 peers for itself on the same infrastructure as it runs S3. With its vast economies of scale and extremely low cost of capital, P2-on-S3 would easily capture the bulk of the P2 market. It isn't just that, if successful, the P2 network would become centralized, it is that it would become centralized at Amazon!

Taking it to the House for a #DayOfAdvocacy for #NetNeutrality / District Dispatch

Thanks in large part to public pressure, the Senate voted to block the current Federal Communications Commission’s attempt to roll back strong, enforceable net neutrality rules. Now, action has moved to the House of Representatives. House leadership has said it will not schedule a vote on the Congressional Review Act (CRA) resolution, but Members can force one if a majority signs a discharge petition. If the discharge petition receives enough signatures, Members will have the opportunity to put enforceable net neutrality protections in place by voting for the CRA.

ALA and its members’ voices are a key part of the chorus of Americans calling on policymakers to save net neutrality; since mid-December an ALA action alert has generated more than 6,000 emails to members of Congress.

Here’s another way to be involved: on Tuesday, June 26th, net neutrality supporters will hold an Advocacy Day with opportunities to visit your representative’s office. Volunteers will start the day with advocacy training to make sure you’re ready and feel comfortable talking to your member of Congress. Then volunteers will be on hand to take you to your representative’s office so you can share why an open internet is important to libraries and why it’s essential that your representative supports the CRA resolution.

If you cannot join the Day of Advocacy, you can show your support online:


Adding Useful Friction to Library UX: Ideas from UXLibs Workshop Participants / Shelley Gullikson

Post-its with ideas on adding friction

At this year's UXLibs conference, I led a workshop on adding useful friction to the library user experience. I’ve already posted my text of the workshop, but I told the participants that I also wanted to share the ideas that they came up with during the course of the workshop. The ideas were generated around three themes:

  • friction for users, with a goal of helping those same users
  • friction for staff, with a goal of helping users
  • friction to improve inclusion in the library

What is below is verbatim, as much as I could decipher, from the post-its. There seems to be a combination of examples of bad friction and ideas for good friction. If you were a participant and would like to correct or flesh out what’s here, please get in touch!

Here are all of the responses from both workshops, in no order at all:

  • Remove desk or make it a low height only
  • Lower the circulation desk or remove it altogether
  • Users: appointments (promote other options first, Books/resources)
  • Giving the article rather than showing to find the resource
  • Answer questions instead of showing how to do it
  • Wayfinding no.1 enquiry — looking at it with fresh eyes
  • Staff want to put passive aggressive posters everywhere
  • Toilet sign / Not a key — Gender N.
  • Have the students suggest furniture in the library
  • A room with a computer and a loudspeaker where the patron can hear what is on the screen
  • Clickable text where a loudspeaker symbol shows you that you can hear what is said
  • Wayfinding signage / Posters — loads of / passive aggressive
  • Enforced preview when editing web pages
  • Put forms through web group to ensure they’re not excluding
  • When they click around one website
  • When they order on shelf items
  • When they order articles
  • Making them having coffee with other departments and teaching staff
  • Making them walk across campus to use another office
  • Making them use the public spaces one hour a week
  • You haven’t used Library Search for a while – do you need some tips?
  • Get rid of printed self-help so staff have to promote online self-help
  • Friction to help people understand the space they’re in
  • Helping new users find books (when they want it!)
  • Multi-language at entrances and around
  • Remove classification systems!!
  • Inclusivity check before things are published
  • Remove search term suggestions databases
  • Remove phone, e-mail, etc. from the info desk (anything that isn’t talking to students)
  • Giving more options for reference help – all hours of the day, off campus, offline, etc.
  • Change the quiet reading rooms with the group rooms every week
  • Have staff meetings at the group study areas/rooms
  • Put the check-out machines on the top floor
  • Wary of pop ups but what if pop has the answer
  • To slow down scanning of web pages — part scan and leave just prior to achieving answer
  • Pop up box: “Sign in to get: full text, saved search, e-shelf, etc.”
  • A confirm button when adding manual charges to accounts
  • All A4H staff to be fully trained!
  • Had thought of reducing page options/text but could friction be added another way?
  • When they order interlibrary loans
  • We added friction to subj guide BUT — super friction -> no control for subj lib. Therefore like the less friction idea presented by Shelley
  • Pop up on self-issues: Your books will auto-renew for “x” weeks but may be recalled
  • Are you sure? Deleting records from Endnote web
  • EMS Editions: Removing assets
  • Exhibition gallery: interactive screen
  • Event submission
  • Feedback forms
  • Find a book / study space blindfolded?
  • Stop them from using terms and phrases that people don’t understand
  • Test all changes to web page on real users, especially extreme users
  • Plain language checkers for web content
  • Highlighting research consultations over guides + DBs
  • Declarative signage: “You are on the (blank) floor, (blank wing)”
  • Website style guides
  • Push back on academic staff to upload accessible teaching materials to VLE
  • Making ILL request — have you check whether this is in library? (?)
  • Encourage use of learning technologies, but also provide analogue alternatives
  • Provide alternative signage options (multiple alternatives)
  • When entering study zones -> be aware of conditions expected in space
  • Links that take users to Arabic page rather than going back to main page
  • Allowing males to borrow / use the library during female hours
  • Having box to return the books used inside the library
  • Having a shared online spreadsheet if they would like to have someone to cover their desk hours rather than emailing
  • Did you know? pop ups on library websites
  • iPads out intermittently to draw attention
  • Having to meet with a librarian
  • Signage (or something else!) that prompts new students to consider using Library catalogue before trawling the physical shelves
  • Helpdesk would benefit from friction when students make initial enquiries re: Learning Difference Support (e.g. Dyslexia) — In my Univ Lib they are required to ask about this in an “open” queue without any confidentiality!
  • Near shelves potential redirect to lib cat
  • On entry to help students choose appropriate working space
  • On entry think about what student intends to achieve during visit
  • Replying to email enquiry messages force scroll to beginning to force people to read whole history
  • Have you left this in required state? Bog poster for open access disabled loos
  • Creating new challenges in every day tasks to upskill staff, provide better services to users
  • Asking questions (too many!) to get essential services in place / working properly (e.g. hearing loops [or might be learning loops])
  • Forcing users to rub up against us: “This resources has been brought to you by your library”
  • (for colleagues) Flag for spamming no. of forwards and emails to lists per day
  • Returning books through the book sorter—asking “have you returned all the books you need to” before issuing a receipt
  • Students who don’t have a disability but are anxious to be able to have one-to-one library hours, therefore all need to be asked at induction
  • ILR’s (InterLibrary Requests) asking “is this available locally”
  • Items in store requested through the catalogue—”can this be accessed online” before the final request click—stops unnecessary collections from store that are not collected
  • Vendor outlinks “You are about to leave the library’s website”
  • Time to choose to read something you wouldn’t have thought of yourself
  • Time to reflect on impact of a certain behavior
  • Time to advertise additional services that might be helpful
  • Screens/maps to look at before looking for books → are you going where you want to?
  • Set target times to resolve a query. Solutions should be quick and easy.
  • Library website: design decision
  • Library website: content
  • CMAS editors: removing assets
  • ILL request form when item not available
  • Library clear link when no results on discovery layer
  • Disable possibility to take a breath from chat
  • Stricter policy for adding web pages
  • Slow down book drop
  • Friction in ordering interlibrary loans which should be purchases
  • How do we offer booking of “resource rooms”?
  • Can we make it more difficult to make web pages inaccessible?
  • Forced message to remove USB before the PC shuts down/logs you out
  • Triage questions? IT vs Library
  • Only hosting video tutorial with embedded subtitles — don’t rely on YouTube autotitles = RUBBISH!!
  • What images are you using to show your library? Does it look inclusive on posters / online / in literature? E.g. pic of our staircase
  • Reservation Collection—self-issue—extra touch screen with due date for 48 hr loans
  • Stop them from rushing to the top floor, like signs in the elevator
  • Force staff to actually test the accessibility of web sites
  • Students, faculty, other ←
  • Library VRS → stop before leaving the chat “Are you sure you don’t need further help?”
  • How do we address people / users?
  • Double-check before making a poster to “solve” a problem!
  • Role management: Design does not equal project management
  • Peer-checking of presentations / teaching sessions for accessibility
  • Writing training materials for students with English as a 2nd language
  • Uploading to online system: large files, Microsoft format, video and audio (not stream), copyrighted
  • To support distant or part-time students
  • Starting projects without: clarity about outcomes, testing, resources required
  • Adding resource e.g. reading list not using the system
  • Copying over last years materials to this years module
  • Better obstacle than fee for interlibrary loans or document delivery
  • Remove “scripts” for staff answers on Just Ask (IM) — be more personal?
  • No pictures of PDFs or text on web — screen readers can’t cope with them
  • Pop-ups letting students know access is being provided by the library (to online resources)
  • Library website
  • QR codes??
  • Symbols instead of English — Puts everyone at the same level of wayfinding regardless of language skills
  • Diverse reading lists
  • Know Your Staff Wiki!
  • Regular process to review existing web content before adding more
  • Entrance vestibule to silent study spaces
  • Promoting self-service portal at library entrance
  • Chatline. FAQs page to scroll through to get to input page
  • Force a catalogue search before submitting an ILL request
  • Policy that all staff deal with a request for help at point of need and see through
  • Logging all enquiries on an EMS
  • Pick-up shelf: Make users check out their reading room loans
  • Database records in Summon—people going straight to Lib search when not everything is listed
  • Sign up form for focus groups so we can pick by course, not first come, first served
  • Academic workbooks arranged by topic on CMS not just straight link to AS server
  • Online support and workshops more prominently promoted than 1:1s as easier to same [some?] large number
  • I need to approve all external comms and surveys
  • Web edits — I have to approve all pages
  • Training on [survey?] software linked to approval from me
  • Me as final editor for newsletter (brand / accessibility)
  • Gender neutral toilets
  • Editing text for screen readers — on all channels
  • Check catalogue for students who have incorrect info on reserve items
  • Complete a short online library quiz as part of first module
  • Activate your student card in the library within the last week of term
  • Put “Please refer to…” messages where rules aren’t clear
  • ILL — request articles/books we already have—way to make them search first?
  • Search box — choose what format first (they will type anything in a search box without thinking and then think we don’t have an article because they are looking in catalog)
  • Ebooks — add to reserves or pop up asking them to look by title
  • Student staff tell students we don’t have an item when we do — need to try other ways — have system prompt?
  • Expand chat hours so people uncomfortable approaching desk can still ask questions
  • CMS — make popup for alt-text but also color contrast, web writing, close-captioning for videos, etc.
  • Content manager for website — approve all changes even Subject Guides
  • Better feedback on item request — many are not picked up
  • Knowing who your liaison is if on a certain page
  • Staff Friction: Using CRM or equivalent to report issues to other teams, i.e. metadata errors: don’t ring team, logon LANDESK (CRM). Has advantages collating themes and work.
  • Inclusion: Feedback form gender
  • User [Gateways portals]: To prompt and remind about compliance maybe – copyright / usage — use of data/info. Authentication does this also.
  • Staff: Printing checklist Actions before resorting to use of staff printer
  • User: To prompt remind/inform resources purchased on behalf of students by institution
  • IT passwords for faculty users
  • Using lockers after library closing hours
  • Computers on every floor (staff)
  • Toilets (improve inclusion)
  • Game area (students)
  • Lounge area (students)
  • Change main structure of website
  • Adding too long text to buttons
  • Adding too many main category pages
  • Put “silence” signs on every door → there have to be noisy places
  • Just grab a book (without having a look to the books around)
  • Policy: force all staff to use structured text documents so that they are accessible
  • Self-return machines (Don’t take think books, so we need to “slow” the users know know this)
  • Inclusion: Programs → languages
  • Open access funding program → read criteria before submitting the application
  • Adding too long texts into modals designed to be glanced
  • Gender in feedback forms
  • Requirement for text and audio on video
  • Request / reservation: This book is on the shelves in this library. Are you sure you want to request it? [checkbox] Yes.
  • Sign on ground floor: The only toilets and drinking water in this building are on this floor. (Most library services are 2 floors up from here)
  • Making gender option in forms more inclusive e.g. more option or textbox
  • Before making an order/reservation that costs money
  • Before making a reservation
  • Before deleting your user account
  • Before deleting any info permanently
  • Get staff out of their offices — send them to find academic who have not been in the library for a long time
  • We have a Lib Reciprocal Programme across unis in S.A. But in our Lib we force users to see an Info Lib before they get a letter to visit another uni library.
  • Catalogue research (first finding is seldom the best)
  • Remove option to add media on webpages for most staff
  • Accessibility checks before publishing a webpage
  • Filling out book request form for somebody
  • Clearing a list in the catalog
  • Printing single vs. double sided
  • Staff designate, monitor, and enforce quiet areas
  • Building entrance vs. exit
  • Reserving lendable technology
  • Requesting items from storage
  • Information in subject guides
  • Giving information to new students about the library’s services
  • Ordering interlibrary loans
  • In the Discovery systems
  • Request print copies of articles
  • Promote new physical and online materials in entrance
  • User (student) testing before buying e-books
  • Build UX into all projects
  • Prayer facilities
  • A note on self service screen to common ?s. Really good idea.
  • Spending more time with the unfamiliar
  • Symbol sign posting
  • Meet and Greeters at front door
  • Pick up cards at library
  • Send librarians out to visit people
  • Stop “library” work at enquiry point
  • Wellbeing attention grabbing display — subject guide to
  • Registration online — pick up library card in person
  • Commuter room with lockers — charging (away from home help)
  • Auto emails for book arrivals triggered by front desk team so that we are certain it is ready on the shelf
  • Friction needed to prevent deletion of content
  • Subject guides Allow use to browse area and discover other books related to study
  • Develop electronic check lists for staff to ensure staff complete all necessary steps in a task on time and in order
  • Finding tools — Before search encourage users to reflect if using the right finding tool
  • Reading lists — Cap amount of items that can be added → “Do you really want to add this item?”
  • Self-issue machines — Add “do you want to borrow” for very short loans / high charge (had at public)
  • Modernise the till and integrate with LMS. Creates a couple of steps that slows staff and avoids mistakes on the till from “autopilot”
  • “Lost” status and “Found” status. Create pop up explaining what to do and if want to continue to avoid incorrect use.
  • Int’l students — Don’t assume that library experience of someone else is the same particularly when they have a different international experience / Encourage staff to think before assuming person is just not as smart as culture they are accustomed to.
  • Filters — Putting a [friction?] to alert people that they can expand their search to include content not available at [library] as well
  • Friction for staff: prompts to ask particular questions / edit or do something people often forget
  • When searching: “This search result is showing everything. Is that what you want?” Or “It looks like you might be searching for a journal title. Would you like to do that?”
  • Different language options — catalogue, website / signage
  • Compulsory reflection on implicit biases before finalising a form / policy / procedure / interview / process / etc….
  • Sometimes it’s good to get “lost” and find hidden spaces…
  • Have “no wifi” areas to create “switch-off” spaces…
  • Noise control — something that encourages slowing of pace / pause on entry
  • Furniture might cue quiet study vs. collaboration
  • If staff are including a gender (or other protected characteristic) question on a form, make them type their justification!
  • Supporting assistive tech (friction for staff)
  • Stop long forms with every piece of info the librarian needs to order an item
  • Shibbolth sign in from pub page — get to the right path, choose the best relationship for access
  • Group study facilities — varied tech options
  • More tailored handouts for students who have English as 2nd language or 3rd etc.
  • DVD borrowing: “Don’t forget to unlock your case!” pop-up?
  • Multimedia options for dyslexic students — on entry to library
  • Chat box help kiosk for students who feel like “imposters” (afraid to admit what they don’t know)
  • Single sign on — subject / CS team comms.
  • Consistent approach to adding info to app. Autonomy and overall framework.
  • Quizzes on VCEs at end of modules
  • Furniture — soft for de-stresses
  • Commuter students — find out what their priorities are and how this differs from other students
  • To get integrated in the education with the Library competence, so every student gets the same education (information literacy)
  • Find location in the library
  • Gender free web
  • Block them from the staff cataloguing OPAC — only use for 1 hour a day
  • Think of the people you put on the website. Still mostly young, happy users.
  • Teacher making resource lists
  • Users: Interlibrary loans
  • Pop-up help button after 3 kw searches < 1 minute
  • Discover layer: where am I searching
  • Website friction from adding content — specifically start
  • “Headlines” when coming in to the library — To show services offered that are “unknown”
  • Stacks — “Did you find out the exact location of your book?”
  • Making signs — Added friction for personnel
  • Multilingual captioning
  • Sign friction or not?
  • Faculty-librarian meeting for new faculty (in-person? why?)
  • More faculty-librarian friction
  • Leaving web presence, what about credibility? Evaluate results
  • Require AIT text on IMG upload
  • When leaving discovery tool to external site
  • Management friction
  • Default web editor template; to change, require friction
  • Consider for more friction at admin side
  • Mandatory meeting with librarian for an assignment
  • Swipe card to enter the library
  • Baby changing tables
  • Rainbow lanyards
  • Help uniforms / sashes?
  • Program friction — new program proposal
  • Signage
  • Dual monitor search comp. for info desk enquiries
  • Stop users from ordering books on shelf
  • Warning pop up !DANGER!
  • Universal design on website
  • Pause before changing your brand colors etc. to your online library interface. …consider accessibility first.
  • Pause before allowing online systems use your personal data …instead, learn what the provider will do with your data
  • Pause before composing the perfect, new metadata or information model for the new library service …instead, involve users and designers in the process
  • Shh… Quiet beyond this point

 

Libraries Ready to Code Collection beta release this week / District Dispatch

This post originally appeared in American Libraries blog The Scoop on Tuesday, June 12, 2018. It is co-written by Marijke Visser, associate director and senior policy advocate at ALA’s Washington Office and Libraries Ready to Code project leader, along with Nicky Rigg, CS education program manager at Google.

The American Library Association’s (ALA) Libraries Ready to Code initiative, sponsored by Google, is releasing the beta version of the Ready to Code Collection at the 2018 Annual Conference and Exhibition in New Orleans. The release party will be held Friday, June 22, at the Morial Convention Center in the exhibit hall at Google booth #4029.

Members of the Libraries Ready to Code cohort meet at ALA’s 2018 Midwinter Meeting in Denver, Colo.

The Libraries Ready to Code Collection is a cache of resources developed, tested, and curated by libraries, for libraries to create, implement, and enhance their computer science (CS) programming for youth. In the nine months since Libraries Ready to Code announced the 28 grantee libraries participating in the project, the cohort has piloted a range of programs:

  • Middle school library and technology staff working with local nonprofits to identify needs of local businesses and nonprofits and enabling young library users to fill those needs through applied coding projects.
  • A high school librarian collaborating with a local music mentorship program to teach youth in special education classes how to code music with assistive technology.
  • Public librarians in a rural community teaching coding languages to help youth engineer and operate a FarmBot robotic gardener.
  • Elementary school librarians leading 4th–8th-grade students through an interest-based coding club and helping students to develop their own workshops showcasing their skills as coding mentors to K–3rd graders.

Learnings from these programs are presented in a comprehensive guide to enable library professionals to cultivate their young patrons’ computational thinking (CT) literacies—their ability to solve complex problems through a step-by-step analytical process.

As cohort members discovered, it takes more than sophisticated technology and fun activities to make a CS/CT program successful. As a variety of critical components of a strong CS program surfaced, the Collection evolved to include strategies for:

  • broadening participation;
  • connecting with youth interests and emphasizing youth voice;
  • engaging with communities;
  • engaging with families; and
  • demonstrating impact through outcomes.

Developing the collection and implementing Ready to Code principles has been a labor of love for the cohort libraries as a community of practice. They have workshopped their experiences in formal weekly meetings and informal listservs. They have been both cheerleaders and critics for each other’s programs. Now they’re looking to other library professionals for input.

Starting June 22, librarians will be able to view the beta version of the Libraries Ready to Code Collection online, determine a Ready to Code “persona,” and provide feedback on the content in an online survey or in person at one of the Libraries Ready to Code sessions or the Ready to Code/Google booth. Librarians can also visit the Ready to Code Teaching Theater in the exhibit hall, where cohort members will demonstrate some of their activities and discuss their programs on June 23 and 24. The final Ready to Code Collection will be released in fall 2018.

The post Libraries Ready to Code Collection beta release this week appeared first on District Dispatch.

June 2018 ITAL Issue Published / LITA

The June 2018 issue (volume 37, number 2) of Information Technology and Libraries (ITAL) has been published.

With this issue, we introduce a new look for the journal — thanks to the work of LITA’s Web Coordinating Committee, and in particular Kelly Sattler (also a member of the Editorial Board), Jingjing Wu, and Guy Cicinelli. The new design is much easier on the eyes and more legible, and sports a new graphic identity for ITAL.

In this June 2018 issue, we continue our celebration of ITAL’s 50th year with a summary by Editorial Board member Sandra Shores of the articles published in the 1970s, the journal’s first full decade of publication. The 1970s are particularly pivotal in library technology, as they mark the introduction of the personal computer, as a hobbyist’s tool, to society. The web is still more than a decade away, but the seeds are being planted.

The table of contents and brief abstracts are below.

Ken Varnum
Editor

“Primo New User Interface: Usability Testing and Local Customizations Implemented in Response”
Blake Lee Galbreath, Corey Johnson, and Erin Hvizdak

Washington State University was the first library system of its 39-member consortium to migrate to Primo New User Interface. Following this migration, we conducted a usability study in July 2017 to better understand how our users fared when the new user interface deviated significantly from the classic interface. From this study, we learned that users had little difficulty using basic and advanced search, signing into and out of Primo, and navigating their account. In other areas, where the difference between the two interfaces was more pronounced, study participants experienced more difficulty. Finally, we present customizations to the interface design implemented at Washington State University to help alleviate the observed issues.

“Managing In-Library Use Data: Putting a Web Geographic Information Systems Platform through its Paces”
Bruce Godfrey and Rick Stoddart

Web Geographic Information System (GIS) platforms have matured to a point where they offer attractive capabilities for collecting, analyzing, sharing, and visualizing in-library use data for space-assessment initiatives. As these platforms continue to evolve, it is reasonable to conclude that enhancements to these platforms will not only offer librarians more opportunities to collect in-library use data to inform the use of physical space in their buildings, but also that they will potentially provide opportunities to more easily share database schemas for defining learning spaces and observations associated with those spaces. This article proposes using web GIS, as opposed to traditional desktop GIS, as an approach for collecting, managing, documenting, analyzing, visualizing, and sharing in-library use data and goes on to highlight the process for utilizing the Esri ArcGIS Online platform for a pilot project by an academic library for this purpose.

“It is Our Flagship: Surveying the Landscape of Digital Interactive Displays in Learning Environments”
Lydia Zvyagintseva

This paper presents the findings of an environmental scan conducted as part of a Digital Exhibits Intern Librarian Project at the Edmonton Public Library in 2016. As part of the Library’s 2016–2018 Business Plan objective to define the vision for a digital exhibits service, this research project aimed to understand the current landscape of digital displays in learning institutions globally. The resulting study consisted of 39 structured interviews with libraries, museums, galleries, schools, and creative design studios. The environmental scan explored the technical infrastructure of digital displays, their user groups, various uses for the technologies within organizational contexts, the content sources, scheduling models, and resourcing needs for this emergent service. Additionally, broader themes surrounding challenges and successes were also included in the study. Despite the variety of approaches taken among learning institutions in supporting digital displays, the majority of organizations have expressed a high degree of satisfaction with these technologies.

“The Provision of Mobile Services in US Urban Libraries”
Ya Jun Guo, Yan Quan Liu, and Arlene Bielefield

To determine the present situation regarding services provided to mobile users in US urban libraries, the authors surveyed 138 Urban Libraries Council members utilizing a combination of mobile visits, content analysis, and librarian interviews. The results show that nearly 95% of these libraries have at least one mobile website, mobile catalog, or mobile app. The libraries actively applied new approaches to meet each local community’s remote-access needs via new technologies, including app download links, mobile reference services, ISBN scanning, location navigation, and mobile printing. Mobile services that libraries provide today are timely, convenient, and universally applicable.

“Current Trends and Goals in the Development of Makerspaces at New England College and Research Libraries”
Ann Marie Lynn Davis

This study investigates why and which types of college and research libraries (CRLs) are currently developing Makerspaces (or an equivalent space) for their communities. Based on an online survey and phone interviews with a sample population of CRLs in New England, the investigator found that more than two dozen (26) CRLs had or were in the process of developing a Makerspace in this region. In addition, a number of other CRLs were actively engaged in promoting and diffusing the Maker ethos. Of these libraries, most were motivated to promote open access to new technologies, literacies, and STEM-related knowledge.

“From Dreamweaver to Drupal: A University Library Website Case Study”
Jesi Buell

In 2016, Colgate University Libraries began converting their static HTML website to the Drupal platform. This article outlines the process librarians used to complete this project using only in-house resources and minimal funding. For libraries and similar institutions considering the move to a content management system, this case study can provide a starting point and highlight important issues.

Editorial Content

Submit Your Ideas

Send your proposal for contributions to ITAL to Ken Varnum, editor, at varnum@umich.edu. Current formats are generally:

  • Articles – original research or comprehensive and in-depth analyses, in the 3000-5000 word range.
  • Communications – brief research reports, technical findings, and case studies, in the 1000-3000 word range.

Questions or Comments?

For all other questions or comments related to LITA publications, contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

Three Questions on IRUS-USA / Digital Library Federation

For this special edition of DLF Contribute, we explore IRUS-USA (Institutional Repository Usage Statistics USA), an experimental collaboration between Jisc and the Digital Library Federation (DLF) at CLIR.

Paul Needham
Santi Thompson

Jo Lambert manages services and projects at Jisc, a UK registered charity that champions the use of digital technologies in education and research. Current work includes managing shared analytics services, the Journal Usage Statistics Portal (JUSP), and Institutional Repository Usage Statistics (IRUS-UK) in the UK, as well as a series of projects to provide services or explore use outside the UK. With a background in information services and project management, Jo is interested in working alongside higher education (HE) communities to develop practical, evidence-based shared projects and services that meet community needs and support organisations in their decision making.

Paul Needham is the Research and Innovation Manager at Kings Norton Library, Cranfield University. He is a member of the NISO SUSHI Standing Committee and the COUNTER Executive Committee, and co-chair of the COUNTER Technical Advisory Group. Since 2008, he has mainly worked on projects and initiatives relating to usage statistics based on the COUNTER standard. These include involvement in JUSP (the Jisc Usage Statistics Portal); the new Release 5 of the COUNTER Code of Practice; development of the COUNTER_SUSHI_API; several IRUS services including IRUS-UK, IRUS-CORE, IRUS-OAPEN; and IRUS pilots: IRUS-ANZ and IRUS-USA.

Santi Thompson is the Head of Digital Research Services at the University of Houston (UH) Libraries and a co-leader of the DLF Assessment Interest Group (AIG). Santi publishes on the assessment of digital repository metadata, software, and content reuse. He also currently serves as the principal investigator for the IMLS-funded “Developing a Framework for Measuring Reuse of Digital Objects” grant project and the co-principal investigator for the IMLS-funded “Bridge2Hyku Toolkit: Developing Migration Strategies for Hyku.”

Jo and Paul, could you tell us a bit about IRUS-UK? What has been Jisc’s motivation for developing and investing in a system like this?

Jo and Paul: 15 years ago, institutional repositories (IRs) were the new in-thing. Higher education institutions, just about everywhere, were setting up institutional repositories. Money was being spent, time and effort were being expended . . . but there was no way to reliably demonstrate the usage and impact of those repositories. Sure, statistics were being generated—but everyone was doing it differently, applying their own rules to processing data, and many figures produced were vastly inflated by search engine and robotic usage. We were trying to compare apples and oranges. The statistics lacked credibility.

So that’s why we started IRUS-UK—the first service to enable IRs to expose and share usage statistics based on a global standard—COUNTER. The COUNTER standard is the one that traditional scholarly publishers and aggregators like Elsevier, Springer, EBSCO, etc. all adhere to when producing usage statistics. We all follow the same rules and usage data are filtered to remove robots and double clicks, so the statistics are reliable, trustworthy, authoritative, and comparable.

IRs use IRUS to monitor and benchmark usage of their research against similar organisations in a meaningful way. It provides Jisc with a view of UK repository use to demonstrate the value and impact of IRs. And it provides a UK-wide launch pad for collaborating with other national and international initiatives, projects, and services. We were really interested to hear about DLF AIG work and as our conversations developed our common interests became more and more apparent. A mutual interest in tools to measure impact, develop benchmarks, and share ideas and good practice prompted the collaboration that has since morphed into IRUS-USA.

Santi, as co-chair of the DLF Assessment Interest Group (AIG), what DLF AIG connections and research interest led you to recommending our current IRUS-USA pilot project?

Santi: The Jisc-funded Institutional Repository Usage Statistics (IRUS) aggregation project excited me for several reasons. First, I have found it difficult to gain access to standardized usage statistics for scholarly works repositories. While many systems offer built-in statistics features, they often lack documentation detailing how they work, including what they do (and do not) count. With the COUNTER standard acting as the foundation for the aggregation service, IRUS draws upon standardized practices to deliver usage statistics across a shared community, giving managers access to a diverse range of data. The ability to query the usage statistics by format and benchmark against other member institutions offers repository managers collection development tools often lacking in institutional repository environments.

The work of IRUS also intersects nicely with current and former projects sponsored by the DLF AIG. A former working group, the Web Analytics Working Group, focused a large portion of their efforts on compiling information on various analytic tools and services that could aid in assessing repositories. In 2015 the group published a white paper on the use of Google Analytics in Digital Libraries. The group followed up this work in 2016-2017 by developing an annotated bibliography on how libraries use web analytics to assess their programs, collaborate with other institutions, and make decisions. Their work provides a great overview of the world of usage analytics.

Another AIG group, the Content Reuse subgroup of the User Studies Working Group, is currently investigating how best to assess the reuse of digital objects. With funding from IMLS (Developing a Framework for Measuring Reuse of Digital Objects [LG-73-17-0002-17]), the group is aiming to expand upon standardized usage statistics to better understand how users utilize or transform unique materials from library-hosted digital collections. The team believes that leveraging usage statistics, like the kind provided by IRUS, and reuse information will provide practitioners with a richer set of data in which to highlight the value of digital repositories and cultural heritage organizations.

What are all three of you geeking out on? (Or, what is the most interesting thing you’ve learned through this IRUS-USA initiative?)

Santi: My participation in projects like the IRUS-USA pilot program and the Measuring Reuse grant program have me obsessed with better understanding digital library users and reuses. The deep dive that I and my colleagues have taken on who uses digital library materials and for what purposes has allowed me to see how digital libraries are just as much of a “public good” or “public service” as they are a scholarly resource. There are countless anecdotes of how “everyday” people are using digital library objects for a variety of purposes—personal research, genealogy and family history, artistic expression and creation among others. However, I am not sure how well we, as a profession, have embraced the “public good” aspects of digital libraries and think that more attention can be given to the relationship between digital libraries and the “everyday” user. I will continue to collaborate with colleagues to explore this relationship.

Jo and Paul: After several years of developing IRUS within the UK, we’re geeking out on seeing a growing appetite for an international family of services that can interoperate with one another to provide a global picture of IR and OA usage. We developed IRUS-UK by working with universities to understand what they need and then delivering a service to meet that need, so we’re hyped to have a US dimension to IRUS through the IRUS-USA pilot project, and excited about the potential for international measurement and benchmarking. Working with CLIR and DLF AIG folks has given us a greater insight into work in the US right now around use and perceptions of analytics. It’s enabled us to learn from their ideas and approaches and to work collaboratively to develop a pilot service. We’re looking forward to working with our colleagues in the US in the coming months and years.

The post Three Questions on IRUS-USA appeared first on DLF.

Current Trends and Goals in the Development of Makerspaces at New England College and Research Libraries / Information Technology and Libraries

This study investigates why and which types of college and research libraries (CRLs) are currently developing Makerspaces (or an equivalent space) for their communities. Based on an online survey and phone interviews with a sample population of CRLs in New England, the investigator found that more than two dozen (26) CRLs had or were in the process of developing a Makerspace in this region. In addition, a number of other CRLs were actively engaged in promoting and diffusing the Maker ethos. Of these libraries, most were motivated to promote open access to new technologies, literacies, and STEM-related knowledge.

The Provision of Mobile Services in US Urban Libraries / Information Technology and Libraries

To determine the present situation regarding services provided to mobile users in US urban libraries, the authors surveyed 138 Urban Libraries Council members utilizing a combination of mobile visits, content analysis, and librarian interviews. The results show that nearly 95% of these libraries have at least one mobile website, mobile catalog, or mobile app. The libraries actively applied new approaches to meet each local community’s remote-access needs via new technologies, including app download links, mobile reference services, ISBN scanning, location navigation, and mobile printing. Mobile services that libraries provide today are timely, convenient, and universally applicable.

From Dreamweaver to Drupal: A University Library Website Case Study / Information Technology and Libraries

In 2016, Colgate University Libraries began converting their static HTML website to the Drupal platform. This article outlines the process librarians used to complete this project using only in-house resources and minimal funding. For libraries and similar institutions considering the move to a content management system, this case study can provide a starting point and highlight important issues.

It is Our Flagship: Surveying the Landscape of Digital Interactive Displays in Learning Environments / Information Technology and Libraries

This paper presents the findings of an environmental scan conducted as part of a Digital Exhibits Intern Librarian Project at the Edmonton Public Library in 2016. As part of the Library’s 2016–2018 Business Plan objective to define the vision for a digital exhibits service, this research project aimed to understand the current landscape of digital displays in learning institutions globally. The resulting study consisted of 39 structured interviews with libraries, museums, galleries, schools, and creative design studios. The environmental scan explored the technical infrastructure of digital displays, their user groups, various uses for the technologies within organizational contexts, the content sources, scheduling models, and resourcing needs for this emergent service. Additionally, broader themes surrounding challenges and successes were also included in the study. Despite the variety of approaches taken among learning institutions in supporting digital displays, the majority of organizations have expressed a high degree of satisfaction with these technologies.

Letter from the Editor (June 2018) / Information Technology and Libraries

In the current (June 2018) issue, we continue our celebration of ITAL’s 50th year with a summary of the articles published in the 1970s, the journal’s first full decade of publication. The 1970s are particularly pivotal in library technology, as they mark the introduction of the personal computer, as a hobbyist’s tool, to society. The web is still more than a decade away, but the seeds are being planted.

President's Message / Information Technology and Libraries

June 2018 message from LITA President Andromeda Yelton.

ITAL Editorial Board Thoughts: Events in the Life of ITAL / Information Technology and Libraries

In this, my last Editorial Board Thoughts piece as I end my time on the ITAL Board, I use data gathered through the Crossref Event Data service to give a snapshot of citations, references, and tweets in the life of ITAL.

Managing In-Library Use Data: Putting a Web Geographic Information Systems Platform through its Paces / Information Technology and Libraries

Web Geographic Information System (GIS) platforms have matured to a point where they offer attractive capabilities for collecting, analyzing, sharing, and visualizing in-library use data for space-assessment initiatives. As these platforms continue to evolve, it is reasonable to conclude that enhancements to these platforms will not only offer librarians more opportunities to collect in-library use data to inform the use of physical space in their buildings, but also that they will potentially provide opportunities to more easily share database schemas for defining learning spaces and observations associated with those spaces. This article proposes using web GIS, as opposed to traditional desktop GIS, as an approach for collecting, managing, documenting, analyzing, visualizing, and sharing in-library use data and goes on to highlight the process for utilizing the Esri ArcGIS Online platform for a pilot project by an academic library for this purpose. 

Primo New User Interface: Usability Testing and Local Customizations Implemented in Response / Information Technology and Libraries

Washington State University was the first library system of its 39-member consortium to migrate to Primo New User Interface. Following this migration, we conducted a usability study in July 2017 to better understand how our users fared when the new user interface deviated significantly from the classic interface. From this study, we learned that users had little difficulty using basic and advanced search, signing into and out of Primo, and navigating their account. In other areas, where the difference between the two interfaces was more pronounced, study participants experienced more difficulty. Finally, we present customizations to the interface design implemented at Washington State University to help alleviate the observed issues.

VIVO Updates for June 17 — Nominations for Leadership Group, EuroCRIS, Membership Drive / DuraSpace News

From Mike Conlon, VIVO Project Director

Nominations for Leadership Group open. Each year, VIVO elects three community members to serve on the VIVO Leadership Group. Any person at a VIVO member institution can nominate up to three people to serve. To make a nomination, you must be at a VIVO member institution. Not at a member institution? That’s easy to fix: become a member (see Become a Member). Once you are at a member institution, send the names of the people you would like to nominate to Kristi Searle at Duraspace. You can nominate anyone you like, including people who have served as community members previously. Kristi will check with each person nominated to make sure they are willing to serve. If they are, they will be asked to provide a short bio to use in the election. Nominations are due June 25, 2018. We look forward to hearing from you!

EuroCRIS: I had the pleasure of attending the 2018 EuroCRIS meeting in Umeå, Sweden this past week. EuroCRIS meets every other year, and gathers people interested in “CRIS” systems — current research information systems — which record the research outputs of a research organization. Some people refer to VIVO as a CRIS system. CRIS systems focus on research outputs, not people, and often are directly associated with repositories where the research outputs are stored. In Europe, due to open access and other compliance and reporting requirements, it is increasingly common for a university to *require* that all research outputs be stored in the university’s repository. A very popular choice for such work is DSpace, another Duraspace project. DSpace-CRIS is a modification of DSpace that supports the production of simple profiles of researchers with content in the repository. Much of the conference dealt with issues of CRIS systems, DSpace-CRIS, and the compliance requirements.

I had the chance to meet and discuss these ideas with colleagues from the VIVO community, DSpace community, and Duraspace, including Michele Mennielli (Duraspace), David Baker (CASRAI), Brian Lowe (Ontocale), Tatiana Walter (TIB Hannover), Anna Guillaumet (SIGMA, and VIVO Leadership Group), Jordi Cuni (SIGMA), Andrea Bollini (4Science), Caroline Birkle (Managing Director, Converis), Ed Simons (President, EuroCRIS), Anna Clements (University of Glasgow), Susana Moriati (4Science), Pablo de Castro (EuroCRIS board), Jan Dvorack (Infoscience, Czech Republic), and Miguel-Angel Sicilia (University of Alcalá, Spain).

Tatiana Walter gave an excellent talk on the effort to map KDSF in Germany to VIVO. Basic concepts of mapping were covered as well as difficulties that were resolved in the mapping of KDSF. Her talk will be available on the conference web site.

EuroCRIS has a CRIS system output format, CERIF-XML. Many CRIS systems are able to produce CERIF-XML. A mapping of CERIF-XML to VIVO was completed several years ago. It needs to be updated. With a new mapping, an XSLT could be constructed to export VIVO RDF triples from CERIF-compatible CRIS systems. We hope to hear more about the updated mapping and XSLT transform work in the months to come.

The meeting was an excellent chance to discuss CRIS concepts with European colleagues, and to keep up to date on CRIS thinking and activities in Europe.

Membership Drive: Interested in helping VIVO grow its membership? VIVO is looking for a person to join the membership drive. Please contact Julia Trimmer.

Go VIVO!

The post VIVO Updates for June 17 — Nominations for Leadership Group, EuroCRIS, Membership Drive appeared first on Duraspace.org.

Schema.org Introduces Defined Terms / Richard Wallis

Do you have a list of terms relevant to your data?

Things such as subjects, topics, job titles, a glossary or dictionary of terms, blog post categories, ‘official names’ for things/people/organisations, material types, forms of technology, etc., etc.

Until now these have been difficult to describe in Schema.org. You either had to mark them up as a Thing with a name property, or as a well known Text value.

The latest release of Schema.org (3.4) includes two new Types – DefinedTerm and DefinedTermSet – to make life easier in this area.

The best way to describe potential applications of these very useful types is with an example:

Imagine you are an organisation that promotes and provides solid fuel technology…. 

In product descriptions and articles you endlessly reference solid fuel types such as “Coal”, “Biomass”, “Peat”, “Coke”, etc. Within the structure of your data, in product descriptions or article subjects, these terms have important semantic value, so you want to pick them out in the structured data descriptions you give to the search engines.

Using the new Schema.org types, it could look something like this:

  "@context": "http://schema.org",
    "@graph": [
        {
           "@type": "DefinedTerm",
            "@id": "http://hotexample.com/terms/sf1",
            "name": "Coal",
            "inDefinedTermSet": "http://hotexample.com/terms"
        },
        {
            "@type": "DefinedTerm",
            "@id": "http://hotexample.com/terms/sf2",
            "name": "Coke",
            "inDefinedTermSet": "http://hotexample.com/terms"
        },
        {
            "@type": "DefinedTerm",
            "@id": "http://hotexample.com/terms/sf3",
            "name": "Biomass",
            "inDefinedTermSet": "http://hotexample.com/terms"
        },
        {
            "@type": "DefinedTerm",
            "@id": "http://hotexample.com/terms/sf4",
            "name": "Peat",
            "inDefinedTermSet": "http://hotexample.com/terms"
        },
        {
            "@type": "DefinedTermSet",
            "@id": "http://hotexample.com/terms",
            "name": "Solid Fuel Terms"
        }
    ]
}

As DefinedTermSet is a subtype of CreativeWork, it provides many properties to describe the creator, datePublished, license, etc. of the set of terms.

Adding other standard properties could enhance individual term definitions with fuller descriptions, links to other representations of the same thing, and short codes.

For example:

{
    "@type": "DefinedTerm",
    "@id": "http://hotexample.com/terms/sf1",
    "name": "Coal",
    "description": "A combustible black or brownish-black sedimentary rock",
    "termCode": "SFT1",
    "sameAs": "https://en.wikipedia.org/wiki/Coal",
    "inDefinedTermSet": "http://hotexample.com/terms"
}

There are several other examples to be found on the type definition pages in Schema.org.

For those looking for ways to categorise things, with category codes and the like, take a look at the subtypes of these new Types – CategoryCode and CategoryCodeSet.
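
As a rough, hypothetical sketch (reusing the invented hotexample.com identifiers from the earlier example; the codes and names here are made up for illustration), a small category code set might be marked up along these lines:

{
    "@context": "http://schema.org",
    "@graph": [
        {
            "@type": "CategoryCodeSet",
            "@id": "http://hotexample.com/categories",
            "name": "Solid Fuel Product Categories",
            "description": "Hypothetical category code set, for illustration only"
        },
        {
            "@type": "CategoryCode",
            "@id": "http://hotexample.com/categories/cc1",
            "name": "Domestic heating fuels",
            "codeValue": "SF-DOM",
            "inCodeSet": "http://hotexample.com/categories"
        },
        {
            "@type": "CategoryCode",
            "@id": "http://hotexample.com/categories/cc2",
            "name": "Industrial fuels",
            "codeValue": "SF-IND",
            "inCodeSet": "http://hotexample.com/categories"
        }
    ]
}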

The potential uses of this are endless; it will be great to see how it gets adopted.

Federal library funding remains stable as FY 2019 appropriations process moves forward / District Dispatch

Months of advocacy by ALA members across the country have produced the first positive results for fiscal year (FY) 2019 funding, as the Labor, Health and Human Services, Education, and Related Agencies (LHHS) Subcommittee of the House Appropriations Committee voted today to provide level funding for the Institute of Museum and Library Services (IMLS) at $240 million. Funding levels for individual programs like the Library Services and Technology Act (LSTA) will be confirmed next week when the Subcommittee releases its accompanying report.

Federal funding for libraries has been in doubt following the administration’s recommendation to eliminate IMLS along with a number of education programs. Another concern was that the House provided lower funding levels than the Senate for the full LHHS bill, choosing to give larger increases for non-education programs (defense, opioid treatment, National Institutes of Health, etc.).

The funding bill now heads to the full House Appropriations Committee, which could mark up its bill as early as next week. While changes in program funding levels typically do not occur at the full committee level, House floor action is uncertain.

The House bill includes modest increases or level funding for a number of other education-related items that may also benefit libraries. Title IV Part A (funding for well-rounded education and technology) is increased by $100 million. Career and Technical Education is increased by $115 million. The National Library of Medicine is up $5 million. The National Endowment for the Humanities and the National Endowment for the Arts are both up $2 million (included in the Interior Subcommittee bill).

Today’s House LHHS Subcommittee vote is evidence that ALA members continue to have a significant impact on federal funding priorities. From participating in (virtual) National Library Legislative Day to calling on members of Congress to sign Dear Appropriator letters, ALA members have made it clear to their representatives that funding for our nation’s libraries makes their congressional districts stronger.

The funding levels in the House Subcommittee bill should encourage us – and remind us that the most impactful advocacy comes from year-round engagement with the elected leaders who make decisions about issues that affect library professionals and the people we serve.

If you have emailed your member of Congress, make a phone call. If you have called, write a letter to the editor. Most of all, invite your representative or senator to visit your library – just like Great River (Minn.) Regional Library did last month – to show them the difference your library makes in the lives of your users, their constituents.

The Senate LHHS Subcommittee will begin consideration of its bill on June 26. We in ALA’s public policy office in Washington, D.C., will continue to provide you the latest information about federal funding as well as important policy issues at times when your advocacy will have the most impact.

 

The post Federal library funding remains stable as FY 2019 appropriations process moves forward appeared first on District Dispatch.

Public domain serials: Renewal inventory complete! / John Mark Ockerbloom

I’m pleased to announce an important milestone in the IMLS-funded project I’m leading to help open access to 20th century public domain serials.  We now have a complete, openly published inventory of all serials with active issue or contribution renewals made through 1977, based on listings in the Copyright Office’s Catalog of Copyright Entries.   For each serial in our inventory, we list the earliest issue’s renewal and the earliest contribution’s renewal made during that time period, if any.  In many cases, we also give additional copyright information, and link to freely readable online issues, tables of contents, and copyright permissions information.

The Copyright Office’s Copyright records catalog includes copyright renewals and other registrations filed from 1978 onward.  Since our inventory now runs all the way up to their catalog’s 1978 starting point, you can now find out if there were any copyright renewals filed for a serial you’re interested in by checking our inventory and the Copyright Office’s database.  (Prior to now, you might have needed to search through lots of volumes of the Catalog of Copyright Entries before making such a determination.  You can now do it much more quickly.)

We also now have renewal information for over 1,000 of these serials available as structured data, as described in a previous post.  These structured data files are available as individual JSON files (linked from each serial listed in the inventory), and also can be downloaded in bulk, along with our overall inventory listing, from our Online Books Page GitHub repository.  There are still some listings that are not yet available as structured data, but they’ll be converted over time, and all new listings will be made available as structured data.
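
To give a rough sense of what one of these structured data files might contain, here is a purely illustrative sketch; the field names and values below are invented for this post and are not the project’s actual schema (see the documentation item later in this post for the real fields):

{
    "note": "Hypothetical record, for illustration only",
    "title": "Example Quarterly Review",
    "first_issue_renewal": {
        "renewal_id": "R000000",
        "issue_date": "1930-01-01"
    },
    "first_contribution_renewal": null,
    "online_issues": [
        "https://example.org/example-quarterly-review"
    ]
}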

Thanks to this structured data, we can now also offer a more consistent and easy-to-use inventory page.  In particular, all of the title links on the page now go to copyright information.   (Before, some links would go to copyright information and some would go to online copies of the serials themselves, making for a somewhat confusing user experience.  You can still get to online serials via the copyright information pages; if a linked title is shown with emphasis, its copyright information page includes a link to free online content available for that title.)

We’re now done, then, with compiling and publishing the data we said we would compile for our project.  But we’re not done with the project yet as a whole.  Here’s what’s still to come:

  • Suggested procedures for using the data we’ve compiled, along with other data, to quickly identify and check public domain serial content.  We’re drafting those procedures now, and hope to publish our draft after we’ve had some experts in copyright and digitization projects review it.  In the meantime, if you want to use the data we’ve compiled to clear copyrights for serials, keep the caveats we describe at the top of our inventory in mind.
  • Documentation for the JSON files we’re using for our copyright data.  The fields and format we’re using are still subject to change, but probably won’t change all that much from what they look like now.  (I might make them JSON-LD files eventually, but hope that I can do that in a backward-compatible way.)  In the meantime, feel free to contact me with questions about the fields I’m using in those files and what they mean.
  • Examples of public domain serials published after 1922 whose copyright status has been cleared with our recommended procedures and data, along with explanations on how we cleared them.

There are also ways you can help out with this project.  Along with the things you can do that we suggested in our earlier post, here are a few new things that we can now help with:

  • If you’re interested in a serial that published between 1923 and 1964 that’s not already in our list, tell us about it.  (You can use this suggestion form.  All you need to put in it is the title, and whatever is needed to distinguish it from similarly-titled works.)  We can quickly add it to our inventory.  If we find any associated renewals in the Copyright Office’s database, we’ll note the first one.  If we don’t find any associated renewals, we’ll note that.  And if we can quickly find online public domain issues, we’ll link to them.
  • If you’re interested in compiling more comprehensive or detailed information about a particular serial, you can get in touch with us and we can work with you to get an enhanced JSON file created for it.  For an idea of what can be done, check out our information page for Amazing Stories and its associated JSON file.  We don’t have time ourselves to create these sorts of comprehensive pages for all of the serials out there, but we’d love to work with people with the time, interest, and skills to create such information pages for serials they’re interested in.

I’m excited about where we’ve gotten so far, and what can be done with this data.  I’d love to hear from you about what you’d like to do with it, and how you might like to extend it.

Schema.org Significant Updates for Tourism and Trips / Richard Wallis

The latest release of Schema.org (3.4) includes some significant enhancements for those interested in marking up tourism, and trips in general.

Tourism
For tourism markup, two new types, TouristDestination and TouristTrip, have joined the already useful TouristAttraction:

  • TouristDestination – Defined as a Place that contains, or is colocated with, one or more TouristAttractions, often linked by a similar theme or interest to a particular touristType.
  • TouristTrip – A created itinerary of visits to one or more places of interest (TouristAttraction/TouristDestination) often linked by a similar theme, geographic area, or interest to a particular touristType.

These new types, introduced from proposals by The Tourism Structured Web Data Community Group, help complete the structured data markup picture for tourism.

Any thing, place, cemetery, museum, mountain, amusement park, etc., can be marked up as a tourist attraction. A city, tourist board area, country, etc., can now be marked up as a tourist destination that contains one or more individual attractions. (See example)
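
As a rough sketch (not one of the official examples; the place names and touristType below are invented for illustration), a destination containing a couple of attractions could be marked up something like this, using the includesAttraction property:

{
    "@context": "http://schema.org",
    "@type": "TouristDestination",
    "name": "Edinburgh Old Town",
    "touristType": "History enthusiasts",
    "includesAttraction": [
        {
            "@type": "TouristAttraction",
            "name": "Edinburgh Castle"
        },
        {
            "@type": "TouristAttraction",
            "name": "The Real Mary King's Close"
        }
    ]
}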

In addition, a trip around and between tourist attractions, destinations, and other places, can be marked up including optional provider and offer information. (See examples)

I believe that these additions will cover a large proportion of structured data needs of tourism enthusiasts and the industry that serves them.

Trips
Whilst working on the new TouristTrip type, it became clear that there was a need for a more generic Trip type, which has also been introduced in this release. Trip is now the super-type for BusTrip, Flight, TrainTrip, and TouristTrip. It provides common properties for arrivalTime, departureTime, offers, and provider. In addition, from the tourism work, it introduces properties hasPart, isPartOf, and itinerary, enabling the marking up of a trip with several destinations and possibly sub-trips — for example “A weekend in London” – “A weekend in London: Day 1” – “A weekend in London: Day 2”.
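
A rough sketch of that “weekend in London” idea (the attractions and provider are invented for illustration, and the markup simply follows the properties named above) might look like:

{
    "@context": "http://schema.org",
    "@type": "TouristTrip",
    "name": "A weekend in London",
    "touristType": "First-time visitors",
    "provider": {
        "@type": "Organization",
        "name": "Example City Tours"
    },
    "hasPart": [
        {
            "@type": "TouristTrip",
            "name": "A weekend in London: Day 1",
            "itinerary": {
                "@type": "ItemList",
                "itemListElement": [
                    { "@type": "TouristAttraction", "name": "Tower of London" },
                    { "@type": "TouristAttraction", "name": "St Paul's Cathedral" }
                ]
            }
        },
        {
            "@type": "TouristTrip",
            "name": "A weekend in London: Day 2",
            "itinerary": {
                "@type": "ItemList",
                "itemListElement": [
                    { "@type": "TouristDestination", "name": "Greenwich" }
                ]
            }
        }
    ]
}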

These enhancements to Schema.org are not massive, but together they can greatly improve capabilities around travel and tourism.

 

(Tourist map image from mappery.com)

New Partner: the Islandora Collaboration Group / Islandora

The Islandora Foundation is pleased to announce that one of our Collaborator member institutions is becoming a Partner: the Islandora Collaboration Group.

Most members of this consortium are liberal arts colleges in the eastern United States, but the organization has grown beyond its initial region and welcomes new members. They have spearheaded the development of Islandora Enterprise (ISLE), a tool designed to significantly ease the process of installing and maintaining Islandora, and have expanded their development efforts to include the LASIR project, which will extend and improve Islandora's services as an institutional repository.

In addition to many contributions by its members to Islandora releases, camps, and community support on the listserv, the ICG introduced the concept of the 'hack/doc' to the Islandora conference, modelled on their own successful bi-annual internal hack/docs.

Current members of the ICG include:

  • Barnard College
  • Colgate University
  • Five College Compass: Digital Collections
    • Hampshire College
    • Mount Holyoke College
    • Smith College
  • Grinnell College
  • Hamilton College
  • Rensselaer Polytechnic Institute
  • Vassar College
  • Wesleyan University
  • Williams College

As a Partner in the Islandora Foundation, the ICG has nominated David Keiser-Clark to our Board of Directors. Based at Williams College, David serves on the steering committees for both ISLE and LASIR, and has been a member of the Islandora Coordinating Committee since 2017. 

When my congressman visited me (or, How National Library Legislative Day came back home) / District Dispatch

This guest post is contributed by Jami Trenam, Associate Director of Collection Development at Great River Regional Library in St. Cloud, Minnesota, and past chair of the Minnesota Library Association’s Legislative Committee.

As a first-year National Library Legislative Day (NLLD) coordinator for Minnesota, I was equally thrilled and anxious to lead our state’s delegation this year. After diligent, patient persistence, I was lucky enough to schedule time with all 10 of our congressional offices. Our group of advocates soaked up the training materials and briefing information to prepare. One of our three asks at NLLD was to extend an invitation to our legislators to visit a local library when they’re back home in the district to witness firsthand the impact libraries have on communities, particularly to see broadband access in action and to learn how libraries leverage federal funding.

Rep. Tom Emmer (R-MN-6) reads to a group of children at Great River Regional Library (St. Cloud, Minn.). Also pictured: LuAnne Chandler, Patron Services Associate. Photo credit: Abby Faulkner

After returning home from Washington, D.C., I was sure to follow up to thank each office. Imagine my surprise when the scheduler for Congressman Emmer’s (R-MN-6) office reached out to me to schedule a visit!

The visit was confirmed less than a week in advance – not a lot of time, but absolutely worth the last-minute effort. To prepare, I made sure both frontline staff and administration knew Representative Emmer was coming. The Congressman offered to read House Mouse, Senate Mouse, so we arranged for a low-key story time with a mouse theme. The office indicated the Congressman was also open to a tour, so I took time to connect with a few key staff around messaging.

A key component of visiting with lawmakers is researching your legislator’s track record and values. Knowing Mr. Emmer is quite fiscally conservative and serves on the Financial Services Committee, we made sure to highlight programs and services that demonstrate the library’s stewardship of tax dollars. For example, our library system has a patron-driven, floating collection: because we centralize selection and share materials among all 32 of our branches, we can stretch our dollars to build a broad and deep collection, allowing us to better respond to patron requests. Further, in Minnesota, Library Services and Technology Act (LSTA) funds are the backbone of our statewide interlibrary loan (ILL) system. An ILL staffer underscored how LSTA dollars facilitate cooperative resource sharing by highlighting how our MNLINK program works.

We shared how the Great River Regional Library system previously used LSTA grant dollars for projects like “We Play Here” (interactive early literacy kits) and a mobile laptop lab to provide digital literacy classes for seniors. We discussed the potential of applying for an LSTA grant to support our partnership with community organizations such as our local workforce center and Adult Basic Education to proactively address the closure of a major manufacturing plant in the area.

While Rep. Emmer may not yet be a leading champion for federal library funding, I think the visit increased his awareness of how libraries support communities in Minnesota’s 6th District, which is a small win in my book. Plus, his office shared pictures of the visit on Facebook and Twitter.

If you get the chance to have any lawmaker in your library, I encourage you to take it! Do your homework and craft your message, but also remember that once you get them in the door, the activity in your building – the kids registering for summer reading, the people using your meeting rooms, the folks using the wireless and computers – speaks for itself.

The post When my congressman visited me (or, How National Library Legislative Day came back home) appeared first on District Dispatch.

A Look Back at Fusion 4 / Lucidworks

At the end of February, Lucidworks announced the release of Fusion 4. Since then we’ve been highlighting some of the capabilities and new features that it enabled. Here’s a look back at what we’ve covered:

The Basics

Fusion 4 Ready for Download – Written overview of Fusion 4.0 features.
Fusion 4 Overview – Webinar on the new features in Fusion 4.0.
Machine Learning in Fusion 4 – Short blog outlining Fusion 4 ML features.

A/B Testing

A/B Testing Your Search Engine with Fusion 4.0 – Blog on how to successfully test whether your changes improve or degrade click-throughs, purchases, or other measures.
Experiments in Fusion 4.0 – Webinar on A/B testing.

Head-n-Tail Analysis

Head-n-Tail Analysis in Fusion 4 – Webinar on Head-n-Tail analysis. Head-n-Tail fixes user queries that return results your users aren’t expecting.
Use Head-n-Tail Analysis to Increase Engagement – Blog explaining Head-n-Tail analysis.
Keep Users Coming Back – Technical paper on Head-n-Tail analysis.

Advanced Topics

Advanced Spell Check – Written overview of how Fusion 4 can provide solutions for common misspellings and corrections.
Using Learning to Rank to Provide Better Search Results – Blog overview of how LTR can be used with Fusion 4 signals.
Increase Relevancy and Engagement with Learning to Rank – Technical paper covering how to implement Fusion 4 signals into Solr’s Learning to Rank algorithm.
Using Google Vision Image Search in Fusion – Short video highlighting how to use Google Vision to implement advanced AI-powered image search in Fusion 4.
Smarter Image Search in Fusion with Google’s Vision API – Blog on how to augment Fusion with Google Vision API.

Fusion 4 is a great release and I’d encourage you to download and try it today.

Next Steps

If you’ve tried Fusion 4 and some of its features, let’s dive deeper and look at some use cases!

The post A Look Back at Fusion 4 appeared first on Lucidworks.

New Member: Lib4RI / Islandora

The Islandora Foundation is very happy to announce that Lib4RI will be joining us as a Member in the Foundation.

Based in Switzerland, Lib4RI is the library for four research institutes within the domain of the Swiss Federal Institutes of Technology, a union of Swiss governmental universities and research institutions. As a scientific library, they support these institutes with literature and technical information for research, teaching and consulting.

Libraries Ready to Code librarian offers tips for ALA conferences / District Dispatch

Note: As we are on the cusp of ALA’s 2018 Annual Conference this week, Stephanie Frey from Georgetown County (SC) Library System takes a look back at ALA’s  2018 Midwinter Conference and offers tips for making the most of your Annual Conference. Keep reading for more information on the Libraries Ready to Code Youth & Technology track at ALA 2018.

As a part of the ALA Libraries Ready to Code project I was able to attend the 2018 ALA Midwinter Meeting in Denver.  Along with meeting with other members of the RtC cohort, I had the chance to participate in many Midwinter Meeting activities.  I’d never been out to the Midwest or to a library convention and was unsure of what to expect besides massive amounts of people. After much consideration, I found that each of the elements below led me to have a fantastic time at ALA Midwinter, helped me deal with how huge and overwhelming an experience it can be, and enabled me to get the most out of the conference.

Members of the Libraries Ready to Code cohort meet at ALA’s 2018 Midwinter Meeting in Denver, Colo.

Sit in the Front

I cannot stress this enough: sit up front in panels you attend.

Normally I tend to sit in the back at events. ALA Midwinter already had me so far out of my comfort zone that I decided to give sitting up front a shot, and I got so much more out of it. Sitting up front put me in contact with the most excited and energized people; their energy and sheer glee were contagious. Everyone had so many ideas and was eager to get right into solving whatever problem was thrown our way. At the beginning of each session we were handed sticky notes to keep track of our ideas, and every time it was the groups in the front rows who had forty or more sticky notes crammed full of ideas. With so many ideas flowing, I had so many different epiphanies about my own programming.

At each panel I found the same faces, plus some new eager ones, sitting up front ready to take away everything they could learn from the experience. It was so much easier to make friends, get to know my cohort, and get so many ideas going.

Exchange Ideas

ALA Midwinter puts you in the proximity of other librarians, so many other librarians. Not only were these people eager to present ideas, they were extremely friendly, too. It made it so easy for me to share my own ideas, experiences and challenges, as well as contribute to theirs.

The greatest benefit of being around other librarians was how the format encouraged everyone to share how they handled a variety of problems common to all library branches, such as pulling older teens into coding activities, attracting students to return, and finding online resources for the right age groups. Finding that everyone else was facing the same challenges and finding their own ways of powering through them was empowering. I discovered that some of them used grant money for paid internships to incentivize teens to run their own programs; some encouraged parent involvement to get students to return; others introduced a wealth of other resources, including Google’s Applied Digital Skills courses.

The convention environment was very welcoming to just throwing ideas out there. We bounced so many unpolished ideas at each other, which made it the perfect place to collaborate. I ran into one librarian in every panel I attended, and by the end we determined we needed to do a collaborative project together using Google Docs.

Set Goals

ALA Midwinter was huge; there were hundreds of people to see and the list of panels went on for pages. As a member of the RtC cohort, I was provided with a list of program recommendations, and it helped immensely. Using these suggestions as a guide, I was able to plan my weekend. I was also able to glean plenty of fantastic information, and even more fantastic contacts, by interacting with other librarians interested in the same kinds of programming. I discovered things like Citizen Science Projects, HOMAGO (Hang Out Mess Around Geek Out), and a much simpler way of getting data by having patrons mark a single statement that they feel most applies to them. Having my schedule planned ahead of time made it that much easier to focus on collecting data instead of focusing on where to get the data.

ALA Midwinter was an amazing event. Seeing what people are doing in their own libraries and sharing ideas with others was such an empowering experience. I hope these tips are helpful if ALA Midwinter Meeting or Annual Conference are in your future.

ALA Annual 2018: Libraries Ready to Code Youth & Technology Track

Don’t miss the beta release party for the Libraries Ready to Code Collection on Friday evening (5:30-7:00 p.m.) at the interactive Google space on the exhibit floor (#4029), where you can preview and give your expert librarian feedback on the pilot Collection using the latest devices.  

“Bridging the Tech Knowledge Gap: Empowering Your Community Through a Seamless Youth Experience (YX) Design” – Friday, June 22, 2018 (9:00 a.m. – 12:00 p.m.)

Speakers: YX students and Dr. Mega Subramaniam, Associate Professor and Associate Director for the Information Policy and Access Center (iPAC) at the University of Maryland.

Libraries Ready to Code: From Concept to Program – Friday, June 22, 2018 (1:30 p.m. – 4:00 p.m.)

Speakers: Libraries Ready to Code cohort library staff and leadership team – Linda Braun (ALA learning consultant), Marijke Visser (ALA Washington Office) and Nicky Rigg (Google)

Libraries Ready to Code: The Inside Scoop – Saturday, June 23, 2018 (10:30 a.m. – 11:30 a.m.)

Speakers: Libraries Ready to Code cohort library staff and leadership team – Linda Braun (ALA learning consultant), Marijke Visser (ALA Washington Office) and Nicky Rigg (Google)

Leap into Science: Cultivating a National Network for Informal Science and Literacy – Sunday, June 24, 2018 (2:30 p.m. – 3:30 p.m.)

Speakers: Tara Cox, manager of professional development at The Franklin Institute, and Karen Peterson of The National Girls Collaborative Project

The post Libraries Ready to Code librarian offers tips for ALA conferences appeared first on District Dispatch.

Registration Open: Fedora and Samvera Camp in Berlin / DuraSpace News

DuraSpace and Data Curation Experts invite you to attend Fedora and Samvera Camp at the Berlin State Library November 5 – 8, 2018.

Fedora is the robust, modular, open source repository platform for the management and dissemination of digital content. The latest version of Fedora features vast improvements in scalability, linked data capabilities, research data support, modularity, ease of use and more.

Samvera (previously known as Hydra) is a grass-roots, open source community creating best in class digital asset management solutions for Libraries, Archives, Museums and others.  The Samvera software offers flexible and rich user interfaces tailored to distinct content types on top of a robust back end – giving adopters the best of both worlds.

Training will begin with the basics and build toward more advanced concepts – no prior Fedora or Samvera experience is required. Participants can expect to come away with a deep dive Fedora and Samvera learning experience coupled with multiple opportunities for applying hands-on techniques working with experienced trainers from both communities.

Previous Fedora Camps and Samvera Camps (previously known as Hydra Camps) have been held throughout the United States, United Kingdom and in the Republic of Ireland.  Most recently, DCE hosted the inaugural Advanced Samvera (Hydra) Camp focusing on advanced Samvera developer skills.  

The upcoming combined camp curriculum will provide a comprehensive overview of Fedora and Samvera by exploring such topics as:

  • Core & Integrated features
  • Data modeling and linked data
  • Content and Metadata management
  • Migrating to Fedora 4.x
  • Deploying Fedora and Samvera in production
  • Ruby, Rails, and collaborative development using Github
  • Introductory Blacklight including search and faceting
  • Preservation Services

The curriculum will be delivered by a knowledgeable team of instructors from the Fedora and Samvera communities: Mark Bussey (DCE), Bess Sadler (DCE), Andrew Woods (DuraSpace), and David Wilcox (DuraSpace)

Attendance is limited to the first 30 registrants.  DuraSpace Members and Registered Service Providers receive a discounted rate.  Register before September 14th to receive a $50 discount!

Register Now!

The post Registration Open: Fedora and Samvera Camp in Berlin appeared first on Duraspace.org.

Hidden costs of development / Terry Reese

A couple of weeks ago, I had the opportunity to speak to a group of researchers and software developers.  These were folks that, through their research, had developed a set of tools or services, were interested in making the software available either free or open source, and were trying to get a handle on the costs that may be associated with this kind of development effort.  It was an interesting conversation, in part, because it led me to realize just how rarely we talk, as a community, about the costs of these kinds of efforts – at least, costs that go beyond time.  In my experience, time seems to be the primary currency by which we evaluate most projects – and this can be time spent on a variety of activities: support, administration, or actual development.  However, if you run a project, there are many other costs that show up that are much more tangible than time; costs that come with actual dollar amounts that most folks don’t consider.  And while time (development/support/etc.) is likely the most expensive cost, it’s not the one that most often derails projects.  Working on MarcEdit, I’ve found a number of hidden costs that came as a surprise to me, but end up being the cost of doing business if you create software that runs locally.  Web applications will have their own requirements – but the point is that if you are a developer interested in building the next whiz-bang application, your planning needs to go beyond time management to preparation for real costs as well.

Since my experience distributing software has primarily been on the desktop/local device side, I’m going to highlight the hidden costs of developing in this environment.  Many of these costs live outside of the actual development project – but become necessary if you’re interested in seeing your work adopted at an organizational (not individual) level.  And within most professional environments, software run in an organizational context often must meet certain criteria.

Known Costs

Time…time is the one thing that everyone worries about.  Time to develop and time to support.  In working on MarcEdit, I would say that 1/5 of the time I spend on the project is doing actual development.  Sometimes this happens over concentrated periods when implementing new features or expanding functionality – but I always spend a couple hours every week doing routine maintenance on the tool, particularly around accessibility or usability.  The other 4/5s of the time devoted to this project is in support.  Prior to the MarcEdit listserv, I probably received 40ish messages every morning asking questions about the tool.  Most of these questions weren’t that involved, and could be answered fairly quickly – but they represented a significant amount of time dealing with the community.  Since the listserv was created, I’ve seen direct questions drop to under 10 daily.  The scope of the questions has changed… generally these are involved questions, where someone is working on projects with large data sets or complicated problems – so I’m not sure it takes less time, but it’s been an interesting evolution to watch.

Hidden Costs

The costs that I never expected are the tangible dollar costs (generally) that come with a project like this. These fall into 3 categories:

  1. Infrastructure/Tools
  2. Identity management
  3. Privacy

Tools/Infrastructure

I think I’ve always taken it for granted that there is free webhosting, etc. – but when you need access to specific resources and bandwidth, you have to start paying for these services.  My present needs with MarcEdit require a hosting service that allows unlimited monthly bandwidth (or, at the very least, a terabyte).  A lot of services advertise unlimited bandwidth, but in shared environments there’s a lot of throttling that happens.  And bandwidth is just one need – you also need storage, databases, program environments (for web services), etc.  Over the years, I’ve had to move from low-cost shared services, to a dedicated server, to now a cloud-based infrastructure so that I can manage CPU cores and bandwidth more in real time.  As the infrastructure needs have increased, so have the annual tangible costs.

In addition to server infrastructure, there is local development infrastructure.  You need automated testing tools, access to multiple platforms, etc.  Locally, this means I run a VM network at home that allows me to automatically test the application against multiple systems.  Long-term, this too may be an area ripe for cloud-based replacement.  Providers like Microsoft, through their Azure platform, offer cloud-based services to support testing across multiple platforms.  However, at this point, the costs don’t work out – partly because my local network supports a number of other local tasks.  But included in tools/infrastructure are development environments, any specialized tools that may or may not need to be purchased, etc.  Figuring out a hard cost here is more difficult, but I’d say I end up purchasing a new computer to replace part of my development infrastructure once every two years.

Identity management

Identity management has become more and more important as operating system vendors develop their own walled gardens (app stores), give preference to digitally signed software, and encourage organizations to use software management clients (which strongly prefer digitally signed software).  Supporting this requirement means developers have to go through the process of getting a code signing certificate.  I went through this process this year (on Windows) and 3 years ago (on MacOS).  On the Windows side, you have to work through a 3rd party and, as an individual, be prepared to work with an attorney.  On the Apple side, you have to join their developer network (which is required to participate in their app store).  These processes can be invasive (if you do this as an individual, fairly invasive), can be time consuming (my 3rd-party certificate for Windows took almost 2 months to clear), and are renewed annually.  I’m not going to lie – this part sucks.  There is a part of me that feels like I’m paying a tax to develop “validated” programs for each system.  However, it’s becoming a necessary requirement for desktop/local software.  Over the past two years, I’d been getting more and more organizational pushback from IT shops that were nervous running unsigned software within their environments.  The inclusion of a digital signature on the installers and application gives managers confidence that the software hasn’t been modified, and includes the implicit promise that, as of the signing, I as the developer have, to the best of my ability, ensured that the software is clean.  Web developers are running into this requirement as well (the need to get SSL certificates); though the documentation requirements (and cost) are much lower.
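
For what it’s worth, once a certificate is finally in hand, the signing itself is the easy part.  A minimal sketch of what that step can look like on each platform (the certificate file, password, identity, and installer names below are placeholders, not details of my actual build process):

On Windows (signtool ships with the Windows SDK):

signtool sign /f mycert.pfx /p <password> /fd sha256 /tr http://timestamp.digicert.com /td sha256 MySetup.msi

On MacOS, with a Developer ID certificate and a secure timestamp:

codesign --force --sign "Developer ID Application: Your Name (TEAMID)" --timestamp MyApp.app

The hard (and expensive) part is everything that happens before you can run those commands.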

Privacy

This might seem like an odd cost, but it’s a real one if your software starts to be used by governments or sensitive organizations.  One of the biggest surprises that I’ve run into is the need to go through, at times, really invasive background checks to enable some government organizations to utilize the software.  It’s one of the reasons I take a long time to change version numbers – version numbers seem to be the event that initiates the required background checks.  Never, in my wildest dreams, did I think this would be something I would have to plan for – or need an attorney to deal with.

Wrapping up

Are these the only costs?  No.  But they represent the largest (in my experience), and the ones that I find myself having to plan for each year – the costs are significant enough that I need to make sure I actually develop a budget to account for real (monetary) and indirect (time) costs.  If you develop within an organization, you will find some of these issues get taken care of for you; if you work as an individual, you will find yourself dealing with almost all of these issues yourself.  I’ll be honest, a lot of the fun of doing development work is the actual development and interaction with the community.  I love doing the research and working with users.  This is time that I’m happy to give.  But as noted above, there are a number of ancillary costs, and these are the ones that are hidden and, honestly, not as much fun – but they become necessary as a tool/service becomes more broadly used/adopted.

One of the things I learned, over the course of my discussions and hearing the experiences of other developers doing different types of work, is that some of my experiences are common, while others are due to working outside of a larger organization.  Within an organization, there can be other hidden costs that don’t apply to my circumstances.  However, the main thing I was able to take away from this conversation was that these hidden costs can have a significant impact on a project and on an individual’s or organization’s ability to be successful.  At this meeting, it wasn’t surprising that nearly everyone had an experience where a project died or was abandoned because of issues/costs that fall well outside the traditional development costs (development time/support/documentation).  One thing I’d be curious about is whether these experiences mirror others within the library community, and whether there are other costs (visible or hidden) that you’ve encountered that make supporting new or existing projects difficult.

–tr

Nominations open for NDSA 2018 Innovation Awards / Digital Library Federation

Nominations are now being accepted for the 2018 Innovation Awards for the National Digital Stewardship Alliance (NDSA)!   The NDSA established the Innovation Awards in 2012 to recognize and encourage innovation in the field of digital stewardship.

These awards focus on recognizing excellence in the following areas:

  • Individuals making a significant, innovative contribution to the digital preservation community.
  • Projects whose goals or outcomes represent an inventive, meaningful addition to the understanding or processes required for successful, sustainable digital preservation stewardship.
  • Organizations taking an innovative approach to providing support and guidance to the digital preservation community.
  • Future stewards, especially students, taking a creative approach to advancing knowledge of digital preservation issues and practices.
  • Educators, including trainers or curricular endeavors, promoting innovative approaches and access to digital preservation through partnerships, professional development opportunities, and curriculum.

As a diverse membership group with a shared commitment to digital preservation, the NDSA understands the importance of innovation and risk-taking in developing and supporting a broad range of successful digital preservation activities.  Acknowledging that innovative digital stewardship can take many forms, eligibility for these awards has been left purposely broad. Nominations are open to anyone or anything that falls into the above categories, and any entity can be nominated for one of the awards. Nominees should be US-based people and projects or collaborative international projects that contain a US-based partner. This is your chance to help us highlight and reward novel, risk-taking, and inventive approaches to the challenges of digital preservation.

You can submit a nomination via this quick, easy online submission form: https://www.surveymonkey.com/r/VK63BN5.

Nominations will be accepted until August 31, 2018.  The prizes will be presented to the winners at the Digital Preservation 2018 meeting taking place in Las Vegas, Nevada, on October 17-18, 2018. Winners will be asked to deliver a very brief talk about their activities as part of the awards ceremony.

Help us recognize and reward innovation in digital stewardship and submit a nomination!

We encourage all NDSA members to submit nominations.  We will be hitting electronic mailing lists, but also please promote the awards throughout your community.

For more information and details on awards from previous years, please see: http://ndsa.org/awards/

 

The post Nominations open for NDSA 2018 Innovation Awards appeared first on DLF.

UXLibs IV: Conference notes / Shelley Gullikson


This is a round-up of my notes from the UXLibs IV conference but it’s certainly not a faithful record; just what stood out to me. It might give a sense of the content for people who missed it or want to revisit. Because it’s so freaking long (as usual), I’ve separated out my own reflections on the conference into a UXLibs IV: Reflection & Inspiration post.

The 4th iteration of UXLibs had a focus on inclusion this year and Day 1 kicked off with intros from Andy and Matt and then Christian Lauersen’s great keynote “Do you want to dance? Inclusion and belonging in libraries and beyond.” I didn’t take a lot of notes (perhaps my mind was already on my workshop), so I’m glad Christian posted his talk. A few of the things he said stood out for me:

  • Inclusion is a process
  • Biases are the stories we make up about people before we know who they really are
  • It’s easy to have values but hard to follow them

Christian also used the great quotation by Verna Myers: “Diversity is being invited to the party; inclusion is being asked to dance.” I’m sure I’ve heard that before, but it resonated more, somehow, hearing it here.

After Christian, I did my workshop. Like last year, I was a little too done to visit the UXLabs during lunch. After lunch were the delegate presentations, and although I was happy with the sessions I chose, I was really sad to miss the others. Everything looked so good! Can’t wait for this year’s yearbook so I can catch up.

Session 1, Track A: Danielle Cooper and SuHui Ho

Danielle’s talk was “Decolonization and user experience research in academic libraries” and she spoke about a research project being done by Ithaka S+R and 11 academic libraries about Indigenous Studies scholars. She talked about how indigenous research differs from Western research and how those differences are being reflected in this project. I didn’t capture everything, but here are some differences:

  • The interview process includes the researcher talking about themselves and why they’re interested in the project; why do they want to know about the things they’re asking the participants to talk about? We usually don’t do this in our user research, trying to present ourselves, instead, as objective observers.
  • Participants get to review the transcript of their interview as well as drafts of the final write-up. They get a voice in how their words are represented and in the findings/results of the research.
  • Related to the above, more space is given to the participants’ words in the results. Rather than just short quotations, long passages are presented to let their words speak for themselves.
  • Participants can choose how they are acknowledged in the report. They are not anonymous by default. Danielle mentioned that this led to issues with Research Ethics Boards, where anonymity is usually required for research with human subjects.

After Danielle, SuHui talked about her work at the University of California, San Diego in trying to balance majority and minority users of the library website on a very diverse campus. Her team worked with 9 library user personas that were developed at Cornell and decided to focus on the undergraduates who were not experts in library research. People represented by other personas could have their needs met by the website, but might have to dig a bit deeper.

SuHui also mentioned the importance of changing up the images used on their library website. Although only 20% of the student population at her university is white, almost all of the images around campus, including on websites, are of white people. So SuHui made sure the photos of people on the library website reflected the diversity of their users. On this, she said it’s important to “act within our power.” I really liked this phrasing.

Session 2, Track C: Jon Earley, Nicola Walton, and Chad Haefele

I was very excited to hear Jon talk about library search at the University of Michigan, and I geeked out over how they moved away from legacy systems and interfaces and built a new search. The new search takes a bento box approach, which my own users hated viscerally a few years ago, but the UMich implementation seems to fix many of the issues students at Carleton had with bento boxes. The pre-cached results and consistent interfaces are pretty great. I was a bit fangirly about the whole thing.

Jon also mentioned that the UX research that drove this project was done before he started at UMich, and the people who had done this research had all left the library. This could have made things very difficult, but they left great documentation behind. I asked what made the documentation of the UX research so good, and Jon said that having priorities and key points highlighted, and very brief reports helped him grasp what was necessary to move from UX research to product design. They also continued to do user research along the design path.

I liked his lessons learned:

  • Include accessibility at each decision …. Rely on HTML and not custom JavaScript widgets
  • Be thoughtful about what deserves your time and resources
  • Performance is a feature
  • Use the words your users use

Nicola Walton from Manchester Metropolitan University was up next with “Behind the clicks: What can eye tracking and user interviews tell us that click statistics can’t?” She was very upfront about the fact that they had done the project backwards: they got an eye tracker first and then figured out how they wanted to use it. She recommended not doing that, but certainly seemed to have no regrets about their experience. She’d found that people get quite excited about eye tracking data, so it was a great way to get in to talk to people who might not otherwise want to talk about UX.

Nicola had lots of videos (though, sadly, not enough time to show them all) which made it clear not only that people struggle to use our library websites, but that what can look like success in the web statistics—people visited the right page—can turn out to be failure—people didn’t actually see what they needed on the page.

Chad Haefele from the University of North Carolina at Chapel Hill was the last in this session and in Day 1. He is looking at possibly funneling people through to different home pages of the library website based on their user type. They are doing a big card sorting exercise with various user types, looking at how often people use specific library services: never, sometimes, often, or always.

Chad was hoping to have data for the conference, but they ran into some difficulty with their Ethics Board over their recruitment strategy. They had planned to use a version of Duke University Library’s “regret lottery” but were not allowed to, so recruitment was delayed.

Day 1 ended with a marvelous conference reception and dinner, terrible 80s music, and dancing. Much fun.

Day 2 started with 3 incredible speakers. I am still fired up from their talks.

Sara Lerén “Inclusive design: all about the extremes”

Sara dove deeper into the notion that users can’t tell us what their needs are. Or in her words:

I’ve always heard that (and seen it in action) but Sara’s explanation of why was a revelation to me. She said that users’ tendency to gloss over difficulties (or, as she put it, “tell us shit”) is likely due to the average brain’s proficiency with cognitive economy. Average brains are really good at minimizing cognitive load through categorization, schemas, automation. (When describing Sara’s session, I’ve said that average brains are really good at sanding down the edges of things; I hope that’s not a misrepresentation.) What Sara called “extreme” brains (non-neurotypical brains) are not so good at minimizing cognitive load in this way. And this is why non-neurotypical users can be better at expressing their real experiences, feelings, and thoughts.

Sara encouraged us to include non-neurotypical users in our research and testing because they will be better able to tell us what’s wrong with our designs. Designing for extremes makes design better for everyone. We see that with designs for physically disabled users: single-lever handles on faucets, curb cuts, and more. Later in her talk, Sara referenced Dana Chisnell’s great work on testing and designing for people with low literacy, and quoted her:

“I came away from that study thinking, why are we testing with anyone with high literacy? Designing for people with low literacy would make it easier for people who are distressed, distracted, sleep-deprived, on medication, whatever. If I could build this into everything, I would.”

Sara’s recommendations for smart user testing:

  • approximately 5 users
  • include extreme users (extreme in neurodiversity, skills, age)
  • test in their natural habitat

Sara also mentioned Yahoo’s vision for an inclusive workplace “for minds of all kinds.” I like this phrasing much more than “neurodiverse,” which sounds a bit clinical to my ear. I’m definitely inspired to seek out “minds of all kinds” in my future user research and testing.

Dr. Kit Heyam “Creating trans-inclusive libraries: the UX perspective”

Kit started his presentation with what he called “Trans 101” to make sure we all understood the basics. We can’t work to be trans-inclusive if we don’t understand the multi-faceted nature of trans identity. Kit then followed up with examples of experiences trans people have had in libraries.

I didn’t take notes on the exact examples, but one has stuck with me. A student who’s a trans man had an issue with his old name being used in one of the library systems. There was a drawn-out encounter with a library staff member who was not helpful in resolving the problem, and eventually said “It’s so difficult not to offend people these days! You’re not offended are you? It’s an understandable mistake. It’s just so many girls have short hair these days! And your voice…” That student decided it was easier and safer for him to just avoid using library services after that. I found this pretty heart-breaking.

Kit said that what makes the biggest difference for trans folks is not what fits in a policy, but rather the interpersonal relations. He went on to say later that “staff make the user experience.” Great design cannot make up for a terrible encounter with a staff member. And we can’t leave it up to chance whether trans people will encounter welcoming staff; it cannot be what Kit called a “staff lottery.” Some of his specific action recommendations:

  • Updating records
    • Work from a checklist
    • Safeguard confidentiality; could anyone work out that this person is trans?
  • Describing/addressing people – in person or by phone
    • Use gender-neutral language/descriptors
    • Avoid “Sir” and “Madam” / “love” and “mate” / “Ladies and gentlemen”
    • Don’t make assumptions based on a person’s voice: verify ID another way if necessary
  • Signals of inclusivity
    • Pronouns on badges, in email signatures, in meetings
    • Awareness of intersectionality
    • Offer non-binary options (genders/titles) and avoid “he/she” wording
  • Recognise that harassment is about effect, not intent
  • Toilets
    • Don’t assume you know which toilet someone wants to use
    • Have clear procedures for dealing with complaints which stand up for trans rights

The signals of inclusivity show trans people that you’ve thought about them. Though Kit did have an example of a library that gave mixed messages, with staff having pronouns on their badges, but library announcements starting with “Ladies and gentlemen…” It’s important to be consistent.

Most of all, it’s important to have clarity around these kinds of actions and procedures for everyone who works in the library—not just the library staff but also security staff (perhaps especially security staff).

Dr. Janine Bradbury “Safe spaces, neutral spaces? Navigating the library as a researcher of colour”

I fear I’m not going to do justice to Janine’s talk since much of the time I was sitting, rapt, rather than writing anything down. But I will do what I can.

Janine talked about libraries as a literary symbol of literacy. She talked about this symbol being particularly potent for black people and showed a couple of videos to demonstrate this. One was an ad for Bell’s Whisky (Janine asked us to pretend it wasn’t an ad) that showed an older black man learning to read and making his way through stacks of books at the library, starting with early readers and progressing, finally, to a novel written by (it is revealed at the end) his son. Janine posited that if language is power then literacy is about reclaiming power.

Janine then showed a clip of Maya Angelou talking about libraries. At the end of the clip, Dr. Angelou says “Each time I’d go to the library, I felt safe. No bad thing can happen to you in the library.” Janine then spent some time unpacking the notion of libraries being “safe.” She said, “It’s not safe for white people when Maya Angelou is in a library.” But more to the point, libraries are not always safe spaces because they are very often white spaces. White spaces are not always safe for people of colour.

Janine then went on to chronicle her own experiences in various libraries from the time she was a child to now. She spoke of the tension between this kind of lived experience as a library user and the symbolic potency of the library in black culture, such as we saw in the two video clips. That tension is, at least partly, the result of the library as an institution and therefore a place of institutional racism, institutional sexism, etc. Janine later went on to say that the “stamps, fines, charges, cards, documentation” of the library “echoes institutional practices associated with the tracking and surveillance of black bodies.” I found that incredibly interesting and rather chilling.

Related to the recent movement in the UK for decolonising the curriculum, Janine suggested the following actions for decolonising the library:
Actions for decolonizing the library (see image description)

Janine called out the work by Harinder Matharu and Adam Smith from the David Wilson Library at the University of Leicester as a good example of decolonising the library. Harinder and Adam presented on Day 1 of the conference on their work with Black History volunteers to unearth hidden histories of their institution and the impact of those histories on students’ sense of belonging. I can’t wait to read their chapter in this year’s yearbook so I can learn more.

Team Challenge

The remainder of Day 2 was mostly taken up with the Team Challenge. I did like that the challenge was not a competitive one this year; the emphasis was on sharing experiences and it took some of the pressure off. Particularly since it came at the end of the conference and I was pretty beat.

Maybe it was just because I was tired, but I didn’t really enjoy the team challenge. We were to use UX research techniques to reflect on our own individual experiences of doing UX research and then pull those individual experiences into a cohesive team presentation. I was really glad my job had recently taken a positive turn, otherwise I would have found it a very grim afternoon. Still, I didn’t find it very inspiring. But that could be because I’ve done quite a lot of self-reflection in the past year and, when working with a group of UXLibs people from around the world, I’d rather spend the time looking outward and trying to solve actual user problems.

But on the plus side, it was nice to get to know the people on my team. And, in the end, it is always the people that make UXLibs for me. More on that in my Reflection & Inspiration post.

UXLibs IV: Reflection & Inspiration / Shelley Gullikson

Pencil with inscription: UXLibs: Do do do it!

These are my personal (perhaps too personal!) reflections about UXLibs IV and where I found inspiration this year. You may just want my Conference Notes.

This was my fourth time at UXLibs. I was actually thinking of giving it a miss this year. I was having a crap year and was feeling uninspired, hopeless, useless. Why would I want to go to a conference and be surrounded by people who were doing interesting things? Wouldn’t it just make me feel worse? Turns out, no. Quite the opposite.

But back to that feeling of being uninspired, hopeless, useless. Since last summer, I hadn’t been working on any projects at all, nor did I feel much interest in starting any. I had hit the gaping maw of a professional low and couldn’t get myself out. I looked into not just leaving my job, but leaving libraries period.

I noticed a couple of things, compared to times when I felt more engaged:

  • I hadn’t done a scary thing in a while
    • First off, I’d rather stick pins in my eyes than reference those stupid Lululemon bags with sayings like “Do something scary every day” on them. But. Many of the things that I have loved doing started with me cringing while I hit “Send” or “Submit.” Pitching to WeaveUX? Cringe and submit. Sending a very rough first draft to Kristin Meyer? Cringe and send. Pretty much any conference proposal? Cringe and off it goes. I hold the fear and lack of confidence at bay for the second it takes to do a thing I can’t undo. I hadn’t done that in months.
  • I stopped tweeting
    • Partly, I didn’t feel like I had anything useful or interesting to say. Partly, I was disengaging from most contact with people. But I did miss it. Look, I know that the little dopamine lift I get with a like is programmed to keep me addicted to the app and we should all put down our phones and blah blah blah. But I’m not a social media star; my follower and following lists are small and I have met and like most of the people on them. So getting a like is getting a little smile from them, or a touch on the arm: “I know you and I see you.” It makes me think of them, and reminds me that I’m glad to know them. How can this be a bad thing?

So that was where I was when I decided I would come to UXLibs again this year. And then, three weeks before the conference, I got a new boss who managed to restore some of my hope. I no longer felt useless. All that was left was to get inspired.

Cue UXLibs.

Some inspiration I found, in no particular order:

  • I am inspired to heed Sara Lerén’s call to do user research and testing with users on the extremes: “minds of all kinds,” people with low literacy, perhaps students struggling with English as their second (or third or fourth) language, students with disabilities. I have happened upon users in these groups during user testing, but now I will seek them out.
  • I am inspired to heed Kit Heyam’s call to make our library as inclusive and welcoming to trans and non-binary students as possible (within my power). I’ve started looking at our website for gender-neutral language. But that’s just a first step and I hope I can spiral upward from there. Perhaps try to do something to minimize the “staff lottery” for our users.
  • I’m inspired to heed Janine Bradbury’s call to decolonise the library (again, within my power), perhaps with the model of Harinder and Adam’s work on Black History at the University of Leicester.
  • I’m inspired to heed Andy Priestner’s call to think bigger about UX in my library. In many ways, I feel like I’m stuck at first steps and want to start to get to embedding and influencing.
  • I’m inspired by Chad Haefele to keep thinking of new ways to make the library website better, and by Nicola Walton to keep finding new ways to test it. I’ve been letting our site stagnate a bit.
  • I’m inspired by Jon Earley to write better documentation! To streamline spaghetti systems, to improve performance, and to always trust the words of our users.
  • I’m inspired by Danielle Cooper to learn more about indigenous research and look at better supporting indigenous students in my library.
  • And I’m inspired by the people I met and the conversations I had at the conference to keep doing the work, to stay in libraries, to do scary things, to stay connected.

SuHui Ho’s idea that we “act within our power” to improve inclusion turned out to be really inspiring to me. On first blush, it seems a suffocatingly small idea if you feel like you have very little power. However, when I thought about it more I realized that by acting, I can expand my power, which then gives me more scope to act, and it can turn into a wonderful upward spiral. I think it can also be inspiring for those times I’m feeling daunted and overwhelmed, for when I’m uninspired, hopeless, useless. I don’t have to fix everything, I don’t have to do everything, I just have to act within my power.

So, in the spirit of acting within my power, I’d like to invite you to collaborate with me on UX work. Or perhaps invite you to invite me to collaborate with you. I have a sabbatical coming up in July 2019 and I know I’m not suited to spending an entire year working on a project all by myself. So I’m seeking collaborators, co-conspirators for projects large or small. I’ll be staying in Ottawa, so the collaboration would likely be at a distance but I’m definitely open to some travel. I’ve been very vaguely thinking about looking at student help-seeking, or whether we can improve the UX of being a library worker. But I’m completely open to other ideas. It’s still a year out, so there’s lots of time to think and plan.

It feels slightly ridiculous and not a little scary to do this. But dammit, I’m going to cringe and hit “Publish” anyway. Please do do do get in touch.

 

Participate: ORCID in Repositories Task Force / DuraSpace News

Background and group scope
Over the past several years, new ORCID features and increased community uptake have introduced opportunities for ORCID to serve as open infrastructure for automating aspects of repository workflow.

While some repositories have developed sophisticated infrastructure that leverages ORCID to automate workflow, support for ORCID is available out of the box in only a few open source and vendor supplied systems. This means that many institutions that don’t have the resources to customize a system or develop an entirely home-grown solution are unable to make full use of ORCID in their repositories.

To improve workflow automation, author disambiguation, and visibility of repository content using the community-driven infrastructure that ORCID provides, we need better ORCID integration in more repository systems. The ORCID in Repositories Task Force will provide input on repository community needs regarding ORCID and on a set of recommendations for supporting ORCID in repository platforms that will help guide repository system developers.

Charge and Deliverables
This group is charged with reviewing and providing feedback on the proposed recommendations for supporting ORCID in repository systems, including considering:

  • At what points in repository workflows are ORCID iDs most useful/relevant?
  • What are the current challenges in using ORCID in repositories?
  • What ORCID features would be most helpful to include in a repository platform?

The group will develop a set of recommendations to guide repository system developers in designing and building ORCID features. These will be released for public comment before being finalized.

This group will also review and provide feedback on survey questions that will be used to assess community interest in features proposed in the above recommendations.

Formation & membership
Membership of this group is voluntary, and we invite participation by individuals who have an interest in the topic — including repository providers, repository managers, librarians, IT staff, and research administration staff. The group will be chaired by Michele Mennielli, International Membership and Partnership Manager at DuraSpace. Michele will be supported by Liz Krznarich, ORCID Frontend Tech Lead. ORCID will recognize group members on its website.

Governance
To encourage a “safe space” for frank conversations, discussions during meetings and online conversation will be kept confidential; meetings and other communications including document comments will be considered closed. As with other ORCID task forces, activity, status and outcomes of the group will be shared with the ORCID Board. The group will also share its draft recommendations publicly, for comment by the community, before they are finalized.

Expected effort
We expect the group to attend three one-hour web meetings over the course of three to four months, starting in July 2018, and to dedicate about four hours to reviewing documents outside of the meetings. ORCID staff will generate draft documents, provide logistical support, and take meeting notes.

Meeting 1: Introduce members and review group charge. Discuss survey questions (draft circulated in advance), problems being addressed, and review proposed recommendations.
Homework: Comment on the proposed recommendation.
Meeting 2: Discuss comments on the proposed recommendations and merge comments into a draft recommendation for public comment.
Homework: Comment on draft recommendation.
Meeting 3: Review public comments and finalize recommendations.

Contact
For additional information about the working group, please contact us.

The post Participate: ORCID in Repositories Task Force appeared first on Duraspace.org.

Tweets to @realdonaldtrump; How many fucks are there to give? / Nick Ruest

I’ve been collecting tweets to @realDonaldTrump since June 2017. In my most recent round of pulling together and deduping the dataset, I asked myself, “I wonder how many occurrences of ‘fuck’ are in the dataset.” Or, how many fucks are there to give?

Well…

The data is updated by running a query on the Standard Search API every five days.

$ twarc search 'to:realdonaldtrump' --log donald_search_$DATE.log > donald_search_$DATE.jsonl

Which yields something like this every five days.

...
donald_search_2018_05_01.jsonl
donald_search_2018_05_01.log
donald_search_2018_05_06.jsonl
donald_search_2018_05_06.log
donald_search_2018_05_11.jsonl
donald_search_2018_05_11.log
donald_search_2018_05_16.jsonl
donald_search_2018_05_16.log
donald_search_2018_05_21.jsonl
donald_search_2018_05_21.log
donald_search_2018_05_26.jsonl
donald_search_2018_05_26.log
donald_search_2018_05_31.jsonl
donald_search_2018_05_31.log
donald_search_2018_06_01.jsonl
donald_search_2018_06_01.log
donald_search_2018_06_06.jsonl
donald_search_2018_06_06.log
...

Periodically, I cat all the jsonl files together, and then deduplicate them with deduplicate.py. So, this currently leaves us with 90,355,874 tweets to work with.
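
That step looks roughly like this (a minimal sketch — deduplicate.py is the script from the twarc utils, and the combined filenames here are just placeholders):

$ cat donald_search_*.jsonl > donald_search_combined.jsonl
$ deduplicate.py donald_search_combined.jsonl > to_realdonaldtrump.jsonl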

If you want to follow along, you can grab the most recent set of tweet ids from here. Then “hydrate” them like so:

$ gunzip to_realdonaldtrump_20180606_ids.txt.gz
$ twarc hydrate to_realdonaldtrump_20180606_ids.txt > 20180609.jsonl 

This will probably take quite a while since there are potentially 90,355,874 tweets to hydrate. In the end, you’ll have a jsonl file of around 368G.

Once we have our full dataset, the first thing we’ll do is remove all of the retweets with noretweets.py, giving us just original tweets at @realDonaldTrump.

$ noretweets.py 20180609.jsonl > 20180609_no_retweets.jsonl

This brings us down to 69,013,268 unique tweets. Your number will probably be less if you’re working with a hydrated dataset because deleted tweets, suspended accounts, and protected accounts will not have tweets hydrated.

$ wc -l 20180609_no_retweets.jsonl

Over the time of collecting, some of the Twitter APIs and fields changed slightly (extended tweets, and 280-character tweets). For us, this means the “text” of our tweets can reside in two different attributes: text or full_text.

So, we need to extract the text. Let’s use tweet_text.py.

$ tweet_text.py 20180609_no_retweets.jsonl >| 20180609_tweet_text.txt
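
If you don’t have the twarc utils handy, roughly the same extraction can be done with jq (a hedged alternative, not the original workflow; jq’s // operator falls back to text when full_text is missing, though tweets containing embedded newlines will span multiple output lines, so counts may differ slightly):

$ jq -r '.full_text // .text' 20180609_no_retweets.jsonl > 20180609_tweet_text.txt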

Now that we have just the text, we can count how many fucks there are with grep and wc!

$ grep -i "fuck" 20180612_tweet_text.txt | wc -l
1882456

There are 1,882,456 fucks to give!

That’s a fuck to tweet ratio of 2.73%.
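
If you want to double-check that arithmetic on the two counts above, bc will do it (scale=4 gives four decimal places):

$ echo "scale=4; 1882456 * 100 / 69013268" | bc
2.7276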

For some more fun, let’s take the last 1000 lines of our new text file, and make an animated gif out of it.

First, let’s get our text:

$ grep -i "fuck" 20180612_tweet_text.txt > fucks.txt
$ tail -n 1000 fucks.txt > 1000_fucks.txt

Then let’s create a little bash script.

#!/bin/bash

# Render each line of 1000_fucks.txt as its own PNG frame, then assemble the frames into an animated gif.

index=0

# Read the file line by line; -r keeps backslashes in tweet text from being mangled.
cat /path/to/1000_fucks.txt | while read -r line; do
  let "index++"
  # Zero-pad the frame number so the *.png glob below sorts the frames in order.
  pad=`printf "%05d" $index`
  # ImageMagick: render the tweet text as a white-on-black 800x600 caption image.
  convert -size 800x600 -background black -weight 300 -fill white -gravity Center -font Ubuntu caption:"$line" /path/to/images/$pad.png
done
cd /path/to/images
# Stitch every frame into a looping gif with a 0.9 second delay between frames.
convert -monitor -define registry:temporary-path=/tmp -limit memory 8GiB -limit map 10GiB -delay 90 *.png -loop 0 1000_fucks.gif

Give it a filename, then make it executable, and run it!
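
For example (the script name here is just a placeholder):

$ chmod +x 1000_fucks_gif.sh
$ ./1000_fucks_gif.sh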

In the end, you’ll end up with something like this:

1000 fucks

DSpace 7 Updates from OR2018, Including a Recorded DSpace 7 Demo / DuraSpace News

From Tim Donohue, Technical Lead for DSpace and DSpaceDirect

In case you were not able to join us last week at the Open Repositories Conference (http://or2018.net) in Bozeman, Montana, or just want to review conference materials, we’ve collected information from all the major DSpace presentations and workshops below. (Please note, there were many other presentations and posters that involved DSpace. Below we’ve just noted the major community announcements / demo / tutorials that came out of the conference.)

DSpace 7 Updates, Demo and RoadMap

On Thursday, June 7, I gave an update on the DSpace 7 efforts and provided an early demo of the latest DSpace 7 user interface. While this presentation was not recorded, I’ve recorded a “live” demo of the DSpace 7 UI and made it available on YouTube (see below).

  • DSpace 7 Update Slides: https://tinyurl.com/or2018-dspace7 (Includes updates, what is coming in DSpace 7, the estimated roadmap, and screenshots of the live demo)
  • DSpace 7 Recorded Demo: https://youtu.be/yKnos2jTdSQ (Includes a preview of the REST API, Browse, Search, and a detailed demo of the enhanced Submission & Workflow functionality.)

As announced at OR2018, we are working towards a “beta” release of DSpace 7 by the end of this year, with a first “release candidate” in early 2019, and a final, production release shortly thereafter. We also have a DSpace 7 Community Sprint (new developers are welcome) coming up from July 9-20. Sprint signups are open at https://tinyurl.com/dspace7sprints.

DSpace Overview during the Repository Rodeo Panel

On Thursday, June 7, Maureen Walsh (The Ohio State University and chair of the DSpace Community Advisory Team) represented DSpace on the “Repository Rodeo Panel”. This is an annual panel at Open Repositories where all repository platforms provide a brief overview of their platforms, latest accomplishments, what is coming next, and how to get involved. This session was streamed live and recorded.

  • Repository Rodeo: Slides
  • Recording of Panel

DSpace 7 Technical Workshops

On Monday, June 4, we hosted two DSpace 7 technical workshops to allow developers and other tech-savvy individuals to learn a bit more about both the DSpace 7 REST API and the DSpace 7 Angular User Interface. These resources are a great way to get more familiar with the new technologies in DSpace 7, and also great learning resources if you are a developer interested in taking part in a future DSpace 7 Sprint.

  • DSpace 7 REST API Workshop taught by Andrea Bollini (4Science), Terry Brady (Georgetown) and Tim Donohue (DuraSpace)
    • Workshop Slides (including exercises): https://tinyurl.com/or2018-dspace-rest
    • Exercises & online tutorial (work in progress): https://dspace-labs.github.io/DSpace7RestTutorial/
  • DSpace 7 Angular UI Workshop taught by Art Lowell (Atmire) and Tim Donohue (DuraSpace)
    • Workshop Slides (including exercises): https://tinyurl.com/or2018-dspace-ui
    • Workshop Wiki page
We hope you all are as excited about DSpace 7 as we are! As several individuals noted at OR2018, DSpace 7 is shaping up to be one of the most exciting releases we’ve had in years!

As always, we welcome your feedback or involvement! We’d also encourage your developers to join us on a future DSpace 7 Sprint (https://tinyurl.com/dspace7sprints). The more help we get, the quicker DSpace 7 will get released! If you have questions, feel free to get in touch via email or on our DSpace Slack.

The post DSpace 7 Updates from OR2018, Including a Recorded DSpace 7 Demo appeared first on Duraspace.org.

The 2018-19 Fedora Leadership Group & Community Nominations / DuraSpace News

Fedora is excited to announce the Leadership Group for the 2018 membership year.

Chris Awre University of Hull
Rob Cartolano Columbia University
Sayeed Choudhury Johns Hopkins University
Stefano Cossu The Art Institute of Chicago
Tom Cramer Stanford University
Jon Dunn Indiana University
Karen Estlund Penn State University
Declan Fleming University of California, San Diego
Maude Francis University of New South Wales
Neil Jefferies University of Oxford
Mark Jordan Islandora Foundation
Steve Marks University of Toronto
Rosalyn Metz Emory University
Tom Murphy ICPSR – University of Michigan
Este Pope Amherst College
Robin Ruggaber University of Virginia
Doron Shalvi National Library of Medicine
Tim Shearer UNC Chapel Hill University Libraries
Dustin Slater The University of Texas Libraries
Jennifer Vinopal The Ohio State University Libraries
Ben Wallberg University of Maryland
Evviva Weinraub Northwestern University
Jared Whiklo University of Manitoba
Maurice York University of Michigan
Patrick Yott Northeastern University

Members of the Leadership Group play a key role in setting the strategic direction and priorities of the project through the approval of the annual budget allocation and project roadmap, and by establishing the annual community direction.  Under Fedora’s governance, Leadership Group members represent Platinum member institutions or in-kind contributors, or are representatives of Gold, Silver, or Bronze member institutions who have been nominated and elected by DuraSpace Members in support of Fedora.  The Fedora community benefits greatly from the engagement of its Leadership Group and is excited to welcome these members.

The Fedora project is seeking two individuals from the community to be active participants in the future of the project by serving as members of the Fedora Leadership Group.

Beginning today, we invite anyone in the Fedora community, DuraSpace members (whose institution doesn’t already have a Leadership Group seat) or non-members of DuraSpace, to nominate an individual who you believe would be a good representative of the community (self-nominations are welcome).

Ideal candidates should be familiar with Fedora and have an interest in being engaged with key project decisions and the broader user community. It is also helpful if the candidate has fiscal or staffing responsibility within their organization and is able to bring the commitment, creativity, and dedication that the role calls for.

Learn more about the Fedora Leadership Group here.

Please submit your nomination using this form by June 25, 2018.  Self-nominations are welcome.  

Next Steps

At the end of the nomination process anyone nominated will be asked to submit a brief personal statement expressing why they would be a suitable candidate for the Leadership Group.  An election will follow at which time the Fedora community will be asked to vote for two candidates.

If you have any questions about the Fedora project governance or the nomination and election process please contact David Wilcox, Fedora Product Manager.

The post The 2018-19 Fedora Leadership Group & Community Nominations appeared first on Duraspace.org.

Seeking Expressions of Interest for Islandoracon 2019 / Islandora

The Islandora Foundation is seeking Expressions of Interest to host the 2019 Islandoracon.

The first Islandora Conference was held in August, 2015 at Islandora’s birthplace in Charlottetown, PE. For our second conference, we took the event to Hamilton, ON and more than 100 Islandorians came out to join us. In 2019 we will hold our third Islandoracon, and we're looking for a place.

If you would like to host the next Islandoracon please contact community@islandora.ca with your response to the following by July 20th 2018:

Requirements:

The host must cover the cost of the venue (whether by offering your own space or paying the rent on a venue). All other costs (transportation, catering, supplies, etc) will be covered by the Islandora Foundation. The venue must have:

  • Space for up to 150 attendees, with room for at least two simultaneous tracks, and additional pre-conference workshop facilities, with appropriate A/V equipment. Laptop-friendly seating a strong preference.
  • Wireless internet capable of supporting 150+ simultaneous connections, at no extra charge for conference attendees.
  • A location convenient to an airport and hotels (or other accommodations, such as student housing)
  • A local planning committee willing to help with organization

The host is not responsible for developing the Islandoracon program, pre-conference events, sponsorships, or social events, but their input is certainly valued.

The EOI must include:

  • The name of the institution(s)
  • Primary contact (with email)
  • Proposed location, with a brief description of amenities, travel, and other considerations that would make it a good location for the conference.
  • A proposed time of year. We do not have a set schedule, so if there is a season when your venue is particularly attractive, the conference dates can move accordingly.

The location will be selected by the Islandoracon Planning Committee, a working group of the Islandora Coordinating Committee.

If you have any questions about hosting Islandoracon, please don't hesitate to ask.

Call for Nominations: VIVO Community Leadership Group Seats / DuraSpace News

VIVO Community,

The VIVO Leadership Group is the strategic decision-making body for VIVO, and consists of representatives from member institutions and three community members. These three community members will serve on the Leadership Group for one year.

It’s time to hold nominations and elections for three community members, and we need your input. Here’s how it works:

  • Anyone affiliated with a VIVO member institution can nominate individuals  who you believe would be good representatives of the community.  Multiple people from the same institution can make nominations.
  • You can nominate one, two, or three people from the VIVO community using the Nomination Form.
  • You can nominate yourself (provided you’re affiliated with a VIVO member institution).
  • Please send your nominations before midnight on June 25.

Once we receive nominations, we’ll ask each interested nominee to submit a personal statement explaining why they are interested in serving on the Leadership Group and why they would be an ideal candidate.  An election will follow, and VIVO Community Liaisons will vote for three nominees.

Your voice is critical to VIVO, and this is your opportunity to help shape the VIVO Leadership Group. Send your nominations today!

Nomination Form

If you have any questions about the Community Leadership Group nomination or election process please contact Kristi Searle.

The post Call for Nominations: VIVO Community Leadership Group Seats appeared first on Duraspace.org.

Interested in writing for Lead Pipe? We’re calling for submissions. / In the Library, With the Lead Pipe

Do you have an idea, experience, or perspective that will contribute to library literature and conversations? If so, we want to hear from you. The Editorial Board of In the Library with the Lead Pipe is actively seeking submissions from all library viewpoints for consideration for publication in this journal.

Lead Pipe is an open access, open peer reviewed journal, and we strive to publish content spanning all aspects of librarianship. The majority of article proposals we receive tend to come from librarians in academic library contexts, but we want to widen our representation. Therefore, we’d particularly like to invite library workers at any level with public, school, and special library perspectives, as well as folks who work in archives and other arenas beyond librarianship (such as galleries and museums), to consider proposing your article ideas for publication here.

If you’d like more information about our submissions and publication process, check out our submissions guidelines. And if you’d like to chat about your ideas and Lead Pipe as a potential venue to share them, note that several of our editorial board members will be at the upcoming ALA Annual Conference in New Orleans. We’d love to hear what you’re thinking in terms of article proposals. You can tweet at us to find a time and place to meet up:

Keep an eye on the Lead Pipe Twitter feed to find out about times we’ll pop up to talk to potential authors. Or look for our buttons with the Lead Pipe logo that we’ll be wearing throughout the conference. We look forward to hearing the great article ideas you’ve been considering!

LITA @ ALA Annual 2018 – Top Technology Trends / LITA

If you are going to the conference, be sure you don’t miss the latest Top Tech Trends panel.

LITA Top Technology Trends
Sunday, June 24, 2018, 1:00 PM – 2:00 PM
Location: MCCenter 344

LITA’s premier program on trends and advances in technology features an ongoing roundtable discussion by a panel of LITA technology experts and thought leaders. The panelists will describe changes and advances in technology that they see having an impact on the library world and suggest what libraries might do to take advantage of these trends. This year’s panelists include:

  • Marshall Breeding: Session moderator, Independent Consultant, Library Technology Guides
  • Jason Bengtson: Assistant Director, Kansas State University Libraries
  • Laura Cole: Director, BiblioTech
  • Justin de la Cruz: Unit Head, E-Learning Technology, Atlanta University Center Robert W. Woodruff Library
  • Marydee Ojala: Editor-in-Chief, Online Searcher: Information Discovery, Technology, Strategies, Information Today, Inc.
  • Reina Williams: Reference Librarian and Education Coordinator, Library of Rush University Medical Center


More information including full bios and possible trends is available at the Top Tech Trends site.

Discover the more than 20 other LITA programs and discussions to make your ALA Annual experience complete.

Questions or Comments?

Contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

UXLibs Workshop: Adding Useful Friction to Library UX / Shelley Gullikson

Post-its with ideas on adding friction

This is a version of the content I presented as part of my workshop at UXLibs IV in Sheffield on June 6. Clearly, I talked way too much.

This is very much an exploratory workshop. I’m going to talk for a bit about an idea or concept and then you’ll have time to reflect and explore how that might apply in your own library. Then I’ll talk some more, you’ll reflect some more, and so on. We could easily do a lot of “think, pair, share” but I know that model doesn’t work for everyone – some of you need quiet time to think, some of you need to talk to think, and there are people in the middle. I’d like you to be able to think and explore in the way you need to. So, if you prefer to talk through your ideas with other people could you write a big T on a sticky and stick it on yourself? And if you prefer to think quietly to yourself, could you write a big Q on a sticky and put it on? If you don’t care, just label yourself a talker. When we get to the time when we’re reflecting and exploring, if you’re a talker, please seek out other talkers and let the quiet folks do their own thing. You can also change your letters as we go.

On to friction!

What do I mean when I talk about friction? Some people define friction in UX as anything that frustrates you, but I think of it more as something that slows you down – like in physics. You have an object sliding down an incline – more friction makes it go slower. And yes, heat or frustration can build up, but I am primarily talking about the slowing effect.

When people talk about friction in UX, it’s usually in the context of wanting to remove it. Because we don’t want to slow people down unnecessarily; we want to help them do what they need to do with the least amount of fuss. But often libraries do slow people down. We see buildings with a jumble of signage. We see websites with a lot of text and jargon. We have a lot of processes with way too many steps.

And I think that’s why many of us are drawn to UX. We see things like this and we want to make things easier for our users. Some of you may be familiar with Steve Krug. Don’t Make Me Think is a classic in web usability. Essentially, Krug says: people don’t want to think about your website, they just want to do what they need to do. When you go to the library website, you shouldn’t have to think too hard to find the hours. When you come into the library, you shouldn’t have to think too hard to find a toilet. It should be clear, it should be easy. I love Don’t Make Me Think.

But, today, I want to explore the idea that there may be times when we do want our users to think.

An example, though not from libraries. Some of you might remember from earlier this year, an alert went out to people in Hawaii. “Ballistic missile threat inbound to Hawaii. Seek immediate shelter. This is not a drill.” But it was a drill.

A shot of the screen the employee who made the mistake saw made the rounds almost immediately: a computer screen with a confusing list of links to alerts, warnings, and drills.

You can definitely argue that there was too much friction on that screen. But, as many argued in the days that followed, there was clearly not enough friction in the design of the system to prevent errors. It seemed so obvious that people designed a bunch of “are you sure” pop-ups that they thought could have helped slow down the employee. To help make him think.

Now, what on earth does this have to do with libraries? Our users are not going to be sending missile alerts from our library catalogues. But have a look at this example: clicking on a journal title launches another search that wipes out search results and history.

I do a search in Summon for user experience friction, limited to journal articles, full text, and in the last three years. I look at the results and see one from Journal of Documentation. What happens if I click that link? Let’s assume I notice this: Search within Journal of Documentation. I don’t want to do that. Let’s clear and get back to my results. I didn’t want to clear everything! Okay, I’ll use the back button. Got back to search within the journal, so back again. I just want to get back to my original results. But they’re gone. Once I click that link, I can’t go back. And okay, losing your search isn’t the same as triggering a missile alert. But it’s not great. This definitely needs a little more friction before a single click wipes out my search.
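
To make that concrete, here is a minimal sketch of the kind of confirmation friction that could sit in front of a destructive click. This is not Summon’s actual code; the handler, the hasActiveSearch flag, and the wording are all hypothetical.

```typescript
// Hypothetical sketch: confirm before a click that would discard the
// user's current search results. Not the real Summon interface --
// just an illustration of adding a confirmation step as friction.
function onJournalTitleClick(event: MouseEvent, hasActiveSearch: boolean): void {
  if (!hasActiveSearch) {
    return; // nothing to lose, so no friction needed
  }
  const proceed = window.confirm(
    "Following this link starts a new search within the journal and " +
      "discards your current results. Continue?"
  );
  if (!proceed) {
    event.preventDefault(); // keep the original results intact
  }
}
```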

So friction can come in the form of preventing people from making mistakes. Slow them down, maybe give them some extra information, so they don’t do something they don’t mean to do. But friction can have other uses as well.

At the Interaction17 conference, Christina Xu talked about friction in UX design in China. While she was working there, she found that sometimes she had to engage in a conversation by phone or chat with someone at a business before she could complete an online transaction. Even when she used the equivalent of Uber in China, called Didi, her driver would call to confirm her location. She found this weird but realized that this kind of friction was being used to establish accountability or trust; to help build a relationship. In an academic library context, maybe a student has to contact a subject liaison as a step in their research assignment. That could make it more likely that the student contacts their liaison again. My local public library makes users come in to a branch at least once a year to renew their library card. So friction can definitely be in the physical world as well.

(At this point, I asked workshop participants to think of their own libraries and where there may be spaces or services or interfaces where users could be helped by being slowed down a bit. They could brainstorm with others or think on their own, and they wrote their ideas down on post-its.)

It can be tricky to think about slowing users down, when we usually try to streamline things for them. So, I’m going to introduce another way to add friction in your library without slowing down your users. Instead, you can add friction for library staff.

At my library, we redesigned our online subject guides to try to make them less overwhelming for students. We moved from a tabbed design to accordions. In our user testing of this design, we found that more than 5 accordions was a little much for students to quickly scan. So, on the back end we built in a little barrier: the option to add more than 5 accordions is disabled in the editing interface.

The button to add more sections is disabled when you get to your 5th accordion. Staff are allowed to have more, but they have to make a request. And we’ve only had one person make a request for more accordions. That little bit of friction in the back-end design has been enough to keep people to 5 accordions which, ultimately, makes a better front-end design for our users.
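
Our guide editor handles this in its own interface; the snippet below is only a hedged sketch of what that back-end friction amounts to, with the selector names invented for illustration (only the limit of five comes from the testing described above).

```typescript
// Hypothetical sketch of the back-end friction: once a guide reaches
// five accordion sections, the "Add section" button is disabled and
// the editor is pointed to a request process instead.
const MAX_ACCORDIONS = 5; // the limit that tested well with students

function updateAddSectionButton(): void {
  const sectionCount = document.querySelectorAll(".accordion-section").length;
  const addButton = document.querySelector<HTMLButtonElement>("#add-section");
  if (!addButton) {
    return;
  }
  addButton.disabled = sectionCount >= MAX_ACCORDIONS;
  addButton.title = addButton.disabled
    ? "Guides are limited to five sections; contact the web team to request more."
    : "Add another section";
}
```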

On the physical side, at my library, we used to make students bring us the call number of books they wanted from our Reserves Room (a closed stacks area). If students came to the desk with just a title they were sent away to look up the call number. Now, we have a computer at the Reserves Desk and staff are encouraged to help students find what they need. They don’t have to, but since the computer is right there, they look like jerks if they don’t. Adding friction for staff can be a way of aligning staff work with user needs. You want to make it easy for staff to act in ways that support users. Or, to put it another way, you want to make it difficult for staff to act in ways that don’t support users.

(At this point, I again asked people to think about adding friction for staff in their libraries in order to encourage user-centred behaviours—though no electric shocks or trap doors, please. Again, they wrote their ideas on post-its.)

We’re going to change gears again and think about adding friction to improve inclusion. Many of my examples and ideas around this come from the wonderful book Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The book is about how a lot of design assumes a pretty narrow range of demographics or life experience. One of the driving forces behind the book was Eric’s experience of getting a Year in Review ad from Facebook, full of happy clip-art people dancing around a picture of his young daughter who had died from brain cancer that year. Definitely not a year for dancing. Eric doesn’t say this directly, but this kind of thing tells a user “our product, our service, it’s not for you.” And I don’t think we ever want to tell our users that the library is not for them.

So where does friction come in? Well, sometimes trying to cut down on friction in processes and interfaces can create a design or service that leaves out certain people. Maggie Delano describes using the period tracker Glow, which only gives these three options for why you’d want to track your period:

  • Avoiding pregnancy
  • Trying to conceive
  • Fertility treatments

Maggie says, “It’s telling women that the only women worth designing technology for are those women who are capable of conceiving and who are not only in a relationship, but … in a sexual relationship with someone who can potentially get them pregnant.” This left out Maggie and it leaves out a lot of people with periods. But three options fit so nicely on the screen! Adding more options is going to create complexity, it’s going to slow things down. But it’s also going to be more inclusive.

Another example is the radio button for selecting gender on a form. Even when it’s expanded beyond a binary choice, it’s problematic:

Gender: Female, Male, Other, Prefer not to say (from Sabrina Fonseca, “Designing Forms for Gender Diversity and Inclusion”)

Do you really want to make people self-identify as “other”? Is “Prefer not to say” the safest choice? But what if you’re actually quite happy to say but you’re not represented here? I would question whether you really need this information. But if you do, make it a free-form question for everyone. Don’t just make people who don’t fit the binary write in their answer; give everyone that same friction of choosing what to enter. And explain what the information will be used for, so someone doesn’t get outed unexpectedly. Yes, this will make your form wordier. Yes, it will slow people down. But this friction is worth it to make sure that you’re not telling groups of your users that the library is not for them.
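
As an illustration only (the field name, wording, and purpose text below are invented, not a recommendation for any particular form), a free-form gender question with an up-front explanation of use might look something like this:

```typescript
// Hypothetical sketch: one free-form gender question shown to everyone,
// with an explanation of how the answer will be used. All wording here
// is invented for illustration.
function buildGenderQuestion(): HTMLElement {
  const wrapper = document.createElement("div");

  const label = document.createElement("label");
  label.htmlFor = "gender";
  label.textContent = "Gender (in your own words; optional)";

  const help = document.createElement("p");
  help.textContent =
    "We ask this only to report aggregate statistics; it never appears " +
    "on your account and is not shared with staff.";

  const input = document.createElement("input");
  input.type = "text";
  input.id = "gender";
  input.name = "gender";

  wrapper.append(label, help, input);
  return wrapper;
}
```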

(At this point, I asked participants to think about where in their libraries they could add a little friction—on the user side or on the staff side—that could help with inclusion; that could implicitly or explicitly let a group of users know that they are welcome. Again, ideas were captured on post-its. After this, participants posted all of their ideas and spent time looking at others’ ideas.)

Now I want to talk for a little bit about friction and user research.

Remember that Hawaii missile warning and people saying that there should have been an “are you sure?” pop-up? It seems like a no-brainer. We see them everywhere for even trivial things. But because we see them everywhere for even trivial things, they don’t always work. In a book about the use of technology in medicine and hospitals, Robert Wachter reports that at one hospital, of 350,000 medication orders per month, pharmacists received pop-up alerts on nearly half of them (The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age). That’s a pop-up alert about every 18 seconds. Pharmacists, unsurprisingly, learned to click past those alerts without even looking at them. You can’t just add friction, you have to test it and test it in your users’ context. Is the friction creating a useful pause, or is it just frustrating?

Another aspect of friction and user research is what happens when users bring friction with them. Design for Real Life has a whole chapter on what they call stress cases – not edge cases but stress cases – and how stress depletes our cognitive resources. A stressed brain simply does not function as well. And for those of us in academic libraries, we can be certain that our users are coming to us stressed.

A 2016 survey of students in universities in my home province of Ontario found that 65% had experienced overwhelming anxiety in the past 12 months. 65% had experienced overwhelming anxiety. Some libraries are trying to include resources on their websites to help stressed students.

But how can you make sure your sites and services are usable by people under stress? Users who agree to participate in research or testing are generally not those who are experiencing overwhelming anxiety. And certainly you don’t want to create overwhelming anxiety for your participants! But you can generate a little bit of cognitive stress by having them remember a sequence of 8 letters and numbers or a sequence of 8 symbols, or you can have some conversational noise in the background. These fairly simple activities can mimic a stressed brain and let you know if your designs work okay for people under some stress.
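
If you want to script the memorization task, the generator itself is trivial; here is a minimal sketch (the character set and the exclusion of look-alike characters are my own assumptions, and only the length of 8 comes from the description above):

```typescript
// Minimal sketch: produce a random 8-character string of letters and
// digits for a participant to hold in memory during a task, as a light
// way to simulate cognitive load. The alphabet choice is an assumption.
function memorizationSequence(length = 8): string {
  const alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"; // avoids look-alikes like O/0 and I/1
  let sequence = "";
  for (let i = 0; i < length; i++) {
    sequence += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return sequence;
}

// Example: show the sequence to the participant before the task begins.
console.log(memorizationSequence()); // e.g. "K7XDM2QP"
```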

Although we generally want UX in libraries to be smooth and easy, there may be times when we want to slow people down a bit. To give them a bit of space to think: to help them avoid mistakes, to help them make connections. Or we might want to slow down our library staff to help them act in ways that serve users well. There might be ways friction can help our libraries be more inclusive, more welcoming. And we might want to add friction to our user research in order to mimic some of the stress our users are under when they come to us. We definitely should test any friction we add to our services, spaces or interfaces to make sure that we’re not actually creating frustration instead of that useful pause.

(Before people left the workshop, I promised to share the ideas generated in both workshops. There were so many post-its! But here are all of the ideas generated by the participants.)

I wonder if there should be cake….2019 and planning for MarcEdit @ 20 / Terry Reese

This past year has been a bit of a blur for me. Professionally, I’ve been tied up in some large projects at work that we are just starting to see significant benefits from; I’ve just come out of a year-long project to update MarcEdit; completed the second edition of a book I co-author (https://www.alastore.ala.org/content/building-digital-libraries-second-edition); and have been getting more involved with Ohio State’s university governance. Personally, I’m trying not to panic as my oldest becomes a senior and we start thinking about college and life outside of the nest. It’s been a busy but fruitful period. So, it came as a bit of a surprise the other day when I was preparing for a workshop and I realized that in 2019, MarcEdit will turn 20 years old. 2-0. My word…it doesn’t seem like it has been that long. But indeed, this project started when I was an undergraduate at the University of Oregon. It didn’t look like it does today – then it was just a couple of assembly libraries that I used to create/edit MARC records with; but this was its genesis. The following year, 2000, was the first year MarcEdit would get a GUI and become something I shared with the rest of the world…but there it is…20 years.

I keep thinking that at some point I need to take some time and write down some thoughts on what it means to be associated with a project this long, or to be fortunate enough to create something that has made a tangible impact on the profession…even if that impact is only a modest one. At some point, I’ll reflect a bit more on all this…and the people that made this work possible; especially in those early days when I was still trying to figure out the whole Windows thing (I still feel bad about all the times Kyle Banerjee would test new builds only to have his Windows registry deleted. He suffered so the community didn’t have to); and later when the community (especially George Mason and Ian Fairclough) took it upon themselves to organize in a way that I hadn’t been successful in shepherding. I’m not sure projects like this exist for this long without a long line of people who have either publicly or privately laid the foundation for the work.

But 20 years…I’m thinking that there needs to be cake. I’m not sure where or when yet (I have a couple of ideas); but I definitely think this needs to happen…with cake; preferably some kind of red velvet cake. As plans develop, I’ll definitely keep people in the loop. Till then, happy editing.

–tr

We are atomized. We are monetized. We are ephemera. Do we deserve more online? / Meredith Farkas


In March and April, I took about 5 weeks off from social media. I didn’t post anything to or look at Twitter, Facebook, or Instagram. I’d wondered if I’d feel disconnected or feel some irresistible pull like an addict to their drug of choice. To be honest, I didn’t really feel any of that. I didn’t miss it at all. I wondered about how people were doing, but I didn’t miss the stream of microposts on social media. I still read blogs via my RSS reader, so it’s not like I was 100% disconnected. While away, I’d missed a bunch of kerfuffles on the Internet, missed some big events in the lives of my friends, and missed meeting up with a friend who was visiting Portland (who could have emailed me had they really wanted to get together). There were a few moments when something happened or I had a funny thought that I normally would have posted, but I didn’t feel a sense of loss from not documenting that moment.

What I noticed more, especially when I returned, was that no one missed me. Maybe that’s the real lesson one gets from a social media fast. With a constant stream of information from people you know (and don’t), the absence of any one person isn’t really noticed. This became readily apparent when one of my close colleagues who frequently comments on my social media posts hadn’t even noticed I wasn’t online all that time. Ouch! There’s always other content to fill the void so that you don’t even notice there is a void. Does any one person really matter in all of that?

I’ve been thinking about the tenuous nature of connection in the social media age. It’s so easy to lose people we love in the stream or to believe that because we’re “connected” via social media that we’re much more connected than we really are. About a month ago, my first love died of cancer. Allan was Danish and we fell in love while I was studying in Copenhagen 20 years ago. It was a love like I’d never felt before. While I look back on my other past relationships now and can’t see what I saw in former partners, Allan became the yardstick against which I measured every other relationship until I found someone else who was like him in so many ways.

When Allan died, I hadn’t seen him in person in 10 years, but the thought of him not being on the earth anymore makes me physically sick every time I think about it. While I was on my social media hiatus, I missed the Facebook post where he shared that his cancer had taken a turn for the worse and I wish more than anything that I’d had the chance to reach out to him before he left this world to tell him how much I loved him. But I could just as easily have missed his post had I been on social media that day. The stream makes it so easy to miss the big important things in a sea of mundane, snarky, and funny posts. Everything — from a status update about lunch from an acquaintance to a post about a cancer diagnosis from a loved one — is weighted the same. Even after doing a major culling of my Facebook friends by over 50%, I still feel like I miss so much.

I feel his loss keenly. I can trace so many things I do, care about, or believe now back to him. I can’t listen to certain music without thinking of him. And it feels like with his death some part of my own life died; the part that was that 20 year old girl crazy in love in Denmark. Without him to remember it too — to confirm how remarkable and rare that time was for both of us — it almost feels like it never really happened. And instead of there being a hole where he once was, Facebook seamlessly fills that empty space with cat videos, snarky animated GIFs, and baby pictures. Allan can disappear. I can disappear. You can disappear. And social media just fills a void I don’t want to be filled.

I’ve been writing about missing the community that existed around blogging and my uneasiness with Twitter for 10  years now. I’m one of those old-fashioned weirdos who still uses an RSS reader. My husband and I both use Tiny Tiny RSS, which is installed on our server and functions just like Google Reader did. I still like subscribing to specific blogs or news sites and seeing content specifically from those places. I might discover something great on Twitter, but if it doesn’t have an RSS feed, I’m not going to follow it long-term. I’m a terrible multitasker so dipping into the stream throughout the day has never worked well for me (and I’m less close to old blogging buddies because of it to my great sadness). I miss too much. I take time each day to go through my RSS feeds and mark things I want to read more in depth for when I have time that week. In Twitter or on Facebook, I’ll see an article that interests me, but if I don’t have time at that moment to read it, I almost inevitably forget about it. Sure, I could use another tool to keep track of those things, but I don’t really want yet another tool to keep track of.

There is a lot that we gained with Twitter and Facebook. That is undeniable. But I think it’s important that we look at what we lost and ask ourselves if it was worth it.

Today, I read an interview with Jaron Lanier in The Millions (which I subscribe to via RSS) about his new book Ten Arguments for Deleting Your Social Media Accounts Now. This quote really resonated with me:

I think of a social media company, in particular Facebook and to a degree Google, as an existential mafia. They’re saying, you have to work with us or you effectively won’t exist. You’ll become invisible to everybody. Your very corporeality is in our hands, so give us a cut of your being. It’s a very strange moment. Ultimately, the power of a protection racket does rest with their ability to keep a community in fear.

It’s really worth reading the whole interview — a lot of food for thought on the compromises we feel we have to make to “exist” online in the Facebook/Twitter era. Other recently-published books on the subject include Siva Vaidhyanathan’s Antisocial Media: How Facebook Disconnects Us and Undermines Democracy and James Williams’s Stand Out of Our Light: Freedom and Resistance in the Attention Economy.

Thinking about criticisms of social media (and there seem to be a lot of books coming out on the subject right now), I remember the early critiques feeling very elitist to me. Crowdsourcing is a myth and devalues expertise. Reading online is destroying people’s capacity for long-form reading and sustained thinking. Remember Michael Gorman and “the blog people?” There was a lack of recognition that crowdsourcing can sometimes create things no expert could create alone. That social media had the power to bring (and keep) people together. That it could help talented people be discovered. That it could bring attention to abuses of power and power social movements for change. That it can sometimes help people find their community and  feel less alone.

But I think recent criticisms recognize social media’s power for good while also acknowledging the very real potential for damage. Like the weaponization of social media by hate groups and Facebook and Twitter’s reluctance to do anything useful about it. The manipulation of the public for political gain or political destabilization. The easy proliferation of fake news. The paradoxical increase in loneliness all of this connectedness can engender. The passive “social snacking” that’s the social equivalent of bingeing on Twinkies and never quite fills your emotional tank. The pressures, performance, and sometimes narcissism involved in creating and curating a representation of yourself online. The rage-filled pile-on Twitter can become. The monetization of the minutiae of our lives and thoughts and the realization that our content was not as private as we thought it was. The time spent on screens away (whether physically or psychologically) from our loved ones.

What was most frustrating about blogs was the distributed nature of the conversation, but moving to a centralized space destroyed the close sense of community, at least for me. In the move from blogs to the centralized ecosystem, what we gained in the ease of connection and the quantity of connections we lost in quality of those connections. And maybe I’m just old and cranky now, but what I want are deeper connections and conversations with people.

I’m not the only person missing the blogging ecosystem of 10-15 years ago. Chris Zammarelli wrote about missing the thoughtful writing and community we had in the era of the Carnival of the Infosciences (OMG remember that??). Dan Cohen makes the case for taking control of our social content and going “Back to the Blog.” Dan makes an interesting point about the sense of “ambient humanity” (love that term!) that keeps us coming back to Twitter and Facebook:

It is psychological gravity, not technical inertia, however, that is the greater force against the open web. Human beings are social animals and centralized social media like Twitter and Facebook provide a powerful sense of ambient humanity—the feeling that “others are here”—that is often missing when one writes on one’s own site. Facebook has a whole team of Ph.D.s in social psychology finding ways to increase that feeling of ambient humanity and thus increase your usage of their service.

And yet, as I mentioned, your individual presence or absence is of little consequence. And the things you write are as ephemeral as autumn leaves (which in the case of rashly posted Tweets might be a good thing). I wrote this about my frustration with the ephemerality of professional communication on Twitter and Facebook nearly seven years ago:

I know it’s futile to argue for a return to blogging as the primary means of professional conversation in social media. But I think it’s valuable to consider what we lose by replacing blogging with stream-based social media (not supplementing, but replacing). A loss of control, of history, of scholarly relevance and perhaps of deeper and more meaningful discussions (though I know I risk sounding like Michael Gorman with his “blog people” screed). There are things I post to Twitter that I think others might like to know about that I don’t feel merit an entire blog post. Twitter has a lot of advantages over blogs for a lot of things. But it is not an adequate replacement for the kind of thoughtful conversations one can have via blogs. There were a lot of blogs that I loved years ago that have become nearly (or truly) defunct as their authors have moved to Twitter or FriendFeed to have the majority of their professional conversations. I know it’s just the way things go, but I can’t help but feel some disappointment that it’s the way things are going.

We’re also giving ourselves — or at least our digital representations and content — to companies that don’t protect us in any meaningful way (from others or themselves). I want to go back to a world where we owned and had control over the means of production (if we wanted to self-host). I don’t like the atomization of our identities and our content. I don’t like how so many social platforms seem to exist to make the author — the source of the content — matter less even if it gets more people reading it. Authorship matters. I want to have deeper, more meaningful conversations via social media. There’s a lot of talk about the importance of mindfulness in our professional practice and personal well-being. In today’s social media landscape, maybe blogging is the “slow practice” we need to become more thoughtful, reflective, and intentional. In the wake of Kate Spade and Anthony Bourdain’s deaths, people are tweeting and writing on Facebook that “you matter” and yet our choices of social media often make the individual matter so much less.

The challenge would be for people to build better infrastructure that facilitates following blogs, following the distributed conversation across blogs, and making connections via blogging. We managed it in the early days of social media — could we do it now?

What would a humane, supportive, social networking platform/ecosystem look like for you? What is needed to make that happen? What is missing from your current social media diet? What would you not want to lose from Twitter/Facebook/Instagram/whatever? I know there’s no going back, but I also think we need to remember that we’re not stuck with Twitter, Facebook, etc. We can move forward.

Image credit: Ed Yourdon