Planet Code4Lib

John Wharton RIP / David Rosenthal

My friend John Wharton died last Wednesday of cancer. He was intelligent, eccentric, opinionated on many subjects, and occasionally extraordinarily irritating. He was important in the history of computing as the instruction set architect of the Intel 8051, Intel's highest-volume microprocessor and probably the most-implemented instruction set of all time, and as the long-time chair of the Asilomar Microcomputer Workshop. I first attended AMW in 1987, and have attended nearly every year since. I served on the committee with John from 2000 through 2016, when grandparent duties forced a hiatus.

On hearing of his death, I thought to update his Wikipedia page, but found none. I collected much information from fellow Asilomar attendees, and drafted a page for him, which is currently under review. Most of the best stories about John have no chance of satisfying Wikipedia's strict standards for sourcing and relevance, so I have collected some below the fold for posterity.


John at the memorial for Gary Kildall,
with Marianne Mueller and Mark Dahmke

Oblique Perspective

John was a founding member of the editorial board of Microprocessor Report, writing for it frequently. His opinion columns were often contrarian, and came to be called "Oblique Perspective". A collection spanning August 1988 to August 1995 includes, among others:
  • Architecture vs. Implementation in RISC Wars (8/88)
  • Unanswered Questions on the i860 (5/89)
  • The "Truth" About Benchmarks (5/18/90)
  • Does Microcomputer R&D Really Pay Off? (9/19/90)
  • Have The Marketing Gurus Gone Too Far? (5/15/91)
  • A Software Emulation Primer (10/2/91)
  • The Irrelevance Of Being Earnest (4/15/92)
  • Why RISC Is Doomed (8/19/92)
  • Brave New Worlds (12/9/92)
  • Breaking Moore's Law (5/8/95)
  • Is Intel Sandbagging on Speed? (8/21/95)
They're all worth reading; I've just hit the highlights.

The red Porsche 944

One of the "Oblique Perspectives" deserves special mention. In How To Win Design Contests (10/17/90) John propounds a set of rules for winning design contests and, in a section entitled A Case-Study In Goal-Oriented Design, shows how he used them to win a red Porsche 944! The rules are:
  1. Go ahead and enter. No-one else will, and you can't win otherwise.
    Another department staged a single-board-computer design contest, with development systems and in-circuit emulators worth thousands of dollars as prizes. Their most creative entry proposed to install computers in public lavatories to monitor paper towel and toilet paper consumption and alert the janitor if a crisis was imminent. No one could tell if the proposal was a joke, but it won top honors anyway.
  2. Consider what the sponsor really wants.
    So the best way to win is to reverse engineer the sponsor's intentions: figure out what characteristics he'd most like to publicize and then put together an application, possibly contrived, with each of these characteristics.
  3. Keep it really, really, really simple.
    The judges will have a number of entries to evaluate, and those they understand will have the inside track. ... It's far better to address a real-world problem the public already understands so they can begin to grasp your solution and see the widget's advantages immediately.
  4. Devote time to your entry commensurate with the value of the prize.
    The entry form may request a five-line summary of your proposal and its advantages, but this isn't a quick-pick lottery. If the contest is worth entering, it's worth entering right, and neatness counts.
John starts by applying rule 4:
The Porsche was worth about $25,000. Based on the turn-out of previous contests, I guessed Seeq would get at best four other serious entries, which put my odds of winning at one in five. That justified an investment of up to $5,000 worth of my time - about two weeks - enough to go thoroughly overboard on my entry.
He goes on to describe designing and prototyping a "smart lock". He concludes:
It seems my expectations were overly optimistic on two fronts. I'd underestimated the number of competing designs by an order of magnitude, and while my entry's basic concepts and gimmicks were all developed in one evening, it took a week longer than I'd planned to debug the breadboard and document the design.

Even so, I, too, was happy with the results. Some months after the contest ended - on my birthday, by happy coincidence - I got the call. My lock had been judged best-of-show; I'd won the car. The award ceremony - with full media coverage - was one week later.

So, if you notice a shiny, red, no-longer-new Porsche cruising the streets of Silicon Valley, sporting vanity plates "EEPRIZ," you'll be seeing one of the spoils of design contests. Fame and fortune can be yours, too, if you simply apply a little creative effort.

The Asilomar Microcomputer Workshop

AMW started in 1975 with sponsorship from IEEE. David Laws writes in his brief history of the workshop about the first one John attended and spoke at:
The last IEEE-sponsored workshop in 1980 featured a rich program of Silicon Valley nobility. Jim Clark of Stanford University spoke on the geometry engine that kick-started Silicon Graphics. RISC pioneer Dave Patterson of UC Berkeley covered “Single Chip Computers of the Future,” a topic that evolved over the subsequent year and led to his 1981 “The RISC” talk. In 2018 Patterson shared the Turing Award with John Hennessy of Stanford for their work on RISC architecture. Gary Kildall, who both influenced and was influenced by discussions at the workshop, described his PL/I compiler. Designer of the first planar IC and the first MOS IC, Bob Norman talked about applications of VLSI. Carver Mead capped this off with a keynote talk on his design methodology.
John started chairing sessions in 1983, and became Chair of the workshop in 1985, a position he continued to hold through 1997. He was Program Chair from 1999 through 2017. The format, the eclectic content, and the longevity of AMW are all testament to John's work over three decades.

The title of John's 1980 talk was "Microprocessor-controlled carburetion", which presumably had something to do with ...

Engine Control Computers

In Found Technology (4/17/91) John recounted having a problem with his Toyota in Tehachapi, CA and attempting to impress the Master Mechanic:
"In fact, I developed Ford's very first engine computer, back in the '70s." That should impress him, I thought.

He pondered briefly, then asked: "EEC-3 or -4?"

Damn! This guy was good. "I thought it was EEC-1," I began, trying to remember the "electronic engine control" designators. "It was the first time a computer ... "

"Nah, EEC-1 and -2 used discrete parts," he interrupted. "EEC-3 was the first with a microprocessor."

"That was it, then. It had an off-the-shelf 8048."

"You mean you designed EEC-3?" the Master Mechanic asked incredulously. "Hey, George!" he shouted to the guy working under the hood. "When you're done fixing this guy's car, push it out back and torch it! He designed EEC-3!"

So much for impressing the Mechanic. "Huh?" I shot back defensively. "Did EEC-3 have a problem?"

"Reliability, mostly," he replied. "The O2-sensor brackets could break, and the connectors corroded."

I beat a hasty retreat. "Those sound like hardware problems," I said. "All I did was the software."
Stan Mazor recounts the early history of engine control computers:
While an App Engineer at Intel, GM engaged me to help them design a car that used an on board computer to do lots of stuff, even measure the tire pressure of a moving vehicle. Little did I understand at the time that auto companies HATED electronic company's components, and their motive was to prove that computer chips COULD NOT be used. I only learned that late in their project work. When VW announced and demonstrated an on board diagnostic computer, the auto industry (USA) was hugely embarrassed and tried to catch up with VW.

Due to pollution and government mandates, Ford implemented a catalytic converter with a Zirconium Oxide sensor. Their 8048 computer measured the pollution, and servo'd the car's carburetor mixture of fuel and oxygen. (recall ancient cars had a manual choke). John and his associate were Intel app engineers on the project. As I best recall their program used pulse-width duty cycle modulation to control the 'solenoid' controlling fuel mix, (Hint: too lean, too rich, too lean, too lean, etc.)

Now comes the interesting story: Cranking the starter on a cold car, injected huge transient voltage into the CPU, and scrambled the program counter, and the app could start at any instruction, no that's not quite the issue.

The 8048 has 1 and 2 byte instructions, so the program counter could end up at any byte, yes, the middle of an instruction, and interpret that byte as an op code, even if it was a numeric constant, or half of a jump address !!!

No that's only half the story: It turns out that one of the 8048 instructions is irreversible under program control and the only way out (of mode 2), was to hit the reset line!!!. So the poor guys (John) had to re-write their code to insure that no single byte of object code could be the same as that magical (and unwanted) instruction operation code.
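To make the constraint concrete, here is a minimal Python sketch of the check John's rewrite had to satisfy. The opcode value and file name are placeholders I invented, not details from Mazor's account; the point is that, because the 8048 mixes 1- and 2-byte instructions, every byte of the ROM image, operands and constants included, must avoid the dangerous value.

    # Hypothetical forbidden opcode value; the actual 8048 byte isn't given above.
    FORBIDDEN_OPCODE = 0xA5

    def find_forbidden_bytes(rom: bytes) -> list:
        # A scrambled program counter can land on ANY byte, so scan them all:
        # opcodes, operands, and constants alike.
        return [i for i, b in enumerate(rom) if b == FORBIDDEN_OPCODE]

    with open("engine_control.rom", "rb") as f:  # hypothetical ROM image
        offenders = find_forbidden_bytes(f.read())
    print("unsafe offsets:", offenders or "none")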

Autonomous Vehicles

In 1988 Bruce Koball and John published A test vehicle for Braitenberg control structures. The abstract reads:
This paper describes the implementation of model vehicles with neural network control systems based on Valentino Braitenberg's thought experiment. The vehicles provide a platform for experimentation with neural network functions in a novel format. Their operation is facilitated by the use of a commercially available neural network simulator to define the network for downloading to the vehicles. 
Koball & Wharton, Figure 2
The block diagram shows a vehicle with right and left sensor arrays feeding an 8051 running neural network code from an EPROM. The network definition is downloaded into RAM via a serial link. The 8051 drives right and left motors. Their development environment was MacBrain:
The network model implemented in the current firmware is similar to that used by MacBrain, a neural network simulator for Macintosh computers, ... MacBrain allows the user to create and edit networks on screen, load and save network definitions to disk, and run simulations of networks while observing the changing activation of the various network units.
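As a rough illustration of the control scheme the block diagram implies (a sketch, not Koball and Wharton's actual firmware), each motor's speed can be computed as a weighted sum of the sensor activations, with the link weights being exactly what the serial download would replace:

    def step(sensors, weights):
        # sensors = (left, right) activations; weights[m][s] links sensor s to motor m.
        left, right = sensors
        return (weights[0][0] * left + weights[0][1] * right,   # left motor
                weights[1][0] * left + weights[1][1] * right)   # right motor

    # Crossed excitatory links give Braitenberg's "aggression" vehicle: the
    # stronger stimulus speeds up the opposite motor, turning the vehicle
    # toward the source.
    aggression = [[0.0, 1.0],
                  [1.0, 0.0]]
    print(step((0.2, 0.9), aggression))  # (0.9, 0.2): turns toward the stimulus

Swapping in a different weight matrix, which is what the serial download makes easy, is all it takes to switch behaviours.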
In "Further Work" they wrote:
One area of potential interest not addressed in initial version of the vehicle's network simulation model was the dynamic alteration of network parameters during operation based on sensory input and network activation. While any of the network's parameters could be changed in this manner, the most likely candidate would be the link weights. The alteration of link weights based on some criteria is a widely used model for "learning" in neural network experimentation and, indeed, is thought to be part of the mechanism for learning and memory in real biological nervous systems.

John's 15 Minutes Of Fame

Goal-Oriented Design Pays Off!
John was the subject of Reinvent the Wheel? This Software Engineer Deconstructs It, a 1999 New York Times profile by Katie Hafner. It includes this example of goal-oriented design:
In 1996, when the Letterman show came to San Francisco, Mr. Wharton made a calculated effort to get noticed. He figured out which seats the camera would be most likely to focus on and made sure that he was seated there. He made himself conspicuous by undoing his ponytail and donning a tie-dyed shirt. He looked "just like the sort of San Francisco hippie" the show's producers would expect to see, he said.

It worked. Mr. Letterman himself strode into the audience, asked Mr. Wharton his name, then asked if he would agree to take a shower in the host's dressing room -- an ongoing gag of Mr. Letterman's. Mr. Wharton happily obliged. The cameras followed Mr. Wharton, from his torso up, as he disrobed, stepped into the shower and lathered up. On his way back into the audience, clad in a white bathrobe, he managed to snatch a copy of the script.
John relived this experience for KPIX-TV on May 14, 2015.

A Crate of Furbies

Katie Hafner's profile of John included this section on Furby:
Mr. Wharton derives much of his satisfaction from the thrill of the puzzle itself, like the inner workings of the Furby. Apart from Dave Hampton, the inventor and engineer who created Furby and whom Mr. Wharton reveres, Mr. Wharton may understand Furby's innards better than anyone else. Although he and Mr. Hampton are acquainted, Mr. Wharton would never think to ask Mr. Hampton for a road map.

"That would be cheating," he said. "It would be like asking the guy who wrote the crossword puzzle for the answers."
Dave Hampton was a frequent attendee at AMW. One year he bought a large box full of Furbies as gifts for the attendees. At John's instigation, a few of us turned them all on, replaced them carefully in the box, carried the box carefully into the room where the meeting was underway, and shook the box. Whereupon they all woke up and started to talk to their friends. The sound of about a hundred Furbies in full chatter has to be heard to be believed.

Harold Evans' They Made America

Sir Harold Evans's book They Made America: From the Steam Engine to the Search Engine: Two Centuries of Innovators has a chapter (starting on page 402) about Gary Kildall and CP/M, based almost entirely on John's work for an abortive biography. Sir Harold spoke about the book at the 31st AMW in 2005 (Motto: Never Trust a Computer Workshop Over 30!).

The chapter included the story of how Tim Paterson's QDOS (Quick and Dirty Operating System), reverse-engineered from CP/M, became MicroSoft's 86-DOS, which led Paterson to file:
a defamation lawsuit, Paterson v Little, Brown & Co, against Sir Harry in Seattle, claiming the book's assertions had caused him "great pain and mental anguish". The court heard the detailed API evidence, and rejected Paterson's suit in 2007. US federal Judge Thomas Zilly observed that Evans' description of Paterson's software as a "rip-off" was negative, but not necessarily defamatory, and said the technical evidence justified Sir Harry's characterisation of QDOS as a "rip-off". 
Much of the technical evidence came from John, who was tasked at Intel with evaluating 86-DOS and showed that it was only a partial clone of CP/M, as described in the image of a letter to Microprocessor Report in 1994. The image comes from John's obituary by Andrew Orlowski in The Register.

Update: Here is the video Brian Berg linked to in this comment, with John talking about Gary Kildall at the dedication ceremony of the IEEE Milestone for CP/M.

blah blah blah: diversity and inclusion / Tara Robertson

It was such an honour to be invited to speak at National Digital Forum in Wellington. This was the biggest talk I've ever done, and it's the first talk I've done on diversity and inclusion. I surprised myself by how emotional I got at the end, and it couldn't have been a safer place to share my ideas and my feelings.

The talk was recorded. I’ll add the video once it’s up.

blah blah blah: diversity and inclusion

title slide

Kia ora koutou
Ngā mihi nui ki ngā tangata whenua o tēnei rohe.
Ko Tara Robertson ahau, nō Vancouver ahau.
Tēnā koutou katoa.

[Greetings to you all. Warm acknowledgements to the tangata whenua, the people of this region. I am Tara Robertson; I am from Vancouver. Greetings to you all.]

(Thank you Courtney Johnston for the mihi and to Georgie Ferrari who recorded it on her iPhone so I could practice it again and again!)

I am so excited and delighted to be invited here. I arrived from Canada about 10 days ago and this whole trip has been full of wonderful and serendipitous connections. It’s been amazing to reconnect with old friends and colleagues. I appreciate all the hospitality that has been extended to me and my wife–thank you so much Fiona and the rest of the organizing committee. We feel really welcomed and taken care of.

Finding the right title is something that I’m not very good at. I’m going to mostly talk about my work at Mozilla but I’m going to take some detours and blah blah blah.

Thank you to:

I’m one of those annoying extroverts who needs to think out loud. I appreciate the generosity that all of these people have extended me. These people are friends, colleagues, comrades, librarians, sex worker activists, academics, feminists, queers and artists. I want to acknowledge and thank all of these people up front as extended feminist citation practice.

I’m standing on the shoulders of these giants.

Here’s the links I’m going to reference:

bit.ly/tara-ndf

 

 

 

 

 

 

http://bit.ly/tara-ndf

Hello, I’m Tara…

Mr. PG (photo used with permission from Tourism Prince George)

I was born in Vancouver and grew up in a logging town called Prince George. Prince George is 800km north of Vancouver, at the junction of the Nechako and Fraser rivers. It is on the traditional territory of the Lheidli T'enneh, which means the people of where the two rivers flow together. Growing up, most towns within a 10-hour drive in any direction didn't have a McDonald's; Prince George, population 75,000, had four. This is Mr. PG, the town mascot. 8m tall and originally made of wood, he rotted; the replacement is built to last out of fiberglass and sheet metal.

My mom is Japanese-Canadian and my dad is white, of Scottish and Irish ancestry. I’ve lived in 7 different countries including Scotland and Japan–partly to learn about the world but I think I was also looking for a sense of belonging and home. Being mixed race and queer means I’ve spent most of my life feeling like I don’t belong and that I don’t fit. This has also given me a first hand, personal view of group dynamics–I see things that many people in the majority groups do not.

In 2009, I moved here with the intent to spend a year in Wellington. I’m grateful that Courtney Johnston hired me on contract, to work on the National Library’s website. To be honest, I was a bit crap. I was trying to figure out a bunch of things in my life and wasn’t the greatest employee. I made some colossal errors, including taking the website down 3 times. I had planned to stay here for a year, but got homesick after 6 months during a cold and wet Wellington winter. The silver lining of feeling homesick was that I finally realized where my home was.

My home

Vancouver skyline

Vancouver has been my home for 15 years. The Pacific Ocean and the mountains feel like a giant hug. Old friendships and community connections also root me in Vancouver. Google Maps says it takes approximately 17 hours to fly here from there.

Before Mozilla I was a librarian for 12 years working mostly in post-secondary institutions. I was drawn to libraries because I'd volunteered in activist and feminist libraries and care deeply about access to information. People often ask me about my odd career path from libraries to doing diversity and inclusion work in the tech sector. I was active in the library technology community where I led some work to make our conferences safer and more inclusive. For the last 5 years of my librarian career I managed an accessibility organization that served students with print disabilities by format shifting their textbooks into digital formats that they could use. I'm still very passionate about accessibility and universal design. I love that the NDF organizers care about accessibility and communication access. The interpreter Tania and I also know each other from 16 years ago when we lived in Hokkaido, Japan. It feels really special to have her interpreting my words into NZ Sign Language.

I’ve been at Mozilla just over a year. As the Diversity and Inclusion Strategic Partner I’m the data person on our team. I’ve been building out our infrastructure so we can measure progress on diversity metrics. I partner with different parts of the organization on specific strategies for cultural inclusion. I’ve led projects on trans inclusion and continue to advocate for accessibility.

Mozilla!

Mozilla: keep the web open and free written above a big eye

Mozilla has 1200 staff and 10,000 volunteer community members worldwide. Our mission is to ensure the Internet is a global public resource, open and accessible to all. The way we do this is with open source products, like the Firefox web browser. If you're not using Firefox I suggest you give it a try, as we relaunched Firefox as Firefox Quantum last fall. It's fast and we don't do bad things with your data.

Mozilla is a company that has one shareholder, the not-for-profit Mozilla Foundation.

The Mozilla Foundation does awesome work on policy, publishes the Internet Health Report, hosts MozFest in London, and offers fellowships to 26 technologists, activists, and scientists from more than 10 countries, including New Zealand. This year our Fellows include:

  • A neuroscientist building open-source laboratory hardware.
  • An artist and maker who is looking to make weird projects that can only really live on the decentralized web, and to build tools and tutorials to help other people make even better, weirder things.
  • and Sam Muirhead, here in Wellington. Sam is working on an open source approach to the creation and adaptation of illustrations, comics, and animation. The aim is to support international activist networks running digital campaigns in diverse cultural contexts — enabling local chapters to speak with their own creative voice, while building solidarity and sharing resources across the network.

I got to meet this cohort of Fellows in Toronto and they are one of the most interesting groups of people I’ve ever met. I’m so excited about the change that they’re making in the world.

Whose voices are missing? How do we include these voices?

These are two questions that have guided my work for the last 10 years.

In most social situations, I think it’s always interesting to observe:

  • Who is in the room?
  • Who is at the table?
  • Who speaks a lot?
  • Who has social capital?
  • Who feels welcome?
  • Whose ideas are respected and centered by default?

I think even more interesting is to note:

  • Who is missing?
  • Who is sitting on the margins?
  • Who doesn’t feel welcome?
  • Who has to fight to have their viewpoints heard and respected?

How diversity makes us smarter

5 women of colour sitting around a meeting room table (photo from www.wocintechchat.com)

For groups that value innovation and new ideas, diversity is key. There’s plenty of social science research that demonstrates this but one of my favourite articles is by Dr. Katherine Phillips, Professor of Leadership and Ethics and Senior Vice Dean at Columbia Business School. Her article How Diversity Makes Us Smarter in Scientific American is an accessible summary of some of the key research in this area.

Dr. Phillips says that when we’re around people like us, whether it’s people who are the same race, gender, have the same political viewpoints as us, it leads us to think we all hold the same information and share the same perspective. When we hear dissent from someone who is different from us, it provokes more thought than when it comes from someone who looks like us. Diversity jolts us into cognitive action in ways that homogeneity does not. Simply by being in the presence of someone who is not like you, you will be more diligent and open-minded. You will work harder on explaining your rationale and anticipating alternatives than you would have otherwise.

There’s a couple of other important points in Dr. Phillips’ article. While diverse groups performed better than homogeneous groups they also had more conflict and enjoyed working together less. As someone works in D&I this means that as we build more diverse teams we also need to also build people’s skills on understanding unconscious bias, giving and receiving feedback and communicating when there’s conflict.

Mozilla’s mission is to ensure the Internet is a global public resource, open and accessible to all–how can we do that if we don’t have everyone at the table building the tools to do this? It’s not just about diversity, people need to feel that they can bring their whole selves to the table and that difference will be accepted and valued. This is the inclusion piece.

What is something that someone has done to make you feel included?

Think (1 min) Think quietly and write down your idea. Pair (2 min) Find someone you haven't worked with yet and share your ideas.

So, I want you to think about something that someone did to make you feel included. The example you think about can be from work, social, school, family, church, sports team…whatever. I’m going to give you 1 minute to quietly think about this and to write your answer down.

OK, great! I want you to get into groups of two and share what you wrote down. You have 2 minutes. Go!

Share (4 min)

In a group of 4 share your ideas and pick one to share with the whole group. Add your ideas to this doc: bit.ly/NDF-2018

OK, I’m going to change the question slightly now. The question is: What can we do to make this community even more inclusive?

I want you to get into groups of 4, discuss this question and write down your group’s ideas in the Google Doc at bit.ly/NDF-2018

You have 4 minutes. Go!

(Thanks to the people who organized the responses! I love librarians!)

Diversity is the mix of people

diversity is the mix of people

At the start of our D&I journey at Mozilla we did 20 focus groups with Mozillians. We heard about many diversity dimensions in our findings and they have shaped the way we define diversity. Diversity is all the things that make us who we are…it is our specific, unique, beautiful mix of people.

In the top right hand corner there’s MoFo and MoCo. MoCo is our internal shorthand for the Mozilla Corporation. Internally we call people who work for the Foundation MoFos.

Inclusion is getting the mix to work

Inclusion is getting the mix to work

And then, what is inclusion? We Mozillians believe inclusion is getting our specific mix of people to work well together, to invite voices forward, to speak boldly but respectfully, and listen intently. Inclusion is about how each of us wants to be treated.

Quote from Mitchell Baker


This is a quote from Mitchell Baker, our Chairwoman.

Mozilla’s mission is to build the Internet as a global public resource, open and accessible to all. ‘Open and accessible to all’ implies a deep commitment to inclusion, and to building inclusive practices. As part of this commitment we describe a set of ‘behaviors of inclusion’ that we aspire to. These are set out in Mozilla’s Community Participation Guidelines.

Community Participation Guidelines (CPG) http://mzl.la/cpg

2 Black women sitting on a couch, in conversation (photo from www.wocintechchat.com)

The CPG is the Code of Conduct at Mozilla. It outlines both behaviours we want to see and behaviours that are unacceptable.

The following behaviors are expected of all Mozillians:

Be Respectful

Value each other’s ideas, styles and viewpoints. We may not always agree, but disagreement is no excuse for poor manners. Be open to different possibilities and to being wrong. Be kind in all interactions and communications, especially when debating the merits of different options. Be aware of your impact and how intense interactions may be affecting people. Be direct, constructive and positive. Take responsibility for your impact and your mistakes – if someone says they have been harmed through your words or actions, listen carefully, apologize sincerely, and correct the behavior going forward.

Be Direct but Professional

We are likely to have some discussions about if and when criticism is respectful and when it’s not. We must be able to speak directly when we disagree and when we think we need to improve. We cannot withhold hard truths. Doing so respectfully is hard, doing so when others don’t seem to be listening is harder, and hearing such comments when one is the recipient can be even harder still. We need to be honest and direct, as well as respectful.

I love that this is written in plain English. Recently I found myself dragging my feet on having a hard conversation with someone I care about at work. When I was practicing for this talk I heard myself saying "We cannot withhold hard truths. We need to be honest and direct, as well as respectful." This was the nudge I needed to have this conversation. Looking back, I wish I'd had it about a month before I worked up the courage to do so.

The CPG also outlines behaviours that are not tolerated. These include:

  • violence
  • threats of violence
  • personal attacks
  • derogatory language
  • disruptive behaviour (like heckling speakers)
  • and unwelcome sexual attention or physical contact.

This includes touching a person without permission, including sensitive areas such as their hair, pregnant stomach, mobility device (wheelchair, scooter, etc) or tattoos. This also includes physically blocking or intimidating another person. Physical contact or simulated physical contact (such as emojis like “kiss”) without affirmative consent is not acceptable.

I love that the CPG includes these concrete examples–some of them I hadn’t thought about before.

The CPG also includes information about consequences of unacceptable behaviours and information on how to report. It is open licensed under a CC Attribution Sharealike license.

The work we all do has a ripple effect in the world. Mozillians in Brazil used our CPG as the base of their open letter to a JS conference to call out a transphobic incident. And a couple of weeks ago, the SQLite community adopted our CPG as their code of conduct.

Open source is “startlingly white and male” No rockstars. No ninjas.

3 lego ninja figurines (photo from https://flic.kr/p/9hu7yA)

In an article in Wired titled Diversity in Open Source Is Even Worse Than in Tech Overall Klint Finley writes:

…even though users of the open source software present in countless products and services are now as diverse as the internet itself, the open source development community remains startlingly white and male—even by the tech industry’s dismal standards.

I had a lot of imposter syndrome throughout the application process for Mozilla. I was just a librarian at a college in Canada that no one had heard of. Who did I think I was applying to work for Mozilla? There were 3 sentences in the job posting that made me apply:

  • You demonstrate a history of working in a collaborative and open manner—whether that be in open source projects or simply openly discussing projects and questions.
  • You should apply even if you don’t feel that your credentials are a 100% match with the position description.
  • We are looking for relevant skills and experience, not a checklist that exactly matches the position itself.

Of course this was by design. Knowing that open source skews white and male, requiring open source experience would limit the pool of people who would choose to apply, and likely some excellent candidates would self-select out. The key experience is open collaboration, not open source experience.

We also use a tool called Textio to make sure that our job postings use balanced language. Thankfully we don't post job ads for code ninjas and rockstar developers anymore.

Debiasing hiring

red curtain on a stage (photo from https://flic.kr/p/aMeBa8)

In the 1970s, top orchestras in the US were only 5% women. At that time there were lots of reasons given for this including:

  • “women have smaller techniques than men,”
  • "women are more temperamental and more likely to demand special attention or treatment," and that
  • “the more women, the poorer the sound.”

Zubin Mehta, conductor of the Los Angeles Symphony from 1964-78 and of the New York Philharmonic from 1978-90, said, “I just don’t think women should be in an orchestra.” (Goldin and Rouse, p 719)

By 2000, orchestras were up to almost 30% women. Part of the reason for the change was the introduction of "blind auditions", where the musicians literally auditioned behind a curtain so that the panel couldn't see them. They were only assessing candidates based on how they sounded. They found that even with the curtain there were other telltale signs, like the click clack of women's high heels. They either added a carpet or got women to take their shoes off and had a man make clomp clomp clomp noises with his shoes. Now most US orchestras are 40-50% women, though there are very few women who are conductors or who play in the brass section. In researching this I learned about "the brass ceiling".

At Mozilla our version of the blind audition is a tool called HackerRank. This enables hiring managers to evaluate candidates based on their code, not their perceived gender or race, or the university they graduated from. We started using HackerRank to select candidates for our internships. There was more than a 4x improvement in the first two years of HackerRank; we went from:

  • From 2 women to 13 women
  • From 7 colleges to 27 colleges + 1 code academy
  • 61% of the 2017 cohort were women and/or People of Color

Meritocracy

Words Matter – Moving Beyond “Meritocracy”

photo credit: https://flic.kr/p/aLeBxF

I’ve been involved in open source projects for more than 10 years. When I first got involved I really bought into the idea of a meritocracy, which means those with merit rise to the top. Merit is based on your contributions, talent and achievements, and not on your job title, the company you work for, or the university you graduated from. I now see that meritocracy has a tonne of bias baked into it. We come with different privilege, access to resources, tools, and technology. It’s not a level playing field.

Last month Mozilla stopped using meritocracy as a way to describe our governance and leadership structures. This was a big deal. Emma Irwin, our D&I community lead writes “From the beginning of this journey to a more inclusive organization, we have been thinking about the words we use as important carriers of our intended culture and the culture we wish to see in the broader movements we participate in.”

Mitchell says:

I personally long for a word that conveys a person’s ability to demonstrate competence and expertise and commitment separate from job title, or college degree, or management hierarchy, and to be evaluated fairly by one’s peers. I long for a word that makes it clear that each individual who shares our mission is welcome, and valued, and will get a fair deal at Mozilla – that they will be recognized and celebrated for their contributions without regard to other factors.

Sadly, “meritocracy” is not that word. Maybe it once was, or could have been. But not today. The challenge is not to retain a word that has become tainted. The challenge is to build teams and culture and systems that are truly inclusive. This is where we focus.

External diversity disclosure

"We are sharing our results to date to be transparent and hold ourselves accountable to our global community as we strive to build a more diverse and inclusive organization that reflects the people we serve. We aim to create a working environment where everyone can thrive and do their best work. We are not where we want to be, and have a lot more to do." Chris Beard, CEO Mozilla Corporation

In April we did our first ever external diversity disclosure. This is voluntary and we’ve joined about 30 tech companies that have published high level demographic data. As of the end of last year women made up 24% of Mozilla overall, 33% in leadership, 13% in tech roles. Underrepresented minorities (Black, Latinx, Indigenous folks) in the US made up 7% of Mozilla overall, 0% in leadership, 6% in tech roles.

Our CEO Chris Beard said: “We are not where we want to be, and have a lot more to do.” I appreciate this intellectual honesty and transparency.

I’m excited for us to publish the 2018 update so we can share our progress.

Librarianship: startlingly white

91% of academic librarians in Canada are white, no demographic data for NZ, 88% of librarians in the US are white

We know that librarianship is a female-dominated profession, but there's not much data about the racial makeup of librarians in Canada, NZ and the US.

In Canada there was one study, done by the Canadian Association of Professional Academic Librarians of, not surprisingly, academic librarians. They collected 1730 names and email addresses by looking at college and university websites. Of the 1730 people they contacted, they received 904 responses. 91% of respondents were white. Only 2% of respondents identified as Indigenous–First Nations, Metis or Inuit.

Being in a community of librarians I’m often a lazy researcher. I’ll do a quick search for something then reach out to someone I know who is more expert in the topic than me. When Fiona asked if I had any questions about the NZ context I asked her my demographic question. She in turn used the same methodology and reached out to LIANZA, Te Rōpū Whakahau and RNANZ. No one was aware of demographic research that had been done in this area.

According to ALA’s Diversity Counts report in 2009-10, 88% of credentialed librarians in the US are white. Thanks to Barbara Chawner, Leslie Kuo and April Hathcock who pointed me to this report.

According to ALISE: Library and Information Science Education Statistical Report in 2015, 79% of students at ALA accredited universities in the US are white. This means that the pipeline for future librarians is only slightly more diverse than the workforce.

The sparse or non-existent data tells a story by what's missing. We measure what we care about. I hope that our library associations or researchers will take on this important work. We need to know what our baseline is and we need to be able to track change over time.

Should the MLIS/MLS be a requirement for all librarian jobs? No.

Should the MLIS/MLS be a requirement for all librarian jobs? No.

If we go back to the arguments around diversity and innovation, working to make our workplaces more diverse is something that libraries must do to survive and be relevant. There are additional arguments about reflecting the diversity of our user groups and society. Also–it's the right thing to do.

Seeing the lack of racial diversity in the library school student data, which is our pipeline, we need to rethink the MLIS/MLS as a requirement for all librarian jobs. We need to articulate the core competencies for what is important in libraries now and broaden our view of whose qualifications are relevant. We need to recruit from a more diverse pool of candidates. I'm not talking about lowering the bar, rather being more critical of what libraries need, which might raise the bar.

We need to stop talking about cultural fit on our hiring committees. Culture fit means that we’re perpetuating a monoculture of people who look just like us and think just like us–this isn’t what we need to be relevant now, or in the future.

In addition to rethinking our hiring pool, we need to build in additional scaffolding so that people of colour can imagine a future for themselves in libraries, where there is mentorship and the path to promotion is clear. As diversity and inclusion are intertwined, we need to work to change the culture of libraries so that people of colour can bring their full selves to work and that difference will be valued. This will mean some hard and necessary conversations about our culture and whiteness.

Developing a culture of consent

postcards: Yes way. No way.

I’m going to shift gears and talk about consent now. code4lib is a library technology conference and community where I feel at home. In 2015 I proposed that we ask speakers for permission to livestream their talks and that we use coloured lanyards as a visual shorthand to communicate people’s desire to be in photos online. Red meant absolutely no photos, green meant photos are fine, and yellow meant you needed to ask. (blog post)

Some of the initial comments from men who had been in the community longer than me bummed me out. Some of those comments included:

  • “This needs to be opt out, not opt in.”
  • “I enjoy taking candid photos of people at the conference and no one seems to mind.”
  • “My old Hippy soul cringes at unnecessary paperwork. A consent form means nothing. Situations change. Even a well-intended agreement sometimes needs to be reneged on.”

I was able to get enough support to get this off the ground. Another woman of colour, Ranti Junus, helped me pull together a consent form and we did the work of talking to all of the speakers. Thankfully things have changed a lot and this is now standard practice at code4lib and many other conferences have followed suit.

Consent and digitization ethics

5 red umbrellas in a tree (photo credit: https://flic.kr/p/kAT3Ws)

Consent is something that’s really important to me as a feminist. I want to take a quick detour and share a personal story.

In Spring 2016 I came out in my professional life as a former sex worker. I know what it's like to have content about myself online that I didn't consent to. In my case, it's a newspaper article that appeared in a major Canadian newspaper that identifies me as a sex worker and a librarian. For most of my career I'd been terrified that my employer or my colleagues would find this out. We live in a judgmental society where there are many negative stereotypes about sex workers. I was worried that this would undermine my professional reputation.

I think that we would all agree that open access to information is a good thing. If you remember, this is the reason I became a librarian. However, over the last couple of years I’ve come to realize that this isn’t an absolute and that there are some times where it’s not appropriate or ethical for information to be open to all.

In 2016 I learned that Reveal Digital, a nonprofit that works with libraries, digitized On Our Backs, a lesbian porn magazine that ran from 1984-2004. For a brief moment I was really excited — porn that was nostalgic for me was online! Then I quickly thought about friends who appeared in this magazine before the internet existed. I was worried that this kind of exposure could be personally or professionally harmful for them. There are ethical issues with digitizing collections like this. Consenting to a porn shoot that would be in a queer magazine with a limited print run is a different thing to consenting to have your porn shoot be available online.

For a year I kept digging and researching this topic—I visited Cornell University’s Rare Book and Manuscripts Collection and found the contributor contracts, learned a lot more about US copyright law, and most importantly I talked to queer women who modeled for On Our Backs about their thoughts and feelings about this.

Quote from an anonymous model

When I heard all the issues of the magazine are being digitized, my heart sank. I meant this work to be for my community and now I'm being objectified in a way that I have no control over. People can cut up my body and make a collage. My professional and personal life can be high jacked. These are uses I never intended and still don't want.

This is a quote from one of the models from an email to me.

She writes: “People can cut up my body and make a collage. My professional and personal life can be high jacked. These are uses I never intended and still don’t want.”

I was successful in getting this collection taken down from Reveal Digital by publicly questioning the ethics of digitization projects like this and amplifying the voices of models who appeared in On Our Backs.

There are other culturally sensitive materials that should not be wholly digitized and made available through open access. When I was researching this topic, Stewart Yates pointed me to NZ Electronic Text Centre’s community consultation and report on digitizing the book Moko: or Maori Tattooing that was published in 1896.

Inclusive event planning

red lanyards

OK! Back to red lanyards!

When I came to Mozilla I was delighted to see that we had a way to opt out of photos, even during work events.

Here’s a bit from a blog post from Brianna Mark, our Senior Event Planner:

Like many of the people who use Firefox, our employees value being able to choose — with clarity and confidence — what information they share with whom. One of the ways we look out for this, when hosting our All Hands events, is by offering our attendees the choice of a white or red lanyard. White lanyards mean you are okay being photographed. A red one means you are not. Wearing a name badge is required during our events so a colored lanyard is a very visible way to communicate a preference without having to say a word. It also makes it easy to spot and remove any photographs that may have been taken by mistake.

Like with our work, Mozilla’s values don’t necessarily tell us what to do but rather remind us of how we should do it. Making red lanyards available to our employees and their families as part of our semi-annual events is a small but tangible manifestation of just what we mean.

Pronoun stickers

text: This year we have pronoun stickers available for badges. These stickers are optional. If you choose you can put one on your badge, as a way to let people know what pronoun to use when referring to you. Please show respect for others by using the correct pronouns to refer to them--and by not making assumptions about what pronoun to use based on someone's appearance. If you don't know what pronoun to use, it is better to ask than to use the wrong pronoun. Thanks for helping make All Hands a safer space for everyone attending.

I love how we keep iterating on our culture. At our last All Hands Brianna added pronoun stickers for people to add to their name badges. I like that this is something we can all do to make our culture more inclusive.

This is another example of how our actions create ripples in the world. After seeing this photo on social media, a labour union adopted this idea for their conference.

All Hands

Foxy with lots of Mozillians in the background (photo from https://flic.kr/p/pMhPcq)

So, I’ve mentioned All Hands a few times now. All Hands is our twice yearly meeting where we all come together in person.

Mozilla staff are in 16 countries and 40% of our workforce is remote. All Hands is a critical part of building the connective tissue that allows us to work well together the rest of the year.

The week after I get home to Canada, I’m heading to Orlando for our next All Hands.

This past summer All Hands was in SF. The big event is the plenary session where our senior execs talk about where we’re at and where we’re going. Imagine 1200 of us in this giant hotel ballroom…

San Francisco All Hands – Lauren's talk

In between the executive presentations, regular staff read thank you emails from our users and shared other short snippets.

This was the short snippet between the Chief Marketing Officer and the Chief People Officer.


Transcript:

Hi. My name is Lauren Niolet. I work on lifecycle marketing out of my home in North Carolina. I recently sent a letter to Jascha, who you just met, and I’m going to share it with all of you now.

Jascha, you might recall a conversation we briefly had at Austin All Hands about some interesting changes in my life. But just to put a label on it, I’m transitioning my gender presentation to female. This has been a lifelong time coming. While I wouldn’t say changing genders is anything close to the easiest thing I’ve ever done, this ongoing process has already been one of the best. I’ve been asking colleagues, one or two at a time, to start calling me Lauren, and referring to me with feminine pronouns. I’d like for you to do the same. Don’t worry about slip ups. I forget at least once a day and it’s my name.

Like any self-respecting marketer, I’m working with HR on a go-to-market strategy to take this news big. That is, by the way, highfalutin talk for an email to all of marketing. But I’m writing to give you an early heads up.

I do want to mention that your personal and professional commitment to making Mozilla marketing a safe space that values all people was a huge factor in my decision to begin transition. As a member of the group that worked on team norms, I’m very aware that things here weren’t perfect. But I also know that after I began living authentically, I would feel respected and protected at Mozilla. And the work I do would be more important than my pronouns. You should know how much of an incredible impact your commitment to these values can have on one individual life. Thank you just doesn’t seem to capture it.

Thank you.

Lauren moved me to tears–and I wasn’t the only one in the audience who was crying. I have deep admiration for her courage.

There was also an amazing feeling in the room. After the loud cheers I could feel people's careful attention in the way they leaned forward and listened.

I wrote guidelines to support staff who are transitioning their gender at work. Initially I intended for it to be a simple list of places where one would need to update usernames and gender markers, but it became more comprehensive to give context to understanding gender more broadly, for managers to understand their responsibilities, and for all staff to understand how they can make Mozilla a more welcoming and inclusive place. I heard from managers that they wanted to do the right thing and were worried they might make a mistake and hurt someone. So, I organized some training to help our staff level up their knowledge and comfort in being inclusive of trans and non-binary colleagues. 180 people RSVPed to attend the sessions, and the recordings have been viewed over 300 times in just a couple of weeks.

Mozillians care and want to learn more and do better.

What is the most important thing in the world?

What is the most important thing in the world? It is the people, it is the people, it is the people.

As of yesterday morning I didn’t have an ending to this talk and was starting to get a bit worried.

Fiona organized two tours for us at the National Library. Michael Edson had a question about Māori worldviews. Just then Bella, a Māori elder, was walking by. She was very generous with her time and explained some things about her culture. One of the things she said stuck in my head and heart, and I realized it was the thread that ties this whole talk together.

What is the most important thing in the world?

He tāngata, he tāngata, he tāngata

It is the people, it is the people, it is the people

Thank you

Thank you

Data Stories and CAP API Full-Text Search / Harvard Library Innovation Lab

Data sets have tales to tell. In the Caselaw Access Project API, full-text search is a way to find these stories in 300+ years of U.S. caselaw, from individual cases to larger themes.

This August, John Bowers began exploring this idea in the blog post Telling Stories with CAP Data: The Prolific Mr. Cartwright, writing: "In the hands of an interested researcher with questions to ask, a few gigabytes of digitized caselaw can speak volumes to the progress of American legal history and its millions of little stories." Here, I wanted to use the CAP API full-text search as a path to find some of these stories using one keyword: pumpkins.

The CAP API full-text search option was one way to look at the presence of pumpkins in the history of U.S. caselaw. Viewing the CAP API Case List, I filtered cases using the Full-Text Search field to encompass only items that included the term “pumpkins”:

api.case.law/v1/cases/?search=pumpkins

This query returned 640 cases, the oldest decision dating to 1812 and the most recent to 2017. Next, I wanted to look at these cases in more detail. To view the full case text, I logged in and revisited the same query for "pumpkins", this time filtering the search to display the full case text.
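For readers who want to reproduce this, here is a rough sketch of the query using Python's requests library. The token is a placeholder, and the full_case parameter and response fields follow my reading of the CAP API documentation, so treat the details as assumptions to check against api.case.law:

    import requests

    CAP_TOKEN = "your-api-token-here"  # placeholder; full case text requires logging in

    resp = requests.get(
        "https://api.case.law/v1/cases/",
        params={"search": "pumpkins", "full_case": "true"},
        headers={"Authorization": "Token " + CAP_TOKEN},
    )
    resp.raise_for_status()
    print(resp.json()["count"])  # 640 at the time this post was written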

By running a full-text search, we can begin to pull out themes in Caselaw Access Project data. Of the 640 cases returned by our search that included the word “pumpkins”, the jurisdictions that produced the most published cases including this word were Louisiana (30) followed by Georgia (22) and Illinois (21).
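Here is a hedged sketch of how that jurisdiction tally might be reproduced; the field names are my assumption from the CAP API's case schema, not code from the post. It walks the paginated result list, which the API exposes through a "next" URL, and counts each case's jurisdiction:

    from collections import Counter
    import requests

    def jurisdiction_counts(url="https://api.case.law/v1/cases/?search=pumpkins"):
        counts = Counter()
        while url:
            page = requests.get(url).json()
            for case in page["results"]:
                counts[case["jurisdiction"]["name_long"]] += 1
            url = page["next"]  # None on the last page
        return counts

    print(jurisdiction_counts().most_common(3))
    # e.g. [('Louisiana', 30), ('Georgia', 22), ('Illinois', 21)]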

In browsing the full cases returned by our query, some stories stand out. One such case is Guyer v. School Board of Alachua County, decided outside Gainesville, Florida, in 1994. Centered around the question of whether Halloween decorations including "the depiction of witches, cauldrons, and brooms" in public schools were based on secular or religious practice and promotion of the occult, this case concluded with the opinion:

Witches, cauldrons, and brooms in the context of a school Halloween celebration appear to be nothing more than a mere "shadow", if that, in the realm of establishment clause jurisprudence.

In searching the cases available through the Caselaw Access Project API, each query can tell a story. Try your own full-text query and share it with us at @caselawaccess.

DLF Fellow Reflection: Alicia Zuniga / Digital Library Federation

This post was written by Alicia Zuniga (@aliciazuniga), who received an ARL+DLF Fellowship to attend the 2018 Forum.

Alicia Zuniga is the Media Library Specialist for the California Tobacco Control Program where she researches and substantiates the program’s statewide media campaigns and has been tasked with implementing an internal digital library.

She is passionate about communicating scientific findings to the public in an accessible and engaging way. Her main interests include metadata, open access, and science and scholarly communication. Her previous roles include Web Coordinator for Sacramento Public Library, Senior Publications Assistant at the open access publisher, Public Library of Science, and Information Officer for the California Department of Public Health’s 2017 website redesign. She received her MLIS from San Jose State University.

I am so thankful to have been able to attend the 2018 DLF Forum as an ARL+DLF Forum Fellow. When I reflect on the most thought-provoking moment of the forum, the opening plenary speech by Anasuya Sengupta is the obvious answer (if you didn't get a chance to see it live, be sure to check out the recorded livestream). One of the concepts that stood out for me was her endorsement of Wikipedia as a tool to find primary sources. I have spent countless hours trying to convince my peers outside of library science that Wikipedia is a valid source of information to use as a starting point in research. The cautions about Wikipedia that were firmly ingrained in us during middle and high school, and even college, were shaken out of me during my graduate studies, but for most others my age the belief that Wikipedia is inaccurate still prevails. This notion persists, despite the fact that even Google, the trusted confidante of our most embarrassing questions, relies in part on Wikipedia data for the Knowledge Graph in its search results.

Sengupta stated that Wikipedia should be viewed with the same lens of both caution and potential as anything else we find on the internet. The public is so quick to regard community-driven information sources like Wikipedia with skepticism, but will retweet an article in a heartbeat having read only the headline. The evaluation part seems to be missing in our information consumption these days, or it is applied unevenly across news sources. While I don't agree that anyone should cite Wikipedia directly, it's a great place to find primary sources in the references. Subject matter experts can also determine what critical information is missing and contribute to the growing corpus of crowdsourced information themselves.

Sengupta’s talk emphasized that the information on Wikipedia is only as strong as the diversity of individuals who contribute to its creation. As part of a public health organization, we have a responsibility to make sure that the health information in Wikipedia is the most up-to-date and supported by robust research.

One of the ways that I have been inspired by this talk is a renewed energy to put toward organizing a Wikipedia edit-a-thon in my own program and our partner organizations.

Being afforded the opportunity to attend this conference is something I do not take for granted. As the only staff member in my program with any library responsibilities, it is easy to feel disconnected from the world I had become so entrenched in as a graduate student and Spectrum Scholar. Being able to connect at conferences like this reinvigorates my passion for our field.

Want to know more about the DLF Forum Fellowship Program? Check out last year’s call for applications.

If you’d like to get involved with the scholarship committee for the 2019 Forum (October 13-16, 2019 in Tampa, FL), look for the Planning Committee sign-up form later this year. More information about 2019 fellowships will be posted in late spring.

The post DLF Fellow Reflection: Alicia Zuniga appeared first on DLF.

What I’ve been up to these past many months / Terry Reese

For the past 6 years, I’ve volunteered first as a coach and now as the coordinator of the Robotics program at the Grandview Heights Middle School.  The program starts every August and usually wraps up around the second week of January.  This year, I’ve been the program coordinator and mentor/coach for 6 teams with almost 50 middle schoolers between 4th and 8th grade.  It’s a lot of fun, and a lot of hard work.  If I had to guess, I probably spend close to 25 hours every week working with the kids as they prepare for the regional tournaments.  But it’s hard work that pays off.  Grandview has consistently sent teams to the Super Regionals and is consistently recognized for excellence in the 4 core awards (Robot Table, Robot Design, Core Values, and Project).

Saturday, our teams competed and the Grandview teams did a great job representing themselves.  We had two teams win awards and one of our coaches recognized with a coaching award.  In total, one team won the Robot Design Award (second year in a row a Grandview team has done that) and one of our teams won the Champions Award (finished first in the tournament).  It’s a long day, and the kids put in so much effort throughout the year, so it’s always gratifying to see them succeed.

For me, this program is a labor of love.  For 4 months out of the year, it makes things super busy.  But both of my kids have been through the program, and it has been tremendously impactful for them – and for so many other kids as well, as they transition to high school and beyond.  I often think about what I could be doing with my time, and what I can be doing to make my world a better place.  I try to keep that in mind at work and in my profession.  But there are few things I can think of that have more of an impact than giving these kids this opportunity and positive experience.  For some, this is something that changes their outlook on science and STEM.  For others, it’s a place to make new friends.  For all, it’s an opportunity to learn life lessons and grow.  Yes, it takes a lot of my time.  Yes, it takes a lot of time for the parents.  But as I tell people – if you want to change the world, focus on changing the world for a child.

–tr

Fellow Reflection: Natasha Jenkins / Digital Library Federation

 

This post was written by Natasha Jenkins, who received a DLF HBCU Fellowship to attend the 2018 Forum.

Natasha Jenkins is currently the Information Literacy Librarian at Alabama State University, where she is responsible for marketing and teaching library resources to members of the University community. Her varied interests include assessment, project management, mentoring, and succession planning.

 

Technology, social awareness, and opportunities to learn more about an area of librarianship that I only work with indirectly…I was obviously in the right place. My true introduction to the Digital Library Federation Forum did not occur at the opening plenary, or during registration, or even during the first few sessions I attended. True, I had breakfast with a table full of people who looked nothing like me. I sat in sessions with people who had “Black Lives Matter” stickers on their backpacks and laptops. However, it wasn’t until lunch on day one that I was introduced to the vision of the forum.

During the meal I met four ladies: the first was from a traveling New York social justice museum, and the remaining three were from various California academic libraries. We introduced ourselves to one another, each telling a little about our roles in the world of digital libraries and museums. The more we talked, the more the conversation drifted into the role of social awareness in our daily activities. We discussed many of the things Anasuya Sengupta mentioned during the opening plenary concerning incorporating native voices into Wikipedia, and decolonizing digital libraries. As the lone information literacy librarian at the table, I was challenged with questions like, “What do you tell students about Wikipedia?” and “What ways do you work with digital librarians to ensure they include language that students understand?” I also asked my own questions. For example, “At what point will current social justice issues be deemed an integral part of our digital collections?”

The issue of decolonization was very interesting. As an assertive African American woman, I have always felt that I was responsible for telling my story: defining my story, creating and curating the things involved in my story. I do this via social media or born-digital media. At the forum I learned that this is what decolonization is about. Unfortunately, archived digital collections do not have the ability to do this. Anasuya Sengupta spoke about decolonizing for those collections that cannot speak for themselves. As a digital librarian, a minority, a woman, and a socially aware member of society, I am responsible for decolonizing certain collections.

Between the conversation during lunch on day one and the speaker’s presentation at the opening plenary, the tone was set for the duration of the forum. Each discussion about the lack of representation of minorities in digital libraries, and about the synchronization of the digital and physical library, shaped my perception of the remainder of the forum. Overall, I was excited to be in the midst of digital librarians who were socially conscious, who were committed to ensuring that the voices of all people be heard in decolonized collections, and who were interested in collaboration.

Want to know more about the DLF Forum Fellowship Program? Check out last year’s call for applications.

If you’d like to get involved with the scholarship committee for the 2019 Forum (October 13-16, 2019 in Tampa, FL), look for the Planning Committee sign-up form later this year. More information about 2019 fellowships will be posted in late spring.

The post Fellow Reflection: Natasha Jenkins appeared first on DLF.

Michele Morgan: Evergreen Contributor of the Month / Evergreen ILS

The Evergreen Outreach Committee is thrilled to announce that our first Evergreen Contributor of the Month is Michele Morgan of NOBLE.

Michele has been involved with Evergreen since 2012, and in that time has quietly but steadily made a positive difference in the code and the community.  She has been a significant contributor to Launchpad, with 97 bug reports to her name and 175 bugs commented upon.  Michele has also authored 16 bugfix patches that are now a part of the Evergreen codebase.  

“I like things to work right,” Michele notes.  “I like writing bugs and I like trying to fix bugs.”  Michele has a reputation as a careful and thorough tester of code, and has participated in every community Bug Squashing event since their inception in 2014.

Michele’s approach is always end-user focused.  “I advocate for the end user, because I used to be one,” she says.  In her day job at NOBLE Michele serves as a Technical Support Analyst and takes calls from NOBLE’s member libraries about any Evergreen issues they’re having.  

As the front line of support for NOBLE, she gets user input firsthand and helps them to resolve problems.  She has spent most of her career on the public services side of libraries, including serving as Head of Circulation, which gives her a valuable perspective on the day-to-day workflows of library staff.

Michele can often be found in the #evergreen IRC channel, welcoming newcomers, giving advice, and tossing around ideas with other community members.  She’s always willing to help another Evergreener troubleshoot an issue in IRC.

Michele recommends that new community members lurk and listen — but not be afraid of stepping in!  “Listening is a great way to start,” she says. “Lurk in IRC, be on the lists, answer what you can. Make yourself comfortable with Evergreen.  It’s OK to say something wrong, the community is very supportive.”

Michele’s contributions and consistent helpfulness make her a great example of how individual people can be big difference-makers in open source projects.  “It’s a community effort,” she concludes. “You do what you can when you can, and it’s definitely a community effort.”

Do you know someone in the community who deserves a bit of extra recognition?  Please use this form to submit your nominations.  We ask for your email in case we have any questions, but all nominations will be kept confidential.

Any questions can be directed to Andrea Buntz Neiman via abneiman@equinoxinitiative.org or abneiman in IRC.

Cryptocurrencies' Seven Deadly Paradoxes / David Rosenthal

John Lewis of the Bank of England pens a must-read, well-linked summary of the problems of cryptocurrencies in The seven deadly paradoxes of cryptocurrency. Below the fold, a few comments on each of the seven.

Lewis' paradoxes are:
  1. The congestion paradox. As we saw when Bitcoin's "price" spiked last November, transactions are affordable only when no-one wants to transact:
    [chart: BTC transaction fees]
    Bitcoin has an estimated maximum of 7 transactions per second vs 24,000 for Visa. More transactions competing to get processed creates logjams and delays. Transaction fees have to rise in order to eliminate the excess demand. So Bitcoin's high transaction cost problem gets worse, not better, as transaction demand expands.
    Worse, pending transactions are in a blind auction to be included in the next block. Because users don't know how much to bid to be included, they either overpay or suffer long delays, possibly failing completely. (A toy sketch of this auction appears after the list.)

  2. The storage paradox. To participate directly, rather than via an exchange, a user needs to (a) have high enough bandwidth and low enough latency to the community of miners, and (b) store the entire blockchain:
    so an N-fold increase in users and transactions, means an N-squared fold increase in aggregate storage needs. The BIS have crunched the numbers for a hypothetical distributed ledger of all US retail transactions, and reckon that storage demands would grow to over 100 gigabytes per user within two and half years.
    Of course, there are proposals to make this waste of storage be the waste of resources that secures the blockchain itself. But, as I pointed out in Making PIEs Is Hard, they have problems too. (A back-of-envelope sketch of the N-squared scaling appears after the list.)

  3. The mining paradox. Since, as Eric Budish points out:
    From a computer security perspective, the key thing to note ... is that the security of the blockchain is linear in the amount of expenditure on mining power, ... In contrast, in many other contexts investments in computer security yield convex returns (e.g., traditional uses of cryptography) — analogously to how a lock on a door increases the security of a house by more than the cost of the lock.
    the cost of mining cryptocurrencies is a feature not a bug, and thus miners have to defray their costs:
    Rewarding “miners” with new units of currency for processing transactions leads to a tension between users and miners. This crystalises in Bitcoin’s conflict over how many transactions can be processed in a block. Miners want this kept small: keeping the currency illiquid, creating more congestion and raising transaction fees – thus increasing rewards for miners facing ever more energy intensive transaction verification. But users want the exact opposite: higher capacity, lower transactions costs and more liquidity, and so favour larger block sizes.

    Izabella Kaminska points out this tradeoff has been *temporarily* masked by capital inflows creating subsidies via the mining rewards system. ... A private cryptocurrency must continually attract more capital inflows to mask the transactions costs (a staggering ≈1.6% of system payment volume). By contrast, most traditional mediums of exchange don’t require such sizeable capital inflows to maintain their transactions infrastructure.
    BTC "price"
    This need for an inflow of funds from investors who believe the "price" must inevitably go up means that if a cryptocurrency's "price" is dropping or flat for a long time the currency's "price" risks a collapse as miners are forced to dump it to pay bills.

  4. The concentration paradox. As I discussed in Gini Coefficients Of Cryptocurrencies, HODL-ings of cryptocurrencies are highly concentrated:
    An asset is valued by the market price at which it changes hands. Only a fraction of the stock is actually traded at any point in time. So the price reflects the views of the marginal market participant. You can raise the value of an asset you own by buying even more of it, as your purchases push the market price up. But realising that gain requires selling- which makes someone else the marginal buyer and thus pushes the market price downwards.

    For many assets these liquidation effects are small. But for cryptos they are much larger because i) Exchanges are illiquid, ii) Some players are vast relative to the market iii) There isn’t a natural balance of buyers and sellers iv) opinion is more volatile and polarised.  High prices reflect cornering the market and hoarding, rather than ability to readily sell to a host of willing buyers.
    The big HODL-ers cashed out around $30B in last November's massive pump-and-dump scheme.

  5. The valuation paradox. Why does a cryptocurrency have a value?
    The discounted cashflow model of asset pricing says value comes from (risk-adjusted, net present discounted) future income flows. For government bonds it’s the interest plus principal repayment, for a share it’s dividends, for housing it’s rental payments. The algebra of pricing these income flows can get complicated, but for cryptocurrencies with no yield the maths is easy: Zero income means zero value. ... What other sources of value are there? The mere expectation that in future cryptocurrencies will be worth more than they are today and so can be flipped for a profit? The problem is that if, as Paul Krugman argues, their “value depends entirely on self fulfilling expectations”, that is a textbook definition of a bubble.
    Why do people think cryptocurrencies have a "value"? Because their "price" is displayed on websites just like "fiat currencies" and equities. But, unlike "fiat currencies" and equities, the "price" of cryptocurrencies is the result of massive manipulation, primarily via the Tether "stablecoin" and massive wash trading, in extremely thin markets. (The zero-income arithmetic is sketched after the list.)

  6. The anonymity paradox. Most cryptocurrencies provide pseudonymity, not anonymity, and unmasking the person behind the pseudonym is easy in practice. Even users of cryptocurrencies that claim anonymity, such as Monero, need exceptional opsec to avoid exposure. Lewis, however, makes a different point:
    The (greater) anonymity which cryptocurrencies offer is generally a weakness not a strength. True, it creates a core transactions demand from money launderers, tax evaders and purveyors of illicit goods because they make funds and transactors hard to trace. But for the (much bigger) range of legal financial transactions, it is a drawback.

    It makes detecting nefarious behaviour harder, and limits what remedial/enforcement actions can be taken. Whilst blockchains can verify a payment has been received and prevent double spending (albeit imperfectly), many other problems are unsolved.
    ...
    Auer and Claessens demonstrate empirically that developments which help establish legal frameworks for cryptocurrencies increase their value. Keep a cryptocurrency far from regulated institutions and you reduce its value, because it drastically restricts the pool of willing transactors and transactions. Bring it closer to the realm of regulated financial institutions and it increases in value.
    Given the likely advances in forensic technology over time, and the authorities' long memories, recording your illegal transactions in an immutable public ledger is probably foolish.

  7. The innovation paradox:
    Lewis' last paradox is the most entertaining:
    Perhaps the biggest irony of all is that the more optimistic you are about tomorrow’s cryptocurrencies, the more pessimistic you must be about the value of today’s.

    Suppose bitcoin, ethereum, ripple et al are just the early flawed manifestations of an emergent disruptive technology. Perhaps new and better cryptocurrencies will arise to overcome all of the intrinsic problems of today’s. ... Whereas goods derive worth from their value when consumed, currency derives worth from the belief that it will be accepted for payment and/or hold its value *in the future*. Expect it to be worthless in the future, and it becomes worthless now. If new cryptocurrencies emerge to resolve the problems of the current crop, then today’s will get displaced and be rendered worthless.
    Cryptocurrencies are mechanisms for transferring wealth from later to earlier adopters. That is the importance of "number go up" as David Gerard puts it. Thus, unlike "fiat currencies", there is a continual demand from new wanna-be early adopters for new cryptocurrencies to be created. There are already a couple of thousand of them. There is a real chance that the intensive research into blockchain technology will result in a "new and better" cryptocurrency.
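
To make the congestion paradox in (1) concrete, here is a toy model of the blind auction for block space. The capacity and bid numbers are invented; only the mechanism matters: the fee needed to get included is set by the marginal transaction, so fees stay low only while demand is below capacity.

    # Toy model of the blind fee auction for limited block space.
    # Capacity and bid ranges are hypothetical, not real Bitcoin parameters.
    import random

    BLOCK_CAPACITY = 2000  # transactions per block (invented)

    def included_fees(pending_bids, capacity=BLOCK_CAPACITY):
        """Miners take the highest-bidding transactions; the rest wait."""
        return sorted(pending_bids, reverse=True)[:capacity]

    for demand in (1000, 5000, 20000):
        bids = [random.uniform(0.01, 5.0) for _ in range(demand)]
        winners = included_fees(bids)
        print(f"demand={demand:6d}  marginal fee to get in: {min(winners):.2f}")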
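
The N-squared claim in (2) is simple arithmetic: each of N users stores a full chain whose size grows with total transactions, which themselves grow with N. A back-of-envelope sketch with made-up per-user numbers:

    # Back-of-envelope for the storage paradox. All constants are invented;
    # only the scaling matters: every user stores the full chain.
    TX_PER_USER_PER_YEAR = 500  # hypothetical
    BYTES_PER_TX = 250          # hypothetical

    def aggregate_storage(n_users, years=1):
        chain = n_users * TX_PER_USER_PER_YEAR * years * BYTES_PER_TX
        return n_users * chain  # a full copy per user

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} users -> {aggregate_storage(n) / 1e12:10.3f} TB in aggregate")
    # Each 10x increase in users yields a 100x increase in aggregate storage.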
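
And the "easy maths" of the valuation paradox in (5) fits in a few lines: discounted-cashflow value is the sum of discounted future income, which is exactly zero when every cashflow is zero. A minimal sketch with hypothetical flows and discount rate:

    # Discounted-cashflow pricing; the flows and rate are hypothetical.
    def dcf_value(cashflows, rate=0.05):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

    bond = dcf_value([30, 30, 30, 1030])  # coupons plus principal repayment
    token = dcf_value([0, 0, 0, 0])       # an asset with no yield
    print(f"bond: {bond:.2f}  zero-yield token: {token:.2f}")
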
Go read the whole post and click on the links, you'll be glad you did.

2018 AMIA Cross-Pollinator: Jason Corum / Digital Library Federation

 

The Association of Moving Image Archivists (AMIA) and DLF will be sending Jason Corum to attend the 2018 DLF/AMIA Hack Day and AMIA conference in Portland, Oregon! During the event, Jason will collaborate on projects with other attendees to develop solutions for digital audiovisual preservation and access.

About the Awardee

Jason Corum is a coder and writer based in Los Angeles, CA. After a 10-year career in communications with organizations like the Biotechnology Industry Organization, the Brookings Institution, and World Food Program USA, Jason decided that he wanted to build cool online tools rather than manage them. Through self-education and a course with General Assembly, Jason acquired the skills needed to join Sol Systems, a solar finance and development firm, as a junior web developer. He is now a web developer with the WGBH Media Library and Archives, where he works on the American Archive of Public Broadcasting, OpenVault from WGBH, Fix It +, and a forthcoming Samvera-based Archival Management System.

About Hack Day and the Award

The fifth AMIA+DLF Hack Day (November 28 at the Portland Hilton Downtown) will be a unique opportunity for practitioners and managers of digital audiovisual collections to join with developers and engineers for an intense day of collaboration to develop solutions for digital audiovisual preservation and access.

The goal of the AMIA + DLF Award is to bring “cross-pollinators”–developers and software engineers who can provide unique perspectives to moving image and sound archivists’ work with digital materials, share a vision of the library world from their perspective, and enrich the Hack Day event–to the conference.

Interested in participating in Hack Day either virtually or in person? Registration is free! Sign up now: http://bit.ly/AMIADLF2018

The post 2018 AMIA Cross-Pollinator: Jason Corum appeared first on DLF.

Increase Your Revenue to Profit Ratio / Lucidworks

Retail margins are in jeopardy. Between mobile shopping, online retail, and the popularity of social media, coupled with a decade of flat sector profitability, retailers must fundamentally change how they operate or join the ever-growing graveyard of bygone brands. Even a stalwart brand like Sears has failed to make this transition and has announced its likely demise.

However, there is a brighter future for retailers who combine smart, focused business strategies, cost controls, and data and AI technologies. From Michael Kors to Home Depot, smart retailers are combining data technologies, artificial intelligence, and search to drive better decisions, improve margins, and increase sales.

How can you perform like a leader?

“Being agile enough to compete isn’t a one-time exercise that happens by just cutting costs. Success comes from reinvesting those savings in activities that will drive competitive advantage.” – Accenture

Cut Costs But Don’t Stop There

According to Accenture, “being agile enough to compete isn’t a one-time exercise that happens by just cutting costs. Success comes from reinvesting those savings in activities that will drive competitive advantage and revenue growth, such as creating a more efficient operating model, embedding enterprise wide process excellence or building leading edge capabilities.”

Top-performing retailers don’t cost-cut their way to profitability. Failing retailers like Sears/Kmart have tried that for years. Cutting your way to profitability is rare, and even if you get there, you’re never a leader. Once a company is profitable, however, the right cuts can propel it into a leadership position.

Making the right cuts means taking a holistic view of the organization and the customer experience. It means looking at the retail outlet, the distribution center, and opportunities for automation from factory to point of sale.

Master Dynamic Pricing and Price Testing

Matching competitors’ prices seems intuitive, especially online, but isn’t always the wisest approach. Formulating the right pricing strategy is difficult. Setting the right price requires recording and understanding consumer signals and broad experimentation. These days, dynamic pricing strategies often must be applied per SKU and, in some cases, per customer.

According to McKinsey, retailers should consider dynamic pricing strategies and “conduct a pilot in a handful of categories for concept design and testing. Done right, the pilot—and the subsequent rollout of dynamic pricing across all product categories—will yield meaningful improvements in revenue, profit, and customer price perception.”

Key to any pricing strategy is testing whether it works and whether it influences customer buying decisions positively or negatively. Ethically, per-customer pricing should be done very carefully, avoiding any kind of demographic data that might be linked to race, gender, or other sensitive attributes. Pricing should be based more strictly on customer behavior signals, similar to how Orbitz and other sites have implemented it.

Price testing should be implemented in combination with search A/B testing. Sometimes boosting more expensive (or lower-cost) brands will also yield stronger results. As in software and hardware development, the strongest results will come from the retailers who run the best tests.
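
A price test is only as good as its readout. As a generic sketch (not any particular vendor's tooling), a two-proportion z-test is one simple way to check whether a price variant actually moved conversion; the traffic and conversion counts below are hypothetical:

    # Minimal A/B price-test readout with a two-proportion z-test.
    # The visitor and conversion counts are hypothetical.
    import math

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se

    z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
    print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level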

“Retailers who deploy a bunch of next-generation social shopping are oftentimes making up for poor competitiveness in other areas and have seen a 17% decrease in sales and a 36% decrease in share price.” – Accenture

Avoiding Digital Window Dressing

According to Accenture, digital window dressing is “any digital capability that is a ‘nice to have’ but does not make up for a lack of competitiveness in core areas such as price, assortment, customer service, etc.” Retailers who cut costs and move to digital marketplaces because competitors are doing it tend to underperform.

Brands are selling wares on sites like Facebook and Pinterest (or selling through them) in greater numbers. But retailers who deploy a bunch of these next-generation social shopping tactics are often making up for poor competitiveness in other areas and have seen a 17% decrease in sales and a 36% decrease in share price.

“Industry frontrunners are competitive because of differentiators enabled by digital investments that span their business, from competitive pricing and hassle-free delivery to broad selection and shopping made simple.” – Accenture

The right digital investments and partnerships can increase sales and profitability. For omnichannel retailers, those investments should focus on enhancing the in-store experience and connecting with consumer signal data that includes the online and mobile experience as well.

According to Accenture, “industry frontrunners are competitive because of differentiators enabled by digital investments that span their business, from competitive pricing and hassle-free delivery to broad selection and shopping made simple.” In other words, the investment shouldn’t end with customer experience. Back office logistics are critical and the retailers that use AI and other technologies to drive operational efficiency while eliminating silos will become more competitive.

“When it comes to profitability, an online sale is not an equivalent replacement for that same item purchased in store.”

Rethink Your Omnichannel and Distribution Strategies

According to the retail consulting firm AlixPartners, “when it comes to profitability, an online sale is not an equivalent replacement for that same item purchased in store.” Some retailers assume that online sales are automatically cheaper, or that buy-online-pickup-in-store is the lowest-cost option for the retailer. By some models, this may not prove true when all of the costs are added up.

Research has shown that online customers spend less on average than in-store customers, and that distance from the store is the main factor in whether someone shops online. Encouraging online shoppers to come into the store increases profits. Encouraging in-store shoppers to go online decreases profits.

The best omnichannel strategy focuses on making sure customers have a seamless, personalized experience and gives them a reason to come into the store if possible. This means, among other things, providing excellent search and recommendations. The best omnichannel strategy also takes into account the supply chain as well as other costs while measuring profitability all along the way.

AI Helps Industry Leaders to Specialize and Focus

Some of the more general department stores that never made the full omnichannel transition, like Kmart/Sears and JC Penney, are failing, while retailers that focus on one section of the market or on one general area are profiting. Look at success stories like Home Depot, TJ Maxx/Marshalls, and Best Buy. They each specialize in a specific market segment, allowing them to cater to and delight their customers.

It is no accident that specialization allows for better AI recommendations and customer targeting. Specialization also gives brands greater control over supply chains, inventory, and merchandising. This focus tightens up everything from cost to how items are displayed, searched, and recommended.

‘Signals’ Help You Focus on Your Best Customers

Especially with online retail, specializing and focusing means thinking not about all of your customers but about your “best” customers. Customers who drop in once to take advantage of a low price aren’t your best customers. Your best customers are the ones that come to your store or your site first and only go elsewhere if you can’t help them. It is in your best interest as a retailer to be everything they want you to be.

Know these customers. There are tools and technologies that allow you to capture customer signals to better understand their behavior. Signals are customer behavior data that help you recognize and focus on these customers and recommend things to them in the store, and across mobile, web, and other channels. These signals can even tell you when these customers are not as satisfied as they have been in the past so you can develop a plan to incentivize them to come back.
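
As a generic illustration (not any particular vendor's implementation), even a crude recency/frequency score computed from behavior signals will separate a loyal regular from a one-time bargain hunter; the events below are made up:

    # Toy recency/frequency scoring over made-up customer behavior signals.
    from collections import defaultdict
    from datetime import date

    events = [  # (customer_id, event_date) -- hypothetical visit signals
        ("c1", date(2018, 11, 1)),
        ("c1", date(2018, 11, 10)),
        ("c2", date(2018, 6, 2)),
    ]

    def scores(events, today=date(2018, 11, 15)):
        by_customer = defaultdict(list)
        for cid, day in events:
            by_customer[cid].append(day)
        out = {}
        for cid, days in by_customer.items():
            recency = (today - max(days)).days    # days since last visit
            frequency = len(days)                 # number of visits
            out[cid] = frequency / (1 + recency)  # higher = more engaged
        return out

    print(scores(events))  # the frequent, recent c1 outranks the lapsed c2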

Leaders Do Everything Necessary

One trick of the trade isn’t enough to lead the industry in profitability. It takes a combination of focus, rethinking and cutting costs, and deploying smart technology that allows you to cater to your best customers and delight them. By combining a set of strategies with the appropriate, well-thought-out technologies, leading retailers can profit even in uncertain times.

The post Increase Your Revenue to Profit Ratio appeared first on Lucidworks.

Twitter / pinboard

Ne'er had the pleasure to attend #Code4lib myself ... but if you're thinking about it but can't afford to go - ther…

Jobs in Information Technology: November 14, 2018 / LITA

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Rice University, Fondren Library, Government Information Coordinator, Houston, TX

Winona State University, Electronic Resources Librarian, Winona, MN

Colorado School of Mines, System Architecture & Web Services Librarian, Golden, CO

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Looking Back at Islandora CampCA / Islandora

We held our first Islandora Camp in San Diego last week, with many thanks to our hosts at San Diego State University. The overall structure followed our now tried-and-true pattern for camps: one day of general sessions introducing Islandora and the community that builds it, one day of hands-on workshops with tracks for front-end administrators or developers, and one day of sessions taking a closer look at individual sites and tools. The near-release maturity of Islandora CLAW added some excitement to our workshops, as we did our level best to fit a couple of days' worth of learning into eight hours. We'll be tuning the balance between the two stacks as camps continue forward, but thankfully next year's Islandoracon will allow plenty of time for both stacks to shine.

Day three is always a favorite for camp instructors, as we get a chance to join the audience and see what people are doing with Islandora out in the world. We saw some great use of Diego Pino's Islandora Multi Importer as an integral part of SDSU's Islandora workflow in Katie Romibiles' session Islandora at San Diego State University. This was followed by Islandora Batch Uploader, a tool from Western Washington University that dropped jaws as David Bass fed it random images from a collection and it handed back accurate tags and a suggested abstract, using some pretty slick AI tools. Seriously, check it out.

Continuing the theme of using AI with Islandora, Tommy Keswick from Caltech showed us how they're using AI tools to help OCR make sense of handwritten documents in his session Analyzing bulk OCR Results Among Mixed Typed and Handwritten Documents. We followed with a couple of sessions more focused on the technical side, looking at ways to tweak Islandora's speed with Apache Camel in Islandora, Fedora, Camel: Getting Over the Hump, or to set up for a migration with the Move to Islandora Kit. Finally, we took a close look at the University of Tennessee Knoxville's experiences with moving to Islandora-as-IR in UTK's IR Submission Workflow: Students submit directly into the repository? Sure, why not?.

What's next for Islandora events? We'll be holding a single Camp next June in Dübendorf, just outside of Zürich, Switzerland, and then doing a whole week of Islandora at Islandoracon in Vancouver, BC. Our Camp roster for 2020 is wide open, so if your institution is interested in being a host, please drop us a line!

Fellow Reflection: Steve Lapommeray / Digital Library Federation

 

This post was written by Steve Lapommeray, who received a DLF Students and New Professionals Fellowship to attend the 2018 Forum.

Steve Lapommeray is a Programmer Analyst in the Digital Initiatives team at the McGill University Library, working on deployment automation, websites, and supporting their ILS.

At his first DLF Forum, he looks forward to learning from and collaborating with other professionals in the digital library field. He is excited to have the opportunity to learn more about digital library projects, gain skills that he can apply at his library, and break out of the silos that separate librarianship from application development.

As a recipient of a DLF Students & New Professionals Fellowship, I got the opportunity to go to the 2018 DLF Forum for the first time. I went with the idea of trying to balance attending sessions that dealt with my chosen field (programming) with sessions that dealt with other aspects of digital libraries that I was less familiar with (everything else). With so much to see and learn, the conference could get overwhelming at times. The quiet room and meditation sessions were much appreciated!

One session (out of many!) that piqued my interest was #m4e: Topic Modeling and Machine Learning. The introduction to the concept by Bret Davidson and Kevin Beswick of NCSU Libraries showed how they were able to use machine learning to drive a self-driving kart in Mario Kart 64. For libraries, functionality such as an automatic first pass at metadata and improvements in video/image processing and OCR are avenues that should be explored.

They also mentioned that the initial data is often the source of algorithmic bias in deep learning. The initial data sets that feed the machine learning algorithm can very easily come from a narrow range of sources, and there is a need to create more representative data sets. Ways to mitigate this bias are to disclose to the user that this technology is being used, to give the user the option to provide feedback, and to offer the option of turning the technology off altogether. User awareness of how the results are being generated can demystify some of the machine learning process, as well as allow the user to make more informed decisions rather than accept the algorithm as the absolute source of truth.

Another way to correct issues in the algorithm is to use “transfer learning,” a way to retrain parts of the algorithm that are not giving optimal results. Parts of the machine learning model are taken out of the whole and retrained on smaller data sets. This improves the decision-making of the individual parts without having to involve the entire system. Once the retraining is completed, the removed parts are put back into the whole.
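
The presenters didn't share code, but the general recipe reads something like this PyTorch sketch (our own illustration, with a hypothetical number of target labels): freeze the pretrained layers, swap in a fresh final layer, and retrain only that layer on the smaller data set.

    # Transfer-learning sketch in PyTorch: freeze a pretrained network and
    # retrain only a new final layer. NUM_CLASSES is hypothetical.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    NUM_CLASSES = 10

    model = models.resnet18(pretrained=True)
    for param in model.parameters():       # freeze every pretrained layer
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # fresh head

    optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01)
    # ...train only the new head on the smaller data set, then put the
    # retrained part back into service alongside the frozen layers.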

One advantage for users in the library and cultural heritage institution field is that the service providers are not in the business of making money, so they can focus on providing the best user experience.

The “Future/Death of the Library: A Collaborative, Experimental, Introspective Dive into Digital Humanities” talk by Rebekah Cummings, Anna Neatrour, and Elizabeth Callaway of the University of Utah also offered very interesting observations. Mentions of the death of and the future of the library in texts were found through topic modeling using R. They then found which words were used in relation to each other, generated word clouds of the most common terms, and analyzed which terms surfaced most often. This approach does have limitations. A term such as “electronic book” counts as two separate words rather than as one concept, so it would not be correctly represented in a word cloud. Sadly, this approach was not able to predict the ultimate fate of libraries.
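
That limitation is a tokenization choice rather than a dead end. The Utah team worked in R, but as an illustration in Python's scikit-learn, counting bigrams alongside single words keeps a concept like "electronic book" together as one term:

    # Unigram bag-of-words splits multiword concepts; adding bigrams keeps
    # "electronic book" together. Illustration only -- the talk used R.
    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["the electronic book will outlive the death of the library"]

    unigrams = CountVectorizer(ngram_range=(1, 1)).fit(docs)
    bigrams = CountVectorizer(ngram_range=(1, 2)).fit(docs)

    print(sorted(unigrams.vocabulary_))  # "electronic" and "book" are separate
    print(sorted(bigrams.vocabulary_))   # includes "electronic book" as one term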

Erin Wolfe of the University of Kansas spoke about the Black Book Interactive Project, continuing on the theme of topic modeling and data mining with regards to African American literary texts. This project addresses creating metadata for African American literature.

Lastly, Darnelle Melvin of UNLV gave the “Using Machine Learning & Text-mining for Scholarly Output” talk. His work is currently in a data collection phase and makes use of machine learning and text mining.

Apart from that session, I attended others dealing with labour inequities with regards to library staff, 3D and virtual reality collections, linked data, and institutional repository migrations. It was a lot of information to take in and I’m glad that the shared notes and slides are available online. Thank you to DLF, my fellow fellows, and all of the speakers, panelists, presenters, and attendees. This was an amazing opportunity to explore areas of the library world that I normally would not be exposed to and a chance to meet some great people.

Want to know more about the DLF Forum Fellowship Program? Check out last year’s call for applications.

If you’d like to get involved with the scholarship committee for the 2019 Forum (October 13-16, 2019 in Tampa, FL), look for the Planning Committee sign-up form later this year. More information about 2019 fellowships will be posted in late spring.

The post Fellow Reflection: Steve Lapommeray appeared first on DLF.

DSpace in Brazil / DuraSpace News

DuraSpace recently announced the first Country-specific Webinar Series with support from Neki IT, a Brazilian Certified DSpace Contributor. In this series, different Brazilian institutions present their experiences with DSpace. As a Country-specific series, the webinars will be offered in Portuguese.

This new initiative, supported by DuraSpace in collaboration with community members, is intended to help the Brazilian community share know-how, best practices, and use cases on how to implement DSpace to provide open access to many valuable cultural and scientific resources.

Additionally, to better enable communication within the Brazilian DSpace community, there are now official, dedicated wiki pages for the Brazilian User Group and a Slack channel under the broader DSpace channel (more information here).

LUME
The second webinar in the series presented LUME, a digital repository of the Federal University of Rio Grande do Sul that allows access to digital collections of materials produced within the University. During the webinar the LUME team presented their experience with DSpace, addressing the features of the repository they use and the difficulties they encountered in implementation. A recording of this webinar will be made available.

IBICT
The third webinar will offer the IBICT (Instituto Brasileiro de Informação em Ciência e Tecnologia) perspective on DSpace, presenting reasons why IBICT believes in DSpace as an important instrument for sharing knowledge about building digital repositories while also demonstrating the importance of Open Source tools that enable access to information. Please register here: https://goo.gl/forms/rT2Nc7qfSc8kQJGm1

The post DSpace in Brazil appeared first on Duraspace.org.

“I Remember…”: A Written-Reflection Program for Student Library Workers / In the Library, With the Lead Pipe

In Brief: Two librarians who run a library commons space implemented a written reflection program with their undergraduate student employees to improve team communication, create a qualitative record of the space, and generate case studies for discussion in group meetings. In this article, they present and analyze examples of their student workers’ reflective writing about their library space, delve into the literature of written reflection, and share how they changed the program after an assessment.

Part I: Written Reflection in the Research Commons

I remember all the sounds of opening the library. The *shhnk* of swiping my card through the reader outside. The creaky turn and loud *click* of the door opening after turning the handle. The echoing *clack* of my boots resonating throughout the stairwell after stepping on the concrete floor….

These are the first few lines of a written reflection that an undergraduate student worker wrote in the library space we manage. A little over two years ago, we started a program in which we made written reflection a part of the jobs of the eight students we work with. These students staff a help desk, so periodically during their scheduled shifts, we cover the desk for them for a half hour. They have that time to reflect in writing about their work and their relationship to it, and the writing above shows a student describing what it’s like to open our library space. This student focuses on the sounds our space makes, playing with onomatopoeia, and they continue:

….The slight metal *hiss* of the stair gate’s springs being bent. The low, booming *gong* that echoes through the stairwell as the gate closes and hits the metal rail, syncopated with my boots descending down the steps. The quieter, smoother *sree* of the basement door handle. The lighter, higher pitched footsteps on the tiled hallway floor. The *chk* and *bmm* of the door closing as I round the hallway corner. The *sree* of another door handle and the muted *pmm* of my boots on carpeted floor. The echoing *bomm, bomm, bomm* of steps on concrete in another, more acousting stairwell. The vacuum cleaner’s *shvrooooooooom* getting louder and louder after each step up. The friendly *hello*, or *happy friday*. The almost unnoticeable *crinkle, crinkle, shvoop* of taking of jackets and setting down bags. The muted *click, click, click* of impatience as I wake the computer. The loud rattling and sometimes sudden *gleck, gleck, gleck, gleck* in rapid succession as I move a whiteboard without releasing the plastic brakes on the wheels. The whining *phwemp* of  wiping down whiteboards. The *sveeeee*, *jingle, jangle*, and *svooooo* of opening and closing the drawers to retrieve the keys. The loud, mechanical *chk, chk*, *jingle*, *chk chk chk*, and *fwooomp* of unlocking and opening the doors. The *good morning* as patrons file in.

(Note: Throughout this article, we have chosen to preserve the spelling, punctuation, and syntax that students used in their writing and to forego any use of “SIC” in brackets.)

Reading this student’s reflection now, we are amazed anew by how creative and detailed it is. We had given this person a writing prompt for the reflection–a prompt we’ll discuss later–and it’s fascinating to see how they made sense of it and turned it into something wholly their own. It’s of note, too, that this student wrote this reflection by memory. They weren’t composing as they walked through the labyrinthine path they take to open the library in the morning. No, they were able to recall these specifics while sitting still, and such an act seems to show that the space in which they work isn’t just something to remember but a Memory Palace–that is, a place that helps one remember because it’s meaningful. And reading this reflection, taking in details we ourselves never considered, we are reminded again of why we began this program in the first place. We had three reasons for deciding to pay student workers to reflect in writing while on the job, and they are these:

  • to improve communication between them and us
  • to preserve a qualitative record of the space in which we work
  • to use these writings in group meetings, where we treat them as case studies and works of literature

But like anyone who limits themselves to threes, we’ve discovered additional, unexpected reasons for committing to written reflection. Some of these are easy to quantify or justify, while others are more intangible. At times, we’ve noticed that the value of written reflection isn’t just what information the reflections convey. It’s simply, elegantly, the open-ended but focused practice itself that’s worthwhile.

Written Reflections Are Not Just for Therapists and Professors; Student Workers Can Benefit, Too!

Early on, when we began to consider incorporating written reflection into the library work of undergraduate students, we scanned library literature to see if such practices already existed. We wanted to find guidance on how to set up a written-reflection practice–as well as how to assess it–but our initial searches yielded nothing. We have yet to find anything directly related to what we’ve done. In fact, the only combination of “written reflection” and “librar*” we’ve found is in an article about MA Librarianship students in Sheffield, UK, who wrote reflections in a library management class (Greenall and Sen, 2016).

We were shocked to find so little research about the value of written reflection in librarianship and student work in libraries, and we were surprised further that when we opened our search to undergraduate-student work in general, we still found next to nothing about written reflection. The closest thing we could find is by Sykes and Dean (2013), who write about the uses of written reflection in a Work-Integrated Learning curriculum–a program in which third-year students find placement in an internship (p.186). They found that framing reflection as a “practice” rather than an “activity” brought about a shift in students’ thinking that reflections can lead to real-world action (p. 190).

When we broadened our search terms to the workplace in general, we finally had some success in locating scholarship about written reflection and its uses in employment. We found articles about written reflection and corporate managers at a Fortune 500 company (Wood Daudelin, 1996), an engineer at a refinery (Rigano & Edwards, 1998), and workers at a software company (Cyboran, 2005). This research all came to the conclusion that written reflection not only improved critical thinking skills but also productivity and job satisfaction.

As we continued to review literature related to written reflection, some of its deepest pools proved to be in the fields of therapy, education, and writing composition. Written reflection, especially in the form of journaling, has been practiced in therapy and counseling for decades. Ira Progoff’s (1975) At a Journal Workshop is a prime example of a reflective writing process that has gone from being an anomaly to an accepted practice to an institution. Gillie Bolton’s, Victoria Field’s, and Kate Thompson’s (2006) Writing Works: A Resource Handbook for Therapeutic Writing Workshops and Activities is another text that has popularized reflective writing in therapeutic contexts.

Reflection, written and otherwise, has a long history in education. One early place to start is John Dewey (1910), who in How We Think argued for the value of what he called “reflective thought” and defined it as follows: “Active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it, and the further conclusions to which it tends, constitutes reflective thought” (p. 6). What’s more, for over forty years in higher education, instructors have taught written reflection in writing composition classes, where it has undergone at least three generations of changes (Yancey, 2016, p. 9). Within these generations, Expressivist pedagogy, “which employs freewriting, journal keeping, reflective writing, and small-group dialogic collaborative response,” has spoken to us and is the closest design and most obvious inspiration for the work we’ve done (Tate, Rupiper & Schick, 2001, p. 19). In particular, within Expressivist pedagogy, we’ve been most drawn to Peter Elbow and bell hooks, who are “paradigmatic examples of expressivist teachers” (Tate, Rupiper & Schick, 2001, p. 20). Their research about introspective writing practices as well as the practical, radical, and self-affirming ways of using such writing served as a crucial precedent for us.

This look at literature about written reflection helped us think about what it might look like in our own space, specifically, or in the world of library science, in general. In particular, we were curious about whether or not written reflection would be an annoying, disconnected add-on to what student workers were already doing or if it would actually affect their work, their attitudes about it, and the ways they imagine themselves.

Student Workers Reflect on Employment at the University of Washington Research Commons

We work in a library space–the University of Washington Research Commons–that is meant to help researchers through processes that are experimental, creative, and interdisciplinary, so in many ways it was a perfect setting for testing out a practice in written reflection that we believed was both novel and boundary spanning. As part of our work, we supervise eight undergraduate workers who staff a help desk, and in the Autumn Quarter of 2016, we began to fiddle with including written reflection in the training of new student workers. We did this by sharing a Google Doc with them, giving them time to periodically reflect in writing during their training, and reading their reflections and offering comments.

As mentioned earlier, we were aware of Peter Elbow’s work, especially his book Writing Without Teachers, and imagined that encouraging new library workers to reflect about the Research Commons might help them track not just what they were learning in their training but how they felt about it. Outlining a reflective-writing practice, Elbow (1998) writes, “Each week, take a fresh sheet of paper and write a brief account of what you think you got out of that week’s work: freewriting for class, any other writing, class reactions. These entries cannot profess to the truth. They are meant as a record of how you see things at the moment” (p. 145). In addition, we were attracted to the thinking of theorists in Critical Pedagogy (like Paulo Freire and bell hooks), especially with regard to how they make sense of the concept of “praxis.” Dealing with the term and how it relates to reflection, Freire (2012) writes, “Problem-posing education bases itself on creativity and stimulates true reflection and action upon reality, thereby responding to the vocation of persons as beings who are authentic only when engaged in inquiry and creative transformation… Education is thus constantly remade in the praxis. In order to be, it must become” (p. 84). For us, what all this means is that written reflection isn’t just a mode for thinking and remembering; the act of reflecting is an action that has the potential to bring about effects that can jump off a document. Whatever is written on a page can very much become real in a place like the Research Commons.

Below you’ll find a reflection that a student wrote in their first few weeks on the job. We believe it illustrates Peter Elbow’s point that reflection doesn’t necessarily capture capital “T” Truth but a record of perception from moments that could otherwise be forgotten:

Working at the Research Commons has been quite like what I expected, based on the description of the job, training, and talking to other student squad members. I enjoy the pace of the position, as it allows for the luxury of reading interesting articles as well as catching up on class readings. Most student jobs do not allow this, which makes me feel grateful for it. Aside from the magically moving furniture of the space at closing and occasional lack of human interaction, there aren’t many frustrations with this position (if those are frustrations at all). I’ve encountered some nice patrons here and there. Most are just students or faculty wishing to check out/return cords and markers, and they are usually in a rush. There were a couple people who stood out to me, however. One young man came up and decided to give me a gift certificate to the coffee shop he worked at (at the Henry) as a part of his mission of giving free cups of coffee to people who worked at libraries. Another older man came up and told me he was a new student here and wanted a small tour of the technology around here. Sometimes things like these happen, which is nice.

And though this reflection isn’t really written in the spirit of Paulo Freire’s problem-posing education, which jostles authors and readers into action (into praxis) via reflection, we do nevertheless think that the passage above shows someone who is in the process of becoming. This burgeoning reveals itself in the student’s turning over in their mind the pros and cons of different types of student work as well as the varied interactions they had had with patrons.

We were pleased with this simple practice of having new student workers reflect in writing during their training. Their writing was helping us understand some of the questions and concerns that new people might have about working in the Research Commons, and it led us to get to know them in ways that differed from in-person interactions. We probably would have stuck with this enlightening–though limited–practice and never thought about expanding it if we hadn’t attended a presentation about High-Impact Practices (or HIPs). In this presentation, some of our colleagues at the University of Washington Tacoma Library laid out a new initiative in which they were systematically incorporating High-Impact Practices into their work and strategic plan (“UW Tacoma Library and High-Impact Educational Practices,” n.d.). According to George Kuh (2008), who coined the term “High-Impact Practices,”  HIPs are things that “have been widely tested and have been shown to be beneficial for college students from many backgrounds” (p. 9). They are practices like these:

  • First-year seminars and experiences
  • Common intellectual experiences
  • Learning communities
  • Writing-intensive courses
  • Collaborative assignments and projects
  • Undergraduate research
  • Diversity/global learning
  • Service learning, community-based learning
  • Internships
  • Capstone courses and projects (p. 9-11)

Our colleagues’ work with HIPs captured our imagination and made us wonder if we should experiment with different HIPs or perhaps further develop the reflective-writing practice we had started. We began to think that all the student workers we supervise–not just the new ones in training–could benefit from the reflections and that perhaps the writing should be even more frequent and focused. With regard to the High-Impact Practice of “intensive writing courses,” Kuh (2008) writes, “Students are encouraged to produce and revise various forms of writing for different audiences in different disciplines” (p. 10), so we wondered if we could push our students’ reflections to be even more varied and intense. For example, we imagined that, in terms of audience, it might be beneficial for student workers to think about not only writing for us, their supervisors, but also for each other. Further, we began to think of written reflections as “Super HIPs” because we envisioned ways of connecting them to learning communities and collaborative projects.

First Rounds of Revamped Reflection: “Why are we doing this again?”

In the Winter Quarter of 2017, we introduced a revised and revamped written-reflection practice in the Research Commons–one that all student workers would do every quarter of the academic year. We explained this change to everyone (albeit hurriedly–more on that later…), and as we had already been doing, we shared a Google Doc with each of the eight workers, letting them know that this writing would be shared with us and no one else without their consent. We made it clear that they would get compensated for the time they put into their reflections, which we did by covering the help desk for a half hour during shifts they were already slated to work. That way, we wouldn’t have to schedule separate times for them or ask them to do their reflections outside of their regular hours.

In this new program, we experimented with using a writing prompt. With the training reflections, we had simply asked students to write about how they were feeling or what they were thinking about, but with a new prompt, we decided to use an activity that Dr. Phyllis Moore, Chair of the Liberal Arts Department at the Kansas City Art Institute, created. When Moore provides orientation to new adjunct writing-composition instructors in what she calls “Comp Camp,” she often shares a creative-writing activity that brings about unusually vivid and reflective results. It was inspired by the painter and poet Joe Brainard, who is known for having written a number of books in which every line starts with “I remember…” For example, Brainard (2012) writes lines like these:

“I remember the first drawing I remember doing. It was of a bride with a very long train” (p. 5).

“I remember corrugated ribbon that you ran across the blade of a pair of scissors and it curled up” (p. 30).

“I remember a dream of meeting a man made out of a very soft yellow cheese and when I went to shake his hand I just pulled his whole arm off” (p. 134).

The first part of Phyllis Moore’s prompt is to share some of Brainard’s work with students. Next, the students get some time to quickly list some “I remember…” lines of their own, and in doing this, it’s important that they be as specific, detailed, and sensory-focused as possible. Once the students list their lines, they pick one of them and develop it into a few paragraphs that tell a story. Finally, they examine their stories and write a few lines about what they think they mean. We were grabbed by this activity because it reminded us of some of the Expressivist writing strategies that Peter Elbow argues for in Writing Without Teachers. For example, he says, “It’s at the beginning of things that you most need to get yourself to write a lot and fast. Beginnings are hardest: the beginning of a sentence, of a paragraph, of a section, of a stanza, of a whole piece” (1998, p. 26). With the “I remember…” activity, in its first part, it’s hard to get stuck because you know you’re starting every line with the same two words.

And writing about getting past beginnings and into selecting something to develop, Elbow says, “Sum up this main point, this incipient center of gravity in a sentence. Write it down. It’s got to stick its neck out, not just hedge or wonder” (p. 20). This advice from Elbow helped us to make sense of the second part of Phyllis Moore’s prompt, where students move from listing “I remember…” lines to picking one “center of gravity” to stick with and expand.

So we took this activity and covered the help desk for half-hour spells so that the student workers could do their reflections. We recommended that they focus on their work and memories in the Research Commons, but we also said that if they had trouble getting started they could write about any experiences they deemed appropriate. If they didn’t like the “I remember…” prompt, we gave them the option not to use it at all and to spend the time reflecting in writing however they wanted. The writing at the very beginning of this article is one example of how a student responded to the first part of the prompt. Here are some more “I remember…” examples from the Winter 2017 quarter, all of which are set in the Research Commons:

Student 1:

I remember craning my neck to see the slightest bit of snow through the windows in the corner.

I remember noticing my surroundings and how the RC [Research Commons] is kind of like a fish bowl. I wrote a poem about it.

Student 2:

I remember when I tried to replace a marker cartridge that was still full. Blue ink splattered everywhere, on the desk and on my hands. Luckily, it’s washable.

I remember the man who wears fake glasses and glitter on his face realizing he and I both had the same favorite Twilight Zone episode. He was so excited to recommend me more “monster” shows (and later campy shows) that he wrote down the names of 15 ones to watch, each on a different green scratch paper.

Student 3:

I remember the days when my best friend would stop by and bring me tea when I was working at the desk. The tea always had honey in it, and it would make me smile every time.

I remember when a girl asked me to close the door during a Black Lives Matter protest because it was distracting her from studying for her midterm. I said no.

When we first read these lines, we immediately felt happy that we had opened up the reflective-writing practice to everyone and that we planned to do it every quarter. These “I remember…” moments, with their crisp specificity and poetics, communicated important details and emotions to us that we had been missing. We also enjoyed the experience of being surprised by student workers whom we thought we knew and had surely taken for granted. In their writing, they revealed funny, intimate, and unexpected insights. Their gusto in responding to the writing prompt made us think of something bell hooks writes in Teaching to Transgress: “The first paradigm that shaped my pedagogy was the idea that the classroom should be an exciting place, never boring. And if boredom should prevail, then pedagogical strategies were needed that would intervene, alter, even disrupt the atmosphere” (1994, p. 7). Though we weren’t bored by our workplace, we still felt that this writing brought fresh excitement and frisson to it.

But the students didn’t stop with isolated lines. After completing the first part of the prompt, they continued by selecting one “I remember…” and developing it into a story. One student expanded one of the lines above into this true story:

During spring quarter of last year, there was a large Black Lives Matter protest that marched through the libraries. The protest exited the libraries through the Research Commons lobby, and they were armed with megaphones, signs and a lot of emotion. All of the students in the Research Commons stopped what they were doing, and quietly watched as the protestors marched by, except for this one girl. About five minutes into the protests exit, a girl came up to me, looked me dead in the face, and said, “Can you close the doors or something? This is too loud”. I calmly replied, “I’m sorry, but you have to understand why I can’t do that. It’s incredibly disrespectful, and the Research Commons is an open space, so closing the doors will make no difference”. The girl then looked disgusted, and promptly retorted back with, “Black lives matter? My midterm matters more”. That was the day that I realized that being in college doesn’t automatically make students immune to ignorance.

I remember this story because it was so appalling. This girl showed no remorse for her words, and had such a hatred in her heart for people who were trying to peacefully make a difference. I will never forget the look on her face, and I will never forget how her words made me feel. My encounter with her made me realize that college doesn’t purge a person of their ignorance and close mindedness. It made me realize that sometimes college can make a person more self-centered, whether it be the pressure of maintaining grades or making friends. This experience has made me more cautious in the way that I handle frustrated students.

When we first read the reflection above, we were excited and moved. We found it beautifully written and engaging to read, and it gave us a fresh insight into a place about which we thought we were experts. In addition, we were energized by the writing because we believed it to be an example of what Paulo Freire would call “problem-posing education”: the person who wrote it is clearly engaged in critical inquiry, not to mention “a constant unveiling of reality” (2012, p. 81). In the narrative, they are wrestling with what’s ethical and true.

To say that we were pleased by this reflection as well as the other seven is an understatement. To say that the students were as pleased as we were, unfortunately, would not be true at all. Instead, for the students, there was mostly confusion and some frustration about this new practice. Though some of them seemed intrigued by it, others were simply tolerant or at a loss. More than once, they asked us, “Why are we doing this again?” At the outset, we had hurriedly outlined what we were doing, but at this sticking point we decided we needed to do what we should have done in the first place: carefully detail what a written-reflection practice is and why we were committing to it. We also invited comments, feedback, and questions.

To address this disconnect, we waited for our next monthly group meeting, where we gave a presentation covering Critical Pedagogy, praxis, High-Impact Practices, and research about the value of written reflection. Because we had only cursorily explained why we were committing to a written-reflection practice, we now did so explicitly. We said we saw three key benefits: to improve communication between student workers and supervisors, to maintain a qualitative record of the Research Commons, and to use reflections–with writers’ permission only–in monthly group meetings as case studies and discussion starters. We spent the rest of the meeting in conversation with each other, and when we finished, everyone seemed far more accepting of the experiment.

After the meeting, we were able to settle into the written-reflection practice, and it seemed as though there was less puzzlement and more acceptance of what we were doing. A couple of quarters into this practice, we even conducted some assessment of it, and though the assessment had some weaknesses, it did nevertheless indicate that the students saw some value in reflecting while on the job.

We decided to make written reflection a permanent part of student work in the Research Commons, and we’ve stuck with it to the present day. Over time, we have tinkered with it. For example, we’ve tried out different prompts. The “I remember…” one proved to work well, but for the sake of variation, we tested out a modified version of Lynda Barry’s “Other People’s Mothers” exercise in her book What It Is (Barry, 2008, pp. 151-154). This was our attempt:

  1. Make a list of ten powerful/strange/specific/weird/beautiful objects or people from your time at the Research Commons.
  2. Pick one of those objects or people.
  3. Answer some of these questions about that object or person:
        1. Where are you in the Research Commons?
        2. What are you doing?
        3. Why are you there?
        4. What time of day or night is it?
        5. Who else is there?
        6. What season is it?
        7. What is in front of you? Behind you? Left? Right? Above?
  4. Beginning with “I am,” tell us what is happening. Write it like a story with details and dialogue.
  5. If you have the time, look at what you’ve written and write a line or two about what it means or how it’s significant to you.

This prompt did produce results, though some of the students said it was too complicated and that a half hour wasn’t enough time for them to work through all its parts.

Another prompt that we experimented with–one that was far more popular–came by way of a colleague, Anne Davis, who is a Collection Development Coordinator and Anthropology Librarian. Hearing about our written-reflection practice and amused by it, she said that a good activity might be to ask the students to periodically walk loops through the 15,000-square-foot space of the Research Commons and take notes about what they noticed. Then, later in the quarter, the students could choose one or more of their noticings and use them to catapult into reflection. This idea immediately appealed to us because it reminded us of the work of Eleanor Duckworth, a theorist and researcher at the Harvard Graduate School of Education who has challenged and inspired graduate students for decades. Duckworth did research with Jean Piaget in the 1960s and went on to develop the concept of “Critical Exploration,” a process in which children question, investigate, hypothesize, and reflect about problems. With Critical Exploration, the most important thing is that people learn not from being told but through close observation and inquiry (Duckworth, 1996, p. 171). By requesting that students take at least three strolls through the Research Commons and ponder the question “What do you notice?” our hope was that they’d begin to assemble new statements and stories of the place in which we work and not simply take it for how it’s defined on its web pages.

As mentioned, the students enjoyed this prompt, and one of them even chose to record more than the three noticings we requested. This is that student’s account:

170411 – Hushed phone calls and rapid typing in the morning light. The trees outside the eastern windows filtered the sun into a pleasant pale green color on the carpet floor.

170413 – Someone took the welcome whiteboard in the lobby w/o me noticing again…. How does this keep happening?

170416 – Someone straightened the paintings on the north wall.

170418 – Where do things belong? The whiteboards all used to have (arbitrary) spots that they belonged in and would be reset to.

170420 – The paintings are no longer straight on the wall…..

170423 – Sometimes I see people rolling long distances across the floor in their chairs when it would be far easier to just stand up.

170425 – The temperatures is normal in the research commons. Not too cold. Not too hot. I can wear a light sweater and be comfortable for 3 hours. Incredible.

170427 – There are 41 people in the RC at 9:45am.

170430 – N/A

170502 – Sometimes people just come in here to chill. Some of our regulars are just here for an hour to be on their phones. It’s nice.

170504 – I keep forgetting to mention … whoever opens thursday is not changing the signs. Also, today I answered a reference question about bees!

These noticings are rich and varied, showing the range of stories and experiences involved in one student’s work in the Research Commons. This expanded composition came from this student’s noticings:

Reading through the above, I notice that most of my observations in regards to the research commons as a space have to do with how people interact with it. People straightening the paintings. People moving the whiteboards. People just sitting in a chair for an hour on their phones. As we sit behind the desk, there are many blind spots that hide all of these tiny interactions. First, there’s the big yellow stairwell, the core of the building, that blocks any view of about half of the research commons. Then there’s presentation place, which only allows the slightest glimpse of what’s going on behind it’s tall whiteboard walls through the small arch to the west. The screen behind the desk, although transparent, is still just opaque enough to blot out important details (plus it’s behind the desk, and how often do we turn around in our chairs?). The view from the desk is really quite limited. There’s the entrance, the lobby, Green B, the stairway, the green chairs to the north, and the large whiteboard tables to the south. That’s it. If you move your head around a bit you can get glimpses into Green A too. In order to really see what’s going on in the RC, you have to walk (or roll in your chair, but the one at the desk is a little too tall for that).

I think this raises an interesting idea about what our role is at the desk. In one meeting a while ago, I remember discussing what service we fulfill at the desk. We’re a help desk. We provide information about the libraries, the RC, campus, and where the bathrooms are. We check out materials, we help patrons with technology, but we’re also there to make sure patrons are using the space appropriately. Walking around as a practice, doing so with the purpose of observing, highlights the blind spots at the desk and how much is always going on throughout the RC. I noticed people more, and I noticed their activities too, but most interestingly I noticed the traces of where people had been and what they’d been doing through the objects that were out of place. The RC is a dynamic space. In order to understand how people use it, we can look towards the space and its materials.

Writing like this is fascinating to us–not just because of the information it conveys and the channels of communication it opens. It interests us because it’s part of a tradition of thinking and learning that goes back over a hundred years to, at the very least, John Dewey. Earlier in this article, we cited Dewey’s line, “Active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it, and the further conclusions to which it tends, constitutes reflective thought” (p. 6). And such behavior is exactly what we see in this student worker’s writing. We see they are looking for new patterns–and questioning old ones–all with the desire to make meaning and define purpose.

Part II: Early Assessment

In this section we address the insights and challenges of a very early assessment that we conducted about written reflections, High-Impact Practices, and connections between student work and student lives. We began planning the assessment after several months of using written reflections and conducted it about six months into the program. We did the assessment to better understand the impact of Research Commons work on our student employees; although we took a holistic look at working in the Research Commons rather than focusing only on written reflection, the assessment gave us useful information about our written-reflection program.

This early assessment produced some insights, discussed below, that allowed us to make changes to the program. As a result, we focused our student employment experiments on written reflections, rather than continuing to try to offer a wide range of HIPs. We also made some changes in how we frame and scaffold written reflection with our students. Finally, we learned from the assessment’s limitations and gained clarity about the kind of further assessment we’d like to do. To more deeply understand written reflections and what they contribute to the Research Commons, we need to look at the program today and the reflections themselves–things we do in more depth in the final section of this article. We also need to reflect on how we have used pieces of reflective writing to communicate with each other, and what value we have come to take from those communications.

The Assessment

To assess our program of written reflections, we created an interview guide that covered a lot of ground related to student employment in the Research Commons. The eight questions were broad, soliciting student input on the bigger work-life-academics picture within which they did their work at the Research Commons. We asked questions about connections between students’ work and their personal and academic lives, HIPs, and general learning in addition to written reflection. In fact, only one of the eight questions focused solely on written reflection. Broad as the interview guide was, it gave us results that ultimately helped us focus our energy on written reflection going forward.

We pursued our university’s IRB process, but the IRB considered the project’s primary function to be assessment despite our stated plan to publish about it, and so they determined it was not subject to IRB regulation. We nonetheless followed appropriate ethical protocol for research with regard to participant consent and identity.

We have made assessment interview participants anonymous in this article, and anything that could identify them has been removed. When we spoke with them about participating in assessment interviews, we made it clear that though they were required to do reflections for their jobs, they were not required to do interviews. Our library assessment team conducted the six twenty- to thirty-minute semi-structured interviews and provided us with a written summary of the results that included a limited number of quotes.

We have never seen a full transcript of the interviews: all we have seen is the report written by our assessment team based on those transcripts. That report contained some interview participant quotes, pulled out by our assessment team, as well as an overall analysis of themes within the results (again conducted by our assessment team). All quotes and insights in this section come from that assessment report, and we have indicated whether we are quoting an interview participant (as quoted in the report) or the report itself. We proceeded in this way to protect the students’ privacy and allow them to feel comfortable sharing honestly in a context where they weren’t speaking to their supervisors.

Insights

The assessment report surfaced an important benefit of reflection along with a major contradiction: the assessment team indicated that “while many students felt that the reflections supported relationship-building among colleagues, they did not see this as directly useful to their work in the Research Commons” and “although some students questioned the value of writing and discussing reflections, they expressed interest in sharing stories about their professional experiences.” One student quoted in the report said, “Seeing what my peers reflected on in winter quarter was valuable. I now see the job through their eyes as well as mine.” This student was referring to our early experiments in using written reflection for communication: by the time we conducted the assessment, we had held several team meetings in which we looked at reflections together and talked about the different ways we can handle challenging situations. The reflection about a Black Lives Matter protest in the first section of this article was one such example.

To us, as supervisors, sharing stories and relationship-building among colleagues are important and do contribute to a better Research Commons. Does this mean that the students who participated in the assessment did not see relationship-building in the same way? Did they have preconceived opinions about what their supervisors might find useful? Because we made an ethical choice not to view the interview transcripts to protect participant privacy, we can’t know for certain. However, we could and did use the information we received to experiment with improvements.

We began to focus more on reflections as communication tools for students and supervisors, and as ways for students to share their stories, experiences, and impressions with each other. Something that we tried as an early experiment–sharing reflections in group meetings–has become a core part of the program. In the final section of this article, we provide some examples of how we are now using reflections in this way. We also continued to work to better scaffold and contextualize written reflection in the Research Commons, work that we began in response to student confusion early in the program.

Lessons Learned

While we were able to make some changes to our program of written reflection based on our assessment results, we were somewhat limited by the structure of the assessment itself. This discussion of our limitations points to future directions for assessment and future questions for investigation.

We limited the depth to which we could investigate written reflection by asking students a very broad range of questions that went far beyond written reflection. We asked multiple questions about HIPs and focused extensively on connecting Research Commons work to student employees’ personal lives and career goals. This prevented us from focusing on how practices like reflection affect work in the Research Commons and relationships among supervisors and workers. Now that our early assessment has emphasized relationship-building as an aspect of written reflection that students particularly value, we see this as an area for potential future in-depth assessment.

Additionally, the procedures we developed to protect student privacy were important and necessary but prevented us from seeing the raw data the assessment generated. When we received the report summarizing the interviews, some things confused us, and we struggled with context. We saw inconsistencies that both baffled and intrigued us, as discussed above in the “Insights” section. Future assessment will need to continue to navigate this tension between student privacy and access to data.

After our early assessment, we were able to make changes intended to help new student workers in the Research Commons make sense of written reflection as part of their paid work. We also made changes in response to the value students place on written reflection as a communication tool. And we came away with a clearer sense of the questions we still have about written reflection in student employment and of the types of future assessment of this program that could be conducted.

Part III: Where are we now?

Where are we today?

In these pages, we’ve given our reasoning for having student workers do written reflections on the job. We’ve shown some examples of their work, and we’ve gotten into some of the questions and conflicts we’ve encountered in introducing the practice as well as assessing it. As we write this article, though, what are things like? Where are we now? And what can other educators and library workers learn from the current landscape of our written reflection program? In the Research Commons today, written reflection gives us a tool for team communication, team-building, and personal expression, along with a record of our library space. These outcomes are related to our initial program goals listed at the beginning of this article, yet they run deeper because of the assessment, learning, and changes that we have undertaken over the last several years.

After conducting our assessment during the Spring Quarter of 2017, we realized we’d be hiring new student workers, and we saw this turnover as a chance to revise our job description and to make it clear to future workers that written reflection is a required and valued part of what we do. To our job description, in the “Duties” category, we added the line, “Periodically reflect in writing, sound recording, or drawing about work in the Research Commons.” Now, we make sure that we define what such a duty entails in interviews and ask prospective workers what they think about it or if they have any questions or concerns. We believe that this revision has not only taken away some confusion, but also encouraged those who like to reflect to self-select.

In the job duty we cite above, we made an additional alteration. Students have the option not just to reflect in writing. They can also do so by recording their voice or drawing something–like a portrait or a comic. We made this change after speaking with Kathleen Collins, who is a colleague of ours and the Children’s Literature and Sociology Librarian. When we described our written-reflection practice to her, she wondered about students who might prefer to express themselves in different ways. She helped us see that we were privileging one mode of communication over others, so we decided to offer other modes–or combinations of modes–in the practice. At this writing, no one has yet reflected by recording their voice or drawing or painting something, but this option is now available.

As our job descriptions and hiring practices have evolved to center written reflection, so have our team communication practices. The assessment, in which students identified communication and story-sharing among team members as a benefit, highlighted the importance of reflections as a communication tool. One of the consistent joys of this program today is discussing the stories of student workers in our monthly group meetings. Above, in Part II of this article, we quoted a student in our early assessment who said, “Seeing what my peers reflected on in winter quarter was valuable. I now see the job through their eyes as well as mine.” This statement is powerful to us, and it reflects how we now consistently use written reflections in group meetings and trainings. One memorable reflection that we talked about in a group meeting was written by a student who regularly opened the Research Commons in the morning:

As I came in to open, the library was so empty and so quiet. There wasn’t any life to it. While I was doing my routine walk through, I noticed that someone else was here with me. She was the janitor. I just smiled at her, and she smiled back. The library didn’t seem so empty anymore. At this point the library went from lacking life, to being full of life.

They wrote about how their relationship with that custodian evolved over the course of the quarter and how they got to know each other through early-morning conversations and shared work. They reflected on how their relationship with the custodian reminds them that we are all part of a team keeping the Research Commons clean, safe, and usable for our patrons: “while throughout the entire night the library seems so dead, [the custodian] and I bring it back to life in the mornings.” They concluded by saying that they and the custodian:

See each other every morning and we are super kind to each other. I think the main reason as to why this is so significant to me is because she reminds me a bit of my parents. I can also tell she is a hard worker and I value her work ethic. She makes sure our space is clean and she also is super sweet. I am happy that I got to meet [the custodian], and I am happy that we get to work together in the mornings to make sure that the research commons is presentable to the public.

Our discussion of this reflection reminded everyone in the meeting that we all have a role to play in keeping the Research Commons clean, orderly, and “presentable to the public,” and it led to a discussion of how important it is to respect our colleagues on the custodial staff by doing our part of the work rather than expecting them to do everything in the morning. Some Research Commons employees rarely or never open the Research Commons, so they seldom encounter the custodial staff. Discussing this reflection as a group gave us all a chance to think about the fact that, while we have professional custodial services at the University of Washington, we also have a responsibility to straighten up our space so that the custodians can do their work.

Another example of how our understanding of the written-reflection program has evolved relates to our use of the accumulated record of reflections. Because we employ undergraduate workers, we have high and regular staff turnover. Once our student workers leave, they often apply for jobs and internships, and as supervisors, we take our responsibilities as references seriously. With a catalog of a student’s reflections over the course of two years, we find that we are able to write much more effective and personal letters of recommendation. In addition, one student even mentioned that they brought up their experiences with written reflection in a job interview. They wrote this:

In the first two interviews I talked a bit about working in the libraries and how that has helped me be detail oriented and extra reliable. I mentioned those written reflections and how we collaborated as a team in creating an inviting space for students through various means.

The Intangible

All these concrete benefits aside, written reflection doesn’t always have to have an immediate, quantifiable benefit to be valuable to the Research Commons and to our team. We want to avoid entirely quantifying and commodifying the value of quarterly written reflections. While we talk about tangible benefits to the organization in this article–improved communication, team-building, bringing student voices into group meetings–those benefits are certainly no more important than the benefit of reflection for reflection’s sake.

Students are not just cogs in the Research Commons machine–they are individuals who bring their lived experiences, stories, and worldviews to this space. Respecting the intrinsic value of their reflections allows us to connect on a human level and to question the linear and quantifiable nature inherent in how we often talk about our work. A reflection in which a student writes about a problem with a patron is not inherently more valuable than a reflection in which a student writes about how the plant at our help desk makes them feel. For example, one student writes:

For some reason, whenever I am stuck on a problem or pondering a thought, I tend to stare at the plant that we have at the front desk. I’m not to sure why. I like the plant. I think it’s so cute and it really gives the research commons a sense of life further than the many patrons that use our services every day. Just like we take care of our patrons, we take care of our plant.

The writer of this reflection goes on to talk about how they find the plant “soothing” and concludes by saying, “In my personal opinion, I believe that at this point, the plant isn’t just a plant, it is also a part of the research commons staff.”

We never know how and when a reflection will be used or when it will shed light on an unexpected situation. A reflection about a patron can be easy to act on right away, but it might end up providing no more than surface-level insight. The reflection about the plant could make us all look at our work environment in a new way. Or it could simply be valuable as a piece of expressive writing that helped the writer think about their relationship with the Research Commons.

Thank you:

Our sincere thanks to our peer reviewers–Misty Anne Winzenried and Bethany Messersmith–as well as to Annie Pho and the Lead Pipe editors for your direct, thoughtful feedback about this final paper and earlier drafts. Through your generous comments, we came to see new perspectives and found connections we had missed. We would also like to thank all our Research Commons student employees for exploring written reflection along with us, as well as the assessment team at the UW Libraries for their extensive help in assessing the program.

References

About. (2018). Research Commons. Retrieved from http://www.lib.washington.edu/commons/about

Barry, L. (2008). What it is (1st ed.). Montréal: Drawn & Quarterly.

Bolton, G., Field, V., & Thompson, K. (2006). Writing works: A resource handbook for therapeutic writing workshops and activities (Writing for Therapy or Personal Development). London; Philadelphia: Jessica Kingsley.

Brainard, J., Padgett, R., & Auster, P. (2012). The collected writings of Joe Brainard. New York, NY: Library of America.

Cyboran, V. L. (2005). The influence of reflection on employee psychological empowerment: Report of an exploratory workplace field study. Performance Improvement Quarterly, 18(4), 37-49. https://alliance-primo.hosted.exlibrisgroup.com/primo-explore/fulldisplay?docid=TN_ericEJ846243&context=PC&vid=UW&lang=en_US

Dewey, J. (1910). How we think. Boston, MA: D.C. Heath & Co.

Duckworth, E. (1996). “The having of wonderful ideas” & other essays on teaching & learning (2nd ed.). New York: Teachers College Press, Teachers College, Columbia University.

Elbow, P. (1998). Writing without teachers (2nd ed., Oxford paperbacks). New York: Oxford University Press.

Freire, P., Ramos, M. B., & Macedo, D. P. (2012). Pedagogy of the oppressed (30th anniversary ed.). New York: Bloomsbury Academic.

Greenall, J., & Sen, B. (2016). Reflective practice in the library and information sector. Journal of Librarianship and Information Science, 48(2), 137-150.

hooks, b. (1994). Teaching to transgress: Education as the practice of freedom. New York: Routledge.

Kuh, G., Schneider, C. G., & Association of American Colleges and Universities. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. Washington, DC: Association of American Colleges and Universities.

Progoff, I. (1975). At a journal workshop: The basic text and guide for using the Intensive Journal. New York: Dialogue House Library.

Rigano, D., & Edwards, J. (1998). Incorporating Reflection into Work Practice: A Case Study. Management Learning, 29(4), 431-446.

Sykes, C., & Dean, B. (2013). A practice-based approach to student reflection in the workplace during a Work-Integrated Learning placement. Studies in Continuing Education, 35(2), 179-192. https://alliance-primo.hosted.exlibrisgroup.com/primo-explore/fulldisplay?docid=TN_tayfranc10.1080/0158037X.2012.736379&context=PC&vid=UW&lang=en_US

Tate, G., Rupiper Taggart, A., & Schick, K. (2001). A guide to composition pedagogies. New York: Oxford University Press.

UW Tacoma Library and High-Impact Educational Practices. (n.d.). University of Washington Tacoma Library. Retrieved from https://www.tacoma.uw.edu/library/uw-tacoma-library-high-impact-educational-practices

Wood Daudelin, M. (1996). Learning from experience through reflection. Organizational Dynamics, 24(3), 36-48. https://alliance-primo.hosted.exlibrisgroup.com/primo-explore/fulldisplay?docid=TN_sciversesciencedirect_elsevierS0090-2616(96)90004-2&context=PC&vid=UW&search_scope=all&tab=default_tab&lang=en_US

Yancey, K. (2016). A rhetoric of reflection. Logan: Utah State University Press.

Appendix: Assessment Questions

Interview questions:

  1. Tell us a bit about your experiences working at the Research Commons, for example:
    • How long have you worked here, what are some of your responsibilities?
    • What have you found most enjoyable/challenging in your work here?
    • What do you value most about working here?
  2. What have you learned through working at the Research Commons?
    • About library services?
    • About public/customer service?
    • Have you learned any skills (e.g., related to research processes, technology, etc.)?
  3. How has your work at the Research Commons had an impact on your academic work? Your life?
    • If interviewee is about to graduate: have you discussed the Research Commons in your job interviews, applications, etc.?
    • For any skills mentioned in #2 above: how have you applied them to other situations (in academic/personal life)?
    • Have you been able to bring anything you’ve learned in classes to bear on your work here in the Research Commons? This could be direct (subject knowledge to answer a student question) or more indirect (group work/collaborative skills that you’ve been able to apply as part of working in the Research Commons “student squad”).
  4. I understand that as part of your position here, you’ve been working on written reflections. Tell us a bit about what you did over the course of the year (e.g., how many did you do, what was the nature/content, did you talk about them with supervisor/peers, etc.).
    • Could you describe anything you got out of doing these written reflections?
      • If you don’t feel that you got anything out of them, why is that?
      • What would have made them more useful to you?
    • How did the written reflections (including the discussions of them with peers/supervisor) have an impact on your work at the Research Commons? Do you feel that they added value?
    • How did they affect your relationship with your colleagues? Supervisors? Users of the Research Commons?
    • What would you change about the reflections or the discussions about them?

Transition to talking about High Impact Practices:

Intensive, reflective writing can be an element of what is known as a “High Impact Practice.” The concept of High Impact Practices is becoming increasingly important in U.S. higher education. High Impact Practices are defined as “Transformative experiences that ‘require students to connect, reflect on, and integrate what they are learning from their classes with other life experiences’” (Markgraf, 2015, p. 770).

Within the field of librarianship, there has been some effort to expand the definition of high impact practices to include student employment experiences, as student employment can be one way of making connections between academic and extra-curricular activities (such as on-campus work).

It sounds like you’ve talked about what High Impact Practices are in the Research Commons over the past year, and there were a couple of examples of these practices that you may have participated in, such as the opportunity to present about study abroad experiences and the reflective writing on your experiences as a student employee.

    5. Beyond the reflective writing activities, have you participated in these kinds of activities/practices at the Research Commons (e.g. presenting about study abroad)?
      • If so, what did you get out of it/them?
    6. For all the activities (including the reflective writing and any other activities you’d define as “high impact”), do you think that participating in these activities has changed your view of what it means to be a student employee and/or the relationship between your work/academic life?
      • Why or why not (and, if so, how has your view changed)?
      • Has your work in the Research Commons contributed in any way to achieving your academic goals?
      • Are there experiences you wish you had while working in the Research Commons that would have been valuable in drawing connections between your academic learning and student work?
        • Is this something you’re interested in (i.e., applying learning from classes in employment)? Why or why not?
    7. I’d like to get your view on what “High Impact Practices” mean. If you had to describe the concept of “High Impact Practices” to a friend, how would you explain it?
    8. Anything else you’d like to share with me about reflective writing activities or experiences as a student employee in the Research Commons?

Legal Tech Student Group Session Brings Quantitative Methods to U.S. Caselaw / Harvard Library Innovation Lab

This September we hosted a Legal Tech Gumbo session dedicated to using quantitative methods to find new threads in U.S. caselaw. The Legal Tech Gumbo is a collaboration between the Harvard Law & Technology Society and Harvard Library Innovation Lab (LIL).

The session kicked off by introducing data made available as part of the Caselaw Access Project API, a channel to navigate 6.4 million cases dating back 360 years. How can we use that data to advance legal scholarship? In this session, Research Associate John Bowers shared how researchers can apply quantitative research methods to qualitative data sources, a theme that has shaped the past decade of research practices in the humanities.

This summer, Bowers shared a blog post outlining some of the themes he found in Caselaw Access Project data, focusing on the influence of judges active in the Illinois court system. Here, we had the chance to learn more about research based on this dataset and its supporting methodology. We applied these same practices to a new segment of data, viewing a century of Arkansas caselaw in ten-year intervals using data analytics and visualization to find themes in U.S. legal history. Want to explore the data we looked at in this session? Take a look at this interactive repository (or, if you prefer, check out this read-only version).
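
Curious what such a query looks like in practice? Here is a minimal sketch in Python (ours, not the session’s) of counting Arkansas cases decade by decade. The v1 endpoint, the jurisdiction slug "ark", and the decision_date_min, decision_date_max, and page_size parameters reflect our reading of the API documentation at api.case.law, so treat them as assumptions; metadata-only queries like this did not require an API key, though retrieving full case text does for most jurisdictions.

    import requests

    API = "https://api.case.law/v1/cases/"

    def count_cases(start_year, end_year):
        """Count Arkansas cases decided between start_year and end_year, inclusive."""
        resp = requests.get(API, params={
            "jurisdiction": "ark",                       # assumed slug for Arkansas
            "decision_date_min": f"{start_year}-01-01",
            "decision_date_max": f"{end_year}-12-31",
            "page_size": 1,                              # we only need the total count
        })
        resp.raise_for_status()
        return resp.json()["count"]                      # v1 list responses carry a "count"

    # A century of Arkansas caselaw in ten-year intervals, as in the session.
    for decade in range(1850, 1950, 10):
        print(decade, count_cases(decade, decade + 9))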

In this session, we learned new ways to find stories in U.S. caselaw. Have you used Caselaw Access Project data in your research? Tell us about it at info@case.law.

Kids Today Have No Idea / David Rosenthal

One of the downsides of getting old is that every so often something triggers the Grumpy Grandpa. You kids have no idea what it was like back in the day! You need to watch Rob Pike's video to learn where the hardware and software you take for granted came from!

I'm eight years to the day older than Rob, so I got to work with even earlier technology than he did. As far as I know, Rob never encountered the IBM1401, the PDP-7 with its 340 display, the Titan and its time-sharing system, 7-hole paper tape and Flexowriters, or the horrible Data General Nova mini-computer.  I never used an IBM System /360, but we did both work with CDC machines, and punch cards.

I think Rob and I started on PDP-11s at about the same time in 1975, me on RSX-11M at Imperial and Rob on Unix at Toronto. Rob was always much closer to the center of the Unix universe than I was in the UK, but the Unix history he recounts was mine too, from Version 6 on. Rob's talk is a must-watch video.

Positioning UX as a Library Service – Don’t miss this webinar! / LITA

Sign up now for Positioning UX as a Library Service

University of Toronto Libraries opened a User Research and Usability (UX) Lab in September 2017, the first space of its kind on campus. The UX Lab is open to students, staff, and faculty by appointment or during weekly drop-in hours.

In this 90-minute webinar, our presenter will discuss:

  • The rationale behind building a physical usability lab and why a physical space isn’t always needed (or recommended)
  • Experience with community building efforts
  • How to raise awareness of UX as a service to staff and the University community at large
  • The evolution of the lab’s services
  • Next steps

Presenter: Lisa Gayhart, User Experience Librarian, University of Toronto Libraries
Thursday, November 15, 2018, 1:00 – 2:30 pm Central Time

View details and Register here.

Check out this additional LITA Fall 2018 continuing education opportunity:

Accessibility for All: Screen Readers
Presenter: Kelsey Flynn
Offered: December 18, 2018

Questions or Comments?

For all other questions or comments related to LITA continuing education, contact us at (312) 280-4268 or lita@ala.org.

Fellow Reflection: Jasmine Clark / Digital Library Federation

This post was written by Jasmine Clark (@lellyjz), who received an ARL+DLF Fellowship to attend the 2018 Forum.

Jasmine Clark is a Resident Librarian at Temple University, doing rotations in digital scholarship, library administration, and digital library initiatives, and leading a project to recreate the Charles L. Blockson Afro-American Collection as a virtual reality learning module.

Her library work has given her experience in a variety of functional areas and departments, including metadata, archives, digital scholarship, and communications and development. She is interested in the ways information organizations can integrate inclusive practices into their services and management.

My time at DLF was a wonderful opportunity to meet people who were involved in a broad array of projects. I enjoyed the sessions I attended and was really glad to see so many discussions around ethical labor practices. The first I’ll discuss is the Contingent Laborers Discussion Coffee Break, where I learned about the upcoming National Forum on Labor Practices for Grant-Funded Digital Positions. In order to examine the unstable nature of the grant-funded positions that underlie much GLAM work, two forums will be held in 2019 that:

“… will bring together representatives and stakeholders from the three primary groups involved—workers, funders, and management (to include administrators). In our meetings, we intend to develop a more systematic understanding of the labor conditions created by grants and collaboratively develop benchmarks and recommendations toward the development and evaluation of proposed positions which funders and institutions may adopt.”

As someone interested in ethical management and hiring practices, I find it wonderful to see this conversation happening. I have been employed in a grant-funded position that was handled very well, even resulting in a permanent position further down the line, but that still had its drawbacks. Temporary work in small, poorly funded institutions always comes with additional challenges around requests for unpaid labor in areas outside of the position description and poor integration into the staff (limiting the amount of peer support available).

Another valuable point that came up in the session entitled Building Community and Solidarity: Disrupting Exploitative Labor Practices in Libraries and Archives was the practice of hiring for temporary positions that were needed on a permanent basis. Add the challenges of being part of an underrepresented group to the challenges of being unable to organize or find stable peer support, and this creates a very exploitable state. We Here, which I was introduced to by a fellow librarian of color prior to this session, was created to offer a space for peer support to information workers of color. Groups like this, and research done by groups like the Working Group on Labor in Digital Libraries, Archives, and Museums, are essential for the greater collective action necessary to pursue ethical, equitable employment practices.

If you are interested in contributing to the National Forum on Labor Practices for Grant-Funded Digital Positions, there is an online self-nomination form that will remain open until November 30th. We Here maintains a list of grants and fellowships, as well as conferences, that may be of interest to LIS workers of color. Their social media information is available at the bottom of their web page.

Want to know more about the DLF Forum Fellowship Program? Check out last year’s call for applications.

If you’d like to get involved with the scholarship committee for the 2019 Forum (October 13-16, 2019 in Tampa, FL), look for the Planning Committee sign-up form later this year. More information about 2019 fellowships will be posted in late spring.

The post Fellow Reflection: Jasmine Clark appeared first on DLF.

Creating Presentations with Beautiful.AI / ACRL TechConnect

Updated 2018-11-12 at 3:30PM with accessibility information.

Beautiful.AI is a new website that enables users to create dynamic presentations quickly and easily with “smart templates” and other design optimized features. So far the service is free with a paid pro tier coming soon. I first heard about Beautiful.AI in an advertisement on NPR and was immediately intrigued. The landscape of presentation software platforms has broadened in recent years to include websites like Prezi, Emaze, and an array of others beyond the tried and true PowerPoint. My preferred method of creating presentations for the past couple of years has been to customize the layouts available on Canva and download the completed PDFs for use in PowerPoint. I am also someone who enjoys tinkering with fonts and other design elements until I get a presentation just right, but I know that these steps can be time consuming and overwhelming for many people. With that in mind, I set out to put Beautiful.AI to the test by creating a short “prepare and share” presentation about my first experience at ALA’s Annual Conference this past June for an upcoming meeting.

A title slide created with Beautiful.AI.

Features

To help you get started, Beautiful.AI includes an introductory “Design Tips for Beautiful Slides” presentation. It is also fully customizable, so you can play around with all of the features and options as you explore, or you can click on “create new presentation” to start from scratch. You’ll then be prompted to choose a theme, and you can also choose a color palette. Once you start adding slides you can make use of Beautiful.AI’s template library. This is the foundation of the site’s usefulness because it helps alleviate guesswork about where to put content and that dreaded “staring at the blank slide” feeling. Each individual slide becomes a canvas as you create a presentation, similar to what is likely familiar in PowerPoint. In fact, all of the most popular PowerPoint features are available in Beautiful.AI; they’re just located in very different places. From the navigation at the left of the screen, users can adjust the colors and layout of each slide as well as add images, animation, and presenter notes. Options to add, duplicate, or delete a slide are available on the right of the screen. The organize feature also allows you to zoom out and see all of the slides in the presentation.

Beautiful.AI offers a built-in template to create a word cloud.

One of Beautiful.AI’s best features, and my personal favorite, is its built-in free stock image library. You can choose from pre-selected categories such as Data, Meeting, Nature, or Technology or search for other images. An import feature is also available, but providing the stock images is extremely useful if you don’t have your own photos at the ready. Using these images also ensures that no copyright restrictions are violated and helps add a professional polish to your presentation. The options to add an audio track and advance times to slides are also nice to have for creating presentations as tutorials or introductions to a topic. When you’re ready to present, you can do so directly from the browser or export to PDF or PowerPoint. Options to share with a link or embed with code are also available.

Usability

While intuitive design and overall usability won’t necessarily make or break the existence of a presentation software platform, each will play a role in influencing whether someone uses it more than once. For the most part, I found Beautiful.AI to be easy and fun to use. The interface is bold, yet simplistic, and on trend with current website design aesthetics. Still, users who are new to creating presentations online in a non-PowerPoint environment may find the Beautiful.AI interface to be confusing at first. Most features are consolidated within icons and require you to hover over them to reveal their function. Icons like the camera to represent “Add Image” are pretty obvious, but others such as Layout and Organize are less intuitive. Some of Beautiful.AI’s terminology may also not be as easily recognizable. For example, the use of the term “variations” was confusing to me at first, especially since it’s only an option for the title slide.

The absence of any drag-and-drop capability for text boxes is definitely a feature that’s missing for me. This is really where the automated design adaptability didn’t seem to work as well as I would’ve expected, given that it’s one of the company’s most prominent marketing statements. On the title slide of my presentation, capitalizing a letter in the title caused the text to move closer to the edge of the slide. In Canva, I could easily pull the text block over to the left a little or adjust the font size down by a few points. I really am a stickler for spacing in my presentations, and I would’ve expected this to be an element that the “Design AI” would pick up on. Each template also has different pre-set design elements, and it can be confusing when you choose one that includes a feature that you didn’t expect. Yet text sizes that are pre-set to fit the dimensions of each template do help not only with readability in the creation phase but with overall visibility for audiences. Again, this alleviates some of the guesswork that often happens in PowerPoint with not knowing exactly how large your text sizes will appear when projected onto larger screens.

A slide created using a basic template and stock photos available in Beautiful.AI.

One feature that does work really well is the export option. Exporting to PowerPoint creates a perfectly sized facsimile presentation, and being able to easily download a PDF is very useful for creating handouts or archiving a presentation later on. Both are nice to have as a backup for conferences where Internet access may be spotty, and it’s nice that Beautiful.AI understands the need for these options. Unfortunately, Beautiful.AI doesn’t address accessibility on its FAQ page, nor does it offer alternative text or other web accessibility features. Users will need to add their own slide titles and alt text in PowerPoint and Adobe Acrobat after exporting from Beautiful.AI to create an accessible presentation.
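
If you have many exported decks to remediate, some of this can be scripted. Below is a hypothetical sketch using the python-pptx library; the file names and placeholder descriptions are invented, and because python-pptx exposes no public alt-text property, the sketch writes the OOXML "descr" attribute (where PowerPoint stores alt text) through the library's internal element tree, which may change between versions.

    from pptx import Presentation
    from pptx.enum.shapes import MSO_SHAPE_TYPE

    prs = Presentation("my-first-ala.pptx")  # hypothetical deck exported from Beautiful.AI

    for n, slide in enumerate(prs.slides, start=1):
        for shape in slide.shapes:
            if shape.shape_type == MSO_SHAPE_TYPE.PICTURE:
                # Alt text lives in the "descr" attribute of the shape's cNvPr
                # element; _element and _nvXxPr are python-pptx internals, not
                # public API, so verify against your installed version.
                shape._element._nvXxPr.cNvPr.set("descr", f"Image on slide {n}")

    prs.save("my-first-ala-accessible.pptx")

In real use you would, of course, replace the generic string with a meaningful description of each image rather than a placeholder.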

Conclusion

Beautiful.AI challenged me to think in new ways about how best to deliver information in a visually engaging way. It’s a useful option for librarians and students who are looking for a presentation website that is fun to use, engaging, and on trend with current web design.

Click here to view the “My first ALA” presentation created with Beautiful.AI.

Jeanette Sewell is the Database and Metadata Management Coordinator at Fondren Library, Rice University.

Open in order to ensure healthy lives and promote well-being for all at all ages / Open Knowledge Foundation

The following blog post is an adaptation of a talk given at the OpenCon 2018 satellite event hosted at the United Nations Headquarters in New York City. Slides for the talk can be found here.

When I started medical school, I had no idea what Open Access was, what subscriptions were, and how they would affect my everyday life. Open Access is important to me because I have experienced firsthand, on a day-to-day basis, the frustration of not being able to keep up to date with recent discoveries and offer patients up-to-date evidence-based treatment.

For health professionals based in low- and middle-income countries, the quest of accessing research papers is extremely time-consuming and often unsuccessful. In countries where resources are scarce, hospitals and institutions don’t pay for journal subscriptions, and patients ultimately pay the price.

Last week while I was doing rounds with my mentor, we came across a patient who was in a critical state. The patient had been bitten by a snake and was treated with antivenom serum, but was now developing a severe acute allergic reaction to the treatment he had received. The patient was unstable, so we quickly googled different papers to make an informed treatment decision. Unfortunately, we hit a lot of paywalls. The search for the right paper was time consuming. If we did not make a quick decision, the patient could enter anaphylactic shock.

I remember my mentor going up and down the hospital looking for colleagues to ask for opinions; I remember us searching for papers and constantly hitting paywalls, not being able to do much to help. At the end of the day, the doctor made some calls, took a treatment decision, and the patient got better. I was able to find a good paper in SciELO, a Latin American repository, but that is because I know where to look. Most physicians don’t. If Open Access were the norm, we could have saved ourselves and the patient a lot of time. This is a normal day in our lives; this is what we have to go through every time we want to access medical research, and even though we do not want it to, it ends up affecting our patients.

This is my story, but I am not a one in a million case. I read stories just like mine from patients, doctors, and policy makers on a daily basis at the Open Access Button, where we build tools that help people access the research they need without the training I have received.

It is a common misconception that when research is published in a prestigious journal, to which most institutions in Europe and North America subscribe, it is easily accessible and therefore impactful. That is usually not the case.

Often, the very people we do medical research to help are the ones that end up being excluded from reading it.

Why does open matter at the scale of diseases?

A few years ago, when Ebola was declared a public health crisis, the whole world turned to West Africa. Conventional wisdom among public health authorities held that Ebola was a new phenomenon, never seen in West Africa before 2013. As it turned out, the conventional wisdom was wrong.

In 2015, the New York Times published a report stating that Liberia’s Ministry of Health had found a paper proving that Ebola had existed in the region before. In the future, the authors asserted, “Medical personnel in Liberian health centers should be aware of the possibility that they may come across active cases and thus be prepared to avoid nosocomial epidemics.” This paper was published in 1982, in an expensive subscription European journal.

Why did Liberians not have access to the research article that could have warned them about the outbreak? The paper was published in a European journal, and there were no Liberian co-authors on the study. The paper costs $45, the equivalent of 4 days of salary for a medical professional in Liberia. The average price of a health science journal is $2,021: the equivalent of 2.4 years of preschool education, 7 months of utilities, or 4 months of salary for a medical professional in Liberia.

Let’s think about the impact open could have had in this public health emergency. If the paper had been openly accessible, Liberians could easily have read it. They could have been warned, and who knows? Maybe they could even have caught the disease before it became a problem. They could have been equipped with the knowledge they needed to face the outbreak. They could have asked for funds and international help well before things went bad. Patients could have been informed and campaigns could have been created. These are only a few of the benefits of Open Access that we did not get during the Ebola outbreak.

What happens when open wins the race?

The Ebola outbreak is a good example of what happens when health professionals do not get access to research. However, sometimes Open Access wins and great things happen.

The Human Genome Project was a pioneer in encouraging access to scientific research data. Those involved in the project decided to release all the data publicly. The Human Genome data could be downloaded in its entirety, chromosome by chromosome, by anyone in the world.

The data sharing agreement required all parts of the human genome sequenced during the project to be placed in the public domain within 24 hours of completion. Scientists believed that these efforts would accelerate the production of the human genome sequence. This was a deeply unusual approach, as scientists at the time did not publish their data by default.

When a private company wanted to patent some of the sequences, everyone was worried, because this would mean that advances arising from the work, such as diagnostic tests and possibly even cures for certain inherited diseases, would be under its control. Luckily, the Human Genome Project was able to accelerate its work, and this time open won the race.

In 2003, the human genetic blueprint was completed. Since that day, because of Open Access to the research data, the Human Genome Project has generated $965 billion in economic output and $295 billion in personal income, and helped develop at least 30% more diagnostic tools for diseases (source). It facilitated the scientific understanding of the role of genes in specific diseases, such as cancer, and led to the development of a number of DNA screening tests that provide early identification of risk factors for diseases such as colon cancer and breast cancer.

The data sharing initiative of the Human Genome Project was agreed after a private company decided to patent the genes BRCA1 & 2, used in screening for breast and ovarian cancer. The company charged nearly $4,000 for a complete analysis of the two genes. About a decade after the discovery, the patents were ruled invalid. It was concluded that gene patents interfere with diagnosis and treatment, quality assurance, access to healthcare, and scientific innovation. Now that the patents have been invalidated, people can get tested for much less money.

The Human Genome Project proved that open can be the difference between a whole new field of medicine or private companies owning genes.

Call to action

We have learned how research locked behind a paywall could have warned us about Ebola 30 years before the crisis. In my work, open would save us crucial minutes while our patients suffer. Open Access has the power to accelerate advancement not only towards good health and well-being, but towards all the Sustainable Development Goals.

I have learned a lot about open because of excellent librarians, who have taken the time to train me and help me understand everything I’ve discussed above. I encourage everyone to become leaders and teachers in open practices within your local institutions.

Countries and organizations all over the world look up to the United Nations for leadership and guidance on what is right and what is practical. By being bold on open, the UN can inspire and even enable action towards open and accelerate progress on the SDGs. When inspiration doesn’t cut it, the UN and other organizations can use their power as funders to mandate open.

We can make progress without Open Access, and we have for a long time, but with open as a foundation things happen faster and more equitably.

Health inequality and access inequality exist today, but we have the power to change that. We need open to be central, and for that to happen, we need you to see it as foundational as well.

 

Written by Natalia Norori with contributions by Joseph McArthur, CC-BY 4.0.

 


Going Static Part 3 - Blog images for lazy people with writenow / Hugh Rundle

In the first post in this series I promised I'd write in a future post about automating social media images for blog posts, and that day has now arrived 🎉. What started off as a seemingly simple additional feature ultimately turned into an npm package for a CLI app - but let's not get ahead of ourselves, I'll come to that in a moment.

The problem

I gave some background on what I wanted to do with images in my first post about Eleventy. I wanted to reduce file size and improve loading times, and prioritise the real content of posts: the text. But I also recognise that an embedded link on social media is much more likely to attract attention and interest if it has a relevant image.

Blogging tools like WordPress and Ghost generally take the 'feature image' or, failing that, the first image in a post if there is one, and inject that into Open Graph and Twitter meta tags in the <head> of the page. We explored this process with other types of metadata like the title and subject/s of a post, in Going Static Part 1. Both Open Graph and Twitter Cards have meta tags for an image as well as a separate one for a description of the image (as opposed to the description of the article). The description is turned into alt text when a link is embedded in Twitter, Facebook, Mastodon or something else that uses Open Graph, enabling people browsing with screen readers to 'see' the embedded image. Taking this full circle a bit, when I made my last theme for Ghost and my WordPress theme for the newCardigan website I tried to pull the existing alt text from post feature images into the Open Graph and Twitter Card image description tags automatically, but I couldn't work out how to do it.

With Eleventy, I have a lot more control over how everything is put together. What I wanted to do was, conceptually, fairly straightforward:

  1. Use an API to programmatically retrieve a URL for a freely-licensed image for each post, based on relevant keywords
  2. Inject that image URL into the Open Graph and Twitter meta tags
  3. If possible, inject a description of the image into the Open Graph and Twitter image description tags.

Using the Unsplash API

Initially, being a librarian, I looked at Trove and the British Library, but I concluded that I wasn't really going to get what I wanted; my experience with APIs like these is that the images can be a bit hit and miss in terms of their suitability for blog post feature images. Indeed, the British Library image API is talked about all over the web, but none of the links seem to work anymore, and the British Library Labs project seems to have been reduced to three people, so under-resourced that they have to document their existence in a Google Doc. As I explored my options, I realised that I'd already been using the service I wanted, because Ghost integrates with Unsplash. I'm not really sure about their business model, so it's likely I'll have to find something else in a few years when the vulture capital runs out, but in the meantime Unsplash offers high quality photos, freely licensed (attribution appreciated but not required), and accessible via a well-documented, free API. It was exactly what I wanted.

I wrote a simple nodejs script using inquirer to build frontmatter for a post (asking for title, subtitle, tags, and summary text), then call the Unsplash API using a randomly selected word from the title as the query, and insert the URL as the 'image'. The Unsplash API has an incredibly convenient call for this purpose: you can call photos/random?query=puppies for a random photo of puppies, or photos/random by itself to just grab a completely random photo. This allows us to use the second option (without a query keyword) as a fallback if the first call comes back with nothing - which is entirely possible when using a random word as the query! The other cool thing about the Unsplash API is that it automatically returns a description of the photo, as well as three different image URLs depending on what size you want. Putting all of this together, I was able to make a script that will always return a photo - it just isn't guaranteed to always be relevant to the post. Here's the frontmatter for this post, for example, which was generated by my script:

---
layout: post
title: Going Static Part 3
subtitle: Blog images for lazy people with writenow
author: Hugh Rundle
tags: ['eleventy','coding','metadata','post']
summary: How I solved the problem of showing images in social media links without rendering them on my blog pages.
image:
photo: https://images.unsplash.com/photo-1462157948078-cbc0cd80e4d7?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=400&fit=max&ixid=eyJhcHBfaWQiOjM3NzgyfQ&s=6861c273b9a5ce72e0f7c34663549be6
description: person in green grass field
---

In this case, in an unintentionally meta example, the keyword used to retrieve an image was images.
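For the curious, here's a minimal sketch of the two-step lookup described above. It's an illustration rather than my actual script: ACCESS_KEY is a placeholder for an Unsplash API key, pickKeyword is a stand-in for however you choose a word from the title, and Node 18+'s built-in fetch is assumed. The photos/random endpoint, its optional query parameter, and the Client-ID authorization header are all documented by Unsplash.

// A minimal sketch of the two-step Unsplash lookup (Node 18+, global fetch).
// ACCESS_KEY is a placeholder; pickKeyword is an illustrative helper.
const ACCESS_KEY = 'YOUR_UNSPLASH_ACCESS_KEY';

// Pick a random longer word from the post title to use as the search query
const pickKeyword = (title) => {
  const words = title.split(/\s+/).filter(w => w.length > 3);
  return words[Math.floor(Math.random() * words.length)];
};

async function randomPhoto(query) {
  // photos/random returns one photo; the query parameter is optional
  const base = 'https://api.unsplash.com/photos/random';
  const url = query ? `${base}?query=${encodeURIComponent(query)}` : base;
  const res = await fetch(url, {
    headers: { Authorization: `Client-ID ${ACCESS_KEY}` }
  });
  return res.ok ? res.json() : null; // Unsplash returns an error when nothing matches
}

async function imageFrontmatter(title) {
  // Fall back to a completely random photo if the keyword finds nothing
  const photo = (await randomPhoto(pickKeyword(title))) || (await randomPhoto());
  return {
    photo: photo.urls.small,        // Unsplash returns several sizes per photo
    description: photo.description  // becomes the image alt text later on
  };
}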

Using images in social media cards

So now we have all this stuff in the frontmatter, what do we do with it? I showed you what happens with most of these values when we looked at what goes in the <head>. There were only a couple of meta tags missing from that post, and they're the ones we add now:

<meta name="twitter:image" property="og:image" content="{{image.photo}}">
<meta name="twitter:image:alt" property="og:image:alt" content="{{image.description}}">

Conveniently, because Twitter uses the standard html name attribute and Open Graph uses the rdf property attribute, we can deal with images for both of them in the same element. Effectively what we're doing is hotlinking to the image stored on Unsplash's server - which is actually what Unsplash prefers. Here's the resulting image embedded in the Twitter post when I tweeted a link to my last blog post:

<img class="u-block" data-src="https://pbs.twimg.com/card_img/1058936770010722304/XnBQx9y6?format=jpg&amp;name=144x144_2" alt="sliced strawberries on pan cake" src="https://pbs.twimg.com/card_img/1058936770010722304/XnBQx9y6?format=jpg&amp;name=144x144_2">

Twitter has changed the URL for the image, but you can see that they use the alt text provided in my meta tag. Problem solved!

writenow

Eleventy is really great for processing markdown and turning it into full html pages using templates, but it has no built-in way to actually publish those pages. That's absolutely fine, because it's not Eleventy's job to be a publishing platform. But it still left me with a problem: how to get my shiny new blog post drafts from my laptop onto my blog server. I'd heard of a tool called rsync, so I had a look at it and immediately wondered why I hadn't been using it for years. rsync synchronises the files between two different places (directories on the same machine, locations in the same network, or in this case a local directory and another directory on a remote machine). It does this using various fancy techniques to minimise the amount of data moving between the two locations: so it's really good for doing regular backups where you probably only want to change a few files, or for publishing just the latest changes to a website - which is what we do at work to synchronise the staging and production websites, and what I want to do in this case as well. The other convenient thing for me was that rsync comes standard with MacOS so I didn't even need to download it.

Initially I just used rsync by itself, but I now had two command-line tasks related to my blog (in addition to running eleventy pre-processing), and started to think about pulling them into the one tool. I was also using static-serve to check my processed posts before publishing them. Even so, if I was going to push things to my server, I was a bit worried I might accidentally type the wrong command and wipe everything. Maybe I needed a backup system 🤔. Eventually all of this turned into a command line utility I called writenow. Publishing it to npm yesterday was terrifying, but also exciting: it's not a hugely complicated application, but since it's so helpful for me, I'm sure it will be helpful for others. You can get started by running npm i writenow -g, or check out the code on GitHub.
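To give a flavour of what a tool like this automates, here's a sketch of a publish step that shells out to rsync from Node. This isn't writenow's actual code, and the destination host and paths are placeholders; the rsync flags themselves are standard.

// Illustrative only: a publish step wrapping rsync via child_process.
// The destination is a placeholder; _site/ is Eleventy's default output directory.
const { execFileSync } = require('child_process');

function publish() {
  execFileSync('rsync', [
    '-avz',      // archive mode, verbose, compress in transit
    '--delete',  // remove remote files that no longer exist locally
    '_site/',    // trailing slash: sync the directory's contents
    'user@example.com:/var/www/blog/'
  ], { stdio: 'inherit' }); // stream rsync's progress output to the terminal
}

publish();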

Converis and VIVO Working Together / DuraSpace News

From Ann Beynon, Clarivate Analytics  

At the third VIVO workshop in Germany, held September 17-18, 2018 in Hanover, WWU Münster, one of the largest universities in Germany, presented on how they connect Converis to VIVO. Converis is a CRIS system provided by Clarivate Analytics, used to assemble complete and up-to-date professional profiles covering all teaching, research, and service related activities. By linking Converis to VIVO, organizations can curate and link together research information within Converis and showcase it for external stakeholders via VIVO.

Benjamin Gross, a VIVO specialist at Clarivate Analytics, recently trained the Converis implementation staff in Karlsruhe, Germany on VIVO, enabling them to build VIVO sites for Converis customers across the globe who want to take advantage of this combination of tools.

Clarivate Analytics is a Certified DuraSpace Partner for VIVO services, helping organizations to implement and customize their VIVO.

The post Converis and VIVO Working Together appeared first on Duraspace.org.

“Against software development” / Jonathan Rochkind

Michael Arntzenius writes:

Beautiful code gets rewritten; ugly code survives.

Just so, generic code is replaced by its concrete instances, which are faster and (at first) easier to comprehend.

Just so, extensible code gets extended and shimmed and customized until under its own sheer weight it collapses, then replaced by a monolith that Just Works.

Just so, simple code grows, feature by creeping feature, layer by backward-compatible layer, until it is complicated.

So perishes the good, the beautiful, and the true.

In this world of local-optimum-seeking markets, aesthetics alone keep us from the hell of the Programmer-Archaeologist.

Code is limited primarily by our ability to manage complexity. Thus,

Software grows until it exceeds our capacity to understand it.

HackerNews discussion. 

Fellow Reflection: Erika Weir / Digital Library Federation

 

This post was written by Erika Weir, who received a DLF Students & New Professionals Fellowship to attend the 2018 Forum.

Erika Weir is an MSLIS candidate at the University of Illinois where she works as a graduate assistant with the Slavic Reference Service. Her interests include digital collections, archives, and special collections for distributed and marginalized communities.

In addition to providing reference services, she is currently working with the Slavic Reference Service on programs and digitization efforts occurring in the Baltic States: Latvia, Estonia, and Lithuania. She also interns at the Museum of the Grand Prairie, assisting with their cataloging and digitization efforts of the Doris K. Wylie Hoskins Archive for Cultural Diversity.

I was incredibly grateful to receive DLF support to attend this year’s forum. Overall, the experience was incredibly eye-opening to the breadth of work taking place in digital libraries. The ability to speak to so many different professionals about their day-to-day was incredible and left me with the thought that “I want to do it all!” However, besides my boundless excitement for all of the new thoughts, ideas, and projects I was exposed to, I was also struck by the many ways the DLF community is tackling the notion of “neutrality” in GLAM institutions. From Anasuya Sengupta’s opening plenary to the very last panels, the themes of decolonization and self-improvement as a community ran deep. Moreover, I was particularly inspired by the ways in which members of the DLF community are addressing the history of underrepresentation and misrepresentation of marginalized communities in GLAM institutions through research and design.

Slide from Sengupta’s keynote talk

Coming from a Sociology background, I was struck by how much the library community is drawing from the research practices that the social science community has adopted to work through its own checkered history of exploitation and ethical issues in research. Moving directly from the theory that characterizes many of the discussions about ethical library practices to actual research practices, the Developing a Framework for Measuring Reuse of Digital Objects project spoke a great deal about its use of grounded theory to guide very important research. Although not directly related to decolonial theories, I believe our choice of research methodologies directly influences research outcomes and therefore the extent to which traditional structures of power in our institutions are upheld. The Design for Diversity project, meanwhile, provided very real guidelines and case studies for the use of participatory research design in the development of digital library projects, which also directly challenges traditional ideas of whose voice has power in the research process. I also found that their project provides very helpful resources on how to transition that research into the actual design of projects.

In terms of design, there were so many projects that were acutely aware of and accessible to the communities represented in their collections. The Radio Haiti project from Duke University directly challenged my own idea of a “digital library” through its use of offline technologies to bring the digital collections physically to their community of users in Haiti without internet access. Even more impressive was their dedication to providing multilingual metadata, which is no small feat in terms of labor. Imagining what could exist even beyond our current structures for digital collections, Scout Calvert’s Oikos Ontology project challenged my ideas of what ontologies can accomplish. I also found it particularly significant that they were addressing genealogies, a research topic that is incredibly important for community archives, often ignored by academic institutions, and can be a source of trauma for many marginalized communities. These projects (along with many others) provided much-needed challenges to my academic perspective and inspiration as I think about my next steps entering the library profession.


Leaving DLF, I took with me an incredibly long list of bookmarks for resources, case studies, and articles to add to my reading list. For me, the experience was marked by the exposure I had to projects and current research taking up Anasuya Sengupta’s call to decolonize our collections. More than anything, these sources of inspiration have reinforced my respect for the value of assessment. For without self-assessment, we cannot improve, and there is obviously a great need for improvement in our institutions.

Want to know more about the DLF Forum Fellowship Program? Check out last year’s call for applications.

If you’d like to get involved with the scholarship committee for the 2019 Forum (October 13-16, 2019 in Tampa, FL), look for the Planning Committee sign-up form later this year. More information about 2019 fellowships will be posted in late spring.

The post Fellow Reflection: Erika Weir appeared first on DLF.

OCLC WSKey Management – Upcoming Changes / OCLC Dev Network

OCLC has made a series of improvements to WSKey Management, our API credentialing system, to give libraries more control over their WSKeys and to increase security.

Call for Proposals – Open Repositories 2019 / Samvera

The 14th International Conference on Open Repositories, OR2019, will be held June 10-13th, 2019 in Hamburg, Germany. The organizers are pleased to invite you to contribute to the program. This year’s conference theme is “All the User needs”.

Full details of the CfP can be found at http://or2019.net/cfp.  The deadline for submissions is 9 January 2019.

The post Call for Proposals – Open Repositories 2019 appeared first on Samvera.

Jobs in Information Technology: November 8, 2018 / LITA

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

University of Arkansas, Dean of Libraries, Fayetteville, AR

University of North Florida, Thomas G. Carpenter Library, Assistant Program Director of Library Systems, Jacksonville, FL

Marquette University Libraries, Discovery and Metadata Librarian, Milwaukee, WI

Memorial Sloan Kettering Cancer Center, Associate Librarian, Data Management Services, New York, NY

California State University Channel Islands, Digital Archivist Librarian – Tenure Track (Senior Assistant Librarian), Camarillo, CA

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

What's Happening To Storage? / David Rosenthal

My only post about storage since May was October's Betteridge's Law Violation, another critique of IDC's Digital Universe and its constant pushing of the idea that the demand for storage is insatiable. So it's time for an update on what is happening in the real world of storage media, instead of in IDC's Universe. Below the fold, some quick takes.

We last heard about roadmaps from my good friend Dr. Pangloss more than two years ago when I wrote:
But curmudgeons like me remember that back in 2013 the Doctor was rubbing his hands over statements like:
Seagate is projecting HAMR drives in 2014 and WD in 2016.
In 2016 we hear that:
Seagate plans to begin shipping HAMR HDDs next year.
Seagate 2008
So in three years HAMR has gone from next year to "next year". Not to mention the graph I keep pointing to from 2008 showing HAMR taking over in 2009 and BPM taking over in 2013. So actually HAMR has taken 8 years to go from next year to next year. And BPM has taken 8 years to go from 5 years out to 5 years out.
Now, to the good Doctor's joy, Chris Mellor reports Seagate is out with a new roadmap:
Seagate's 2018 HAMR roadmap
Seagate has set a course to deliver a 48TB disk drive in 2023 using its HAMR (heat-assisted magnetic recording) technology, doubling areal density every 30 months, meaning 100TB could be possible by 2025/26. ... Seagate will introduce its first HAMR drives in 2020. ... The chart ... shows Seagate started developing its HAMR tech in 2016 and that a 20TB+ drive will be rolled out in 2020.
"Seagate started developing its HAMR tech in 2016" is just wrong. In March 2012 the same Chris Mellor reported:
Seagate has demonstrated heat-assisted magnetic recording technology with 1 trillion bits per square inch
So the real-time slip of HAMR technology continues. It has now taken ten years to go from "next year" to "the year after next".

In NAND so it begins: Micron mounts head-on attack against 10K disks, Chris Mellor reports that:
Micron has started shipping its 2.5-inch 5210 ION flash drive, positioning it as a 10,000rpm disk drive replacement offering much better read access performance for more or less the same price. ... The low cost (low for flash, at least) comes from its use of 64-layer 3D NAND in QLC (4bits/cell) form, and it has 1.92TB, 3.84TB, and 7.68TB capacity points. ... It is heavily optimised for reads over writes, with up to 90,000/4,500 random read/write IOPS; it does random writes at just 5 per cent of the random read speed.
It isn't clear how much the drive will cost in volume. It certainly performs better on read-intensive workloads, but QLC has much worse write endurance even than TLC, so these drives cannot replace 10K 2.5" hard drives if writes are even moderately frequent. And they definitely won't be cheap enough to threaten 3.5" nearline drives for bulk storage.

If the demand for storage were truly insatiable and flash were destined to kill off hard drives, we wouldn't be seeing headlines from Tom's Hardware like this SSD Prices Could Drop Over 50 Percent In 2019 - Report:
According to a DigiTimes report citing "industry sources" this week, NAND flash prices are expected to continue to drop in 2019 after already seeing a 50 percent drop this year. Earlier reports said that SSD prices could fall to as low as $0.08 per GB in 2019.

The DigiTimes report noted that the continued drop in prices seems to be primarily due to SSD manufacturers expanding their production capacity to increase profitability, as well as the adoption of 96-layer NAND technology.
and from Chris Mellor at The Register with Flash price-drop pops Western Digital's wallet: Surprise revenue fall with worse to come:
Revenues in the first quarter of fiscal 2019 ended 28 September, were 3 per cent down year-on-year to $5bn. Profits were $511m, down 24 per cent. Operating cash flow was $705m, compared to $1.133bn last year, a 38 per cent drop.

What fouled up was flash. CEO Stephen Milligan said strength in capacity enterprise, surveillance hard drives and embedded flash products, which each grew revenue more than 30 per cent on the year, "was offset by ongoing declines in flash pricing".
...
[Milligan] added:
This softening demand, in combination with increased flash supply, has led to a market imbalance resulting in a deteriorating near-term flash pricing environment.

We are making an immediate reduction to wafer starts and delaying deployment of capital equipment. These actions will reduce our wafer output beginning in fiscal Q3 2019 [April 2019].
Clearly, the increase in the supply of flash bits resulting from new fabs coming on-line and all the major suppliers transitioning to 96-layer 3D has coincided with a significant reduction in the rate of growth of demand for flash bits. To balance supply and demand, the price of flash bits has fallen, reducing the return on the investment in the new fabs and the 96-layer technology.

This less-than-insatiable demand for bits of storage isn't confined to flash. WD's Milligan reported that:
Client compute disk drives also performed poorly.

HDD revenues in the quarter were $2.49bn while flash revenues were $2.53bn. A year ago the numbers were $2.6bn and $2.57bn respectively, so both disk and flash revenues fell on the yearly compare.

WD shipped 34.1 million disk drives in the quarter, compared to 42.2 million a year ago. Client compute drives dropped to 16.3 million from 20.9 million. Non-compute (retail and consumer electronics) units fell to 11.2 million from 15.2 million last year, while data centre units were 6.1 million last year and rose to 6.6 million – a bright spot.
Many of the markets that consume flash are suffering decreasing unit shipments.
The problem isn't just declining unit shipments, it is also the slower increase in GB/unit. As I've pointed out, for example in Betteridge's Law Violation, data isn't uniformly valuable; some data is more worth storing than other data, and it gets priority for storage. This implies decreasing returns to increasing storage per unit. Who really needs half a Terabyte in their iPhone XS? Is the extra 448GB over the base model really worth $350? (That's $0.78/GB, compared with DigiTimes' prediction of $0.08/GB next year.) Whether the device is a smartphone, a tablet, or a PC, the market has saturated in both device and GB/device terms.

The one market with increasing demand is "the cloud". It needs some flash for speed, but the bulk of its demand for GB is for hard disk. Here are the hard disk numbers:
  • WD shipped 19% fewer drives in Q3 2018 as compared to Q3 2017, but 8% more data center drives. Unit shipments in millions:

    Category          Q3 2017   Q3 2018   Change
    Data center           6.1       6.6      +8%
    Client compute       20.9      16.3     -22%
    Non-compute          15.2      11.2     -26%
  • Seagate has stopped reporting its declining unit shipment numbers. It now reports only Exabyte shipments:

    Category          Sub-category            Q1 fy'18   Q1 fy'19   Change
    Enterprise        Mission-critical             2.1        3.0     +43%
    Enterprise        Nearline                    25.1       42.5     +69%
    Edge non-Compute  Consumer Electronics        13.5       23.4     +73%
    Edge non-Compute  Consumer                    11.1       11.2      +1%
    Edge Compute      Desktop + Notebook          18.6       18.7    +0.5%

    Note the dominance of nearline drives in Exabyte terms. Given the increase in drive capacity, consumer, desktop, and notebook unit shipments must have decreased dramatically.
  • Toshiba shipped 23.4M units:

    Category            Q3 2017      Q3 2018    Change
    Nearline/hi-cap     1,205,000    1,299,000     +8%
    2.5" enterprise     1,290,000    1,595,000    +24%
    2.5" mobile/CE     16,600,000   15,720,000     -5%
    3.5" desktop/CE     5,605,000    4,820,000    -14%
    Source
    The graph in the source shows that, over time, nearline drives have come to comprise the whole of Toshiba’s growth in the enterprise market.

Editorial Edit / Code4Lib Journal

A few words about our editors. A farewell to one editor. A solicitation for new editors.

EnviroPi: Taking a DIY Internet-of-Things approach to an environmental monitoring system / Code4Lib Journal

Monitoring environmental conditions in cultural heritage organizations is vitally important to ensure effective preservation of collections. Environmental monitoring systems may range from stand-alone data-loggers to more complex networked systems and can collect a variety of sensor data such as temperature, humidity, light, or air quality measures. However, such commercial systems are often costly and limited in customizability and extensibility. This article describes a do-it-yourself network of Bluetooth Low Energy-based wireless sensors, which seeks to manage earlier-identified trade-offs in cost, required technical skill, and maintainability, based on the Raspberry Pi™ single-board computer and a series of microcontroller boards. This builds on the author’s prior work exploring the construction of a low-cost Raspberry-Pi™-based datalogger, iterating upon reviewer and practitioners’ feedback to implement and reflect upon suggested improvements.

Improving Enterprise Content Findability through Strategic Intervention / Code4Lib Journal

This paper highlights work that information specialists within the Jet Propulsion Laboratory have done to strategically intervene in the creation and maintenance of JPL’s intranet. Three key interventions are discussed which best highlight how work in enterprise “knowledge curation” fits into emergent knowledge management roles for institutional librarians (Lustigman, 2015). These three interventions are: 1) guided document creation, which includes the development of wiki portals and standard editing processes for consistent knowledge capture, 2) search curation, which includes manual and organic enterprise search relevancy improvements, and 3) index as intervention, which describes how metadata mapping and information modeling are used to improve access to content for both local and enterprise-wide applications.

Wayfinding Serendipity: The BKFNDr Mobile App / Code4Lib Journal

Librarians and staff at St. John’s University Libraries created BKFNDr, a beacon-enabled mobile wayfinding app designed to help students locate print materials on the shelves at two campus libraries. Concept development, technical development, evaluation and UX implications, and financial considerations are presented.

Automated Playlist Continuation with Apache PredictionIO / Code4Lib Journal

The Minrva project team, a software development research group based at the University of Illinois Library, developed a data-focused recommender system to participate in the creative track of the 2018 ACM RecSys Challenge, which focused on music recommendation. We describe here the large-scale data processing the Minrva team researched and developed for foundational reconciliation of the Million Playlist Dataset using external authority data on the web (e.g. VIAF, WikiData). The secondary focus of the research was evaluating and adapting the processing tools that support data reconciliation. This paper reports on the playlist enrichment process, indexing, and subsequent recommendation model developed for the music recommendation challenge.

Piloting a Homegrown Streaming Service with IaaS / Code4Lib Journal

Bridgewater State University’s Maxwell Library has offered streaming film & video as a service in some form since 2008. Since 2014 this has been done through the use of the Infrastructure as a Service (IaaS) cloud provider Amazon Web Services (AWS) and their CloudFront content delivery network (CDN). This has provided a novel and low-cost alternative to various subscription and hosted platforms. However, with CloudFront’s reliance on external media players and Flash via Adobe’s Real-Time Messaging Protocol (RTMP) to stream content, the upcoming end of support for Flash in 2020, and other security and accessibility concerns of library staff, an alternative method of delivery for this extremely popular and successful service was sought in summer and fall of 2017. With budget limitations, a flawed video streaming service currently in place, and University IT’s desire to move much of its infrastructure to the IaaS and cloud provider, Microsoft Azure, a pilot of a secure, multi-bitrate HTML5 streaming service via Azure Media Services was conducted. This article describes the background of Maxwell Library’s streaming service, the current state of streaming services and technologies, Azure IaaS configuration, implementation, and findings.

Preparing Existing Metadata for Repository Batch Import: A Recipe for a Fickle Food / Code4Lib Journal

In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this ‘low-maintenance’ method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user’s ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
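For readers unfamiliar with the service behind this recipe: Habanero is a Python wrapper around the public CrossRef REST API, and the underlying lookup is simple. A minimal sketch of that lookup (in JavaScript for illustration, with a placeholder DOI and simplified field handling) might look like this:

// Minimal sketch of the CrossRef REST lookup that Habanero wraps (Node 18+).
// The DOI below is a placeholder, not a real reference.
async function crossrefMetadata(doi) {
  const res = await fetch(`https://api.crossref.org/works/${encodeURIComponent(doi)}`);
  if (!res.ok) return null; // unknown DOI
  const { message } = await res.json();
  return {
    title: (message.title || [])[0], // CrossRef returns titles as arrays
    type: message.type,
    issued: message.issued           // publication date as date-parts
  };
}

// Usage: crossrefMetadata('10.1000/xyz123').then(console.log);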

OneButton: A Link Resolving Application to Guide Users to Optimal Fulfillment Options / Code4Lib Journal

Like many consortia, institutional members of the Private Academic Library Network of Indiana (PALNI) provide multiple fulfillment options to obtain requested items for their users. Users can place on shelf holds on items, or they can request material that isn’t held by their institution through a group circulation resource sharing network (dubbed PALShare) or through traditional InterLibrary Loan (ILL) (through WorldShare ILL or ILLiad). All of these options can be confusing to users who may not understand the best or fastest way to get access to needed materials. A PHP application, OneButton, was developed that replaces multiple fulfillment buttons in institutional discovery interfaces with a single OpenURL link. OneButton looks up holdings and availability at a user’s home institution and across the consortium and routes the user to the optimal fulfillment option for them. If an item is held by and available at their institution, the user can be shown a stack map to help guide them to the item on the shelf; if an item is held by and available at the consortium, the user is routed to a group circulation request form; otherwise, the user is routed to an ILL request form. All routing and processing are handled by the OneButton application – the user doesn’t need to think about what the ‘best’ fulfillment option is. This article will discuss the experiences of one institution using OneButton in production since fall 2017, analytics data gathered, and how other institutions can adopt the application (freely available on GitHub: https://github.com/PALNI/onebutton).

Analyzing EZproxy SPU Logs Using Python Data Analysis Tools / Code4Lib Journal

Even with the assortment of free and ready-made tools for analyzing EZproxy log files, it can be difficult to get useful, meaningful data from them. Using the Python programming language with its collection of modules created specifically for data analysis can help with this task, and ultimately result in better and more useful data customized to the needs of the library using it. This article describes how Our Lady of the Lake University used Python to analyze its EZproxy log files to get more meaningful data, including a walk-through of the code needed to accomplish this task.

Alma Enumerator: Automating repetitive cataloging tasks with Python / Code4Lib Journal

In June 2016, the Wartburg College library migrated to a new integrated library system, Alma. In the process, we lost the enumeration and chronology data for roughly 79,000 print serial item records. Re-entering all this data by hand seemed an unthinkable task. Fortunately, the information was recorded as free text in each item’s description field. By using Python, Alma’s API and much trial and error, the Wartburg College library was able to parse the serial item descriptions into enumeration and chronology data that was uploaded back into Alma. This paper discusses the design and feasibility considerations addressed in trying to solve this problem, the complications encountered during development, and the highlights and shortcomings of the collection of Python scripts that became Alma Enumerator.

Using Static Site Generators for Scholarly Publications and Open Educational Resources / Code4Lib Journal

Libraries that publish scholarly journals, conference proceedings, or open educational resources can use static site generators in their digital publishing workflows. Northwestern University Libraries is using Jekyll and Bookdown, two open source static site generators, for its digital publishing service. This article discusses motivations for experimenting with static site generators and walks through the process for using these technologies for two publications.

Analysis of 2018 International Linked Data Survey for Implementers / Code4Lib Journal

OCLC Research conducted an International Linked Data Survey for Implementers in 2014 and 2015. Curious about what might have changed since the last survey, and eager to learn about new projects or services that format metadata as linked data or make subsequent uses of it, OCLC Research repeated the survey between 17 April and 25 May 2018.

A total of 143 institutions in 23 countries responded to one or more of the surveys. This analysis covers the 104 linked data projects or services described by the 81 institutions which responded to the 2018 survey—those that publish linked data, consume linked data, or both. This article provides an overview of the linked data projects or services institutions have implemented or are implementing; what data they publish and consume; the reasons given for implementing linked data and the barriers encountered; and some advice given by respondents to those considering implementing a linked data project or service. Differences with previous survey responses are noted, but as the majority of linked data projects and services described are either not yet in production or were implemented within the last two years, these differences may reflect new trends rather than changes in implementations.

DuraSpace Appoints Executive Director / DuraSpace News

DuraSpace is pleased to announce Erin Tripp has been appointed the Executive Director of DuraSpace, effective November 1, 2018. Ms. Tripp has been serving as the Interim CEO for the organization since June 1, 2018.

As Executive Director, Ms. Tripp will focus on broadening engagement with the international user communities of DSpace, Fedora, VIVO, and DuraCloud. She will be working on growing and developing new strategic partnerships and improving the sustainability of DuraSpace’s products and services.

“DuraSpace is looking ahead to the future and we feel Erin is the right person to lead us there,” said Tyler Walters, Board President of DuraSpace. “The DuraSpace team and Board of Directors is enthusiastic about working with Erin and collaborating to strengthen the strategic direction for DuraSpace in our rapidly evolving landscape.”

Since joining DuraSpace in 2017 as the Business Development Manager, Ms. Tripp’s work to date has been focused on building granting opportunities, creating new models for supporting small or burgeoning open source projects, and  fostering collaboration with open source service providers.

“We have an incredible team of staff who will continue to deliver on our mission, engage our communities, and promote openness,” Ms. Tripp said. “I’m looking forward to engaging with our members to meet the needs of today and tomorrow. We have so much to look forward to as a community.”

All members will have an opportunity to meet and talk with Ms. Tripp during virtual office hours on Wednesday, November 14 between 9 am and 4 pm EDT or at the upcoming CNI fall meeting in Washington, DC on December 10-11.  Please reach out to Ms. Tripp to say hello and book a time to talk by emailing her at etripp@duraspace.org.

The DuraSpace team and Board of Directors are grateful to our members for their continued support throughout this transition process. As a result of your ongoing interest, engagement and participation, DuraSpace is well-positioned to continue to serve our communities by providing leadership and innovation in the development and deployment of open technologies and managed services that promote enduring access to the world’s digital heritage.

The post DuraSpace Appoints Executive Director appeared first on Duraspace.org.

Making PIEs Is Hard / David Rosenthal

In The Four Most Expensive Words In The English Language I wrote:
Since the key property of a cryptocurrency-based storage service is a lack of trust in the storage providers, Proofs of Space and Time are required. As Bram Cohen has pointed out, this is an extraordinarily difficult problem at the very frontier of research.
The post argues that the economics of decentralized storage services aren't viable, so the difficulty of Proofs of Space and Time isn't that important. All the same, this area of research is fascinating. Now, in One File for the Price of Three: Catching Cheating Servers in Decentralized Storage Networks Ethan Cecchetti, Ian Miers, and Ari Juels have pushed the frontier further out by inventing PIEs. Below the fold, some details.

Here's the problem the Cornell team is working on:
Decentralized Storage Networks (DSNs) such as Filecoin want to store your files on strangers' spare disk space. To do it properly, you need to store the file in multiple places since you don't trust any individual stranger's computer. But how do you differentiate between three honest servers with one copy each and three cheating servers with one copy total? Anything you ask one server about the file it can get from its collaborator.
This is most of the problem of Proofs of Space. Their solution is:
the world's first provably secure, practical Public Incompressible Encoding (PIE). A PIE lets you encode a file in multiple different ways such that anyone can decode it, but nobody can efficiently use one encoding to answer questions about another. Using known techniques, you can ask simple questions to see if someone is storing an encoded replica, giving you a Proof of Replication (PoRep).
In other words, instead of storing N identical replicas of the original file, N different replicas are stored. From any one of the different replicas, the original file can be recovered. But an auditor can ask questions that are specific to each individual replica.

Anyone can already do this for private data:
Sia, Storj, MaidSafe, etc. do this by encrypting your pictures three times and distributing those encrypted copies. This strategy solves the problem perfectly if you're the one encrypting your photos and someone else is storing them.
But not for public data. The Cornell team's goal is to do it for public data such as blockchains:
What happens if it's a public good like blockchain state? Blockchains are getting huge—Ethereum is over 75 GB and growing fast—and it's really expensive to have everyone store everything. But who encrypts the blockchain state? Everyone needs to read it. And worse, we don't trust anyone to encrypt it properly. We need something totally different that anyone can check and anyone can decode.
The process of encrypting each of the "copies" needs to be slow, and previous proposals about how to make it slow have been broken. The Cornell team's proposal describes their computation as:
a directed acyclic graph (DAG). Each vertex represents an operation with two steps: first we derive a key using KDF [Key Derivation Function - think hash], and then we encrypt some data using that key. Edges are either data edges, representing the inputs and outputs of the encryption, or key edges, representing the data used by KDF to derive the key.
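To make the shape of that vertex operation concrete, here is a toy sketch of the KDF-then-encrypt step. It is illustrative only, not the paper's construction: the choice of scrypt as a slow KDF and AES-256-CTR as the cipher is my assumption for the example.

// Toy sketch of one DAG vertex: derive a key from the key-edge inputs
// with a deliberately slow KDF, then encrypt the data-edge input.
// scrypt and AES-256-CTR are illustrative stand-ins, not the paper's choices.
const crypto = require('crypto');

function vertex(keyEdgeInputs, dataEdgeInput) {
  // KDF step: mix the key-edge inputs into a key, slowly
  const key = crypto.scryptSync(
    Buffer.concat(keyEdgeInputs), // data arriving along key edges
    'pie-demo-salt',              // a fixed salt is fine for a toy example
    32                            // 256-bit key for AES-256
  );
  // Encrypt step: anyone who can recompute the key edges can decode,
  // which is what keeps the encoding publicly decodable
  const iv = Buffer.alloc(16);    // deterministic IV, again for public decoding
  const cipher = crypto.createCipheriv('aes-256-ctr', key, iv);
  return Buffer.concat([cipher.update(dataEdgeInput), cipher.final()]);
}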
In order to ensure that a copy is intact, the computation must be:
A depth-robust graph (DRG) is a DAG with ... this property: even if a (potentially sizable) fraction of the vertices are removed, it retains a long path.
Thus, if the computation is a DRG, not storing some of the data doesn't avoid the worst case. But that's not enough. What's needed is that not storing all of the data doesn't avoid the best case. The Cornell team ensure this by layering alternate DRGs with:
a second type of graph called butterfly graph, which has the property that there's a path from every input node to every output node. This graph, with its dependency of every output on every input, helps ensure, roughly speaking, that Mallory must compute every input node, and therefore cannot avoid computing along the long sequential path in the depth-robust graph.
The result is what they call a Dagwood sandwich. Now they have a Public Incompressible Encoding, they propose to use it to build a Decentralized Storage Network:
Pretty much any PIE-based DSN architecture involves two steps for a given file:
  1. Prove once that the file is encoded correctly.
  2. Audit by verifying continuously that the file is intact.
Let's start with (1). Storage providers in the DSN must prove to somebody—the file owner or the network—that an encoding G of a file F is a correct PIE. Given an authenticated version of F, such as a hash stored in a trusted place, it's easy to verify that a PIE is correct.
I hope to return to the issues raised by "Given an authenticated version of F" in a later post. As with LOCKSS:
As for (2), it's not much help for G to be correct if it goes missing. It's critical to continuously check that storage providers are still storing G and haven't thrown data away. There are a number of efficient, well established techniques for this purpose.
So far, I may be a bit hazy on the details but I think I understand the big picture. They then propose:
A blockchain can then perform the auditing. This could be an existing blockchain like Ethereum or, as with schemes like Sia, a new one whose consensus algorithm is independent of its storage goals. A particularly intriguing option, though, is to create a new blockchain where audit = mining.
The idea of audit = mining is where they lose me. Here follows the explanation for my confusion.

In conventional blockchains such as Bitcoin's, a miner (or more likely a mining pool) decides which transactions are in the block it will mine. Among them will be one coinbase transaction:
A special kind of transaction, called a coinbase transaction, has no inputs. It is created by miners, and there is one coinbase transaction per block. Because each block comes with a reward of newly created Bitcoins (e.g. 50 BTC for the first 210,000 blocks), the first transaction of a block is, with few exceptions, the transaction that grants those coins to their recipient (the miner). In addition to the newly created Bitcoins, the coinbase transaction is also used for assigning the recipient of any transaction fees that were paid within the other transactions being included in the same block.
If the block the miner created wins, the coinbase and other transactions in the block take effect. The miner has created their own reward for the resources they devoted to the primary function of the blockchain, in this case, verifying transactions.

In DSNs, the major costs are incurred by the storage nodes. Audit is a minor, albeit necessary, cost. Presumably, a storage node cannot audit itself. Thus audit is a function that some other node has to perform on behalf of each storage node. Audit = mining, as I see it, gives rise to a number of issues:
  • Auditors will create the blocks they mine, deciding which transactions are included, and thus which nodes they audit for this block. Among the transactions will be the coinbase transaction, rewarding the auditor for auditing. How are the storage nodes rewarded for storing data? Presumably, the idea is that the auditor will also include transactions that transfer fees paid for storage from the owners of the data they store to each of the nodes audited in this block. This means that storage nodes will depend on the kindness of strangers to get paid.
  • So, like wallets that want to transact Bitcoin, storage nodes will need to pay fees to auditors. Like transactions in Bitcoin, storage nodes will be in a blind auction for auditing, leading to both over-bidding and long delays from under-bidding.
  • For the same economic reasons as in Bitcoin, auditors will form pools. Auditing will be dominated by a few large pools. They will be able to collude in various forms of anti-competitive behavior, such as auditing only storage nodes which are members of the pools, or charging higher audit fees to non-members. Doing so would increase the costs of competing storage nodes.
  • But the key point is that if the economics are to work out, audit fees and the inflation of the cryptocurrency by the issuance of audit rewards must be only a small part of the cost of running a storage node. In The Four Most Expensive Words In The English Language I pointed out that DSN storage nodes can't charge more than Amazon's S3, and realistically have to charge much less. Thus the income from auditing has to be minuscule.
  • But the idea is that audit = mining. It may seem like a laudable goal to make mining cheap, but it is problematic. Eric Budish writes:
    From a computer security perspective, the key thing to note ... is that the security of the blockchain is linear in the amount of expenditure on mining power ... In contrast, in many other contexts investments in computer security yield convex returns (e.g., traditional uses of cryptography) — analogously to how a lock on a door increases the security of a house by more than the cost of the lock.
    Bram Cohen's Chia Network is using Proofs of Space and Time to force miners to waste resources by storing large amounts of otherwise useless data, so running a miner is expensive. But if audit = mining in a DSN storing useful data, running a miner (= auditor) has to be cheap, and thus the network will be vulnerable.
Cecchetti et al. don't elaborate on the details of their audit = mining concept, and a Google search doesn't reveal other sources for details. So it is possible that I have misinterpreted their ideas. But unless they have some way to make mining (= auditing) expensive, they are on the wrong track.

Fellow Reflection: Martina Dodd / Digital Library Federation

 

This post was written by Martina Dodd (@Tracingpavement), who received a DLF HBCU Fellowship to attend the 2018 Forum.

Martina is an Atlanta-based art historian, writer, and the Museum Education Curator for the Galleries, Libraries, Archives and Museums (GLAM) Center for Collaborative Teaching and Learning at the Atlanta University Center (AUC) Robert W. Woodruff Library.

Her concept-driven shows have touched on topics relating to race, gender, and power dynamics, including her most recent exhibition, Black Interiors, an exploration of the Black aesthetic and psyche through artistic renderings of the home and stylized representations of the human form, opening this fall at Clark Atlanta University Art Museum.

She has presented research, spoken on panels, and curated exhibitions at the Urban Institute of Contemporary Art, Prince George’s African American Museum and Cultural Center, DC Arts Center, Transformer, Flux Factory, and Common Field Convening.  She has published articles, exhibition reviews, and catalogue essays with DIRT, BmoreArts, Common Field’s Field Perspectives, Clark Atlanta University Art Museum, and Morton Fine Art.

Dodd holds a M.A. in the Arts of Africa, Oceania and the Americas from the University of East Anglia and a B.A. in Anthropology and International Studies from Johns Hopkins University.

It was an honor and pleasure to be one of the DLF HBCU Fellows for this year’s DLF Forum.  As someone who just recently entered the library field, the forum offered me direct access to veterans in the field and ample opportunity for hands-on development in digital pedagogy practices. The Teaching Primary Sources Through a Digital Lens: Challenges and Opportunities panel resonated with me the most.  The session provided several very different examples of how professors utilized a digital repository in their classes to promote archival literacy and digital scholarship.  This was especially valuable information to bring back to my institution, Atlanta University Center (AUC) Robert W. Woodruff Library, and share with other library staff and faculty who are curious about alternative ways of incorporating primary source materials into the classroom.

Through a generous grant from The Andrew W. Mellon Foundation, the AUC established the GLAM Center for Collaborative Teaching and Learning last year to introduce faculty to object-based pedagogical models and visual thinking strategies to stimulate cross disciplinary teaching and learning.  A major part of this initiative has been to increase visibility, discoverability, and usage of the archival materials and artworks from Spelman College Museum of Fine Art, Clark Atlanta University Art Museum, and the AUC Archives Research Center, through the creation of our digital portal (which launched earlier this year). Hearing about the creation of a digital platform for the US Holocaust Memorial Museum which allows students to actively engage with artifacts and original historical documents related to the Holocaust has inspired me to think even more creatively in how the GLAM Center Digital portal (http://glam.auctr.edu/) can act as a digital teaching and learning tool.


I also appreciated hearing some of the challenges professors faced when using digital repositories in their classes, as well as the concerns of archivists in the audience who voiced their fear of digital platforms becoming “archives without archivists.” Since the forum, I have reflected on my experience, and I am confident that the information gathered from each session I attended will contribute to the continued success of the AUC GLAM Center for Collaborative Teaching and Learning.

Want to know more about the DLF Forum Fellowship Program? Check out last year’s call for applications.

If you’d like to get involved with the scholarship committee for the 2019 Forum (October 13-16, 2019 in Tampa, FL), look for the Planning Committee sign-up form later this year. More information about 2019 fellowships will be posted in late spring.

The post Fellow Reflection: Martina Dodd appeared first on DLF.