I haven’t written anything in a while because it’s been a really hard few months. My health has taken a turn for the worse with a new condition – also caused by my broken immune system, which seems to greatly prefer attacking my body to fighting foreign invaders – and I honestly haven’t had a decent night’s sleep since late February. Since March, I’ve been to seven MDs and PAs, have received five different diagnoses for the same malady, and have been on three different courses of prednisone, with the current one set to last through most of the summer. The whole thing also set off a major flare of my connective tissue disease, and the prednisone has left me barely able to tolerate eating more than pasta and crackers. With each wrong diagnosis, I had to change my mental model of myself and my life, as each would have required major, but different, lifestyle changes. But the final and most definitive diagnosis (given to me by a very knowledgeable sub-specialist and professor at an academic center) is by far the most life-limiting. I’ve barely left the house in the five weeks since I got the diagnosis – which is essentially a severe immune reaction to UV light – and even have to avoid windows and fluorescent lights. I’ve been working from home, mostly with the blackout curtains drawn. And even with all that and the immunosuppression, I’ve not recovered yet. It’s frankly been a waking nightmare.
In the middle of all of this, I had a colleague take their frustration with the overwork they have been experiencing in their job out on me in a deeply mean and sarcastic email that left me literally shaking and in tears. While, upon reflection, I could see that their outburst wasn’t really about me (and they did apologize later – well, for the tone of their email), it was clear from the email that they thought they were the only one with a high workload, the only person struggling, while people like me weren’t actually working hard at all. I don’t doubt that their workload is unreasonably high, but that isn’t the fault of non-managers like me since I can’t dictate other people’s workloads. But also, that day I was working a full day on two hours of sleep after well over a month of nights like that. I’d also gotten one of the (possibly wrong, though still in the mix) diagnoses just hours before and this one was an incurable and life-threatening condition. I felt like I was falling apart, but was still showing up and trying to do my best. This colleague didn’t know that, but they shouldn’t have had to. We’ve had a friendly relationship for eleven years and that should have earned me some charitable reading.
We should never, ever, assume that other people with whom we work have it better than us or are doing less than us. It’s nearly impossible to know what our colleagues’ workloads look like. Maybe they are working harder than you are. Maybe they’re doing a lot less. Maybe they are also an overburdened caregiver, or are dealing with a chronic illness, or are going through a messy divorce, or have a parent who is dying, or are dealing with severe depression and are barely keeping their head above water. Some people loudly share their busyness like it’s a badge of honor, humble-bragging about the number of meetings they have in a day or the number of projects they are working on or the number of appointments they have with students, but most people just do their jobs as best they can and don’t broadcast any of it. Most of our work is invisible. No one but me and the members of the committee I chair (and now all of you!) know that the report I’ll be sending out on Monday describing the results of a survey we did of Spanish speakers at my college was 98% my work (even though there are six people total on the committee and I did ask for help at several points in the process – ok, now I am complaining) so that massive lift is largely invisible. The reality in libraries and in most knowledge work is that you’ll never really know for sure how hard everyone around you is working and, if you’re their peer, there’s no reason to even contemplate it. Comparison is toxic. It makes you brittle and resentful. It also feeds into scarcity thinking. Work isn’t a competition. Focus on doing your best work and set boundaries that keep you from taking on more than you can handle.
But I do get it. I remember when I was addicted to overwork, I felt resentful towards colleagues who I felt were not working as hard as I was. But it wasn’t their fault that I was overworking. Overworking is both a personal choice and a management failure. Given the tremendous organizational costs of burnout, managers should be protecting their direct reports from overwork, but I’ve never once had a boss who did. I’ve never had a boss tell me I was doing enough or question my taking on another project or committee, but I’ve certainly had bosses ask me to do more when I already felt overloaded or refuse to help me prioritize or jettison things when I had too much on my plate. Having more on your plate than you could reasonably do, known as time poverty in the literature, has actually been shown to increase the risk of depression and anxiety in employees. Even if you have good boundaries, time poverty is a stressful and erosive thing because we all naturally want to please people and meet deadlines. We don’t want to fail. But giving people more than they can reasonably do in their job sets them up for constant feelings of personal failure and resentment towards others whom they perceive as having less on their plates (whether that is true or not). If you’re trying to burn your employees out, ensuring they are in a constant state of time poverty is the perfect recipe.
When I was a manager, I saw part of my role as making sure my direct reports weren’t taking on too much – and I had a really passionate bunch of direct reports, so the struggle was real! I remember talking with one employee about a committee she wanted to join and asking her if she really felt being on it was worth the additional workload. In the end, she realized it wasn’t. So many of us are always haunted by a nagging feeling that we aren’t doing enough, even when we’ve probably already taken on too much. For me, it feels like the ghost of my work addiction calling to me, and I find myself constantly battling against its siren song. A good manager should be there to support you in that. If you have too much on your plate and your manager is not helping you lighten the load, they have failed you. Don’t take it out on your non-manager peers who probably have their own workload stressors.
I have a friend at the library whose workload has ballooned, but she is working to set healthy boundaries. She has let go of some things or told people that things will just take longer. She’s managing expectations brilliantly. When the work day is done for her, it’s done. She and I are both people-pleasers who are trying to set better boundaries at work and I have found her approach really inspiring. I still do worry too much about letting people down, even as my body falls apart around me. But I look back on pre-pandemic me and I am proud of the progress I’ve made. And I can see now that having healthier boundaries made me a better colleague because I don’t compare workloads and I don’t feel any resentment toward anyone. We’re all contributing as best we can to supporting students at our college. We’re all doing good work. We all deserve grace.
But now my relationship with that colleague who took their frustrations out on me is totally broken. I do forgive them because I feel compassion for their situation, but I don’t know how I can ever feel safe around them again. We used to have a cheerful, friendly bantering kind of working relationship, but I’m always going to be scared and trepidatious in our interactions now. I’m always going to worry about setting them off. I still feel sick over the whole thing. And that sucks.
Let’s remember that when there are workload inequities or when the load on us feels too great, it’s either our fault for taking on more than we can handle, the fault of our managers for not protecting us from overwork and burnout, or a combination of the two. It is not the fault of colleagues who have better boundaries. It is not the fault of colleagues who know when they are doing enough and know how to say “no.” It is not the fault of colleagues who put work aside at the end of the workday, even if they didn’t get things done. Those people should be admired and emulated. The only people who benefit from us sniping at our colleagues are the managers who are neglecting their duty of care to their employees by not ensuring they don’t have more on their plates than they can reasonably handle.
In May, we brought the community together again for a special training session to enable trainers to deliver the ‘Quality and Consistent Data with Open Data Editor’ course locally.
We’re excited to announce the release of Zotero for Android, the best way to work with your Zotero library on an Android device.
Zotero for Android lets you work with your Zotero data no matter where you are:
Sync your personal and group libraries
View and edit item details
Organize items into collections
Take notes on your research
Read PDFs and add highlight, note, image, and ink annotations (EPUB/snapshot support coming soon)
Save journal articles, newspaper articles, books, webpages, and more by sharing a URL from your browser or other apps
Automatically download article PDFs to read (currently limited to PDFs accessible without a login due to Android limitations)
Add items by DOI, ISBN, PMID, or other identifiers
Automatically retrieve bibliographic details for PDFs saved directly to the app
Quickly add physical books to your Zotero library by scanning book barcodes with your device camera
Back on your computer, you can add the PDF annotations you’ve made on your Android device to Zotero notes and insert those notes into your word processor document with active Zotero citations or export them to Markdown.
Please post to the Zotero Forums with any bug reports or feature requests. Be sure to mention “Android” in your thread title.
We first shared our efforts for leveraging machine learning to improve de-duplication in WorldCat in this 2023 blog post on “Machine Learning and WorldCat.”
De-duplication has always been essential to maintaining the quality of WorldCat, enhancing cataloging efficiency and streamlining discovery. But with bibliographic data pouring in faster than ever, we need to address the challenge of keeping records accurate, connected, and accessible at speed. AI-powered de-duplication offers an innovative way to scale this work quickly and efficiently, but its success depends on human expertise. At OCLC, we’ve invested resources into a hybrid approach, leveraging AI to process vast amounts of data while ensuring catalogers and OCLC experts remain at the center of decision-making.
From paper slips to machine learning
Long before I joined OCLC, I worked in bibliographic data quality when de-duplication was entirely manual. As part of a “Quality Improvement Program,” libraries would mail us paper slips detailing suspected duplicates, each with a cataloger’s rationale. We’d sort thousands of these color-coded slips into stationery cabinets: green for books, blue for non-books, pink for serials. We even repurposed stationery drawers to store the overflowing duplicate slips—pens and notepads were impossible to find.
This image was generated using AI to recreate my memory of the cluttered corridors where we kept the duplicate slips. AI makes it look much neater than it really was.
In hindsight, it was a forward-looking community effort. But it was slow, methodical work that reflected the painstaking nature of our efforts at that time. Each slip was a decision, a piece of human judgment shaping how records in our system were merged or maintained. And for all its effort, this process was inherently limited by scale. We were always chasing duplicates rather than getting ahead of them.
Now, working on AI-powered de-duplication at OCLC, I’m struck by how far we’ve come. What once took years now takes weeks, with more accuracy, across more languages, scripts, and material types than ever before. The heart of the work remains the same: human expertise matters. AI is not a magic solution. It learns from our cataloging standards, our professional judgment, and our corrections.
By taking a hybrid approach to de-duplication, we can use machine learning to do the heavy lifting while ensuring that human oversight guides and refines the process.
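That hybrid split, with a machine scoring candidate pairs and humans reviewing the ambiguous ones, can be sketched in miniature. This is an illustrative sketch only: the thresholds, record fields, and similarity measure below are assumptions made for demonstration, not OCLC’s actual models or cutoffs.

```python
# Illustrative sketch of hybrid de-duplication triage.
# Thresholds, fields, and scoring are hypothetical placeholders.
from difflib import SequenceMatcher

AUTO_MERGE = 0.95    # above this, a pair merges automatically
NEEDS_REVIEW = 0.75  # between thresholds, a human cataloger decides

def similarity(rec_a: dict, rec_b: dict) -> float:
    """Crude pairwise score: average string similarity of shared fields."""
    fields = ("title", "author", "publisher")
    scores = [
        SequenceMatcher(None, rec_a.get(f, ""), rec_b.get(f, "")).ratio()
        for f in fields
    ]
    return sum(scores) / len(scores)

def triage(pairs):
    """Route candidate pairs: auto-merge, human review queue, or keep apart."""
    merged, review, distinct = [], [], []
    for a, b in pairs:
        score = similarity(a, b)
        if score >= AUTO_MERGE:
            merged.append((a, b))
        elif score >= NEEDS_REVIEW:
            review.append((a, b))  # the gray zone goes to human oversight
        else:
            distinct.append((a, b))
    return merged, review, distinct
```

In a production system the score would come from a trained model rather than string matching, and the review queue would route to catalogers; the point of the pattern is that automation handles the clear cases while human judgment decides the ambiguous ones.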
Balancing innovation and stewardship in WorldCat
For decades, catalogers, metadata managers, and OCLC teams have worked to maintain the integrity of WorldCat, ensuring that it remains a high-quality, reliable resource for libraries and researchers. De-duplication has always been central to this effort, eliminating redundant records to improve efficiency, discovery, and interoperability.
Now, AI is allowing us to approach de-duplication in new ways, dramatically expanding our ability to identify and merge duplicate records at scale. The key challenge, however, is not simply how to apply AI but how to do so responsibly, transparently, and in alignment with professional cataloging standards.
This approach to scaling de-duplication is an extension of our longstanding role as stewards of shared bibliographic data. AI presents an opportunity to amplify human expertise, not to replace it.
The fundamental shift in de-duplication
Historically, de-duplication has relied on deterministic algorithms and manual effort on the part of catalogers and OCLC staff. While effective, these methods have limits.
OCLC’s AI-powered de-duplication methods enable us to:
Expand beyond English and Romance languages—Our machine learning algorithm can process non-Latin scripts and records across all languages more accurately and efficiently, enabling rapid de-duplication at scale across global collections.
Address a vast array of record types—AI enables us to identify duplicates across a broad spectrum of bibliographic records and affords new insights into certain material types that are challenging to address.
Preserve rare and special collections—We do not currently touch rare materials with AI de-duplication processes, ensuring we preserve unique records in archives and special collections.
These advancements mean more accurate metadata across a broader range of materials and languages, helping us to scale metadata quality efforts in WorldCat responsibly.
What “responsible AI” means in practice
The term “AI” is broad and often met with skepticism. Rightly so—many AI applications raise concerns about bias, accuracy, and reliability.
Our approach has been guided by a few key ideas:
AI should extend human expertise, not replace it. We have integrated human review and data labeling to ensure that AI models are trained with cataloging best practices in mind.
Efficiency should not come at the expense of accuracy. AI-powered de-duplication is designed to optimize computing resources, ensuring that automation does not compromise the quality of records.
Sustainability matters. Our approach is designed to be computationally efficient, reducing unnecessary resource use while maintaining high-quality results. By optimizing AI’s footprint, we ensure that de-duplication remains cost-effective and scalable for the long term.
This approach to de-duplication is not about reducing the role of people—it’s about refocusing their expertise where it matters most. Catalogers can focus on high-value work that connects them to their communities instead of spending hours resolving duplicate records.
Moreover, catalogers and our experienced OCLC staff are active participants in this process. Through data labeling and feedback, professionals are helping to refine and improve AI’s ability to recognize duplicates.
AI as a collaborative effort and the road ahead
I don’t miss the piles of paper slips or quarterly cabinet purges, but I respect what they stood for. AI isn’t replacing that care—it’s scaling it. While tools evolve, our principles don’t. OCLC has long used technology to help libraries manage their catalogs and collections, and now we’re applying that same mindset to AI: deliberate, effective, and grounded in our shared commitment to metadata quality. This approach to innovation empowers libraries to meet changing needs and deliver value to their users.
Join OCLC’s data labeling initiative today and help refine AI’s role in de-duplication. AI-powered de-duplication is an ongoing, shared effort that will continue to evolve with community input and professional oversight. Your contributions will directly impact the quality and efficiency of WorldCat, benefiting the entire library community.
The biennial NDSA Excellence Awards were established in 2012 to recognize and encourage exemplary achievement in the field of digital preservation stewardship at a level of national or international importance. Over the years many individuals, projects, and organizations have been honored for their meaningful contributions to the field of digital preservation.
The time has come again to recognize and celebrate the accomplishments of our colleagues! Nominations are now being accepted for the NDSA 2025 Excellence Awards. Nominations can be submitted using the 2025 NDSA Excellence Awards Nominations form until June 23, 2025.
Last year, we asked the 2023 award winners to share their reflections on what being a recipient of an Excellence Award means to them. For a bit of nomination inspiration, we’re sharing their reflections again this week.
Here’s what they had to say:
Sophia van Hoek, Winner of the Excellence Award for Future Stewards
“In 2023 I received the Future Steward Excellence award for my efforts in sustainability in the digital preservation field. I am truly honoured to have received this award, which represents not only a recognition of my efforts in digital preservation but also a validation of commitment this field has for sustainability. Being acknowledged in this way has been incredibly meaningful to me, serving as a reminder of the importance of our work and the impact it can have on future generations. The recognition has motivated me to continue exploring the topic of sustainability through my work as an archivist, and advisor in the Dutch archival sector.
As I reflect on this recognition, I see it as an opportunity to encourage and support the next generation of early-career professionals and students in digital preservation. I am committed to sharing my insights and experiences that have shaped my journey with others in the field, and I am excited about the potential for collaboration. To anyone considering nominating themselves for an award, I encourage you to take that step. Recognition can be a powerful catalyst for growth and innovation, not only for yourself but also for the digital preservation community. Don’t hesitate to put yourself out there!”
Ashley Blewer, Winner of the Excellence Award for Educators
“I really feel honoured when I look at all the other awardees over the years and see myself alongside some absolutely brilliant people and project teams. We are an incredibly talented field with so many types of strengths, and it’s so rewarding to be a part of the digital preservation community. Recently, I co-taught an online workshop course at the University of Illinois Urbana-Champaign about using audiovisual preservation tools, which has been helping me in my understanding of how to best inform large groups of students about the challenges of digital preservation for time-based media assets, leveraging some of the work noted in the NDSA 2024 award. My goal is to continue forward with lowering technical barriers and helping students have confidence in their abilities through building tools that facilitate learning. To those considering nominating themselves for an award…never hesitate to hype yourself or your work! Even if you don’t win an award this time, it’s an excellent opportunity for self-reflection and how you can continue to grow your expertise in the direction of your choosing.”
Stephen Abrams, Winner of the Excellence Award for Individuals
“Practitioners have long recognized that effective and sustainable digital preservation is possible only by complementing local effort with communal support. No one individual or institution can ever hope to have the expertise and capacity needed to respond to ever-increasing growth in the number, size, diversity, and complexity of digital content needing proactive preservation stewardship. In my experience over the past 25 years, the digital preservation community has been uniformly welcoming, nurturing, innovative, and collegial. I have profited so much, personally as well as professionally, from interactions large and small across the community over the years. So, it was extremely gratifying to have received public recognition by valued colleagues and peers in the form of an NDSA Excellence Award, particularly as an Individual Award reflects on the span of a career. I’ve always tried to give back to the community to some small degree relative to what I’ve gained, perhaps this Award is a sign of progress in that direction. What I’ve been able to achieve has come from “standing on the shoulders of giants.” I hope my experience encourages others to continue to explore, to experiment, to achieve, to share, and to advance the grand challenges and opportunities of long-term digital stewardship.”
Michelle Donoghue of the Nuclear Decommissioning Project, Winner of the Excellence Award for Projects
“My team and I were delighted to have been awarded the NDSA Excellence Project Award in 2023. It was an amazing opportunity to champion our work both internally and across the digital preservation sector. The issues we face in nuclear decommissioning with regard to preserving our legacy data for the very (very) long term are often overshadowed by more immediate priorities. Winning this award afforded us the opportunity to celebrate our progress, highlight its value, and demonstrate our contribution to this field. I strongly encourage anyone to nominate themselves – it’s a quick and easy process – and the benefits are wide ranging!”
Dr. David S.H. Rosenthal & Victoria Reich, Winners of the Excellence Award for Sustainability
“We were delighted and honoured to be recognized by our peers. Working in digital preservation is like sailing against the wind. The justifications are difficult – why pay money to safeguard materials that might be useful in the future? Being recognized by peers is metaphorically “wind in sails”. That extra push is most useful to those currently helming the boat. We recommend current practitioners nominate themselves and their programs for recognition and benefit from the boost being an award recipient provides.”
All are welcome to make a nomination to the NDSA Awards – you do not have to be affiliated with NDSA or member institutions. Nominations, and self-nominations, from all parts of the world and all digital preservation disciplines are welcomed and encouraged, as are submissions reflecting the needs and accomplishments of historically marginalized and underrepresented communities.
Help us highlight and reward distinctive approaches to digital preservation practice – nominate a colleague, project, or organization today! Nominations can be submitted using the 2025 NDSA Excellence Awards Nominations form until June 23, 2025.
This episode of the Frontiers in Commoning is an interview with Jack Kloppenburg about the Open Source Seed Initiative, which is building, or rather helping rebuild, a commons for sharing seeds. I think it contains some useful observations for open source software, and commoning more generally.
Humans have shared seeds for millennia. This sharing has been
responsible for all the cultivars we know and love today around the
world. Because of the nature of seed production, where one seed can
beget hundreds more, there is a built-in abundance model that
historically made seed sharing a bad fit for mass-commodification. But
this changed in 1980 when the US Supreme Court bowed to pressure from
corporations and allowed life to be patented.
As a software developer, what interested me most was Kloppenburg’s
description of how the idea for OSSI took direct inspiration from the
Open Source Software movement, specifically the GNU
GPL and the idea of CopyLeft more
generally. He describes how the software freedoms map onto seed sharing
and use. But also, he talks about why the OSSI chose to use a Pledge rather than a License
model.
You have the freedom to use these OSSI-Pledged seeds in any way you
choose. In return, you pledge not to restrict others’ use of these seeds
or their derivatives by patents or other means, and to include this
Pledge with any transfer of these seeds or their derivatives.
The OSSI chose to focus more on helping build a community of like-minded
seed producers and distributors, rather than on establishing a legal
apparatus that would 🤞 protect seeds from capture by corporate
interests. As Kloppenburg notes, relying on licenses means one must
participate in a legal infrastructure that is de facto dominated by
large corporations and their interests. A pledge is oriented around
growing all the tangled, relational connections between the users of
seeds. It’s about strengthening, or reviving, an ancient and separate
system of seed sharing. It’s also about growing awareness and resistance
to corporate practices, especially in the global south, as big
agriculture tries to change the laws in other countries to suit their
needs.
Of course, not having a license means that these openly pledged seeds could be “captured” by corporations. However, despite their scale, large corporations actually concentrate on a small number of seed types that have use around the world. They are oriented around maximizing profit.
Corporations aren’t generally interested in the types of seeds that are
being shared more openly in the OSSI, because these are more regionally
specific. There is a local angle to these networks of sharing. He admits
that corporate capture could happen if an innovation like a rust-proof
crop were invented, but really that sort of innovation isn’t what the
OSSI is primarily interested in.
Instead the OSSI is interested in building community for:
Rediscovering native and indigenous forms of plants.
Sharing and preserving existing seeds.
Creating new cultivars that are of use.
Kloppenburg acknowledges that this approach to “openness” and “sharing”
can be viewed skeptically by indigenous communities, who have been told
similar things in the past, only to have their traditional practices
taken without their permission for profit, where none of that profit is
returned to them. Indeed, if you zoom out, colonialism and capitalism
have been responsible for so much damage and destruction to indigenous
ways of life – and food sources have been a very significant part of
that. So some seed producers have put modified versions of the OSSI
Pledge in place. I’m reminded here of Ostrom’s idea that sustainable commoning doesn’t necessarily mean that everything is open to everyone – there are often clearly defined boundaries around commons, and practices for managing them (Wall, 2017).
This naturally also reminded me about the challenges that Open Access has
faced when working with indigenous communities, where intellectual
property has been stolen and profited from (Christen, 2012). Frameworks for
reciprocity that help build, strengthen and diversify networks of
reciprocal relations are what is needed more than just openness
(Christen &
Anderson, 2019; Punzalan & Marsh, 2022). It seems to
me that OSSI’s use of a Pledge rather than a legal tool like a License,
provides another way forwards that honors the long and multi-threaded
history of seed sharing.
References
Christen, Kimberly. (2012). Does Information Really Want to be Free?
Indigenous Knowledge Systems and the Question of Openness.
International Journal of Communication, 6, 2870–2893.
Retrieved from https://ijoc.org/index.php/ijoc/article/view/1618/828
Join the Open Knowledge Network in a roundtable to discuss the meaning of openness today, starting with a presentation of the recent study “From Software to Society — Openness in a changing world” by Dr. Henriette Litta and Peter Bihr.
Win free books from the June 2025 batch of Early Reviewer titles! We’ve got 233 books this month, and a grand total of 4,613 copies to give out. Which books are you hoping to snag this month? Come tell us on Talk.
The deadline to request a copy is Wednesday, June 25th at 6PM EDT.
Eligibility: Publishers do things country-by-country. This month we have publishers who can send books to the US, Canada, Australia, the UK, New Zealand, Netherlands, Germany, Israel, Finland, Belgium and more. Make sure to check the message on each book to see if it can be sent to your country.
Thanks to all the publishers participating this month!
Hello June! Now that the spring semester has wound down and summer is here, there’s lots happening in our community. From free or inexpensive high-quality online events to major milestones for convenings coming up this fall (hello Forum registration!), we’re excited to bring you a Digest chock-full of great happenings. Read on, be well, and enjoy your month.
— Aliya from Team DLF
This month’s news:
Registration open, program announced: Registration for the DLF Forum and Learn@DLF is now open. Peruse the Learn@DLF schedule and register at the earlybird rate to join us in Denver in November. The DLF Forum program will be released later this month.
Coming soon: Registration for iPRES 2025 in Wellington, NZ opens on June 3.
Opportunity: H-NET Spaces invites applications for its Spaces Cohort Program, which supports early-stage projects and/or scholars in need of support and hands-on training in DH methods. Applications due July 1.
Office closure: CLIR and DLF are closed on Thursday, June 19, in observance of Juneteenth.
This month’s open DLF group meetings:
For the most up-to-date schedule of DLF group meetings and events (plus NDSA meetings, conferences, and more), bookmark the DLF Community Calendar. Meeting dates are subject to change. Can’t find the meeting call-in information? Email us at info@diglib.org. Reminder: Team DLF working days are Monday through Thursday.
The event, co-organised by INFO CDMX and OKFN, is a unique opportunity to explore tools, publications and materials developed under the model of openness by design, with a focus on human rights, gender perspective and inclusion of vulnerable groups.
Something I’m painfully aware of, but I think a lot of people don’t realize, is that safety regulations are not created prematurely, to prevent hypothetical problems. Every regulation out there is, at best, the story of someone’s (usually many someones’) very worst day — more often, regulations are only made after there are deaths, sometimes many deaths. And even then, the survivors are often required to fight, sometimes for years, to get regulations in place. Many fields have a variation on the saying, “our regulations are written in blood.” Some devastating, but related, stories along these lines:
As someone with a compromised immune system, I’ve known for years that, even with a nominally functional (badly underfunded) FDA and USDA, our food system is deeply imperfect, and participating in it is not without risks. I’ve watched E. coli, salmonella, and listeria show up on lettuces, onions, and packaged meats (that’s just off the top of my head from the last year or two), sickening and sometimes killing people. There was also that lead in cinnamon issue, not that long ago. Our food safety organizations and enforcement mechanisms traced these contaminants to their origins, made sure items were pulled from shelves, announced recalls, and saved lives; I’m glad they were there, even as I mourn that they weren’t fast enough to save everyone.
But now we are losing these safeguards, imperfect though they were. Because of staff and budget cuts, the FDA has stopped testing dairy products and dropped a lot of their routine food safety checks. States (some states? most of them? very unclear) are still doing testing, but if I’m reading this food safety law publication correctly, they are dependent on national budgets. It’s also unclear what happens in the states that aren’t on the FDA’s contract programs list (second accordion), like Maine. Either way, as the history articles linked earlier in this post point out, our federal food safety departments were created because state testing varied wildly, industry couldn’t be trusted, and people were dying as a result. This article from Risk Management Magazine does a great job of laying out what the current risks are, albeit from a business, rather than a consumer, point of view.
This is already going to be a monster-sized post, so I don’t plan to talk about tariffs, except to acknowledge that I stocked up on rice, beans, coffee, pasta, flour, powdered milk, some canned foods, and sugar before the cuts at the USDA and FDA began. The financial situation in this country is a constant background noise to all of this, though, and will certainly impact what is available and what things cost. Right now (at least where I live) it costs more to buy locally-grown foods, but I get the sense that non-local is catching up rapidly.
A few other rules that inform my thinking:
Industry cannot be trusted to self-regulate, doubly so when they realize nobody’s checking on them.
“The more manipulation you do, certainly the more places there are for things to go wrong,” Don Schaffner, food science professor, in an article about avoiding E. coli. Put another way, as a very broad and general rule: each step of processing, shipping, and storage increases the risk of contamination or adulteration.
I know from my herbalist training that using herbs is better than using capsules. There are several reasons—ask me about it in the comments if you care—but the relevant one here is that, even dried and somewhat crumbled up, we can identify a plant by sight, smell, and/or taste, so we stand a good chance of recognizing whether we’ve got the right plant. A capsule could have literally anything in it and may not contain any of the plant we think we’re using at all. (The FDA has never proactively tested supplements, so they are famously risky.) I apply this not only to herbs, at this point, but to other products: the closer something is to what it looked like in nature, the easier it is to verify it is what the seller claims.
How have I been surviving, so far?
At this point I need to point out that I am not a doctor, a food scientist, a microbiologist, or a public health expert; I’m just some internet rando, albeit one who is fairly well-read on this stuff. Especially when I admit to taking shortcuts, I’m not suggesting that what I do is best practice or safe; I’m mitigating risk as best I can with limited time, energy, and money, not avoiding it completely. When in doubt about safety, the best approach is to follow the most cautious guidance you can find (and if you find people being more cautious than I am on any of this, I’d love to learn from them; please feel free to share links!), but literally any steps you take toward reducing risk are positive and probably worth taking.
General food safety and copious application of heat
As someone higher-risk than average, I’ve learned out of self-defense that E. coli can show up at any point in the food system, including at its origin—I already knew that upstream factory farms are a pathogen risk for any crop (via watering), but my 2023-2024 Master Gardener training taught me that E. coli and other pathogens can live in healthy soil and be splashed onto outer plant parts even with fresh rainfall. I’ve gathered that listeria is more likely to show up in industrial manufacturing scenarios, or at the deli counter, but it’s a risk in raw milk and soft cheeses, too. I associate salmonella with poultry and eggs, but I know it has also shown up in bagged vegetables via cross-contamination. That’s a lot and scary, but on the bright side, I’ve also learned that heating food to a consistent internal temperature of at least 165° F is sufficient to kill listeria, E. coli, campylobacter, salmonella, and H5N1 (“bird flu”). So I don’t tend to worry overmuch about pathogens in hot foods. As a result, these days, I’m mostly on the inverse of a raw food diet. I also—and this is actual advice that you should take, too—carefully follow safe food handling practices in my kitchen.
I’m repeating myself just the tiniest bit in saying this, but I want to introduce this resource: I’ve dropped nearly all of the foods off the “riskier choice” column of the CDC’s “Safer Food Choices for People with Weakened Immune Systems” guide. Full disclosure: 1) in theory, I haven’t fully dropped sushi; because I know it’s a risk, I’m picky about where I get it and do so rarely. I usually make my own maki rolls with cooked ingredients when I get a hankering. 2) Also, I’ve eaten brie as recently as 2024; I just baked it first.
With H5N1 causing outbreaks in poultry and dairy herds, even nominally immunonormative folks have been advised to avoid raw milk products (don’t ever drink raw milk, folks; pasteurization has no downsides) and to cook eggs until they’re solid all the way through. I already refused to eat any ground meat with pink parts left in it, so I haven’t followed how hot a burger needs to be, these days; if you eat them, you might want to do some investigation.
Vegetables – mostly local, mostly hot
I’m chronically ill, which limits my available time and energy. To make cooking my own food more achievable, I’ve tended to use frozen and pre-cut vegetables (and then I cooked them). With less safety inspection happening now, I’ve broken that habit, with the exception of frozen broccoli I can get in my farm box. Well, and for soup-friendly vegetables like kale, I’ll sometimes buy extra, wash and chop it, and freeze it myself. Fresh broccoli freezes well enough for use in a stir fry later, too.
I haven’t run out of frozen corn or peas yet. There’s a chance that, when I do, I’ll replace them. Because they are so uniform, in theory, I could dump out the whole bag, make sure there are no pieces of metal or whatever in there, and re-freeze the lot. I like to use both all year, so if I decide I’m not up for that (and if I don’t move somewhere with a functioning food safety apparatus), I will probably try freezing my own from fresh.
Sweet potatoes, potatoes, and dried beans make up my default dinner, these days (sometimes with some cheese sprinkled on). They’re easy to acquire, store, and cook — the beans in a countertop pressure pot, the tubers roasted in an oven or, at need, microwaved until soft — and they make decent leftovers. I buy spinach, chard, and other greens locally, wash them well, and cook them thoroughly. There’s a local mushroom-growing operation, so sometimes we’ll have greens and mushrooms on either rice (if we plan ahead) or couscous (if not) for dinner. The farm box usually has garlic and shallots, so those go in, as well.
I do still eat some raw vegetables, too. I’ll buy cucumbers when they’re growing locally and wash them well before cutting; if I weren’t planning to move this year, I’d also be growing some of my own. Lettuces are particularly risky—I had already learned to avoid bagged lettuce and most restaurant salads, and to throw away outer and bruised leaves—so I only buy local hydroponic lettuce or grow it in an Aerogarden. And if I saw it for sale somewhere, I’d probably still buy whole jicama, since that can be washed and peeled, and it’s such a nice summer munching vegetable.
Fruit with accountability, or at least a hard rind
We tend to eat a lot of granola + yogurt on fruit, and frozen berries are the easiest fruit option for that. I had already cut way back on frozen fruit after a spate of recalls in 2023; at this point, unless I’m sure I’m going to cook them (like in a crumble or a pie), I never buy frozen fruits or berries except from local farms. Even that’s a risk, but with less shipping and storage, plus the knowledge that the producer would have to look me in the eye if they made me sick, I figure I’m willing to take it. The local frozen blueberries are expensive, but also amazing.
My primary non-berry fruit, at this point, is citrus, mostly oranges. I wash them with soap (look, I already said you don’t have to do everything I’m doing) before cutting or peeling them.
In the autumn I’ll probably buy apples from a local orchard and wash them well before eating.
If I were staying in one place, I’d plant strawberries, to have something to look forward to next year. It’s probably my favorite fruit and not that hard to grow, even in zone 5b. I’ll probably buy (and clean with vinegar?) some local organic strawberries this year, if I can get them.
And, aside from dried fruit, that’s kind of my entire participation in that food group.
Protein – lots of beans, small amounts of local meat, well-cooked local eggs, and salmon from Alaska
We already didn’t eat a ton of meat, in part because my tendency for many years has been to buy grass-fed and local, which are more expensive and vary somewhat in availability. We’ve historically bought frozen salmon from Alaska every couple of years, which I totally recommend if you like that kind of thing. I did have a habit of throwing a turkey kielbasa into my virtual grocery cart, because it’s a nice addition to a bean soup or a tray of roasted potatoes (add sauerkraut, some Dijon mustard? :chef’s kiss:), but that’s over now. Kielbasa aside, I don’t have to change habits much, I think, beyond dropping canned items (tuna, beans).
Someone on Tumblr recommended buying Halal and/or Kosher, if you’re buying meat, because there are cleanliness rules that have to be met, for meat to be certified. I don’t know whether there are ethical concerns around people outside these religious communities buying their meat — I’d say at least “don’t empty the shelves; buy only what you need immediately.” But I’d also think ethical butchers would be glad for added business. We used to buy Kosher rotisserie chicken when Giant Eagle had it, in Pittsburgh, and it was consistently better than their non-Kosher alternative. There seem to be very limited options for either in Maine, so I haven’t really investigated deeply.
I mentioned that we eat a lot of beans. We’re just starting consistently from dried, now, instead of the more convenient canned option. I hear? you can make your own tofu? out of a variety of beans? But I have not tried it yet. I still have some tofu in the freezer. (Hush, it makes it a better consistency.)
I’m out on a little bit of a limb with all of this, in that I’m trusting two companies to do the right thing: I’m still buying peanuts, pumpkin seeds, sesame seeds, chia seeds, and dried fruit from Bob’s Red Mill and from Nuts.com. The latter is a pretty arbitrary choice, if I’m honest. (I know dried fruit isn’t a protein; it goes into granola with these other proteins, though.) We haven’t run out of peanut butter yet, and I’m not certain how I’ll handle it when we do.
We buy eggs from local farms and cook them thoroughly. (The “local,” here is more about chicken welfare than about avoiding disease, ourselves. The farms let them wander; that means they’re happy, and we get better-quality eggs.) When they don’t have eggs to sell, we don’t buy them. It’s fine.
Dairy – local, and as pasteurized as we can get, otherwise we do without
Dale is still drinking milk, but because he lives with four birds and me, we buy him ultra-pasteurized, out of an abundance of caution while H5N1 is circulating.
I admit that I miss milk, mostly for use in tea and coffee. Because I like a buffer against the acid in my first coffee of the day, I’m currently using a combination of powdered oat milk (another arbitrary choice: Now Foods is the brand I have right now) and coconut milk powder from Nuts.com. I’ll be honest: this isn’t great, and I’ll probably start drinking my coffee black, rather than bother with these.
I bake with my supply of powdered milk, for now, and when that runs out, I guess I’ll use powdered coconut milk in baking and curries.
“Why not buy milk locally, Coral?” Fair question. Our farm box only offers raw milk, and even though it would be fine if I pasteurized it myself, and buying it would allow me to make mozzarella (an easy cheese to make and also one that nobody local sells), I refuse to reward anyone for selling raw milk. If I run across some local pasteurized milk at a farmer’s market or something, I’ll probably pick it up as a treat.
Our farm box includes offerings from a (different) local dairy that makes excellent yogurt; I emailed them to ask what their processing entails, and I believe it’s sufficient for my needs. I miss the less expensive fat-free Greek yogurt I used to buy, but it helps that this stuff is more delicious.
I also buy locally-made cheese, as long as the milk going into it was pasteurized. As I said, I can’t get mozzarella, but weirdly (and deliciously), I can totally get paneer! I tried some queso tencho on pizza, and it worked OK — it was a lot oilier than the part-skim mozzarella I’m used to, but it was also pretty delicious. So we’re OK on cheese.
I bought a lot of butter around the turn of the year and froze it. I only use it for things where butter can’t be substituted out, like pie crusts. The farm box only offers smoked butter, and they don’t sell cream, so I assume I won’t be able to replace my butter stash when I run out. We’ll see.
Grains from (mostly) local granaries
I bought 25 pounds of flour in January. It makes bread (note the bread-making post before lockdown, truly I am a hipster of carbs), scones, probably crackers (not linking that one with the recipes, because I haven’t tried it myself yet), and a lot of other useful things. When I run out, I’ll do the same thing I do for oats and buy flour from a local mill. Up here, Maine Grains is good and has most of what I need. If I lived on the west coast, I’d be ordering regularly from Bluebird Grain Farms; in the past, I’ve bought emmer and einkorn flour from them, and I enjoyed both.
In the warm months, I make overnight oats for my breakfast, using thinned-down yogurt, kefir, or milk substitute, depending on my mood. When I forget to make oats the night before, I put some peanut butter on some bread or get fruit-yogurt-and-granola. In the cold months, I’ll probably continue to have hot oatmeal.
I’m still buying commercial pasta. And couscous. Eventually, I’ll accept that I need to stop doing that, but … I can only do so much at a time.
We have a ton of rice, because I believe in being prepared for emergencies, and white rice can be stored nearly indefinitely. When we run out, I’ll buy that direct from Maine Grains or similar.
We used to buy Triscuits a lot, and I miss them. Like I said, I’ll probably make crackers. I hope I can make something we like just as much.
We also used to buy a lot of Mission “Carb Balance” tortillas, because they tasted like soft flour tortillas but had added fiber in them. I don’t have a great substitute for that yet, though I know I can theoretically make flour tortillas of my own when I have the need.
Miscellaneous other foods, some unknowns, and an admission
We live in Maine; maple syrup is pretty easy to come by. You can also get maple sugar, which is useful since white sugar is pretty easy to bulk up with added powders.
I like to make a chai concentrate with fresh ginger (which is part of the reason Dale keeps drinking milk, he likes it in chai); since I’m boiling it, I don’t worry too much about the shipping and storage of the ginger (or the tea leaves, which I haven’t had to replace yet, but I think I’d still trust Prestogeorge).
I don’t know where I’ll get nori when I run out, but probably one of the online Asian grocery stores, same as last time I needed it.
And now, I have to admit: all of these rules? I follow them at least 80% of the time, maybe even 90+%. But it isn’t 100%. We went and got ice cream last week, despite knowing dairy isn’t being inspected. I have recently eaten a Wendy’s burger (specifically, the “son of baconator,” because it comes without the raw vegetables that would make it riskier) and, on a different day, Popeye’s chicken (which was fantastic, no regrets, they ruined me for other fast food chicken). We did a whole bunch of housework and then ordered Chinese delivery yesterday. We don’t order delivery, curbside, or drive-through as often as we did in the past, but it isn’t “never,” either.
Honestly, I’m just too tired, especially these days, with everything going on, to be a total purist. Despite the risks inherent in being anything else. So I want to reiterate: if you’re doing things differently than I am, cutting more and/or different corners, perhaps just following the same patterns as you did in the past, I’m not here to judge you for it. I’m sharing what I do and how I think in case it’s useful, not because I think we all have to be approaching everything the same way.
And now for some fun stuff
Recipes
Granola – I halve this, swap in a little oat bran in place of some of the oats, use maple syrup, and use a seed mix instead of nuts, due to allergies. I do pepitas, sesame seeds, and chia seeds. For the fruit, I usually use dried blueberries. And I put in the optional 1/2 cup of unsweetened coconut flakes.
Scones – My aunt gave me her recipe (along with some scones she’d made), which is very similar to the linked one, with the following changes: we only heat the oven to 425° F; we use quick oats; we cut it into 8 instead of 12; and we use 1 tsp of baking soda instead of the 1 Tbsp baking powder, which a conversion chart claims is equivalent. This is an unapproved change, but I also use 1/2 cup olive oil instead of 2/3 cup melted butter. I’m delighted to learn that rolled oats will work just as well as the quick oats, since they’re easier for me to get. To make them into ginger scones, I add 1 tsp cinnamon, 1 Tbsp ground ginger, and swap the 1/2 cup raisins for 1/2 cup tiny candied ginger chunks, which I can also get from Nuts.com.
Salmon chowder – I’m allergic to carrots, so I use sweet potatoes, which are better anyway. I also halve the bacon and double the salmon, and still I drain out most of the bacon grease. A little nutmeg and cayenne (instead of hot sauce) add some depth to the flavor, too.
Chai concentrate – I use more cinnamon and ginger, sometimes throwing in a little dried ginger root for a “hotter” flavor than the fresh. I also use 30 grams of loose tea leaves instead of 10 bags. I can’t find which herbal chai I borrowed this from, but sometimes I also add ashwagandha, astragalus, star anise, and (just a little pinch) white pepper.
The original image I wanted to put on this post was Samwise Gamgee saying “po-tay-toes,” but it was an animated gif and somewhat distracting. So you get a still of the same hobbit, in a corn field.
the elliptic curve signature scheme used by Bitcoin is much more at risk, and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates.
The initial public draft of the NIST internal report on the transition to post-quantum cryptography standards states that vulnerable systems should be deprecated after 2030 and disallowed after 2035. Our work highlights the importance of adhering to this recommended timeline.
The point of Gidney and Schmieg's post and their paper is that:
2048-bit RSA encryption could theoretically be broken by a quantum computer with 1 million noisy qubits running for one week. This is a 20-fold decrease in the number of qubits from our previous estimate, published in 2019. Notably, quantum computers with relevant error rates currently have on the order of only 100 to 1000 qubits
So there's nothing to worry about, right? NIST has specified the algorithms, quantum computers need to get 1000 times better before they can crack a single RSA key in a week, and NIST says we have 5 years before there's a problem.
At least as regards cryptocurrencies, I think this is a rather pessimistic estimate. The point of The $65B Prize was that at least Bitcoin's transition to post-quantum cryptography faced a particular problem:
Senator Everett Dirksen is famously alleged to have remarked "a billion here, a billion there, pretty soon you're talking real money". There are a set of Bitcoin wallets containing about a million Bitcoins that are believed to have been mined by Satoshi Nakamoto at the very start of the blockchain in 2008. They haven't moved since and, if you believe the bogus Bitcoin "price", are currently "worth" $65B. Even if you're skeptical of the "price", that is "real money".
I assume that the fact that Nakamoto's stash hasn't moved means that he no longer has access to the keys, either through death, destruction or accident. As I write, the stash is "worth" about $107B. I also assume that the stash is included in Chainalysis' estimate that:
about 20% of all Bitcoins have been "lost", or in other words are sitting in wallets whose keys are inaccessible. ... These coins need to be protected from theft by some public-spirited person with a "sufficiently large quantum computer" who can transfer them to post-quantum wallets he owns.
The point is that, without access to the keys for the vulnerable wallets, there is no way to transfer their contents to new wallets protected by post-quantum cryptography. Thus 20% of all Bitcoins or 4.2M BTC, currently "worth" almost $450B, is the reward for the first to build a "sufficiently powerful quantum computer". It is generally thought that VCs need to see the prospect of at least a 10x return on their investment, so that is enough for $45B of R&D.
There may now be a viable runner in this race. PsiQuantum is a Palo Alto-based startup that is building a million-qubit optical quantum computer. They had raised $1.2B by 2021 and "at least $750M this year" from investors including Nvidia. Their website claims:
In 2024, PsiQuantum announced two landmark partnerships with the Australian Federal and Queensland State governments, as well as the State of Illinois and the City of Chicago, to build its first utility-scale quantum computers in Brisbane and Chicago. Recognizing quantum as a sovereign capability, these partnerships underscore the urgency and race towards building million-qubit systems. In 2025, PsiQuantum will break ground on Quantum Compute Centers at both sites, where the first utility-scale, million-qubit systems will be deployed.
Investors have put in about $2B so far. They stand to make a notional 225x return just from Bitcoin, apart from all the other uses of a "utility-scale quantum computer".
But wait! There is an even better way to monetize a "sufficiently powerful quantum computer". Matt Levine has been writing about Crypto Perpetual Motion Machines for some time, for example:
MicroStrategy Inc. is, among other things, a proof of concept. The concept is: “If you buy $100 of Bitcoin and put it in a pot, you can slice the pot into shares and sell them for $200.” (MicroStrategy owns about $49 billion of Bitcoin and has a market capitalization of about $94 billion, because people will buy its shares for more than the value of the underlying Bitcoin.) This is a very appealing concept, because: free money! A “perpetual motion machine,” I sometimes call it: The more shares you sell, the more Bitcoin you can buy, and the more your shares are worth.
If you have a big pot of Bitcoin or Ethereum or Solana or Dogecoin or Trumpcoin or anything else, you should wrap it in a US public company and sell it to stock investors for twice its actual value. But to wrap it in a public company, you need a public company. There are only so many of those, and they are busy. If you called, like, Apple Inc. and said “hey we’d like to merge our big pot of Dogecoin with you so that our coins are worth more,” Apple would say no. The trick is to call a company that is (1) a public company but (2) only barely. Those companies’ phones are ringing off the hook.
So the monetization strategy for the owner of the first "sufficiently powerful quantum computer" is:
Buy a cheap public company.
Lend it enough money to pay for the quantum computer time to crack the keys of the 20% of frozen BTC.
Transfer the 20% of BTC to post-quantum wallets.
Announce that your company now controls 20% of BTC and can prove it by signing messages with the post-quantum keys of your wallets.
Since MicroStrategy holds about 580K BTC and MSTR is valued at 1.6 times their "price", by analogy your 4.2M BTC would give your company's stock a "market cap" of around $740B.
Now you use (Michael) Saylor's algorithm:
// Hypothetical helpers assumed here: btc_price() returns the current
// BTC "price"; borrow() raises that much cash against the collateral.
double btc = 4200000.0;   // initial HODL-ing: the 20% of "lost" BTC
double factor = 1.6;      // market-cap inflator (MSTR's current multiple)
double fraction = 1.0;    // fraction of market cap pledged as collateral
double over = 2.0;        // 200% over-collateralization ratio
while (factor > 1.0) {    // runs for as long as the market stays irrational
    double price = btc_price();
    double pre_mkt_cap = btc * price * factor;
    double cash = borrow((pre_mkt_cap * fraction) / over);
    btc += cash / price;
    // each time round, market cap increases by cap_gain
    double cap_gain = cash * factor;
}
It is really hard to think of a better way to monetize a "sufficiently powerful quantum computer" than this way, with its at least 370x return!
You may think that the market is irrational in valuing MSTR at 1.6 times its BTC HODL-ings. But, Levine writes, that's small potatoes:
SharpLink’s planned $425 million stash of Ethereum is worth $2.5 billion on the stock market.
Note that SharpLink apparently doesn’t own any Ether. The investors are contributing $425 million in dollars, not Ethereum. This is not “we’ve got a stash of Ethereum and might as well sell it on the stock exchange”; it’s “man the stock exchange is paying $2 for $1 of Ethereum, we’d better do that arb.” Or, in this case, $6 for every $1 of Ethereum.
SharpLink is an el-cheapo public company that Consensys bought to run Saylor's algorithm. It is a shame that an actual pot of BTC isn't valued like a planned pot of ETH. If it were, the post-quantum company would be valued at around $2.8T, almost as much as AAPL. Levine takes this lesson:
This is not investment advice but honestly what am I doing with my life. Right now, if you have a few hundred million dollars lying around, you can buy any crypto you like with it, and the US stock market will give you an immediate 500+% paper profit. All you need — besides the startup cash — is a little public company to put your crypto in.
And, of course, you need to remember to cash out without crashing the stock price before someone with a "sufficiently powerful quantum computer" steals the little public company's stash. Helpfully, Levine suggests a way to do it:
In crypto, if you have magic beans that are currently priced at $1 billion, maybe someone will lend you $500 million of real money against them, with no recourse to you. In the stock market … look you’re going to have a hard time borrowing 50%, or 10%, of the market value of a 97% stake in a crypto treasury company whose market cap has increased 100,000% in a week, but, man, I would try.
We just have to hope that the current infatuation with crypto treasury companies lasts long enough for PsiQuantum to build the "sufficiently powerful quantum computer".
After DLTJ Thursday Threads issues on digital privacy and surveillance camera systems, I'm focusing this week on the more general topic of government-sponsored or -enabled surveillance.
In an era defined by ubiquitous data collection and ever-advancing technologies, the line between public safety and individual privacy is growing alarmingly thin.
From President Trump’s executive order to dismantle inter-agency “data silos” and Elon Musk’s DOGE initiative weaving federal databases together, to Oracle co-founder Larry Ellison’s vision of AI-powered cameras and drones monitoring citizens, the U.S. surveillance apparatus is expanding at breakneck speed.
Meanwhile, programs like the Pentagon’s “Locomotive”—which turns innocuous dating-app location pings into real-time tracking tools—and the data broker–driven sharing of driving and personal records with law enforcement underscore how private and public interests have converged to create a modern panopticon.
So that is the focus of this week's Thursday Threads issue:
Apple sues U.K. government over a secret order for backdoor access to encrypted data on phones, and it removes the Advanced Data Protection from U.K. market rather than giving in.
Feel free to send this newsletter to others you think might be interested in the topics. If you are not already subscribed to DLTJ's Thursday Threads, visit the sign-up page.
If you would like a more raw and immediate version of these types of stories, follow me on Mastodon where I post the bookmarks I save. Comments and tips, as always, are welcome.
Trump’s Executive Order and Musk-Led DOGE Initiative Fuel Fears of a U.S. Surveillance State
In March, President Trump issued an executive order aiming to eliminate the data silos that keep everything separate. Historically, much of the data collected by the government had been heavily compartmentalized and secured; even for those legally authorized to see sensitive data, requesting access for use by another government agency is typically a painful process that requires justifying what you need, why you need it, and proving that it is used for those purposes only. Not so under Trump. This is a perilous moment. Rapid technological advances over the past two decades have made data shedding ubiquitous—whether it comes from the devices everyone carries or the platforms we use to communicate with the world. As a society, we produce unfathomable quantities of information, and that information is easier to collect than ever before.
This article examines the growing surveillance capabilities of the U.S. federal government under the Trump administration, particularly through the actions of Elon Musk's DOGE.
It highlights how various government agencies are pooling vast amounts of data on citizens, which raises concerns about privacy and potential abuses of power.
The effort starts with an executive order from Trump to eliminate data silos, allowing for easier access and sharing of sensitive information across agencies.
That is followed up by the web of DOGE-placed staff in various government departments that are weaving the silos together.
Experts warn that this could lead to a surveillance state where personal data is weaponized for political purposes, targeting individuals based on their attributes or actions.
Hence the title of the article: the American Panopticon:
The panopticon is a disciplinary concept brought to life in the form of a central observation tower placed within a circle of prison cells. From the tower, a guard can see every cell and inmate but the inmates can’t see into the tower. Prisoners will never know whether or not they are being watched.
DOGE Builds DHS Immigrant Surveillance Database with SSA, IRS Data
Operatives from Elon Musk’s so-called Department of Government Efficiency (DOGE) are building a master database at the Department of Homeland Security (DHS) that could track and surveil undocumented immigrants, two sources with direct knowledge tell WIRED. DOGE is knitting together immigration databases from across DHS and uploading data from outside agencies including the Social Security Administration (SSA), as well as voting records, sources say. This, experts tell WIRED, could create a system that could later be searched to identify and surveil immigrants.
This article can be paired with the one above...this one has more details about what DOGE itself is doing.
Under the guise of surveilling and tracking undocumented immigrants, this comprehensive database at the Department of Homeland Security (DHS) integrates data from across DHS and from outside agencies, including the Social Security Administration and the IRS.
They are also reportedly adding other data sources, including biometric information and voting records.
This initiative raises significant privacy concerns, as it may lead to unprecedented surveillance capabilities; although starting with immigrants, what is being built enables real-time tracking of everyone.
Experts are warning that such data consolidation can increase the risk of misuse and violate privacy rights.
Spy Agencies to Centralize Commercial Data Purchases in a New One-Stop Portal
The ever-growing market for personal data has been a boon for American spy agencies. The U.S. intelligence community is now buying up vast volumes of sensitive information that would have previously required a court order, essentially bypassing the Fourth Amendment. But the surveillance state has encountered a problem: There’s simply too much data on sale from too many corporations and brokers. So the government has a plan for a one-stop shop. The Office of the Director of National Intelligence is working on a system to centralize and “streamline” the use of commercially available information, or CAI, like location data derived from mobile ads, by American spy agencies, according to contract documents reviewed by The Intercept.
Based on the previous two articles, we learned that the U.S. government is breaking down its data silos and gathering all of its information into a large central pool.
But that isn't nearly everything that can be known about us.
Now the U.S. intelligence community is developing a centralized system, the Intelligence Community Data Consortium (ICDC), to streamline the acquisition of commercially available information, including sensitive personal data.
This initiative aims to address the overwhelming volume of data available from various corporations and brokers, allowing agencies to bypass traditional legal requirements for obtaining such information.
The ICDC will provide a web-based platform for 18 federal agencies to efficiently purchase access to sensitive data, potentially undermining privacy protections.
Critics express concern that this approach could lead to misuse of sensitive information, as agencies may continue to operate under a "just grab all of it" mentality without sufficient oversight.
Oracle’s Larry Ellison Proposes Orwellian AI Camera-and-Drone Surveillance Network, Stoking Privacy Fears
Oracle co-founder Larry Ellison shared his vision for an AI-powered surveillance future during a company financial meeting, reports Business Insider. During an investor Q&A, Ellison described a world where artificial intelligence systems would constantly monitor citizens through an extensive network of cameras and drones, stating this would ensure both police and citizens don't break the law.
In case you haven't been following along, the dystopian world depicted in George Orwell's 1984 is now quite possible.
Some even seem to desire it.
Ellison envisions a future where AI-powered surveillance systems constantly monitor us through a network of cameras and drones.
Similar automated surveillance systems are already being deployed in places like China, leading to what some call a "road to digital totalitarianism."
No, thank you.
LexisNexis Parent Relx Lobbies Amid FISA Section 702 Reauthorization Clash Over Warrant Requirement for Data Brokers
Lawmakers’ negotiations over FISA’s reauthorization became so contentious that House Speaker Mike Johnson withdrew the bill from consideration in February. The biggest source of conflict was an amendment introduced by Rep. Warren Davidson (R-OH) that would prohibit data brokers from selling consumer data to law enforcement and would require a warrant to access Americans’ information, Politico’s Influence newsletter reported in February.
Section 702 of the Foreign Intelligence Surveillance Act (FISA) is a program that allows the U.S. federal government to conduct targeted surveillance of people outside the U.S.
Not only is this invading the privacy of non-U.S. citizens, but data about U.S. citizens is also swept into the database.
LexisNexis became involved in the ongoing debate over privacy and data broker regulations as Congress considered reauthorizing Section 702 last year.
The company has faced scrutiny for its data collection practices, particularly its partnerships with automakers to sell driving data to insurance companies.
Needless to say, it wants a part of the government spending on the Section 702 program.
Former President Biden signed a two-year extension of FISA last April.
Dating App Location Data Powers Pentagon’s “Locomotive” Program to Track Phones Worldwide
Working with Grindr data, Yeagley began drawing geofences—creating virtual boundaries in geographical data sets—around buildings belonging to government agencies that do national security work. That allowed Yeagley to see what phones were in certain buildings at certain times, and where they went afterwards. He was looking for phones belonging to Grindr users who spent their daytime hours at government office buildings. If the device spent most workdays at the Pentagon, the FBI headquarters, or the National Geospatial-Intelligence Agency building at Fort Belvoir, for example, there was a good chance its owner worked for one of those agencies. Then he started looking at the movement of those phones through the Grindr data. When they weren’t at their offices, where did they go? A small number of them had lingered at highway rest stops in the DC area at the same time and in proximity to other Grindr users—sometimes during the workday and sometimes while in transit between government facilities. For other Grindr users, he could infer where they lived, see where they traveled, even guess at whom they were dating.
Location data collected from mobile apps is bought and sold by data brokers, and that data is increasingly used by government agencies for surveillance purposes.
The article describes how a man named Mike Yeagley demonstrated to the Pentagon how precisely one could track the movements of government employees through a dating app.
This led to the creation of a program called Locomotive that could track the location of phones globally in near real-time...including that of world leaders like Vladimir Putin.
A device in our pocket that knows precisely where it is, and by extension where we are, is a very useful tool, but it also fuels unprecedented and covert surveillance capabilities.
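As an illustration of the mechanics, a circular geofence is just a center point and a radius: a location ping is "inside" if its great-circle distance to the center is below the radius. The sketch below is a minimal toy, assuming a hypothetical (device_id, lat, lon) ping feed and made-up coordinates; the real ad-tech datasets and tools like Locomotive are vastly larger and more sophisticated.

```python
import math
from collections import defaultdict

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def devices_inside(pings, center, radius_m):
    """Count, per device, the pings that fall inside a circular geofence.

    `pings` is an iterable of (device_id, lat, lon) tuples -- a hypothetical
    schema standing in for a commercial location-data feed.
    """
    hits = defaultdict(int)
    for device_id, lat, lon in pings:
        if haversine_m(lat, lon, center[0], center[1]) <= radius_m:
            hits[device_id] += 1
    return dict(hits)

# Toy data: one device repeatedly near the fence center, one far away.
fence_center = (38.8719, -77.0563)  # illustrative coordinates only
pings = [
    ("phone-A", 38.8720, -77.0560),
    ("phone-A", 38.8715, -77.0565),
    ("phone-B", 40.7128, -74.0060),
]
print(devices_inside(pings, fence_center, 500))  # {'phone-A': 2}
```

A device that repeatedly shows up inside the same fence during working hours is, in effect, tied to that building; following its other pings reveals the owner's home, travel, and associations.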
Apple Sues UK Government Over Secret Order for Backdoor Access to Encrypted Data, Removes Advanced Data Protection from UK Market
Apple is taking legal action to try to overturn a demand made by the UK government to view its customers' private data if required. The BBC understands that the US technology giant has appealed to the Investigatory Powers Tribunal, an independent court with the power to investigate claims against the Security Service.... In January, Apple was issued with a secret order by the Home Office to share encrypted data belonging to Apple users around the world with UK law enforcement in the event of a potential national security threat.
Apple is pursuing legal action against the UK government over a demand to access its customers' private data.
The company appealed to the Investigatory Powers Tribunal after receiving a secret order that requires Apple to share encrypted data with UK law enforcement in cases of national security threats.
While Apple can still access data protected by its standard encryption with a warrant, its Advanced Data Protection (ADP) feature, which offers stronger privacy, cannot be accessed even by Apple itself.
In response to the UK order, Apple has removed ADP from the UK market rather than create a "backdoor" for access.
The situation has sparked tension between Apple and the UK government, with the US administration expressing concern over the UK's actions.
The Home Office maintains that privacy is only compromised in exceptional cases related to serious crimes.
This Week I Learned: "Leeroy Jenkins!!!!" was staged
It was one of the first memes ever, a viral sensation that went mainstream back when people still used dial-up internet. Yet the cameraman behind “Leeroy Jenkins” still seems stupefied that anyone fell for it.
First posted on May 10, 2005, this month marks the 20th anniversary of this bit of internet folklore.
I remember when this first came out, and I totally believed it was real until earlier this year.
What did you learn this week? Let me know on Mastodon or Bluesky.
I just walked back from the lovely Cookie Bar in Ford City, Ontario, where I was one of the six "fun" speakers at the Bike Windsor Essex AGM.
The theme was transportation and the format was pecha kucha: 20 slides that auto-advance every 20 seconds.
This is what I was supposed to have said.
Traditionally, the big risk in HODL-ing cryptocurrencies has been their volatility. Fortunately, now that the US government is all-in on cryptocurrencies, this risk is greatly reduced. Progress moon-wards is virtually guaranteed, so it is reasonable to invest a small part of your portfolio into Lamborghinis. HODL-ers can rest easy while the rest of the coins in their wallets appreciate because they are protected by strong cryptography (at least until the advent of a sufficiently powerful quantum computer). But progress moon-wards exacerbates some other risks to HODL-ers, as I explain below the fold.
There is no need to wait while the semiconductor industry develops quantum computers in order to defeat the cryptography protecting the HODL-ers' wallets. Several effective techniques are already available. North Korea's recent record-breaking $1.5B heist illustrates that deploying malware via a Software Supply Chain Attack is capable of compromising even industrial-strength multi-signature wallets.
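For context, a multi-signature wallet only authorizes a spend when some threshold m of its n keyholders sign off. The toy sketch below checks just the m-of-n rule, with signer names standing in for cryptographic signatures (real wallets verify actual signatures over the transaction); the point of a supply-chain attack is that malware can change what the signers think they are signing, so the threshold check passes anyway.

```python
def multisig_ok(signatures, authorized_keys, threshold):
    """Approve a spend only if at least `threshold` distinct authorized
    keyholders have signed (an m-of-n rule).

    `signatures` here is just an iterable of signer names -- a stand-in
    for real signature verification in an actual wallet.
    """
    signers = {s for s in signatures if s in authorized_keys}
    return len(signers) >= threshold

# A hypothetical 2-of-3 wallet.
keyholders = {"alice", "bob", "carol"}
print(multisig_ok(["alice", "bob"], keyholders, 2))      # True: two of three signed
print(multisig_ok(["alice", "mallory"], keyholders, 2))  # False: only one authorized signer
```

Note that the same signer counted twice still yields one vote; the scheme's security rests entirely on the n keys being held on independent, uncompromised devices.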
3,250 BTC (~$330 million) were apparently stolen from a bitcoin holder and then quickly moved through multiple exchanges and swapped for the Monero privacycoin. Such a massive swap into Monero was apparently enough to cause the Monero price to spike from around $230 to as high as around $330, before retracting somewhat.
Another remarkably effective technique is social engineering. A group of scammers used it last August 18th, posing as members of Google's and Gemini's security teams to social-engineer "an early investor in cryptocurrency" into downloading malware. He lost more than 4,100 BTC, then "worth" about $230M. Mitch Moxley has a fascinating, detailed account of what happened over the next month in They Stole a Quarter-Billion in Crypto and Got Caught Within a Month. The scammers immediately started laundering the loot through a series of mixers and sketchy exchanges. These transactions attracted attention from a famed cryptocurrency sleuth:
Minutes after the D.C. resident’s funds were liquidated, ZachXBT was walking through the airport on his way to catch a flight when he received an alert on his phone about an unusual transaction. Crypto investigators use tools to monitor the global flows of various coins and set alerts for, say, any transaction over $100,000 that goes through certain exchanges that charge a premium for having few security safeguards. The initial alert that day was for a mid-six-figure transaction, followed by higher amounts, all the way up to $2 million. After he cleared airport security, ZachXBT sat down, opened his laptop and began tracing transactions back to a Bitcoin wallet with roughly $240 million in crypto. Some of the Bitcoin in the wallet dated back to 2012. “At that point it didn’t make sense,” he told me. “Why is a person who held their Bitcoin for this long using a sketchy service that typically sees a lot of illicit funds flow through it?”
He added the wallets associated with the transactions to his tracking and boarded the plane. Once he connected to in-flight internet, more alerts arrived. Throughout the day, the Bitcoin traced to the wallet was being liquidated through more than 15 different high-fee cryptocurrency services.
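The kind of alerting ZachXBT describes can be pictured as a simple filter over a transaction feed: flag anything above a dollar threshold moving through high-fee, low-scrutiny services. The sketch below uses a hypothetical feed schema and made-up exchange names; real monitoring tools watch chain data and exchange flows directly.

```python
FLAGGED_EXCHANGES = {"sketchy-swap", "no-kyc-exchange"}  # hypothetical names
ALERT_THRESHOLD_USD = 100_000

def alerts(transactions):
    """Yield transactions that exceed the threshold at a flagged exchange.

    `transactions` is an iterable of dicts with 'txid', 'exchange', and
    'usd_value' keys -- an assumed schema for illustration only.
    """
    for tx in transactions:
        if tx["exchange"] in FLAGGED_EXCHANGES and tx["usd_value"] >= ALERT_THRESHOLD_USD:
            yield tx

feed = [
    {"txid": "a1", "exchange": "sketchy-swap", "usd_value": 450_000},
    {"txid": "b2", "exchange": "mainstream-ex", "usd_value": 2_000_000},
    {"txid": "c3", "exchange": "no-kyc-exchange", "usd_value": 50_000},
]
print([tx["txid"] for tx in alerts(feed)])  # ['a1']
```

The irony the article highlights is that the very transparency of public blockchains makes laundering through "private" services conspicuous: a large transfer into a sketchy exchange is itself the signal.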
It turned out that one of the group of scammers couldn't resist showing off:
The source sent ZachXBT several screen-share recordings, which he said were taken when one of the scammers livestreamed the heist for a group of his friends. The videos, which totaled an hour and a half, included the call with the victim. One clip featured the scammers’ live reaction when they realized they’d successfully stolen $243 million worth of the D.C. resident’s Bitcoin. A voice can be heard yelling: “Oh, my god! Oh, my god! $243 million! Yes! Oh, my god! Oh, my god! Bro!”
In private chats they used screen names like Swag, $$$ and Meech, but they made a crucial mistake. One of them flashed his Windows home screen, which revealed his real name in the start icon pop-up at the bottom of the screen: Veer Chetal, an 18-year-old from Danbury
Chetal lived in Danbury, CT, and was an incoming freshman at Rutgers; his father, Sushil Chetal, was a vice-president at Morgan Stanley. In his senior year of high school Veer had developed a lavish lifestyle:
Classmates remember Chetal as shy and a fan of cars. “He just kind of kept to himself,” says Marco Dias, who became friends with Chetal junior year. According to another classmate named Nick Paris, this was true of Chetal until one day in the middle of his senior year, when he showed up at school driving a Corvette. “He just parked in the lot. It was 7:30 a.m., and everyone was like, What?” Paris says. Soon Chetal rolled up in a BMW, and then a Lamborghini Urus. He started wearing Louis Vuitton shirts and Gucci shoes, and on Senior Skip Day, while Paris and many of his classmates went to a nearby mall, Chetal took some friends, including Dias, to New York to party on a yacht he had rented, where they took photos holding wads of cash.
On August 25, a week after the $230M heist, Chetal's parents were house-hunting in Danbury in the Lamborghini Urus he had driven to school when:
the Lamborghini was suddenly rammed from behind by a white Honda Civic. At the same time, a white Ram ProMaster work van cut in front, trapping the Chetals. According to a criminal complaint filed after the incident, a group of six men dressed in black and wearing masks emerged from their vehicles and forced the Chetals from their car, dragging them toward the van’s open side door.
When Sushil resisted, the assailants hit him with a baseball bat and threatened to kill him. The men bound the couple’s arms and legs with duct tape. They forced Radhika to lie face down and told her not to look at them, even as she struggled to breathe, pleading that she had asthma. They wrapped Sushil’s face with duct tape and hit him several more times with the bat as the van peeled off.
Years ago, Randall Munroe explained a much simpler but only slightly less effective technique for defeating strong cryptography in XKCD #538. In 2021's Can We Mitigate Cryptocurrencies' Externalities? I referred to Jameson Lopp's list of people applying XKCD's technique entitled Known Physical Bitcoin Attacks, which started in 2014 and is still going strong. Already this year Lopp documents 21 attacks, or more than one a week. Among last year's entries we find the kidnapping of Veer Chetal's parents.
Several witnesses saw the attack and called 911. Some of them, including an off-duty F.B.I. agent who lived nearby and happened to be at the scene, trailed the van and the Honda, relaying the vehicles’ movements to the police. The F.B.I. agent managed to obtain partial license plate numbers.
Danbury police officers soon located the van. A patrol vehicle activated its emergency lights and tried to make a stop, but the driver of the van accelerated, swerving recklessly through traffic.
About a mile from where the chase began, the driver careered off the road and struck a curb. Four suspects fled on foot. The police found one hiding under a bridge and apprehended him after a brief chase. Within a couple of hours, the other three were located hiding in a wooded area nearby. The police, meanwhile, found the shaken Chetals bound in the back of the van.
In an affidavit from an unrelated case, an F.B.I. agent described the Com as “a geographically diverse group of individuals, organized in various subgroups, all of whom coordinate through online communication applications such as Discord and Telegram to engage in various types of criminal activity.”
...
When the price of Bitcoin began to rise rapidly in 2017, Com members made an easy shift from Minecraft fraud to crypto theft.
The kidnappers targeted the Chetals to hold them for ransom over the money their son had. Independent investigators think that at least one of the group, Reynaldo (Rey) Diaz, who they say went by the alias Pantic, was a member of the Com; ZachXBT speculates that the thieves might have made themselves targets by sharing stories of their spending with other Com members.
Chetal's accomplices are alleged to include "Malone Lam, a known figure in the Com" and Jeandiel Serrano. They also couldn't resist enjoying the fruits of their labors:
On Sept. 10, after a 23-day party spree in Los Angeles, Lam headed to Miami on a private jet with a group of friends. There, he rented multiple homes, including a 10-bedroom, $7.5 million estate. Within a few days, Lam had filled the driveway with more luxury cars, including multiple Lamborghinis, one with the name “Malone” printed on the side.
ZachXBT and others were easily able to track Lam's activities on social media:
Malone was filmed wearing a white Moncler jacket and what appeared to be diamond rings and diamond-encrusted sunglasses. He stood up on the table and began showering the crowd with hundred-dollar bills. As money rained down, servers paraded in $1,500 bottles of Champagne topped with sparklers and held up signs that read “@Malone.” He spent $569,528 in one evening alone.
neglecting to use a VPN when he created an account with TradeOgre, a digital currency exchange, which connected to an I.P. address that was registered to a $47,500-per-month rental home in Encino, Calif. It was leased to Jeandiel Serrano, ... By the time the authorities identified Serrano, he was on vacation in the Maldives with his girlfriend.
On Sept. 18, Serrano flew back from the Maldives to Los Angeles International Airport, where the authorities were waiting for him. He was wearing a $500,000 watch at the time of his arrest. ... Serrano admitted that he owned five cars, two of which were gifts from one of his co-conspirators, given to him with proceeds from a previous fraud. He also confessed to having access to approximately $20 million of the victim’s crypto on his phone and agreed to transfer the funds back to the F.B.I.
Later that day, a team of F.B.I. agents working with the Miami police raided a mansion near Miami Shores. Agents blew open the front metal gate while another group entered by boat via a small saltwater canal in the rear. The sound of flashbangs rang in the neighborhood as the agents entered the home.
French gendarmes have been busy policing crypto crimes, but these aren't the usual financial schemes, cons, and HODL! shenanigans one usually reads about. No, these crimes involve abductions, (multiple) severed fingers, and (multiple) people rescued from the trunks of cars—once after being doused with gasoline.
This previous weekend was particularly nuts, with an older gentleman snatched from the streets of Paris' 14th arrondissement on May 1 by men in ski masks. ... The abducted man was apparently the father of someone who had made a packet in crypto. The kidnappers demanded a multimillion-euro ransom from the man's son.
According to Le Monde, the abducted father was taken to a house in a Parisian suburb, where one of the father's fingers was cut off in the course of ransom negotiations. Police feared "other mutilations" if they were unable to find the man, but they did locate and raid the house this weekend, arresting five people in their 20s.
Anderson fails to credit Lopp, who has been tracking the problem for more than a decade. He does note the root of the problem:
Or there's the Belgian man who posted online that "his crypto wallet was now worth €1.6 million." His wife was the victim of an attempted abduction within weeks.
HODL-ers need to understand that the speed, immutability and (pseudo) anonymity of cryptocurrency transactions eliminates many of the difficulties in applying the "$5 wrench" technique. Once it is known that you (or your son) hold the key to a cryptocurrency wallet with even a few tens of Bitcoin, you (or your son) become a target for theft. You (or your son) should hope that the threat comes from social engineers like Veer Chetal and his accomplices, in which case your loss will be expensive but painless. But, as Jameson Lopp records, it may well come from people like Rey Diaz.
The solution is "security through obscurity". If you (or your son) rarely transact and maintain a modest lifestyle, lacking Lamborghinis and $569,528 bar bills, it isn't likely that your wallet address will be deemed worth deanonymizing. But what is the point of HODL-ing for HODL-ing's sake alone? The temptation to "buy the Lambo" is really hard to resist, and the risk seems remote.
This article mobilizes critical librarianship and critical/decolonial pedagogical strategies for disrupting and reconceiving collection practices in academic libraries. The authors—an academic librarian and a curriculum/pedagogy professor—argue that librarians can contend with the political tensions that underlie their collection management practices by actively questioning—or puzzling—with students and opening up library collections to students. The authors (a) highlight how undergraduate students were invited to engage with their library’s collection management practices, (b) discuss examples of student-curated collections from a recent initiative, and (c) consider how the initiative informs current and future possibilities for student involvement in library work and knowledge management. In opening up the library collections to students, this work decenters the librarian-as-expert paradigm while also illustrating both the challenges and possibilities of demystifying and shifting our approach to information science.
Several Student-Curated Featured Collections on the Library Shelf
When writing this article, Sarah Keener (she/her) was the library director for a small, rural college in the northeast. She’s had one foot in education and academia and the other in outdoor and hands-on trades for the entirety of her working life. This duality influences her interdisciplinary and expeditionary approach to the academic library, an approach that has also been shaped by the years she spent as a middle school teacher, school librarian, craft educator and student, and coach. As a white educator in a remote area that is socioeconomically diverse but predominantly white, the persistent sense of discomfort and uncertainty she confronts in this role arises largely from her inevitable participation in oppressive practices and colonial systems, and from the uneven power dynamics that seem inherent to being a teacher of any kind. This question, to paraphrase Maluski and Bruce (2022), has become central to her work and mission in education: What is my role in dismantling oppressive practices?
Cee Carter is a fifth-generation Black woman educator and an Assistant Professor at the University of Vermont. Her previous work as a middle school educator and non-profit educational leader exposed the larger political economy of race that facilitates educational investments for reform. That is, how race is leveraged for policy intervention and profit in public education. In response, her scholarly work aims to shift normative curricular and pedagogical practices (Sykes, 2011)—asking more of how we construct and pursue our conceptions of justice in the era of neoliberal public education reform (Carter, 2024). A question that animates her educational inquiry is: How can educators, leaders, policy makers, and researchers rethink strategies for pursuing educational justice?
This collaborative article is an outgrowth of the authors’ studies in Cee’s Curriculum Theory course that Sarah took while pursuing her M.Ed. and working as an academic librarian. Cee designed the course to unsettle curriculum as a purely practical pursuit and open “disciplinary codes… so that they [students] may learn … their modes of operation, their rules and conventions, so that they may see knowledge and meaning as the products of ongoing discourses” of which they are part (Ashbee, 2021, p. 11). The authors emphasize how questioning rules and conventions with students and other faculty can create an interdisciplinary critical librarianship praxis for disrupting and reconceiving academic library collections. Furthermore, their co-authorship of this article highlights the benefits of continued collaborative learning beyond the classroom informed by critical theory, or studying and exposing the power relations that underlie our work in academic institutions. Illustratively, the initiative outlined below opened the library and encouraged a pedagogical approach that challenges binaries between librarian-patron, teacher-student, librarian-professor, library-classroom, and theory-practice.
Literature Review
Disrupting and Reconceiving Collection Practices
Library collections and the collection practices that shape print and electronic resources and services are influential throughout the knowledge cycle. Selection and deselection criteria determine what we include and exclude, and these processes are inherently biased, even when careful measures are taken to promote neutrality and inclusivity. The Association of College and Research Libraries (ACRL) (2024) suggests re-examining collections through a social justice lens, defining diversity, and setting progress goals as key activities for reimagining collection practices. We also argue that critical and decolonial theory are key for mobilizing practices that intervene on hegemonic collection practices. Yet, we also heed Tuck and Yang’s (2012) argument that decolonization is not a metaphor and posit that decolonizing work takes more than implementing simple quick fixes (Mutonga & Okune, 2022; Stahl, 2024; Watkins et al. 2021).
Libraries offer, and limit, information and information services for institutions of higher learning. Therefore, in libraries, the ongoing project of disrupting and reconceiving collections practices requires a holistic and systemic approach to re-examining “acquisition practices and systems, including approval plans and demand-driven acquisition programs” (Research Planning and Review Committee, 2024, p. 234) whose subtle ideologies are often more accountable to the market than to other critical and participatory goals we might pursue in libraries. In their autoethnographic exploration of disability in academia, Dreeszen Bowman and Dudak (2025) discuss how the tyrannical neoliberal foundations of academia extend to the library and consider how we might “advance knowledge and offer possibilities for new practices” (para. 10). Disrupting and reconceiving collection practices with students offers a generative space for examining the librarian’s role in knowledge management as well as practicing asset-based approaches and open pedagogy through a lens that confronts epistemicide or the active and violent eradication of knowledge systems. Working with students and initiating critical conversations between librarians and other faculty also responds to the demand to diversify and “decolonise libraries and other knowledge infrastructures” (Mutonga & Okune, 2022, p. 190).
Asset-Based Praxis: From the Classroom to the Library
Asset or strength-based approaches to teaching and learning emphasize the strengths (Heinbach et al., 2021; Maluski & Bruce, 2022), cultural experiences (Gay, 2002; Ladson-Billings, 1995; Nasir et al., 2021), and navigational practices (Yosso, 2005) that learners and educators bring to libraries and classrooms. An asset-based pedagogical perspective pushes against deficit-oriented practices, which view students in terms of perceived academic weaknesses and lead to “compensatory educational interventions” (Ladson-Billings, 1995, p. 469). Thus, critical pedagogies scholars theorize asset-based approaches through examining the historical and sociopolitical contexts of education to bolster learning encounters that affirm, critically notice, and open possibilities for intervening on myriad forms of injustice (Muhammad, 2023).
Heinbach et al. (2021) extend strengths-based praxis into library stewardship and highlight the many learning experiences students draw on to meet their academic needs. Their asset-based praxis prompts them to ask more of institutionalized knowledge systems, and provokes librarians to consider how academic library services can acknowledge, and better serve, the needs and complexities of students’ lives. Toward reconceiving librarians’ roles in supporting students, Maluski and Bruce (2022) heighten our consciousness around the dangers of using deficit-based assumptions to diagnose perceived student needs. They compel us to reconsider how we “evaluate student needs, understand their strengths, and make a better learning environment for them and work environment for us” (para. 38). We draw on these critical approaches to asset-based pedagogies as foundations for inviting students to grapple with practices for counteracting exclusion and bias in knowledge management practices.
Open Pedagogy and Participatory Instructional Design
In higher education, there is a growing demand for more, and more diverse, opportunities for substantive student engagement and empowerment. This participatory approach aligns with open pedagogy, which thrives on practices and “renewable assignments” that support “students to see themselves as active creators of information rather than passive consumers” and invite them to “contribute to the production and dissemination of knowledge” (Research Planning and Review Committee, 2024, p. 232). In the academic library, this work can take many forms, such as participatory instructional design, critical approaches to pedagogy and curriculum, student employment and leadership, peer mentorship and collaboration, co-construction of knowledge, advocacy and activism, and outreach and programming to encourage students’ sense of belonging.
Student-centered learning encounters foreground “autonomy, competence, and relatedness” (Werth & Williams, 2021, p. 48) as well as inclusivity and practical application of learning. Additionally, critical information literacies center the need for epistemic justice, particularly within places and processes dominated by colonial content systems and management strategies, where “information is understood as a product shaped by cultural, historical, social, and political forces” (Laverty & Berish, 2022, p. 1). We underscore lessons about the colonial and sociopolitical nature of academic libraries as “academic librarians function within this hegemony and reflect a culture of whiteness” (Laverty & Berish, 2022, p. 3). So, in addition to ACRL’s (2024) concerns about workload and sustainability of open pedagogies, we encourage librarians to bear in mind how the “implicit assumption that open pedagogy is inclusive may further stall the critical work needed to examine, revise, and reimagine educational resources as a way to create new and underrepresented forms of knowledge in the academy” (Brown & Croft, 2020, p. 1). Open pedagogical practices should consistently be interrogated with a critical lens, and interdisciplinary study can support instructor revisions. These considerations shaped Sarah’s approach to opening up her academic library’s collections. The Student-Curated Featured Collections initiative showcases students’ unique strengths and perspectives by inviting them to thoughtfully curate and share their own collections with the community. The initiative also creates space for students to grapple with the problematic histories and institutional priorities that shape the academic library by critically exploring complexities in library collection policies and practices.
Conceptual framework: Puzzlement
“Informed by prior theoretical readings or other sources, these expectations [from prior research] are often surprised by experienced social realities in the field, and the ‘puzzlement’ provoked by the tensions between expectations and lived experiences becomes the starting point for theorizing.”
Haverland and Yanow (2012, p. 405) use the term puzzlement to address the disconnect between social researchers’ preconceptions and the lived realities they encounter when conducting research. While their work focuses on social research and public administration, they describe key dynamics in the ongoing pedagogical conundrums we grapple with as we integrate theory into our practices. Heinbach et al. (2021) define praxis as the “sometimes ambiguous space between the practical and the theoretical” whereby we can leverage “theory to inform our practice, and practice to inform the theory we read and internalize, in order to develop the most powerfully equitable libraries and educational experiences we can” (p. 2). Puzzlement connotes the perplexity and bewilderment we feel in this liminal space, and it also highlights the iterative nature of the process.
Myriad tensions emerge from any substantive engagement in praxis, even in the more mundane aspects of our practice. Pointedly, curriculum is often conceived as a systematic and practical design pursuit that emphasizes sequencing and progression (Ashbee, 2021), sanctioned disciplinary knowledge approved by the ruling technocratic class (Apple, 1999), as well as disciplinary literacies (Gebhard, 2019). When critical and decolonial perspectives open curricular discussions beyond the purely practical, theory can seem inaccessible or impractical to practitioners working in education or library and information sciences. They may feel overwhelmed when asked to engage in dense theoretical work while consumed by daily stresses and responsibilities of caring for themselves and their communities amid flaring partisan tensions. However, as educators, we cannot ignore “how and why something as simple as a curriculum,” as well as knowledge management practices, “contains the same violence of colonialism, imperialism, slavery, capitalist exploitation, genocide and empire” (Rose, 2019, p. 27). So, Cee’s graduate course is designed to actively help students puzzle over their specialized educational practices by pointing to how we (un)intentionally accept and participate in curricular and pedagogical work that is deeply flawed and unjust while also encouraging them to design learning engagements that critique power. Apple (1999) would characterize this approach to puzzling over curriculum and pedagogy as engaging with “the difficult … and contentious ethical and political questions of content, of whose and what knowledge is of most worth” and how these questions “have been pushed into the background in our attempt to define technically oriented methods” (p. 34).
The process of unsettling taken-for-granted practices (Ghaddar & Caswell, 2019) that align with neoliberal and colonial logics is never easy nor complete, a reality expressed by Tuck and Gaztambide-Fernández (2013) in their call for a refusal to:
require that new works in curriculum studies soothe settler anxieties. There must be work inside curriculum studies that dis-invests in settler futurity, that refuses to intervene, that observes a writ of ‘do not resuscitate.’ […] Meanwhile, settlers in curriculum studies must hold one another accountable when they invade emergent work by requiring it to comfort their dis-ease. (p. 86)
Such complexities in relation to power, knowledge, and settler colonialism—or the ways that we continue to occupy unceded land—take our ethical and political questioning further. On one hand, settler colonial critique illuminates the ongoing violence of land occupation that sustains the physical institutions where our work takes place. It reminds us that the grounds we occupy are more than a mere setting for critical pedagogical and curricular practice (Grande, 2004). We work on sites of ongoing decolonial struggle. On the other hand, Tuck and Gaztambide-Fernández’s (2013) settler colonial critique orients us toward the political task of contesting sanctioned disciplinary knowledges and narratives by “rethinking the aims of research… to forward” Indigenous “sovereignty and wellbeing” (p. 85). Our work is fraught with ethical tensions that don’t have simple solutions. Yet, acknowledging these tensions does help us continuously question as we revise and improvise our pedagogical practices to intervene on hegemonic collection practices. With such far-reaching ethical and political implications for contesting dominant knowledge and narratives, theory should be accessible to everyone for the purposes of puzzling and practice—especially students. So, we turn to a discussion about how Sarah made theory accessible in her collections work to begin disrupting hegemonic collection practices.
Puzzling Over Collection Management with Students
Students explored the complexities of library collections and policies through their curated featured collections and other collaborations, such as an independent study, student library worker projects, and an Environmental Humanities workshop in archives and collection maintenance. They considered overall goals for decolonization and diversity audits, deselection, continuity, and future implications of their decision-making. This work created direct opportunities for students to thoughtfully puzzle over their choices and the potential consequences as they engaged with the big questions that “libraries in diverse postcolonial and settler-colonial sites around the world, are grappling with—what to remember and what to forget in attempts to decolonise” (Mutonga & Okune, 2022, p. 189). Students who curated collections struggled with their final choices and realized that the publishing industry is highly influential, that representation of diverse perspectives and authors remains limited in spite of their best efforts, and that personal bias is inevitable in the selection process.
In one instance, a colleague invited Sarah to facilitate a workshop for her course titled The Meaning of Things, an Environmental Humanities seminar exploring the significance of both human-made and natural objects. Sarah invited students to help her think about the project of weeding the library’s dated reference section during their unit about museum collections and archives. Students voiced many thoughtful questions and observations as they considered the volumes on the shelves and their tasks alongside the goals and implications of collection maintenance, “decolonization,” and the rightsizing approach for systematically shaping responsive, customized library collections (Miller & Ward, 2021). These students’ insights further illustrate the benefits of including students, as learners and as stakeholders, in library collections practices:
If something is outdated, we need to consider keeping it as a throughline [for existing collections] versus making room for newer stuff.
The decisions seem to be case by case: Where we have an extensive and unique collection on a specific topic, should we keep it?
There are many overtly racist books, but they show something important [about misinformation as historical artifacts of racism and the need to question “experts”], so what do we do with them?
What if we stored the outdated books in the storage? Especially where they are part of a special theme/collection. But, how would this work?
Sure, we want people to be able to research niche topics if they have a PhD; but, we can also think about people who will have an easier time using the library [that has been “rightsized” and is simpler to navigate]; more people will benefit from having easy access and engaging, up-to-date collections.
Once you remove things and it becomes less cluttered, it’s more user friendly.
How should the library be organized? It’s useful to have books on the same topic together, but also nice to have featured collections that are curated for you.
The conversation expanded as they considered the historic and present role of the reference section, and students shared their thoughts about online versus print resources:
I use online resources a lot. I like reading for leisure but I find it hard using a book as a resource; online is much more accessible.
They’re a valuable tool for classes—discussion and independent projects. There’s a lot more we have access to online, which is really nice; I’m kind of sad about losing college access when I graduate.
You can search by keyword to access vast, diverse resources.
If you can’t afford required texts that are in print, you have to rely on course reserves, which is challenging.
For recreational materials, you want a book; research, you want online.
[There are limitations to print resources] Even the ecology field guide books—which are admittedly nice to have in the field—are extremely expensive, and many are outdated (due to shifting habitat and migration patterns, removing scientist’s names from birds, etc.).
What print resources do students want/need in the library? pleasure reading, guide books, course reserves, graphic novels/comics, maps, large format, foreign language.
Each of these comments demonstrates the nuances of students’ thinking about these tough issues, which don’t have clear-cut answers or simple fixes.
Opening Up the Collections
Introducing Student-Curated Featured Collections
The college library where Sarah worked promotes patron-driven acquisitions as a means to invite and respond to employee and student requests and recommendations. In an effort to open collection development to students in a more substantive and comprehensive manner, she experimented with a student-curated featured collection initiative. The concept for the student collections arose in response to several library objectives: to acquire more books for the library that might not fit neatly into its standard selection criteria—such as works of fiction and poetry, and subjects outside the scope of the college’s core disciplines; to highlight the library’s vast existing physical holdings; and to create more meaningful ways for students to engage with and contribute to the library space and broader local community.
As a member of the Work Colleges Consortium, the college receives financial support for a wide variety of work and service-learning opportunities. The library made creative use of this funding to support student-curated collection development because the initiative aligns with the service, work, and educational goals of the Work Program. Additionally, the initiative supports both the library’s efforts to diversify and unsettle collections, and the college’s institutional commitment to student agency and anti-racist practices. The Work Program sponsored and helped promote the opportunity. It paid for book purchases and compensated students for their time, although students did not receive any academic credit for their participation. Students whose work assignments were in the library were permitted to use their weekly required work hours on the project, and students who did not work in the library were hired for paid independent contracts.
How it Worked
The job posting for Library Featured Collections, as advertised to the entire student body, outlined the following responsibilities:
Learn about accessibility and DEI in library collection development
Choose a featured collection theme that is of personal interest and would also benefit the library/greater college community
Develop and implement selection criteria to research and curate a focused book collection for the library
Assist librarian with book orders, online and in person
Assist librarian with book processing, as needed (cataloging, covering, adding spine labels, etc.)
Create a display and present the collection in some format (poster, video, social media post, slide deck, book talk, etc.)
In addition to requiring good standing with the Work Program, the posting listed the following requirements:
Committed to including diverse voices, perspectives and formats
Excited to share interests with the community
Love of books and reading
Reliable, organized and able to work independently (with support from librarian)
At the beginning of the semester, Sarah met with interested students to discuss their ideas and questions and to review the collection development guide, which provides a comprehensive set of instructions, selection criteria, and expectations. They were also encouraged to browse past student collections for inspiration. After the initial meeting, Sarah was available to support students as needed, both in person and via email. These ongoing conversations provided Sarah and the students opportunities to puzzle together while they engaged with the theoretical and practical complexities that arose throughout the process.
After settling on a theme, students developed their own collections, selecting five to ten new titles each. These were ordered, whenever possible, through the local bookstore. The library has a strong working relationship with the bookstore, which created an additional benefit of encouraging students to engage with knowledgeable and passionate community members, outside the college bubble, who provide a valuable local resource. Students then helped process—and in some cases, catalog—their books and materials. They were responsible for assembling their own displays, which consisted of a collection poster, a description, a list of titles and authors, and any supplemental materials, such as existing library holdings, printed articles, magazines, and links or QR codes for podcasts or websites. The collections are publicly available to the college and local community, and the library compiled a pamphlet of all collection materials to display with the collections in the library and to share digitally with students and employees.
After briefly introducing their collections at a weekly all-school community meeting, students planned, advertised and hosted an event or other form of presentation. Past events and presentations have included a bonfire with s’mores and spooky excerpts from “Into the Woods: Folklore, Folk Horror, and The Stories We Tell”; a coloring and collage night to celebrate “Feel Good Reads”; a TikTok reel featuring the books in “Uplifting BIPOC Authors”; and a book talk and related film screening for “Magical Women.”
Figure 2
Fall 2023 Collections poster
Figure 3
Spring 2023 Collections poster
Figure 4
Informational Collections poster
Examples of student collections. The best way to illustrate this initiative is to let the students’ projects speak for themselves. Each of the three examples below has been shared with student consent and includes the description, a book list, and associated media, followed by some student reflections about the process. These collections show three topics from students in three different majors. They were selected as examples that demonstrate the students’ intentionality in both selecting featured books and writing descriptions that articulate their intended goals for the collections.
Example 1: Magical Women Collection. Alice, Fourth-Year Ecology Major
Figure 5
Poster Alice illustrated for their “Magical Women” Collection
Description. Sirens, Witches, and Cloudwatchers. We have all held magic within us since the beginning of time. This collection aims to look specifically at women, their stories, and their interactions with magic. Each takes a different perspective on magic, whether it is the world seen through the eyes of a child, the flicker at the edges of our vision, or fantastical powers. From classics such as Wise Child to recent releases like Fifty Beasts to Break Your Heart, these stories move across time and space, from Scotland to New Jersey to Nigeria. We hear from magical beasts, witches, and burgeoning artists. These stories deal with the ways that patriarchy, racism, homophobia, and transphobia have worked to oppress women of all identities and the magic they hold, all while still finding moments of hope, joy, and liberation. Each of them ask the same question: Where do we find magic, and, within that, where do we find ourselves?
Book List. Wild Seed, Octavia Butler; Fifty Beasts to Break your Heart, Gennarose Nethercott; Chlorine, Jade Song; The Icarus Girl, Helen Oyeyemi; We Were Witches, Ariel Gore; Her Body and Other Parties, Carmen Maria Machado; Wise Child, Monica Furlong; The Human Origins of Beatrice Porter and Other Ghosts, Soraya Palmer; Maiden, Mother, Crone: Fantastical Trans Femmes, Gwen Benaway; Woolgathering, Patti Smith; The Women Could Fly, Megan Giddings
Student Reflections. Alice described the experience as a “capstone” and a highlight of their final semester:
I think the student collections has been a beneficial … way to connect to the library, and I know that a lot of the people who don’t even work in the library were excited to get involved. … It’s been a way to make the space more welcoming. When someone checks out one of my books, it’s exciting and I can talk to them about it. It’s nice to know that I’m helping the library grow.
Example 2: Rural American History Collection, Rory, Third-Year Natural Resource Management Major
Figure 6
Rory beside his “Rural History” collection
Description. I chose to create a selection concerning rural history because of how significantly that history impacts us all. In this collection, I’ve tried to weave together a wide (though ultimately limited) range of experiences in rural America so that readers can understand the complicated and diverse conditions that rural life entailed. In choosing the works for this selection, I’ve done my best to balance a selection of works that will cover local history and works that illustrate the regional differences in America’s rural communities. In addition to trying to grapple with the different regions in America, I’ve looked to incorporate works on groups often underrepresented in previous historical scholarship. Despite the wide range of these subjects, I hope that those interested will get valuable insight into what life was like and what that means for us now.
Book List. The Last of the Hill Farms, Richard Brown; Tall Trees, Tough Men, Robert Pike; Ramp Hollow, Steven Stoll; Whaling Captains of Color, Skip Finley; In Pursuit of Gold: Chinese American Miners and Merchants in the American West, Sue Fawn Chung; The Voice of the Dawn, Frederick Matthew Wiseman; Beyond Forty Acres and a Mule, Debra A. Reid; Farm Boys, Will Fellows; Beloved Land, Patricia Martin; South to America: A Journey Below the Mason-Dixon to Understand the Soul of a Nation, Imani Perry; We Are Each Other’s Harvest: Celebrating African American Farmers, Land, and Legacy, Natalie Baszile
Example 3: Feeling and Dreaming During the Climate Crisis Collection, Meredith, Fourth-Year Environmental Humanities Major
Description. Living through environmental crises and change is an emotional experience. How can we hold and care for ourselves and each other as we face existential planetary shifts? This collection intends to speak to this emotional connection to Earth to foster connection and resilience individually and communally. Our feelings for nature and our place within it are powerful—how are you listening to that power?
I chose this featured collection because I’ve witnessed myself and those around me navigate grief, hopelessness, anger and more when experiencing, learning about, and fighting Earth’s deterioration. These books and resources aim to help us connect with our feelings and express our dreams through the climate crisis.
Book List. Entering the Ghost River, Deena Metzger; A Rain of Night Birds, Deena Metzger; Earth Emotions: New Worlds for a New World, Glenn A. Albrecht; Parable of the Sower, Octavia E. Butler; Braiding Sweetgrass, Robin Wall Kimmerer; Borealis, Aisha Sabanti Sloan; Staying with the Trouble, Donna J. Haraway; Joyful Militancy: Building Thriving Resistance in Toxic Times, Carla Bergman and Nick Montgomery; The Language of Emotions: What Your Feelings are Trying to Tell You, Karla McLaren; Depression: A Public Feeling, Ann Cvetkovitch; The Last Beekeeper, Julie Carrick Dalton; Meltwater, Claire Wahmanholm
Articles and Podcasts. Climate Doom to Messy Hope, UBC Climate Hub. The Revolution Will Not Be Psychologized, The Emerald Podcast. Big Planet Big Feels: Grief, Mental Health, and Community, Madi, Earth First
Student Reflections. Meredith worked in the library for their on-campus job, so they were involved in all aspects of the collections project, as well as other library activities. They shared several things they liked and learned about:
Overall, the process of choosing a theme, the media, and how to structure the display was a great hands-on experience in learning what the collection policy is and how the books are organized within the library. … For instance, our discussions on how graphic novels should be organized in [our] library (since the fiction section is small etc.) seems representative of students’ desires moving collection policies into the modern age. … I think that semester’s range of featured collections showed what topics and media forms students want in the library. The fact [that] I was learning about deaccessioning books at the same time that I was making my collection definitely informed [me of] a big-picture view on collection policy! … It was important in my process to choose different categories of books so that something new could be added to each part of the library (new fiction novels, new poetry, new social/environmental justice essays, etc). This also gave people options in how they wanted to engage with my topic.
Benefits
The classroom community has the potential to become a space for generating “excitement” and “recognizing one another’s presence” (hooks, 1994, pp. 7-8). Beyond helping students feel heard, seen, and excited about their tangible contributions to the library spaces and collections, this initiative provided a pathway for direct participation in the college’s stated efforts to unsettle some of the oppressive systems and structures deeply ingrained in higher education and present in our own academic library. As emphasized in recent ACRL trends reports and previously mentioned in this article, academic libraries are leaning into co-creation of knowledge, open pedagogies, and re-imagined collection practices as key strategies for this work. The special collections project incorporates all of these with asset-based strategies that promote student agency and celebrate students’ multidimensional and expansive interests, backgrounds, ideas, and areas of expertise.
Furthermore, the collection development process itself fosters critical thinking by teaching students about selection policies and criteria, which is particularly relevant today. Through direct engagement with library themes, students gain greater perspective on epistemic justice, bias and neutrality, racism and exclusion in collections and cataloging, profit and distribution within the publishing industry, and various accessibility considerations. Student participation helps expand and enhance library collections. As students tend to be in touch with different networks and publishing pipelines than librarians, it has the potential to counteract library worker bias. They are often plugged into more independent and radical voices, and more contemporary trends and movements. By sharing dynamic, creative, and well-researched collections on a variety of topics, students contribute to the co-production and dissemination of cross-disciplinary knowledge. This process and its products have the potential to inform their own and others’ learning moving forward.
As a librarian, Sarah enjoyed chatting with students about their collections, and she loved observing library visitors as they noticed and commented on them as well. The students’ engaging descriptions, questions, and selections captured her imagination and curiosity, as she found herself gravitating toward their picks when looking for a new book. These collections take up space and command attention. The library looks and feels different, perhaps because it’s unusual for students to be so directly and transparently responsible for content and displays; the prominently displayed materials don’t always fit preconceived notions about what college library books and resources are supposed to look like. We also have the potential to conceive of library books in new ways when we have a personal connection to the people and processes responsible for acquiring them.
Coalescing around any given student’s theme, visitors encounter graphic novels, pamphlets, books of poetry, imaginative children’s books, and young adult fiction intermingled with new nonfiction, memoirs and first-person historical accounts, heady and artistic independent press publications, and dense anthologies. Unsurprisingly, the “Feel Good Reads” collection about sex and body positivity and education—which proudly displayed titles like The Post-Structuralist Vulva Coloring Book, by Elly Blue; The Bump’n Book of Love, Lust and Disability, by J. Tarpey et al.; and It’s My Pleasure: Decolonizing Sex Positivity, by Mo Asebiomo—garnered some shy giggles but also a lot of sincere interest and appreciation.
In some cases, it seemed students used their platform as an opportunity to advocate for causes and to resist and disrupt library and academic conventions. Perhaps students were also less concerned with appropriateness of selection because of the project’s relatively open parameters and because they are not as informed or constrained by the academic library paradigm that trained library workers work within.
Challenges
While we did not encounter this problem in the first two semesters, we can anticipate that the growing popularity of student-curated featured collections will necessitate a more formal and competitive application process, which would take additional time to develop and conduct. Furthermore, the eligibility requirement that students must be in “good standing” in their school academics and work responsibilities in order to take on new extracurricular projects seems reasonable; however, it has the unwanted consequence of excluding a portion of the student body who nevertheless deserves to have a voice in library collections. This deserves further consideration, and it highlights the need to maintain multiple avenues for student involvement in the library.
Although this issue has not yet been brought to the library’s attention, it’s foreseeable that someone might object to the content of a collection or to the perspectives expressed by the creator of a collection and lodge a formal complaint. In anticipation of this possibility, the library should be prepared to respond in a manner consistent with how the library and school would manage similar challenges, keeping in mind the nuances of censorship, bias, free speech, and discrimination.
Funding presents another potential challenge. Because this project was sponsored by the college Work Program, which is funded directly by the federally funded Work Colleges Consortium, its budget did not come out of the library or college budget. Book purchases were not made “at the expense of” regular library materials, and importantly, it was easier to create objectives and selection guidelines that did not strictly align with all of the academic library selection policies and procedures. The Work Program also compensated participating students for the time they spent developing and sharing their collections. This did not seem to be a key motivation for participants, but it undoubtedly sweetened the deal and perhaps supported more thoughtful selections and extension activities. It’s unclear whether students would have signed up if they hadn’t been paid, and this is something to follow up on. Furthermore, the continuity of any grant-funded program, particularly given today’s political climate, is uncertain. While the student collections initiative could be integrated into library practices in different ways, it would need to be significantly re-imagined if this funding model shifts.
Perhaps the greatest challenge is the amount of time and energy needed to support an initiative like this in a consistent and responsive manner, from launch to finish, every semester. As mentioned in the section about open pedagogy and instructional design, the scope of these renewable, participatory, individualized projects invites questions about their long-term sustainability. Library staff juggle many priorities, so the featured student collections initiative must be seen as a worthy priority in order for it to succeed as a routine practice, semester after semester. Without a committed staff member, and without the approval and budgetary support of library supervisors and college administration, it would be difficult for the project to continue in a meaningful and visible manner, if at all. Carving out time for these collaborative library initiatives is difficult for all partners in the work, regardless of their mutual appreciation for these partnerships and desire to work together—as any educator or busy college student can attest.
Next Steps
While many additional benefits came out of this initiative, opening library collections and collection practices is its primary objective. Some initial ideas for expanding or adapting this project, and related collection management practices, include the following:
Encourage peer collaboration within the collection development process;
Find ways to include students in routine library collection development;
Invite classes to curate collections related to course topics;
Provide a space in the library for visitors to share informal feedback about the individual collections, their presentations and/or the project as a whole;
Make more explicit metacognitive and generative connections to critical curriculum theory and decolonial studies;
Explore how to assess resources for value and quality without being gatekeepers;
Experiment with online resource collections featuring digital primary sources, films, podcasts, journal articles, and blogs;
Feature the collections, with their descriptions, on a page of the library’s online catalog;
Enrich the research and selection process with bookstore visits, author and educational workshops and webinars; and
Present the initiative and share goals with faculty and administration, and invite future collaboration on projects that engage and empower students with critical theory.
Sarah’s emergent work with instructors and students on other library projects, some mentioned in this article, would also clearly benefit from continued critical puzzlement and experimentation. As we’ve repeatedly acknowledged, collections are only one facet of the library, and there are rich connections to be explored by opening interdisciplinary collaborations through the library. In the authors’ own readings and conversations, we’ve encountered many other possibilities to center students so they can shake up the library and higher education.
Puzzling Together
“How can we as library workers [and educators] dismantle the oppressive practices of the spaces that we are embedded within and very much a part of?” – Maluski and Bruce, 2022, para. 9
As we close, we heed Stahl’s (2024) warning about universalizing solutions, which are often “unwilling to brook surprise, to imagine the possibility that a flawed system’s shortcomings can sometimes produce positive outcomes” (p. 32). Instead, we offer closing points about how our ongoing puzzlement, with each other and with students, returns us to the important questions and theories that frame our work and pushes us to carry that work beyond a politics of correction and exposure. Sarah’s collections development project, supported by Cee’s student-centered critical curriculum theory course, echoes Drabinski (2013, as cited by Stahl, 2024), who encourages critical and queerly informed librarians to “transform … moments [of encountering bias or idiosyncrasies in the catalog] into another point where the ruptures of classification and cataloging structures can be productively pulled apart to help users understand the bias of hegemonic schemes” (p. 107). That is, the study of critical library theory and curriculum theory among librarians, faculty, and students informs how they use the library and how they can demystify the politics and disciplinary practices of information science.
Collaborative study is just one key approach for transforming libraries, classrooms, and our engagements within them (Rashid et al., 2023). We have found it fruitful to have each other as inquiry partners who are navigating the same critical foundations with different backgrounds and disciplinary perspectives. We do this work as part of reimagining the academy, which prompts us to be critically aware of how disciplinarity functions as we contend with its colonial foundations. We have much more to do, and for now, we are keen on building our “instinct to question” (Stahl, 2024, p. 32) rather than providing a list of quick fixes that have the potential to be absorbed into the larger hegemonic structure we are tasked with deconstructing.
Acknowledgements
Thank you to our publishing editor, Ian Beilin, and the editorial board for your encouragement of this contribution. We would also like to acknowledge the labor and professional service of Jeannette Ho and Betsy Yoon, our peer reviewers. We appreciate your thoughtful engagement throughout the publication process. Finally, thank you to the students whose work, reflections, and enthusiasm brought life to the projects and this article.
References
Ashbee, R. (2021). Curriculum: Theory, culture and the subject specialisms. Routledge.
Brown, M., & Croft, B. (2020). Social annotation and an inclusive praxis for open pedagogy in the college classroom. Journal of Interactive Media in Education, 2020(1), 8, 1-8. https://doi.org/10.5334/jime.561
Carter, C. (2024). Radically re-reading youth feedback with anticolonial black feminist critique. International Journal of Qualitative Methods, 23. https://doi.org/10.1177/16094069241282842
Drabinski, E. (2013). Queering the catalog: Queer theory and the politics of correction. The Library Quarterly, 83(2), 94-111. https://doi.org/10.1086/669547
Dreeszen Bowman, R., & Dudak, L.T. (2025). Cripping conferences: An autoethnographic exploration of disability in academia. In the Library with the Lead Pipe.
Gebhard, M. (2019). Teaching and researching ELLs’ disciplinary literacies: Systemic functional linguistics in action in the context of U.S. school reform. Routledge.
Grande, S. (2004). Red pedagogy: Native American social and political thought. Rowman & Littlefield Publishers.
Haverland, M., & Yanow, D. (2012). A hitchhiker’s guide to the public administration research universe: Surviving conversations on methodologies and methods. Public Administration Review, 72(3), 401-408. http://www.jstor.org/stable/41506782
Heinbach, C., Mitola, R., & Rinto, E. (2021). Dismantling deficit thinking in academic libraries: Theory, reflection, and action. Library Juice Press.
hooks, b. (1994). Teaching to transgress: Education as the practice of freedom. Routledge.
Ladson-Billings, G. (1995). Toward a theory of culturally relevant pedagogy. American Educational Research Journal, 32(3), 465-491. https://doi.org/10.3102/00028312032003465
Laverty, C., & Berish, F. (2022). Decolonizing librarians’ teaching practice: In search of a process and a pathway. Canadian Journal of Academic Librarianship, 8, 1-29. https://doi.org/10.33137/cjalrcbu.v8.37780
Mutonga, S., & Okune, A. (2022). Re-membering Kenya: Building library infrastructures as decolonial practice. In J. Crilly and R. Everitt (Eds.), Narrative expansions: Interpreting decolonisation in academic libraries (pp. 189-211). Facet Publishing.
Nasir, N. S., Lee, C. D., Pea, R., & McKinney de Royston, M. (2021). Rethinking learning: What the interdisciplinary science tells us. Educational Researcher, 50(8), 557-565. https://doi.org/10.3102/0013189X211047251
Rashid, M., Scherrer, B. D., Carter, C., McIntee, K., Jean-Denis, A., Correa, O., & Jocson, K. M. (2023). Praxis of the undercommons: Rupturing university conviviality and coded formations of diversity. Globalisation, Societies and Education. https://doi.org/10.1080/14767724.2023.2190876
Research Planning and Review Committee. (2024). 2024 top trends in academic libraries: A review of the trends and issues. College & Research Libraries News, 85(6), 231-246. https://doi.org/10.5860/crln.85.6.231
Watkins, L., Madiba, E., & McConnachie, B. (2021). Rethinking the decolonial moment through collaborative practices at the International Library of African Music (ILAM), South Africa. Ethnomusicology Forum, 30(1), 20-39. https://doi.org/10.1080/17411912.2021.1938628
Werth, E., & Williams, K. (2021). What motivates students about open pedagogy?: Motivational regulation through the lens of self-determination theory. International Review of Research in Open and Distributed Learning, 22(3), 34-54. https://doi.org/10.19173/irrodl.v22i3.5373
Yosso, T. J. (2005). Whose culture has capital? A critical race theory discussion of community cultural wealth. Race Ethnicity and Education, 8(1), 69-91. https://doi.org/10.1080/1361332052000341006
#IDSF25 will take place from 4–6 June 2025 in Vienna, positioning the Austrian capital once again as a central hub for global dialogue on digital security, sovereignty, and international cooperation. The CEO of OKFN, Renata Ávila, will be joining two sessions in the programme.
Finding legitimate use cases for Artificial Intelligence on the Library Blogs site proved difficult after four experiments. Tests included running Hugging Face, Ollama with two different LLMs, U-M Maizey, and Google NotebookLM. The costs of AI in maintenance, storage, and processing, as well as environmental concerns, were deemed too high, outweighing the benefits and value added to the site.
One of the projects I’m working on with my DFN colleagues is creating a web archive of a particular magazine so that we can use it for study. Said another way, we’re creating a dataset. To do this, I use the Webrecorder ArchiveWeb.page extension in Chromium to create WARCs, and to get that dataset of WARCs into shape for further analysis, I of course use the Archives Unleashed Toolkit!
The toolkit can produce a number of scholarly derivatives from web archives, and for our use case we’re looking at articles on the site. So, the “Web Pages” derivative gets us close to our goal. Why close? Because as you crawl a site you’ll pick up material at, and beyond, the edges of the site that you don’t need for your specific use case, and you’ll want to filter it out. You might also want to enrich the data a bit based on existing content. So, how can you do that?
Filtering
With the toolkit, we can pull the following data from a collection of WARCs into a csv:
import io.archivesunleashed._
import io.archivesunleashed.udfs._
// Data and results.
val warcs = "/path/to/warcs/*"
val results = "/path/to/results/"
// Load and filter data.
val webpages = RecordLoader.loadArchives(warcs, sc).webpages()
// Write results to csv.
webpages.write
.option("timestampFormat", "yyyy/MM/dd HH:mm:ss ZZ")
.format("csv")
.option("escape", "\"")
.option("encoding", "utf-8")
.save(results + "webpages")
As I mentioned above, if we do that, we’ll get a lot of material in the csv file that we don’t need at this time.
$ head -n5 part-00000-4fc0903e-de09-424e-ae77-3ee8e39d2ba0-c000.csv
20241206210837,20241119050652,actionbutton.co,https://embed.actionbutton.co/widget/widget-iframe.html?widgetId=SPK-QkZERA==,text/html,text/html,"",""
20241209150905,20241119050652,actionbutton.co,https://embed.actionbutton.co/widget/widget-iframe.html?widgetId=SPK-QkZERA==,text/html,text/html,"",""
20241221134950,20241218212446,actionbutton.co,https://embed.actionbutton.co/widget/widget-iframe.html?widgetId=SPK-REBCQw==,text/html,text/html,"",""
20241221135019,20241218212446,actionbutton.co,https://embed.actionbutton.co/widget/widget-iframe.html?widgetId=SPK-REBCQw==,text/html,text/html,"",""
20241222125709,20241218212446,actionbutton.co,https://embed.actionbutton.co/widget/widget-iframe.html?widgetId=SPK-REBCQw==,text/html,text/html,"",""
How can we get just the articles from our site? Well, if we have a well-structured URL pattern for them (https://website.com/post/ in our case), that makes our lives easy! Let’s modify our Scala script from above and add two filters to it: hasMIMETypes and hasUrlPatterns.
import io.archivesunleashed._
import io.archivesunleashed.udfs._
// Data and results.
val warcs = "/path/to/warcs/*"
val results = "/path/to/results/"
// Load and filter data.
val webpages = RecordLoader.loadArchives(warcs, sc).webpages()
val mimeTypes = Array("text/html")
val urlsPattern = Array(".*website\\.com/post.*")
val articles = webpages
.filter(hasMIMETypes($"mime_type_web_server", lit(mimeTypes)))
.filter(hasMIMETypes($"mime_type_tika", lit(mimeTypes)))
.filter(hasUrlPatterns($"url", lit(urlsPattern)))
.dropDuplicates("url")
// Write results to csv.
articles.write
.option("timestampFormat", "yyyy/MM/dd HH:mm:ss ZZ")
.format("csv")
.option("escape", "\"")
.option("encoding", "utf-8")
.save(results + "articles")
This will give us a csv with only articles from our website:
$ head -n2 part-00000-4fc0903e-de09-424e-ae77-3ee8e39d2ba0-c000.csv
crawl_date,last_modified_date,domain,url,mime_type_web_server,mime_type_tika,language,content
20250107185922,"",website.com,https://www.website.com/post/10-easy-ways-incorporate-movement-throughout-day-reach-10k-step-goal,text/html,text/html,en,"10 Easy Ways To Incorporate Movement Throughout The Day To Reach Your 10k Step Goal...
Enrichment
One of the big recurring questions we had during our Archives Unleashed Datathon and Research Cohort meetings was how to get fairly specific data, like the publication date or the author of a web page, out of the data. Well, it’s still a recurring question within my team. We wanted to extract 1) the title of a post, 2) the author of a post, 3) the publication date, and as a bonus 4) the read time. Luckily, in our case, all this info is available in the content column if we look for patterns in the data. Then, if we find them, we just need to parse it all out with a whole bunch of regex 😩!
Article Title
How do we get the title? Well, there is a pattern in our dataset: it’s the first set of characters up to the first pipe character.
// Extract titles from content column.
val articlesWithTitle = articles
.withColumn("title", regexp_extract($"content", "^([^|]+?)\\s\\|", 1))
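Outside of Spark, we can sanity-check this pattern in plain Scala. The sample string below is illustrative, modeled on the content column in this dataset:

```scala
// Quick plain-Scala check of the title pattern (sample string is illustrative).
val titleRegex = """^([^|]+?)\s\|""".r

val sample = "Can Women Learn Anything From Jordan Peterson? | Evie Magazine Discover ..."
val title = titleRegex.findFirstMatchIn(sample).map(_.group(1))
// title == Some("Can Women Learn Anything From Jordan Peterson?")
```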
Article Author
There is a pattern, or well, a few patterns for the author name: 1) it appears after the pipe character, 2) the string “By” or “Written By” precedes it, 3) a three-character month abbreviation follows it, and 4) sometimes the read time in minutes follows as well.
val authorRegex =
"""(?i)(?:By|Written by)\s+((?:[A-Z]\.){1,3}(?:\s+[A-Z][\p{L}'\-]*)*|(?:Dr\.)\s+(?:[A-Z][\p{L}'\-]*\s*)+|(?:[A-Z][\p{L}'\-]*(?:\s+(?:&|and)\s+[A-Z][\p{L}'\-]*|\s+[A-Z][\p{L}'\-]*)*))(?=\d*\s*(?:min\s+read)?\s*(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\b)"""
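Again, we can sanity-check the pattern in plain Scala before wiring it into the Spark job (the sample text is modeled on the content column of this dataset):

```scala
// Plain-Scala sanity check of the author pattern; the sample text is
// illustrative, modeled on the content column of this dataset.
val authorRegex =
  """(?i)(?:By|Written by)\s+((?:[A-Z]\.){1,3}(?:\s+[A-Z][\p{L}'\-]*)*|(?:Dr\.)\s+(?:[A-Z][\p{L}'\-]*\s*)+|(?:[A-Z][\p{L}'\-]*(?:\s+(?:&|and)\s+[A-Z][\p{L}'\-]*|\s+[A-Z][\p{L}'\-]*)*))(?=\d*\s*(?:min\s+read)?\s*(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\b)""".r

val author = authorRegex
  .findFirstMatchIn("By Freya IndiaNov 18th 2024 5 min read")
  .map(_.group(1))
// author == Some("Freya India")
```

Note that the lookahead is what stops the match before the month abbreviation, even when the author name runs directly into it with no space.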
Article Publication Date
Publication date is pretty straightforward, since we just need to look for that three-character string representation of a month, then the day and year.
val publicationDateRegex =
"""(?i)(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\s+(\d{1,2})(?:st|nd|rd|th)?[,\s]+(\d{4})"""
Article Read Time
Read time is a little bit easier, since we just need to look for a digit or two followed by the string “min”.
val readTimeRegex = "(\\d+) min"
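And a quick plain-Scala check of that pattern (sample text is illustrative):

```scala
// Plain-Scala sanity check of the read-time pattern.
val readTimeRegex = "(\\d+) min".r

val readTime = readTimeRegex
  .findFirstMatchIn("Nov 18th 2024 5 min read")
  .map(_.group(1))
// readTime == Some("5")
```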
Generating an Enriched CSV
Now that we have our regexes, we’ll need to combine them all to create a new csv. For our new csv, we also want the publication date converted from a string like “Jan 1st 2025” to YYYYMMDD. So, we’ll create what’s called a “User Defined Function” (UDF) to help us.
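A minimal sketch of what such a conversion function might look like, using the publication date regex from above (the function name and wiring here are illustrative, not necessarily exactly what we used):

```scala
import java.util.Locale

// Hypothetical sketch: normalize a date like "Jan 1st 2025" or
// "Nov 18th 2024" into a "YYYYMMDD" string.
val months = Map(
  "jan" -> "01", "feb" -> "02", "mar" -> "03", "apr" -> "04",
  "may" -> "05", "jun" -> "06", "jul" -> "07", "aug" -> "08",
  "sep" -> "09", "oct" -> "10", "nov" -> "11", "dec" -> "12")

val publicationDateRegex =
  """(?i)(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\s+(\d{1,2})(?:st|nd|rd|th)?[,\s]+(\d{4})""".r

def toYyyymmdd(content: String): String =
  publicationDateRegex.findFirstMatchIn(content) match {
    case Some(m) =>
      val mm = months(m.group(1).toLowerCase(Locale.ROOT)) // month abbreviation -> "01".."12"
      f"${m.group(3)}$mm${m.group(2).toInt}%02d"           // year + month + zero-padded day
    case None => ""                                        // no recognizable date found
  }
// toYyyymmdd("Jan 1st 2025") == "20250101"
```

In the Spark job this would be registered with something like `val toYyyymmddUdf = udf(toYyyymmdd _)` and applied via `.withColumn("publication_date", toYyyymmddUdf($"content"))`; again, those names are illustrative.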
Then if we write that out to a csv we get something like this:
$ head -n2 evie-articles-enriched-20250520.csv
crawl_date,url,title,author,publication_date,read_time,content
20241118131855,https://www.website.com/post/women-learn-jordan-peterson,Can Women Learn Anything From Jordan Peterson?,Freya India,20241118,5,"Can Women Learn Anything From Jordan Peterson? | Evie Magazine Discover AboutThe GlancePrint Edition Subscribe Login Culture Can Women Learn Anything From Jordan Peterson? Dr. Jordan Peterson is back. By Freya IndiaNov 18th 2024 5 min read Favorite Bookmark for later Pexels/cottonbro studio The renowned Canadian professor and clinical psychologist...
From here, it’s just making time for further investigation and research into our dataset. But, we’ve filtered and enriched our original dataset into something incredibly valuable to our overall research goals and needs. Hopefully this sparks something on your end!
The 26th ACM International Conference on Multimodal Interaction (ICMI 2024) was held from November 4 to 8 in San José, Costa Rica. This premier international forum brought together researchers and practitioners working at the intersection of multimodal artificial intelligence and social interaction. The conference focused on advancing methods and systems that integrate multiple input and output modalities, and explored both technical challenges and real-world applications.
In this blog post, I share my experience attending the conference in person, the engaging discussions I took part in, and key takeaways from the sessions, presentations, and interactions throughout the week. Our full paper, "Improving Usability of Data Charts in Multimodal Documents for Low Vision Users," was presented on Day 3, Thursday, 7 November 2024, during the 16:30 to 18:00 session in poster format.
ICMI 2024 followed a single-track format that included keynote talks, full and short paper presentations, poster sessions, live demonstrations, and doctoral spotlight papers. Its multidisciplinary approach emphasized both scientific innovation and technical modeling, with a strong focus on societal impact.
This keynote offered a compelling reflection on how to center human values when designing multimodal conversational systems. Drawing from projects in domains like cultural heritage, finance, and micro-entrepreneurship, the talk emphasized that conversational user interfaces must go beyond functionality to consider bias, accountability, and trust. One striking example involved using chatbots to engage with underrepresented women entrepreneurs, where values like creditworthiness emerged as complex, culturally embedded concepts. Rather than treating these issues as technical problems alone, the talk advocated for deeply contextual, participatory design approaches that align with users’ lived experiences. I found the discussion particularly relevant as generative conversational AI continues to evolve rapidly, raising new questions around social impact and responsible system behavior. The keynote made a strong case for embedding ethical reflection into every stage of multimodal system development, especially when these systems serve diverse, real-world communities.
This keynote explored the future of communication through 3D calling, where lifelike avatars allow people to share virtual spaces in ways that closely resemble face-to-face interaction. Yaser Sheikh introduced codec avatars, AI-generated models that replicate a person’s appearance, voice, and behavior with high fidelity. These avatars rely on neural networks and advanced capture systems capable of recording subtle social cues in three dimensions. The talk positioned 3D calling as the next step in the evolution of telecommunication, moving beyond video conferencing toward immersive social presence. What stood out was the goal of creating interactions that feel as natural and authentic as real life. The technical demands are significant, involving breakthroughs in both perception and rendering. Still, the potential for transforming remote collaboration, virtual reality, and human-computer interaction is substantial. This keynote presented a compelling vision of digitally mediated communication grounded in realism, presence, and social connection.
This keynote examined the deep cognitive and neural foundations of human conversation and what they mean for the future of conversational AI. Thalia Wheatley argued that the real innovation in human communication is not language alone, but the shared mental map that allows people to align meanings in real time. Drawing on research from psychology and neuroscience, the talk unpacked conversation as a complex, multi-channel coordination of signals involving timing, intent, emotion, and mutual understanding. This dynamic interplay defines how we connect and co-construct meaning. Wheatley challenged the audience to rethink conversational AI not just as a language processing task, but as an attempt to model the subtle choreography of human minds in sync. I found this perspective especially relevant as we continue to push for more socially aware and responsive AI systems. The talk offered both scientific insight and a roadmap for building machines that can genuinely participate in human dialogue.
This keynote traced the evolution of research behind Greta, a virtual agent platform designed for rich social interaction. Catherine Pelachaud shared how her team progressed from modeling emotional expressions to building agents that engage as active conversational partners. The talk focused on the integration of adaptive mechanisms such as imitation, synchronization, and conversational strategies that allow agents to respond dynamically during interaction. Evaluation studies were used to assess the impact of these features on user perception and interaction quality. What stood out was the commitment to modeling nuanced social behavior and making agents not just expressive, but socially responsive. The presentation offered a valuable view into the long-term development of socially interactive agents and the design choices involved in making them believable and effective in real-time communication.
This year’s Best Paper tackled a critical and often overlooked issue in the design of socially interactive agents: gender bias in the generation of facial non-verbal behaviors. The study showed that existing models frequently reproduce and even amplify gendered patterns found in training data. The authors introduced FairGenderGen, a model designed to reduce gender-specific cues by incorporating a gradient reversal layer during training. This approach minimized gender-related distinctions in generated behaviors while preserving their natural timing and expressiveness. Evaluation results confirmed that the model lowered bias without compromising the perceived quality of the behaviors. The work stands out for combining rigorous technical development with a clear ethical motivation. It also raises important questions about the balance between realistic behavior synthesis and fairness, and how perceptions of believability may differ based on gender expectations in human-computer interaction.
This paper introduced a real-time, multimodal system for predicting end-of-turn moments in three-party conversations. Accurate turn-taking is essential for natural interaction in spoken dialogue systems, but most existing models either lag in responsiveness or oversimplify complex dynamics like overlap and interruption. The authors addressed these challenges by combining a window-based approach with a fusion of multimodal features, including gaze, prosody, gesture, and linguistic context. Their model, which integrates DistilBERT and GRU layers, predicts turn shifts every 100 milliseconds and accounts for different turn-ending types such as interruption, overlapping, and clean transitions. The study also involved a new annotated dataset with synchronized gaze and motion capture data. Results showed a substantial performance improvement over traditional IPU-based models, particularly in handling nuanced conversational cues. This work sets a new benchmark for building socially aware agents that can manage multi-party dialogue with precision and fluidity.
Our paper addressed a critical accessibility gap in the usability of data charts for low-vision users, particularly on smartphones. While prior solutions have focused on blind screen reader users, they often neglect the residual visual capabilities of individuals with low vision who rely on screen magnifiers. We introduced ChartSync, a multimodal interface that transforms static charts into interactive slideshows, each offering magnified views of key data points alongside tailored audio narration. The system uses a combination of computer vision, prompt-engineered language models, and user-centered design to link charts with related text and surface important data facts. In our evaluation with 12 low-vision participants, ChartSync significantly outperformed traditional screen magnifiers and state-of-the-art alternatives in terms of usability, comprehension, and cognitive load. Presenting this work at ICMI was a valuable opportunity to engage with researchers in accessibility and multimodal interaction, and to receive constructive feedback on future directions such as desktop deployment and dynamic skimming features.
Grand Challenge
Grand Challenges at ICMI serve as focused community efforts to tackle open problems in multimodal interaction by providing shared datasets, clearly defined tasks, and evaluation protocols. They are designed to stimulate collaboration, benchmark progress, and drive innovation in emerging or underexplored areas of research. ICMI 2024 included two main Grand Challenges: EVAC, focused on multimodal affect recognition, and ERR, centered on detecting interaction failures in human-robot interaction.
The Empathic Virtual Agent Challenge (EVAC) focused on the recognition of affective states in multimodal interactions, using a new dataset collected in cognitive training scenarios with older adults. Participants tackled two tasks: predicting the presence and intensity of core affective labels such as "confident," "anxious," and "frustrated," and estimating appraisal dimensions like novelty, goal conduciveness, and coping, both in summary and time-continuous formats. The winning team leveraged state-of-the-art foundational models across speech, language, and vision, combining them through late fusion to outperform unimodal baselines. What stood out was the challenge's emphasis on real-world complexity, working with naturalistic, French-language data from therapeutic contexts, and requiring models to operate across asynchronous and noisy multimodal inputs. The challenge highlighted the potential of emotion-aware AI for health and assistive technologies, while also exposing the limits of current methods in capturing subtle, temporally evolving emotional cues.
This challenge focused on detecting interaction failures in human-robot interaction (HRI) by analyzing multimodal signals from users. Participants were asked to identify three types of events: robot mistakes, user awkwardness, and interaction ruptures. These were labeled based on observable disruptions in the interaction. The dataset was collected in a workplace setting where a robotic wellbeing coach engaged with users. It included facial action units, speech features, and posture data. The winning approach used a time series classification pipeline with MiniRocket classifiers, relying heavily on conversational turn-taking cues. Other strong entries employed GRUs, LSTMs, and modality-specific encoders or fusion techniques. Teams faced challenges related to class imbalance and the subtle nature of the rupture events, which often unfolded over time. The ERR challenge offered a timely benchmark for building more socially aware robots that can detect and respond to failures as they happen. It also demonstrated the value of real-world, multimodal data in advancing HRI research.
Travel Experience
Beyond the conference sessions, exploring Costa Rica offered an unforgettable complement to the ICMI experience. I visited La Paz Waterfall Gardens and Volcán Poás National Park, where the towering crater and tranquil lake provided breathtaking views. The trails were surrounded by rich biodiversity, with vibrant flora and sightings of toucans, hummingbirds, and butterflies creating a vivid encounter with the region's unique wildlife. As a coffee enthusiast, it was a great experience to sample and purchase high-quality peaberry coffee, a local specialty known for its smooth flavor and rarity. The trip also offered a chance to savor traditional Costa Rican cuisine and engage with the country's welcoming culture. Traveling with a group of researchers made the journey even more enriching. I was able to form connections not only with attendees from ICMI but also with those from CSCW, which was colocated, fostering cross-community conversations in both formal and informal settings.
Conclusion
Attending ICMI 2024 was both professionally rewarding and personally enriching. The conference showcased cutting-edge research in multimodal interaction, fostered thoughtful discussions on the future of socially aware systems, and highlighted important ethical and human-centered considerations in AI design. Presenting our work, engaging with a diverse research community, and participating in focused workshops and challenges provided valuable insight into the field’s evolving directions. Beyond the academic sessions, the experience of exploring Costa Rica and building new connections across communities added depth to the journey. I leave with a renewed sense of curiosity, several collaborative ideas, and deep appreciation for the vibrant and inclusive spirit of the ICMI community.
Acknowledgements
I would like to thank the ICMI 2024 conference organizers for curating an engaging and thoughtful program. I am especially grateful to my advisor, Dr. Vikas G. Ashok, and the WS-DL research group for their continued guidance and support. I also sincerely thank the ODU ISAB, the College of Sciences Graduate School (CSGS), and the Department of Computer Science at Old Dominion University for their generous support in funding my conference registration and travel. Finally, heartfelt thanks to all the colleagues, fellow researchers, and friends who made this trip a memorable and enriching experience.
Witt's account of the business side of the early history is much less detailed and some of the details don't match what I remember.
But as regards the technical aspects of this early history, it appears that neither author really understood the reasons for the two kinds of innovation we made: the imaging model and the I/O architecture. Witt writes (Page 31):
The first time I asked Priem about the architecture of the NV1, he spoke uninterrupted for twenty-seven minutes.
Below the fold, I try to explain what Curtis was talking about for those 27 minutes. It will take me quite a long post.
The opportunity we saw when we started Nvidia was that the PC was transitioning from the PC/AT bus to version 1 of the PCI bus. The PC/AT bus' bandwidth was completely inadequate for 3D games, but the PCI bus had considerably more. Whether it was enough was an open question. We clearly needed to make the best possible use of the limited bandwidth we could get.
We had two basic ways of making "the best possible use of the limited bandwidth":
Reduce the amount of data we needed to ship across the bus for a given image.
Increase the amount of data shipped in each cycle of the bus.
Imaging Model
A triangle is the simplest possible description of a surface. Thus almost the entire history of 3D computer graphics has modeled the surfaces of 3D objects using triangles. But there is a technique, dating back at least to Robert Mahl's 1972 paper Visible Surface Algorithms for Quadric Patches, for modeling curved surfaces directly. It takes a lot more data to describe a quadric patch than a triangle. But to achieve equivalent realism you need so many fewer patches that the amount of data for each frame is reduced by a significant factor.
Virtua Fighter on NV1
As far as I know at the time only Sega in the video game industry used quadric patches. When we launched NV1 at Comdex we were able to show Sega arcade games such as Virtua Fighter running on a PC at full frame rate, a first for the industry. The reason was that NV1 used quadric patches and thus made better use of the limited PCI bus bandwidth.
At Sun, James Gosling and I built the extremely sophisticated and forward-looking but proprietary NeWS window system. At the same time, I also worked with engineers at competitors such as Digital Equipment to build the X Window System. One of my many learning experiences at Sun came early in the long history of the X Window System. It rapidly became obvious to me that there was no way NeWS could compete with the much simpler, open-source X. I argued for Sun to open-source NeWS and failed. I argued for Sun to drop NeWS and adopt X, since that was what application developers wanted. Sun wasted precious time being unable to decide what to do, finally deciding not to decide and wasting a lot of resource merging NeWS and X into a kludge that was a worse NeWS and a worse X than its predecessors. This was just one of a number of fights at Sun I lost (this discusses another).
Once Microsoft announced Direct X it was obvious to me that Nvidia was doomed if the next chip did quadric patches, because the developers would have to work with Direct X's triangles. But, like Sun, Nvidia seemed unable to decide to abandon its cherished technology. Time for a decision to be effective was slipping away. I quit, hoping to shake things up so as to enable a decision to do triangles. It must have worked. The books recount how close Nvidia was to bankruptcy when RIVA 128 shipped. The rest is history for which I was just an observer.
I/O Architecture
In contrast the I/O architecture was, over time, the huge success we planned. Kim writes (Page 95):
Early on, Curtis Priem had invented a "virtualized objects" architecture that would be incorporated in all of Nvidia's chips. It became an even bigger advantage for the company once Nvidia adopted the faster cadence of chip releases. Priem's design had a software based "resource manager", essentially a miniature operating system that sat on top of the hardware itself. The resource manager allowed Nvidia's engineers to emulate certain hardware features that normally needed to be physically printed onto chip circuits. This involved a performance cost but accelerated the pace of innovation, because Nvidia's engineers could take more risks. If the new feature wasn't ready to work in the hardware, Nvidia could emulate it in software. At the same time, engineers could take hardware features out when there was enough leftover computing power, saving chip area.
For most of Nvidia's rivals, if a hardware feature on a chip wasn't ready, it would mean a schedule slip. Not, though, at Nvidia, thanks to Priem's innovation. "This was the most brilliant thing on the planet," said Michael Hara. "It was our secret sauce. If we missed a feature or a feature was broken, we could put it in the resource manager and it would work." Jeff Fisher, Nvidia's head of sales, agreed: "Priem's architecture was critical in enabling Nvidia to design and make new products faster."
Context
Nvidia is just one of the many, many startups that Sun Microsystems spawned. But at the time what made Nvidia unique among the competing graphics startups was the early engineers from the team at Sun that built the GX series of graphics chips. We went through an intensive education in the techniques needed to implement graphics effectively in Unix, a multi-process, virtual memory operating system. The competitors all came from a Windows background, at the time a single-process, non-virtual memory system. We understood that, in the foreseeable future, Windows would have to evolve multi-processing and virtual memory. Thus the pitch to the VCs was that we would design a "future-proof" architecture, and deliver a Unix graphics chip for the PC's future operating system.
The GX team also learned from the difficulty of shipping peripherals at Sun, where the software and hardware schedules were inextricable because the OS driver and apps needed detailed knowledge of the physical hardware. This led to "launch pad chicken", as each side tried to blame schedule slippage on the other.
Not only do input/output operations have to be carried out by operating system software, the design of computers utilizing the PDP11 architecture usually requires that registers at each of the input/output devices be read by the central processing unit in order to accomplish any input/output operation. As central processing units have become faster in order to speed up PDP11 type systems, it has been necessary to buffer write operations on the input/output bus because the bus cannot keep up with the speed of the central processing unit. Thus, each write operation is transferred by the central processing unit to a buffer where it is queued until it can be handled; other buffers in the line between the central processing unit and an input/output device function similarly. Before a read operation may occur, all of these write buffers must be flushed by performing their queued operations in serial order so that the correct sequence of operations is maintained. Thus, a central processing unit wishing to read data in a register at an input/output device must wait until all of the write buffers have been flushed before it can gain access to the bus to complete the read operation. Typical systems average eight write operations in their queues when a read operation occurs, and all of these write operations must be processed before the read operation may be processed. This has made read operations much slower than write operations. Since many of the operations required of the central processing unit with respect to graphics require reading very large numbers of pixels in the frame buffer, then translating those pixels, and finally rewriting them to new positions, graphics operations have become inordinately slow. In fact, modern graphics operations were the first operations to disclose this Achilles heel of the PDP11 architecture.
We took two approaches to avoiding blocking the CPU. First, we implemented a queue in the device, a FIFO (First In First Out), that was quite long, and we allowed the CPU to read from the FIFO the number of free slots, the number of writes it could do and be guaranteed not to block. When the CPU wanted to write to NV1 it would ask the FIFO how many writes it could do. If the answer were N, it would do N writes before asking again. NV1 would acknowledge each of those writes immediately, allowing the CPU to proceed to compute the data for the next write. This was the subject of US5805930A: System for FIFO informing the availability of stages to store commands which include data and virtual address sent directly from application programs (inventors David S. H. Rosenthal and Curtis Priem), the continuation of an application we filed 15th May 1995. Note that this meant the application didn't need to know the size of the device's FIFO. If a future chip had a bigger or smaller FIFO, the unchanged application would use it correctly.
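The free-slot handshake described above can be sketched as a toy Python model. Everything here is hypothetical illustration, not NV1's actual register interface: the depth, the `drain` behavior, and the API names are all made up to show why the CPU never blocks on a write.

```python
from collections import deque

class DeviceFIFO:
    """Toy model of an NV1-style command FIFO (depth is hypothetical)."""
    def __init__(self, depth=16):
        self.depth = depth
        self.slots = deque()

    def free_count(self):
        # A single device-register read tells the CPU how many writes
        # are guaranteed not to block.
        return self.depth - len(self.slots)

    def write(self, command):
        assert len(self.slots) < self.depth, "caller violated the protocol"
        self.slots.append(command)   # acknowledged immediately

    def drain(self, n):
        # The device consuming commands on its own schedule.
        for _ in range(min(n, len(self.slots))):
            self.slots.popleft()

def send(fifo, commands):
    """Host side: read the free count once, then do that many writes.

    Note the application never needs to know the FIFO's total depth,
    so a future chip with a bigger FIFO just works.
    """
    pending = list(commands)
    while pending:
        n = fifo.free_count()
        if n == 0:
            fifo.drain(8)   # stand-in for "retry after the device catches up"
            continue
        for command in pending[:n]:
            fifo.write(command)
        pending = pending[n:]

fifo = DeviceFIFO(depth=16)
send(fifo, range(100))
```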
Second, we tried as far as possible not to use the CPU to transfer data to and from NV1. Instead, whenever we could we used Direct Memory Access, in which the I/O device reads and writes system memory independently of the CPU. In most cases, the CPU instructed NV1 to do something with one, or a few writes, and then got on with its program. The instruction typically said "here in memory is a block of quadric patches for you to render". If the CPU needed an answer, it would tell NV1 where in system memory to put it and, at intervals, check to see if it had arrived.
Remember that we were creating this architecture for a virtual memory system in which applications had direct access to the I/O device. The applications addressed system memory in virtual addresses. The system's Memory Management Unit (MMU) translated these into the physical addresses that the bus used. When an application told the device the address of the block of patches, it could only send the device one of its virtual addresses. To fetch the patches from system memory, the DMA engine on the device needed to translate the virtual address into a physical address on the bus in the same way that the CPU's MMU did.
So NV1 didn't just have a DMA engine, it had an IOMMU as well. We patented this IOMMU as US5758182A: DMA controller translates virtual I/O device address received directly from application program command to physical i/o device address of I/O device on device bus (inventors David S. H. Rosenthal and Curtis Priem). In 2014's Hardware I/O Virtualization I explained how Amazon ended up building network interfaces with IOMMUs for the servers in AWS data centers so that multiple virtual machines could have direct access to the network hardware and thus eliminate operating system overhead.
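The translation the device's IOMMU had to do is the same split-and-look-up an MMU performs. A minimal Python sketch, assuming 4-Kbyte pages and a software page table; the class and method names are hypothetical:

```python
PAGE_SHIFT = 12                     # 4-Kbyte pages, as on x86
PAGE_MASK = (1 << PAGE_SHIFT) - 1

class IOMMU:
    """Toy model: the device-side table mirrors the CPU MMU's entries."""
    def __init__(self):
        self.page_table = {}        # virtual page number -> physical page number

    def map(self, vpn, ppn):
        # The driver/resource manager copies an entry from the CPU's MMU.
        self.page_table[vpn] = ppn

    def translate(self, vaddr):
        vpn, offset = vaddr >> PAGE_SHIFT, vaddr & PAGE_MASK
        if vpn not in self.page_table:
            # In hardware this would be an I/O page fault to software.
            raise LookupError("no mapping for virtual page %#x" % vpn)
        return (self.page_table[vpn] << PAGE_SHIFT) | offset

iommu = IOMMU()
iommu.map(0x42, 0x999)              # hypothetical mapping
physical = iommu.translate(0x42123) # page 0x42, offset 0x123
```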
Context switching
The fundamental problem for graphics support in a multi-process operating system such as Unix (and later Linux, Windows, MacOS, ...) is that of providing multiple processes the illusion that each has exclusive access to the single graphics device. I started fighting this problem in 1983 at Carnegie-Mellon. James Gosling and I built the Andrew Window System, which allowed multiple processes to share access to the screen, each in its own window. But they didn't have access to the real hardware. There was a single server process that accessed the real hardware. Applications made remote procedure calls (RPCs) to this server, which actually drew the requested graphics. Four decades later the X Window System still works this way.
RPCs imposed a performance penalty that made 3D games unusable. To allow, for example, a game to run in one window while a mail program ran in another, we needed the currently active process to have direct access to the hardware, and if the operating system context-switched to a different graphics process, to give that process direct access to the hardware. The operating system would need to save the first process' state from the graphics hardware, and restore the second process' state.
Our work on this problem at Sun led to a patent filed in 1989, US5127098A: Method and apparatus for the context switching of devices (inventors David S. H. Rosenthal, Robert Rocchetti, Curtis Priem, and Chris Malachowsky). The idea was to have the device mapped into each process' memory but to use the system's memory management unit (MMU) to ensure that at any one time all but one of the mappings was invalid. A process' access to an invalid mapping would trap into the system's page fault handler, which would invoke the device's driver to save the old process' context and restore the new process' context. The general problem with this idea is that, because the interrupt ends up in the page fault handler, it requires device-dependent code in the page fault handler. This is precisely the kind of connection between software and hardware that caused schedule problems at Sun.
There were two specific Nvidia problems with this idea. First, Windows wasn't a virtual memory operating system, so you couldn't do any of this. Second, even once Windows had evolved into a virtual memory operating system, Microsoft was unlikely to let us mess with the page fault handler.
As you can see in Figure 6 of the '930 patent, the I/O architecture consisted of an interface between the PCI bus and an internal bus that could implement a number of different I/O devices. The interface provided a number of capabilities:
It implemented the FIFO, sharing it among all the devices on the internal bus.
It implemented the DMA engine and its IOMMU, sharing it among all the devices on the internal bus.
Using a translation table, it allowed applications to connect to a specific device on the internal bus via the interface using a virtual name.
It ensured that only one application at a time could access the interface.
The difference between the PCI and PC/AT buses wasn't just that the data path grew from 16 to 32 bits, but also that the address bus grew from 24 to 32 bits. The address space was 256 times bigger, thus Nvidia's devices could occupy much more of it. We could implement many virtual FIFOs, so that each application could have a valid mapping to one of them. The device, not the operating system, would ensure that only one of the virtual FIFOs was mapped to the single physical FIFO. A process accessing a virtual FIFO that wasn't mapped to the physical FIFO would cause an interrupt, but this time the interrupt would go to the device's driver, not the page fault handler. The driver could perform the context switch, and re-assign the physical FIFO to the new virtual FIFO. It would also have to copy page table entries from the CPU's MMU into the IOMMU to reflect the placement of the new process' pages in physical memory. There would be no page fault so no knowledge of the device in the operating system's page fault handler. As we wrote in the '050 patent:
the use of many identically-sized input/output device address spaces each assigned for use only by one application program allows the input/output addresses to be utilized to determine which application program has initiated any particular input/output write operation.
Because applications each saw their own virtual FIFO, future chips could implement multiple physical FIFOs, allowing the virtual FIFO of more than one process to be assigned a physical FIFO, which would reduce the need for context switching.
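The fault-driven switching between virtual FIFOs might be modeled like this. This is a toy sketch: in the real device the interrupt goes to the driver and the saved state is hardware context, not a Python dict, and all names here are invented.

```python
class VirtualFIFOMux:
    """Toy model: many virtual FIFOs, one physical FIFO, driver-side switch."""
    def __init__(self, n_virtual=128):
        self.n_virtual = n_virtual
        self.current = None     # which virtual FIFO owns the physical FIFO
        self.saved = {}         # per-process contexts parked by the driver
        self.hw_context = {}    # stand-in for the device's live state
        self.hw_fifo = []       # stand-in for the single physical FIFO
        self.switches = 0

    def write(self, vfifo, command):
        assert 0 <= vfifo < self.n_virtual
        if vfifo != self.current:
            # In hardware: access to an unmapped virtual FIFO raises an
            # interrupt handled by the device's driver, not the OS's
            # page fault handler.
            self._context_switch(vfifo)
        self.hw_fifo.append((vfifo, command))

    def _context_switch(self, vfifo):
        if self.current is not None:
            self.saved[self.current] = self.hw_context
        self.hw_context = self.saved.get(vfifo, {})
        self.current = vfifo
        self.switches += 1

mux = VirtualFIFOMux()
mux.write(3, "draw")
mux.write(3, "draw")    # same process: no switch needed
mux.write(7, "blit")    # different process: triggers a context switch
```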
One of the great things about NeWS was that it was programmed in PostScript. We had figured out how to make PostScript object-oriented, homomorphic to SmallTalk. We organized objects in the window system in a class hierarchy with inheritance. This, for example, allowed Don Hopkins to implement pie menus for NeWS in such a way that any user could replace the traditional rectangular menus with pie menus. This was such fun that Owen Densmore and I used the same technique to implement object-oriented programming for the Unix shell.
At a time when PC memory maxed out at 640 megabytes, the fact that the PCI bus could address 4 gigabytes meant that quite a few of its address bits were surplus. So we decided to increase the amount of data shipped in each bus cycle by using some of them as data. IIRC NV1 used 23 address bits, occupying 1/512th of the total space. 7 of the 23 selected one of the 128 virtual FIFOs, allowing 128 different processes to share access to the hardware. We figured 128 processes was plenty.
The remaining 16 address bits could be used as data. In theory the FIFO could be 48 bits wide, 32 from the data lines on the bus and 16 from the address lines, a 50% increase in bits per bus cycle. NV1 ignored the byte part of the address so the FIFO was only 46 bits wide.
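One hypothetical encoding consistent with the numbers above (7 bits selecting one of the 128 virtual FIFOs, the low 16 address bits carrying data, and the bottom 2 byte-select bits ignored, leaving 14 usable extra bits per 32-bit write) can be sketched as follows. The exact bit layout is my guess, purely for illustration:

```python
def encode(vfifo, extra, data):
    """Pack a bus write: which virtual FIFO, 14 extra bits, 32 data bits."""
    assert 0 <= vfifo < 128
    assert 0 <= extra < (1 << 14)
    assert 0 <= data < (1 << 32)
    address = (vfifo << 16) | (extra << 2)   # byte-select bits left zero
    return address, data

def decode(address, data):
    """Device side: recover the virtual FIFO and the 46-bit payload."""
    vfifo = (address >> 16) & 0x7F
    extra = (address >> 2) & 0x3FFF          # byte part of the address ignored
    return vfifo, (extra << 32) | data       # 14 + 32 = 46 bits per bus cycle

addr, word = encode(5, 0x1234, 0xDEADBEEF)
vfifo, payload = decode(addr, word)
```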
So we organized the objects in our I/O architecture in a class hierarchy, rooted at class CLASS. The first thing an application did was to invoke the enumerate() method on the object representing class CLASS. This returned a list of the names of all the instances of class CLASS, i.e. all the object types this instance of the architecture implemented. In this way the capabilities of the device weren't wired into the application. The application asked the device what its capabilities were. In turn, the application could invoke enumerate() on each of the instances of class CLASS in the list, which would get the application a list of the names of each of the instances of each class, perhaps LINE-DRAWER. Thus the application would find out, rather than know a priori, the names of all the resources (virtual objects) of all the different types that the device supported.
The application could then create objects, instances of these classes, by invoking the instantiate() method on the class object with a 32-bit name for the newly created object. The interface was thus limited to 4 billion objects per application. The application could then select() the named object, causing an interrupt if there was no entry for it in the translation table so the resource manager could create one. The 64-Kbyte address space of each FIFO was divided into eight 8-Kbyte "sub-areas". The application could select() an object in each, so it could operate on 8 objects at a time. Subsequent writes to each sub-area were interpreted as method invocations on the selected object, with the word offset from the base of the sub-area specifying the method and the data being the argument to the method. The interface thus supported 2048 different methods per object.
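The sub-area arithmetic is simple enough to sketch. Assuming the 64-Kbyte per-FIFO space, 8-Kbyte sub-areas, and 32-bit words described above (the function name is mine):

```python
SUBAREA_SHIFT = 13   # 8-Kbyte sub-areas: 2**13 bytes
WORD_SHIFT = 2       # 32-bit words: 2**2 bytes

def dispatch(offset):
    """Map a write offset within a FIFO's 64-Kbyte space to (sub-area, method).

    The sub-area selects one of the 8 currently selected objects; the word
    offset within the sub-area selects one of 8192 / 4 = 2048 methods.
    """
    assert 0 <= offset < 64 * 1024
    subarea = offset >> SUBAREA_SHIFT
    method = (offset & ((1 << SUBAREA_SHIFT) - 1)) >> WORD_SHIFT
    return subarea, method
```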
In this way we ensured that all knowledge of the physical resources of the device was contained in the resource manager. It was the resource manager that implemented class CLASS and its instances. Thus it was that the resource manager controlled which instances of class CLASS (types of virtual object) were implemented in hardware, and which were implemented by software in the resource manager. It was possible to store the resource manager's code in read-only memory on the device's PCI card, inextricably linking the device and its resource manager. The only thing the driver for the board needed to be able to do was to route the device's interrupts to the resource manager.
The importance of the fact that all an application could do was to invoke methods on virtual objects was that the application could not know whether the object was implemented in hardware or in the resource manager's software. The flexibility to make this decision at any time was a huge advantage. As Kim quotes Michael Hara as saying:
This was the most brilliant thing on the planet. It was our secret sauce. If we missed a feature or a feature was broken, we could put it in the resource manager and it would work.
Conclusion
As you can see, NV1 was very far from the "minimum viable product" beloved of today's VCs. Their idea is to get something into users' hands as soon as possible, then iterate rapidly based on their feedback. But what Nvidia's VCs did by giving us the time to develop a real chip architecture was to enable Nvidia, after the failure of the first product, to iterate rapidly based on the second. Iterating rapidly on graphics chips requires that applications not know the details of successive chips' hardware.
I have been privileged in my career to work with extraordinarily skilled engineers. Curtis Priem was one, others included James Gosling, the late Bill Shannon, Steve Kleiman, and Jim Gettys. This search returns 2 Sun patents and 19 Nvidia patents for which both Curtis Priem and I are named inventors. Of the Nvidia patents, Curtis is the lead inventor on 9 and I am the lead inventor on the rest. Most describe parts of the Nvidia architecture, combining Curtis' exceptional understanding of hardware with my understanding of operating systems to redefine how I/O should work. I rate this architecture as my career best engineering. It was certainly the most impactful. Thank you, Curtis!
With the bot swarms descending to ransack websites for every scrap of
text they can get in an attempt to improve their ever-larger language
models, we’ve started to see the emergence of tools for trapping them
in an endless maze of text. The explicit goal of these tools is to
make the bots waste time and energy, making their models (ultimately)
worse.
Some examples of these tools include Cloudflare’s AI Labyrinth
service as well as the open source software Nepenthes, Babble and
Gabble. They take different approaches to generating the maze of
hyperlinked trash. Sometimes the site can be pre-generated, as is the
case with AI Labyrinth, which saves on energy usage since the content
isn’t being recomputed. Or the content can be generated on demand, as
Babble and Gabble do, using resource-efficient compiled languages
(Rust and Go respectively). Nepenthes uses Lua, which I believe is
more efficient than other interpreted languages like Python or Ruby,
but maybe not as efficient as a well-written program in a compiled
language. I’m not sure, but perhaps that’s why Babble and Gabble got
written.
At any rate, no matter how these mazes are generated, the sites are
often called tarpits
because of how the bots get snarled up in them.
One interesting dimension to this is what to put in the tarpit’s
robots.txt, or whether to even have one at all. The Robots Exclusion
Protocol uses a simple text file placed at the root of your website.
When crawling a website, a well-behaved robot will read it to
determine which parts of the website the owner wants crawled, and by
whom (which bots).
Part of the problem is that ill-behaved robots, like the ones
aggressively crawling websites (using botnets running off of infected
phones, etc.), have zero interest in looking at a robots.txt, let
alone parsing it to determine what to crawl. They are programmed to
take anything and everything they can find.
So if they aren’t going to look at it why does it matter what you put in
a tarpit’s robots.txt?
If you care about website owners being able to decide how they
want their content to be crawled and used it’s really important for a
tarpit’s robots.txt to tell bots to go
away.
User-agent: *
Disallow: /
Note: the slash is really important, because without it the rule says
the opposite: that everyone is allowed in.
Part of the rationale here is that there are well-behaved bots that read
the robots.txt, in order to generate search indexes,
archive web content, and do other generally useful things. In the
interests of a healthy web ecosystem it’s a good idea to warn them away
from tarpits.
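Python’s standard library can check these rules the way a well-behaved
bot would. A small sketch (the bot name and URLs are made up):

```python
from urllib.robotparser import RobotFileParser

# A tarpit's robots.txt telling every bot to stay out.
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])
print(rp.can_fetch("ExampleBot", "https://tarpit.example/maze/page1"))

# Without the slash, the empty Disallow line permits everything.
rp2 = RobotFileParser()
rp2.parse(["User-agent: *", "Disallow:"])
print(rp2.can_fetch("ExampleBot", "https://tarpit.example/maze/page1"))
```

The first `can_fetch` returns False and the second True, which is
exactly the "slash is really important" point above.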
But the most important reason for this is that, given enough tarpits, it
will be in the interests of bot creators to modify their software to
read and respect the robots.txt so they don’t get stuck in
a maze of hypertrash. I was reminded of
this subtle technique recently by Ivan Topolsky, who pointed out that
this was an approach taken decades
ago to try to trap bots scraping the web for email addresses.
I guess if you are one of those people who have a problem with
plagiarism machines more generally, even ones that do respect
robots.txt, you may want to explicitly allow those in. This
would be kind of like the inverse of the ai.robots.txt
list.
Who knows whether it will work or not, but I thought it was interesting
that rules can be important even when they aren’t being followed by
everyone.
The ACM Web4All Conference (W4A) is the premier venue for research focused on web accessibility. It brings together a diverse community committed to making the web inclusive for users of all abilities and backgrounds. This year, the 22nd International Web for All Conference (W4A 2025) was held at the ICC Sydney: International Convention & Exhibition Centre in Sydney, Australia, from April 28–29, 2025. In this blog post, I highlight my work titled “AccessMenu: Enhancing Usability of Online Restaurant Menus for Screen Reader Users,” which addresses the accessibility challenges faced by blind and visually impaired (BVI) users when navigating image-based restaurant menus online.
Figure 1 Venkatraman et al.: (A) A sample restaurant menu in its original visual form. (B) The output produced by JAWS Convenient OCR when applied to the menu. (C) The AccessMenu interface, where the red box highlights the natural language query field, the yellow box marks the voice-input button, and the green box indicates the submit button. (D) The refined AccessMenu view displaying only the menu items that match the user’s query.
Motivation:
Ordering food online has become a routine part of modern life, offering speed, convenience, and variety. Yet for BVI individuals, accessing restaurant menus on the web remains a significant barrier. Many restaurants present their menus as images or PDFs, formats that are largely incompatible with screen readers. As a result, tasks such as browsing dishes, identifying ingredients, or comparing prices become laborious and error-prone. While OCR tools and AI-powered assistants attempt to help, they often misinterpret layout structures, skip context, or generate misleading information. These limitations impact users' ability to make informed choices and reduce confidence in digital interactions. To address this, we introduce AccessMenu, a browser extension that converts visual menus into screen reader-friendly interfaces and supports natural language queries for efficient menu navigation.
Background and Related Work:
Prior research in screen reader accessibility has led to the development of tools aimed at simplifying web interaction for BVI users. These include screen reader enhancements, AI-based captioning systems, web automation tools, and voice assistants that allow for more efficient navigation of digital content. Many of these tools focus on improving interaction with standard web elements such as text, forms, or images with alt-text. Recent advances in multimodal models have also enabled basic question-answering over documents and forms, and models like LayoutLM and Donut have demonstrated success in parsing structured data from visually rich documents.
However, these solutions fall short when applied to restaurant menus, which often lack structured HTML markup and rely heavily on spatial cues, icons, and dense visual formatting. OCR-based tools frequently produce disjointed outputs that confuse screen readers and users alike. General-purpose AI assistants may hallucinate or misclassify menu content due to a lack of domain-specific grounding. AccessMenu builds on this foundation by tailoring its design specifically for restaurant menus. It leverages multimodal large language models with customized prompts to extract menu content, interpret layout-dependent information, and present it in a linear, queryable format optimized for screen reader navigation.
Uncovering Usability Issues with Menu Navigation:
To better understand the challenges BVI users face with online restaurant menus, we conducted a semi-structured interview study with 12 blind participants, all proficient in screen reader use. Participants shared their food ordering habits, menu navigation strategies, and frustrations with existing assistive tools. Interviews were conducted remotely and recorded with consent, followed by qualitative analysis using open and axial coding to identify recurring themes and insights that shaped our design approach.
Participants consistently described the limitations of OCR-based tools, citing difficulty in mentally reconstructing menu layouts from screen reader outputs, confusion caused by inconsistent text order, and a reliance on external help to make decisions. Several noted that current AI assistants often generated misleading or incorrect responses due to lack of contextual awareness. Key design insights included the need for a linear presentation of menu items, centralized item information, and support for natural language queries to filter or search menus. These findings directly informed the structure and features of AccessMenu.
AccessMenu Design
Key Features
AccessMenu introduces two core features that address the barriers identified in our user study: linear menu rendering and natural language query support.
Linear Menu Rendering: Recognizing that fragmented OCR outputs hinder screen reader navigation, AccessMenu presents extracted menu items in a clean, linear layout. Each item is structured with its name, description, price, and any applicable dietary indicators in one place, making it easier for users to follow and understand. The layout supports intuitive keyboard navigation, allowing users to move through items sequentially with minimal effort.
Natural Language Query Support: To reduce the burden of manually scanning the entire menu, AccessMenu includes a query interface where users can type or speak natural language questions. For example, users can ask for "gluten-free desserts under $10" or "spicy vegetarian appetizers." The system processes these queries using a multimodal language model, returning only the relevant subset of items while preserving accessibility within the interface.
Architecture
The architecture of AccessMenu is organized into two main stages: information extraction and query processing. These are supported by a lightweight front-end browser extension and a back-end server that hosts the multimodal language model pipeline.
Figure 2 Venkatraman et al.: System architecture of AccessMenu comprising two key phases: (a) Information Extraction Phase, where the system captures visual menu images and uses a multimodal language model to generate a structured JSON representation of the menu content; and (b) Question Answering Phase, where user-issued natural language queries are processed against the structured data to return accessible, filtered results rendered in a screen-reader-friendly format.
Information Extraction Phase: When a user activates AccessMenu on a restaurant website, the extension captures menu images from the page. These are sent to a back-end server, where a multimodal language model is prompted with a custom-designed Chain-of-Thought (CoT) prompt. This prompt instructs the model to parse the image for key content such as dish names, prices, descriptions, and icons, and then structure this content as a JSON menu model. The model is guided to handle icons, legends, and visual layout cues that are often overlooked by conventional OCR methods.
Figure 3 Venkatraman et al.: Chain-of-Thought (CoT) prompt template used for structured menu item extraction. The prompt guides the model through sequential steps: extracting visible text, identifying visual and stylistic cues (such as icons or bold headers), interpreting icons using provided or inferred legends, applying a rigid JSON schema for consistent structuring, and filtering out irrelevant text. An example demonstrates how raw OCR output is transformed into a structured JSON format using reasoning steps grounded in the visual layout.
To evaluate the extraction quality, we curated a dataset of 50 diverse restaurant menus and manually annotated their ground truth structure. We then compared the output of three leading models: GPT-4o-mini, Claude 3.5 Sonnet, and LLaMA 3 Vision. GPT-4o-mini achieved the highest performance across entity recognition (F1 = 0.80), relationship modeling (F1 = 0.73), and structural organization (F1 = 0.84), and was thus integrated into the system.
Query Processing Phase: Once the JSON menu model is generated, it serves as the knowledge base for user queries. The user can issue a natural language query via text or voice, which is again sent to the model using a second CoT prompt. This prompt provides reasoning steps and few-shot examples to ensure the model grounds its responses strictly in the extracted menu data. The system returns a filtered JSON list of relevant items, which is dynamically rendered on the screen in an accessible format.
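As a rough illustration of what "grounding strictly in the extracted menu data" buys, here is a deterministic stand-in for the model-backed filtering over a hypothetical JSON menu model. The schema, field names, and items are invented for this sketch; the paper's actual schema and CoT prompts are not reproduced here.

```python
import json

# Hypothetical menu model, standing in for the extraction phase's output.
menu_json = """
[
  {"name": "Margherita Pizza", "price": 12.0, "tags": ["vegetarian"]},
  {"name": "Flourless Chocolate Cake", "price": 8.5,
   "tags": ["gluten-free", "dessert"]},
  {"name": "Tiramisu", "price": 11.0, "tags": ["dessert"]}
]
"""

def filter_menu(items, required_tags=(), max_price=None):
    """Keep only items that satisfy every constraint.

    Because the answer is computed from the extracted data alone, the
    response can't name a dish that isn't on the menu.
    """
    out = []
    for item in items:
        if any(tag not in item["tags"] for tag in required_tags):
            continue
        if max_price is not None and item["price"] > max_price:
            continue
        out.append(item)
    return out

items = json.loads(menu_json)
# "gluten-free desserts under $10"
matches = filter_menu(items, required_tags=("gluten-free", "dessert"),
                      max_price=10)
```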
Query evaluation was conducted by collecting freeform questions from users across five sample menus. Each menu session allowed users to explore and query for 10 minutes. The resulting responses were compared to annotated answers, yielding an overall F1 score of 0.83. Most errors were due to vague query wording or voice transcription issues with complex item names.
User Interface: The AccessMenu interface features a structured layout designed to support seamless screen reader interaction. At the top, users find a query field for typing natural language questions, a voice-input button for spoken queries, and a submit button. Below this, menu items are displayed in a collapsible accordion format, with each item acting as a header that can be expanded using the Enter key to reveal full details, including descriptions, prices, and dietary indicators. The interface supports intuitive navigation using TAB, SHIFT+TAB, and ARROW keys. It incorporates ARIA attributes and tab-indexing to preserve logical focus order and ensure compatibility with common screen readers. The design is optimized for accessibility and ease of use, enabling even novice screen reader users to explore and interact with menus independently.
Modularity in Implementation: The back-end is built using Django and containerized with Docker for consistent deployment. The language model pipeline is managed via LangChain, enabling modular integration of different LLMs. While GPT-4o-mini was used for our evaluations, AccessMenu is designed to accommodate future models with improved reasoning and latency characteristics.
Evaluation
To assess the effectiveness of AccessMenu, we conducted a comprehensive user study comparing it with an existing screen reader-based OCR tool. This evaluation aimed to measure both the functional improvements and subjective experiences offered by AccessMenu across realistic food ordering scenarios.
Participants and Study Design: We conducted a comparative evaluation study with 10 blind screen reader users to assess the usability of AccessMenu in real-world browsing scenarios. Participants were recruited through accessibility mailing lists and community forums and represented a diverse range of ages and screen reader proficiency levels. The study followed a within-subjects design, comparing AccessMenu against a baseline method using JAWS Convenient OCR.
Procedure: Data Collection and Analysis:
Participants completed two food ordering tasks on randomly assigned restaurant websites using each interface. They were asked to locate specific menu items, compare prices, and answer a set of guided comprehension questions. Interaction logs, task completion times, and user errors were recorded. After each condition, participants filled out the System Usability Scale (SUS) and NASA-TLX questionnaires, followed by a semi-structured interview. Quantitative data was analyzed using paired t-tests, while interviews were thematically coded.
Quantitative Results:
AccessMenu significantly outperformed the baseline across all usability metrics. SUS scores improved from a mean of 52.5 (JAWS OCR) to 75.1 (AccessMenu). NASA-TLX scores indicated reduced cognitive load, dropping from 68.2 to 47.5. Task success rates improved from 58% to 89%, and average task completion time decreased by 36%. Error rates also declined, especially for multi-attribute queries.
Qualitative Feedback:
Participants described AccessMenu as easier to follow, less frustrating, and more empowering. They appreciated the structured layout and the ability to filter menus using natural language. Several noted that it was the first time they felt fully in control when browsing visual menus independently. Some suggestions included improving the clarity of voice input feedback and adding shortcuts for faster navigation.
Discussion
The evaluation findings point to concrete directions for future enhancement. The effectiveness of the structured layout and natural language queries highlights the potential for expanding AccessMenu to platform-wide filtering, i.e., allowing users to issue global dietary or preference-based queries across multiple restaurants. This would align with participants’ interest in scaling the system beyond single menus to support comparison shopping or broader food discovery.
The request for improved voice feedback and faster navigation interfaces directly informs the development of personalized filtering preferences. Customizable query shortcuts, auditory confirmations, and persistent user profiles could reduce repetitive input and support more efficient interactions. Together, these enhancements can strengthen AccessMenu’s role not just as a menu reader but as a personalized decision-making assistant for food ordering.
However, the study also surfaced areas for improvement. Voice input, while appreciated, occasionally caused confusion due to ambiguous transcription feedback. Users expressed interest in additional keyboard shortcuts and auditory cues to streamline interactions further. These insights suggest that even well-designed accessible systems benefit from iterative refinements based on lived user experience. The positive reception and feedback underscore the potential of AccessMenu to support broader deployment and inspire similar domain-specific assistive tools.
Limitations
AccessMenu currently supports only English-language menus and is limited to desktop web environments.
The evaluation was conducted with a small sample size; broader studies are needed to capture variability in user needs.
The system relies on cloud-based LLMs, which may introduce latency and are dependent on stable internet access.
Voice input, while useful, can suffer from transcription inaccuracies, particularly with uncommon item names.
Current design assumes menus are presented as single high-quality images; performance may degrade with low-resolution or cluttered layouts.
Menu legends and icons are interpreted heuristically and may not generalize well across diverse visual styles.
Conclusion
AccessMenu demonstrates how domain-specific applications of multimodal language models can meaningfully advance web accessibility. By transforming complex visual menus into accessible, structured, and interactive formats, the system enables BVI users to navigate restaurant menus with greater ease and independence. The evaluation highlights clear usability improvements over conventional tools, and user feedback reinforces the importance of adaptive, personalized features. While limitations remain, particularly around language support and real-world variability, the foundation laid by AccessMenu offers a promising path forward. Future developments focused on personalization, multilingual access, and broader integration into food ordering platforms can further amplify its impact. Ultimately, AccessMenu represents a step toward more inclusive digital environments, where accessibility is not an afterthought but a core design principle.
References
Venkatraman, N., Nayak, A. K., Dahal, S., Prakash, Y., Lee, H.-N., & Ashok, V. (2025, April). AccessMenu: Enhancing Usability of Online Restaurant Menus for Screen Reader Users. In Proceedings of the 22nd International Web for All Conference (W4A 2025). ACM. (Preprint)
I was pleased to sit down this month with Laura Spinney, the author of Proto: How One Ancient Language Went Global, a new book about Proto-Indo-European. Spinney is a Paris-based British and French science journalist best known for Pale Rider, a global history of the 1918 influenza, which has been translated into more than 20 languages.
Proto: How One Ancient Language Went Global traces the story of Proto-Indo-European, the ancestor of languages spoken by nearly half of humanity, including English, Latin and Irish in western Europe, Sanskrit and Hindi in India, and even the lost Tocharian languages of western China. Starting in its Black-Sea cradle 6,000 years ago, Spinney blends historical linguistics, mythology, archaeology and genetics with travel stories and personal encounters. Kirkus called the result “a smart, dense, detailed account,” while Publishers Weekly concluded that “this rivets.”
As a former student of Latin, Greek and a little Hittite, I was eager to read the book and interview the author. I was excited to find that archaeology and genetics have transformed the field in recent years. We spoke about the DNA revolution, her favorite language and—of course—her books and reading life!
Tim: What made you want to write about Proto-Indo-European?
Laura Spinney: Because it’s a subject that people get passionate and very grumpy about, that matters to them out of all proportion; because getting at the truth requires real detective work, gathering clues in at least three scientific domains; and because the ancient DNA revolution has pretty much rewritten the Indo-European story in the last decade – to the extent that even people working in those three fields will tell you that nobody has an overview. When I heard that, I realised that there was a useful service that I could provide as a journalist, because I could interview people in the three fields and weave a narrative out of what they told me – a sort of state of the union of the Indo-European question, at this moment in history.
Tim: Starting from hundreds or thousands of original speakers, the descendants of Proto-Indo-European now outpace all other language families in numbers and geographic spread. Why?
Laura Spinney: Something was very successful about that particular language family, no doubt about it. But I think a lot of it comes down to historical accident, or accidents. Proto-Indo-European happened to be the language of a group of people who invented a new way of life – nomadic pastoralism – that allowed them to exploit the vast energy reserves of the Eurasian steppe better than anyone had before them. The inevitable result was a population explosion, and as they spread out, those nomads’ descendants carried their languages with them. But Proto-Indo-European itself eventually died out, and so did many of its offspring. About 400 Indo-European languages and dialects are spoken today, and none of them would have been intelligible to the original Proto-Indo-European-speakers, so it’s not as if the family stood still. Its success, if you want to call it that, has been due to (some of) its speakers’ ability to adapt to a changing context.
Tim: After covering the Yamnaya, the likely first speakers, you move onto chapters about the many branches of Proto-Indo-European. What did you most enjoy learning and writing about?
Laura Spinney: I love them all. I would say that, like a good parent. But it’s true that the Tocharian story was one of the ones I took most pleasure in writing, because of the suggestion that the language was seeded by prehistoric people who were on some kind of crusade – looking for their own utopia. People have set off in search of that non-existent paradise throughout history, and now we know they were doing it in prehistory too. The human imagination is a powerful thing.
Tim: The German translation is titled Der Urknall unserer Sprache, "The Big Bang of Our Language." Maybe that's because Germans self-centeredly call it "Indo-Germanic." But is understanding the origin of our language and people also a sort of self-discovery?
Laura Spinney: It certainly has been for me. What have I learned? I’ll keep my list to three things. One, that language is unbelievably malleable, and that languages are time capsules that store their own history within them. If we are clever, we can unravel them like old scrolls and discover that history. Two, that there are deep connections between languages spoken very far apart in the world, and between the stories that their speakers tell. This fact seems to me to explain much about us, but it was previously absent from my education. And three, that migration has been a constant throughout human (pre-)history, and that the paths those migrants took are, to a very large extent, preserved in the branchings of our linguistic family trees.
Tim: Tell us about your library.
Laura Spinney: I love to read but unfortunately I’m a slow reader. If I could change one thing about myself, it would be that. I prefer to read physical books, though I’m not dogmatic about it. I live in Paris where apartments are relatively small so there isn’t an enormous amount of space for books and very annoyingly, mine are not organised according to any known system. My solution has been to carve out two emergency areas. One, on the floor, is books relevant to my current project. The other – suitably elevated – is books that have been important to me at various times and that remain close to my heart. They include works by Camus, Kundera, Faulkner, Jeanette Winterson and Italo Calvino. The shelf dedicated to them is always the closest to where I work, so that their good literary vibes can wash over me.
Tim: What have you been reading lately?
Laura Spinney: John Steinbeck's East of Eden. I loved it. I copied out these lines into my diary: "'Maybe the knowledge is too great and maybe men are growing too small,' said Lee. 'Maybe, kneeling down to atoms, they're becoming atom-sized in their souls. Maybe a specialist is only a coward, afraid to look out of his little cage. And think what any specialist misses – the whole world over his fence.'"
This is an excerpt from a longer contribution I made to Responses to the LIS Forward Position Paper: Ensuring a Vibrant Future for LIS in iSchools [pdf]. It is a sketch only, and somewhat informal, but I thought I would put it here in case of interest. It is also influenced by the context in which it was prepared which was a discussion of the informational disciplines and the iSchool in R1 institutions. If you wish to reference it, I would be grateful if you cite the full original: Dempsey, L. (2025). Library Studies, the Informational Disciplines, and the iSchool: Some Remarks Prompted by LIS Forward. In LIS Forward (2025) Responses to the LIS Forward Position Paper: Ensuring a Vibrant Future for LIS in iSchools, The Friday Harbor Papers, Volume 2. https://doi.org/10.6069/F6CQ-H317 [pdf]
We often hear it said that libraries (and librarians) select, organize, retrieve, and transmit information or knowledge. That is true. But those are the activities, not the mission, of the library. … the important question is: “To what purpose?” We do not do those things by and for themselves. We do them in order to address an important and continuing need of the society we seek to serve. In short, we do it to support learning. Robert S. Martin (2003). Reaching across Library Boundaries. In Emerging Visions for Access in the Twenty-first Century Library.
As Robert Bellah observed in The Good Society (Knopf, 1991), "Institutions are socially organized ways of paying attention." Hospitals pay attention to illness and health, police pay attention to crime prevention, and the courts pay attention to justice. Similarly, public libraries are society's way of paying attention to learning and equity. In the United States we hold both in high esteem, so we fund public libraries with tax revenues. Eleanor Jo Rodger (2002) Value and Vision.
[…] the similar shift within academic libraries from an existence based on an assumed and stable value that libraries contribute to the institutional mission to a negotiated comprehension of services and resources where social and intellectual capital provide apt and useful frameworks for conceiving of the exchanges that occur between libraries, librarians, users, communities, institutions and other stakeholders. Tim Schlak in Schlak, T., Corrall, S., & Bracke, P. (2023). The social future of academic libraries: new perspectives on communities, networks, and engagement.
Introduction
The iSchools have the collective resource to situate the library of today in current technology, policy and organizational questions. And potentially to connect to research and education agendas across disciplines.
The report makes clear an immediate challenge - to emphasise and elevate the library research and education agenda within the university. Addressing this seems like a priority. This involves playing (in Bourdieu’s terms) the research university ‘game’, especially looking at what is valued in an R1 institution.[1] At the same time, today’s libraries will benefit from better frameworks, evidence and arguments to guide them. There is a need for stronger connection to workplace issues and skills, and potentially greater focus on credentialing for ongoing development. Is there a tension between these two goals? Is there a way of aligning academic and practice incentives?
There are many scholars in iSchools who are making interesting connections with the library as organization, social actor and institution. Within a more technical iSchool context, there may sometimes be a tendency to see libraries as a collection of information management practices. However, from an educational perspective it is important to see them in their full breadth as institutionalized community and cultural actors. From a research perspective, libraries are sites of major social, organizational and cultural questions.
Libraries
Here are some of the ways in which libraries intersect with broader agendas.
They are social, learning and research infrastructure connected in multiple ways to the communities they serve. They prompt questions about support and investment in social infrastructure, equity, the status of public goods, health and wellness, the construction and maintenance of research and learning infrastructure.
They are social creations where one can explore long standing manifestations of the public sphere, of social capital, of network theory, of memory and forgetting.
They support learning in both directed and emergent ways – early reading, social skills, study spaces, instruction, life-wide learning. This interacts with community agendas around reading, childhood development, equity, pedagogy, student retention, and wellness.
They have curated the scholarly and cultural record, and so offer the opportunity to explore cultural patterns, including legacies of oppression or oversight.
They are embedded in evolving scholarly ecosystems and help influence their direction. They are centrally involved in service and policy questions around open access, scholarly communication, and research infrastructure.
They have created innovative organizational responses to the network dynamics of recent decades, developing network platforms and logistics systems before they were common more broadly. They have built multi-faceted consortia to help distribute collections, infrastructure and expertise. They were early movers to the cloud. They are deeply embedded in collaborative, vendor and other networks, raising strategic, investment and organizational development questions about platforms, network organization, and related topics. This poses interesting organizational development, management, negotiation and partnering strategies and skills.
They are exploring what organizational, skills development, and staffing patterns will support their future as they continue to provide access to the means of creative production. They need a broad array of skills and attributes, some of which will be drawn from outside the MLIS pool.
Libraries deploy and advise about technologies in a variety of settings – enterprise systems, discovery, content delivery, research workflow, and so on. They are great environments in which to explore the sociotechnical evolution of technologies in practice.
Here are some broad ways in which the library may be reconfiguring services, expertise, and positioning.
Scope. Libraries are co-creating their futures with deeply engaged communities. There is a transition from a library which was transactional and collections-based to one which is relational and community-based. Public libraries have a social role and align services with education and social services, with a range of non-profits and charities. They serve the community’s needs for equity, for educational attainment, for food security, for immigrant services. Academic libraries more deeply engage with campus partners across the research and learning spectrum. They are important partners in research effectiveness, scholarly communication, student retention, and life-wide learning. These trends all create the need for a variety of teachable skills.
Institution. The library is an institution, embedded in particular social relations, values and investments. As such, it has a history and evolving social and cultural meanings. Rodger argues that public libraries are society's way of paying attention to equity and learning. What happens when elements of a society do not value learning and equity? Or where this is not understood by the voter? We see this now in the challenges to the public library. What is the equivalent of 'learning and equity' for academic or other libraries? It is important for all libraries to explain their value and story in ways that the host understands, and libraries have been very focused on value, values and vision. For much of its existence these have been stable and accepted. However, the abundance of information resources on the web and the rise of economic liberalism has meant that this can no longer be taken for granted.
Story. The library story is being retold to be relational, community-focused and generative, but this story is not widely socialized or always understood by those that support or fund libraries. The value of the library cannot be taken as ‘assumed or stable.’ As Bob Martin suggests, an information-based story is not strong, especially as information activities and investigation are diffused through multiple services on the web, personal activities, and disciplinary homes. The library story is being renegotiated.
Empathy and equity. Recent experiences have underlined the need for the library to purposefully recognize the importance of equity and empathy. Libraries have recognized a need to move past mere statements of diversity and inclusion, to recognize harm or omission and to begin to repair damaging and exclusive practices. The pandemic has also underlined economic inequities, the digital divide, the importance of available social infrastructure, especially for those that critically rely on library spaces and extended services. We know that libraries support mental wellness, social cohesion, digital equity, and personal and community development. Libraries have been asked to step up to additional social roles and to reshape services. This in turn has highlighted staff stress and unpreparedness, and the need for self-care and boundaries.
This makes it an extraordinarily interesting time to prepare people to work in libraries or to investigate them. The library position and role are being re-negotiated and co-created within diverse user communities. This generates educational needs and a wide variety of research questions which would benefit from a multi-disciplinary approach.
Librarianship and the iSchool
With new challenges comes a demand for education and for training new skills. This necessity is the topic of this article. We seek the answer to the question of how research librarians can educate themselves to meet the challenges of the unknown ‘new research library’? […] No uniform standardized educational program can take into consideration all the possible paths that the modern research library may choose and therefore all the skills needed by a modern research librarian – and information specialist. Wien, C. N., & Dorch, B. F. (2018). Applying Bourdieu’s field theory to analyze the changing status of the research librarian.
It’s very important for people who want to work in a library to learn communication skills, advocacy skills, because people with an MLIS are going to rise to leadership roles. It’s not going to necessarily be the entry point anymore for working in a library—there are other entry points. When you’re seeking that MLIS, it should be a management degree. It should be helping people to prepare for leading in some way. Sari Feldman (2019) Sari Feldman Gets Ready to Transform (Again).
Of those 11 skills ranked as core, only four could be considered specific to the field of LIS: knowledge of professional ethics, evaluating and selecting information sources, search skills, and the reference interview. The remaining seven are not only more generic but could also be categorized as “soft skills” or personal attributes: interpersonal communication, writing, teamwork, customer service skills, cultural competence, interacting with diverse communities, and reflective practice grounded in diversity and inclusion. Laura Saunders (2018). Core and more: examining foundational and specialized content in library and information science.
Learning and teaching
It was always a challenge encompassing the range of desired skills in the MLIS, given the variety of practice-oriented roles a librarian performs throughout their careers and the variety of working environments (youth services, special, academic, public, etc.). The knowledge and skills needed to run libraries effectively continue to evolve, to the extent that it is now common to recruit for other expertise (languages, social work, instructional design, disciplinary knowledge, marketing and communications, research expertise). The library of today requires the skills needed to manage complex changing organizations, engage and nurture diverse communities, negotiate and advocate.
In this context, Sari Feldman’s perspective that the MLS should be a management degree is interesting when placed alongside the demand for increasingly diverse technical skills (data science, data curation, application development, content licensing, collection development, instructional design, and so on) and a broad range of other vocational and general skills (communications, project management, and so on).
The skills and competencies required by the library (and related organizations) have been the subject of ongoing research (e.g. Saunders). And different career stages may prompt very different responses, depending on role, administrative responsibilities, and so on.
Of course, programs may offer different pathways, there may be specialisms (legal, health, …), and there is a variety of joint options possible in some settings (with an MBA for example, or history, or some other discipline).
Is the library education market large enough for eMBA or eMPA style MLSs for those in library management career paths? What about additional certificate-style credentials? These may be in technical areas (data science or AI, for example), in public administration or public policy, in intellectual freedom, in negotiation, in social work, in industrial relations, in copyright, and so on.
The University of Southern Denmark took an interesting approach. In the article above the authors argue that the prestige of the research librarian[2] has declined and that it is not actually clear what skills they will need to do their jobs given the evolving nature of the research library. They introduced a masters program which allowed students to combine some LIS courses with courses from elsewhere in the University and beyond which they feel prepares them best.[3]
Has Library Studies kept up with the changing library landscape? Is it well-positioned to educate the library workforce or to guide its development? In its current iteration the report does not look at career preparation needs or the research and policy interests of the library community they serve. Anecdotally, one is aware of concerns that there is a gap here. Closing this gap is clearly a priority, although one has to understand it first. This is naturally a focus of individual schools and their positioning and emphases.
More collectively, it would be useful to do some research about career preparation needs and research and policy interests in the context of an exploration of Key Areas for education and research. A part of the ambition I spoke about in the introduction should surely involve some recalibration of the library education and research agenda, to ground and connect.
That said, again, the ability to refocus within the existing model in R1 institutions is limited in various important ways.
The Report does not discuss curriculum, and the candidate range could be broad given the discussion above. Here are some high-level library emphases, which do not necessarily map onto potential courses, but which suggest some directions. It is of course just indicative and incomplete.
Nurturing and engaging community
As the library engages with a variety of community partners, a set of collaboration, communication and other skills is required. This might include supporting student success and retention and research workflows in academic settings. In a public library setting, the library is welcoming the community to its space with a growing variety of creative activities and events. It is partnering with social and educational services, with local charities or cultural institutions, with schools and colleges. It is reaching previously overlooked or marginalized populations, it is developing special programs for particular language groups, it is providing services for immigrants. Skills around, for example, community engagement, instruction, event management, public health, and exhibitions are more important.
Values
The Report emphasizes LIS values. The importance of equity and empathy has been underlined in recent years. Both within the structures of the library itself, and in relation to the role of the library within its community. Libraries are refocusing organizational cultures and values, the importance of reparative action in relation to collections, practices and attitudes, and are more actively working to understand and practice inclusion, plurality and diversity. They are working to embrace more justly the experiences, memories and knowledges of all the communities they serve.
Social and so-called soft skills
So-called soft skills, and the contributions of the (often female) library workers who demonstrate them, have often been undervalued or gone unobserved. However, the value and visibility of this work is increasingly recognized as critical (see for example Decker, Dempsey). This is especially so as the library is more relational and collaborative. These are learnable skills which include advocacy, negotiation or conflict resolution, for example, or empathy, communications and teamwork. So-called soft skills are actually very hard, especially in the stressful contexts that have become common in some public library settings.
Administrative and organizational
The skills required to manage complex connected organizations are various. It is common now to have a Management of People and Organizations course (or some such) which may end up being overloaded.
Management of complex organizations, relationships and tasks.
Greater project-based work.
Negotiation – for content, services and collaborative work.
Strategic planning and budgeting.
People recruitment, retention and development.
Creating diverse and inclusive environments for users and for staff.
Working in consortial and collaborative environments.
Organizational development.
Industrial relations.
Fund raising.
Grant work.
Positioning, communication and advocacy
Traditionally, libraries may have distrusted 'marketing'; however, developing the library story has become more important given changing roles and current pressures around value and values.
Communications and marketing – ensuring that the library story and position is well understood within its community, and elsewhere.
Advocacy – representation of library interest to voters, host institutions, funders, user groups, and others.
Information policy and intellectual freedom
Balancing the rights of creators and consumers, pricing of licensed materials, open access discussions, information ethics, digital equity, attitudes to harvesting for AI or search – there is heightened attention to a range of information policy issues where libraries have to make decisions, advocate, and implement. Librarians also need to understand the legal framework of intellectual freedom, to be equipped with strategies and arguments in a contested political environment, and to have access to updated information.
Specialist skills
A growing number of staff will be drawn from outside the MLIS ranks. We already see this in roles like communication and marketing, technology, development, social work, subject specialties. Certificates or other approaches offered in partnership with others on campus may become more useful. Public Administration or instructional design come to mind, for example.
Information management
Of course, this is a historic core. The iSchool location is valuable in terms of the expanding information management skills of interest. Career preparation certainly benefits from options in data science, programming, research data management, instructional technology, metadata management, and so on.
LIS Forward
As noted throughout, the position of Library Studies within the university, its relationship to other informational disciplines, and its practice orientation have been much discussed. LIS Forward places this discussion in the current iSchool dynamic, a multidisciplinary school in an R1 institution.
As noted, it is in some ways a story of progressive subsumption. Schools of Library Studies diversified into LIS to reflect the changing technology environment, and the variety of informational careers students were following. LIS was subsumed into broader schools of information, as information processing and management became more common. The possible range is very wide, from quite vocationally oriented information systems, to social and philosophical aspects of an informational society, to values-driven social justice and equity emphases. In some cases, informatics or related undergraduate degrees were added. The Report makes clear that Library Studies feels squeezed or undervalued, despite reporting continued demand for the MLS. There are also questions about balance between teaching-oriented faculty and research faculty, as this educational demand continues, as well as increased use of guest faculty.
Given that this is a recurrent discussion, given the gap to practice, given the putative advantages of the multidisciplinary environment, and given the 'urgency' expressed in the report, one might expect a strong response. To move the needle, after all, the needle must actually be moved.
I suggest some candidate areas for attention in the recommendations below. As noted in the introduction, I emphasize these four factors in relation to education and research for libraries throughout:
the benefits of increasing the awareness, scale and impact of research and policy work through a more concertedly collaborative approach,
the benefits of reconnecting more strongly with libraries and related organizations, and the organizations that channel their interests, which includes discussion of more flexible and tailored learning and certification reflecting evolving skills and workplace demands,
the possible benefits of refocusing this particular discussion of Library Studies around the institutional and service dynamics of LAM and connecting that with a variety of disciplinary hinterlands (public administration, social studies, and so on) and moving away from the familiar and maybe superseded discussions about IS, LIS and so on,
the benefits of developing an agenda of Key Areas which connect with current library needs, and which can provide some rationale or motivation for recruitment, research activity, granters, collaborative activity and so on. If iSchool education and research respond more actively to evolving library issues, the people with appropriate skills and interests need to be in place.
This was written before the current administration came into office in the US. I have not updated it in that context.
References
Buckland, M. (2012). What kind of science can information science be? Journal of the American Society for Information Science and Technology, 63(1), 1–7. https://doi.org/10.1002/asi.21656
Cronin, B. (1995). Shibboleth and Substance in North American Library and Information Science Education. Libri, 45(1), 45–63. https://doi.org/10.1515/libr.1995.45.1.45
Decker, E. N. (2020). The X-factor in academic libraries: the demand for soft skills in library employees. College & Undergraduate Libraries, 27(1), 17–31. https://doi.org/10.1080/10691316.2020.1781725
Saunders, L. (2019). Core and More: Examining Foundational and Specialized Content in Library and Information Science. Journal of Education for Library and Information Science, 60(1), 3–34. https://doi.org/10.3138/jelis.60.1.2018-0034
Schlak, T., Corrall, S., & Bracke, P. (2023). The social future of academic libraries : new perspectives on communities, networks, and engagement. Facet Publishing.
Wien, C. N., & Dorch, B. F. (2018). Applying Bourdieu’s field theory to analyze the changing status of the research librarian. LIBER Quarterly: The Journal of the Association of European Research Libraries, 28(1). https://liberquarterly.eu/article/view/10719/11586
[1] Blaise Cronin (1995) is caustic about librarianship’s inability to play the university game (he does not express the thought in these words), tying this directly to closure of library schools. He argues for the ‘decoupling’ of the L and the IS in LIS, and, in his discussion of a candidate future, foreshadows something of the multi-disciplinary way in which the iSchool actually developed.
[2] The authors use ‘research librarian’ in a specialist sense which seems somewhat similar to library faculty in the US.
[3] This particular program is no longer offered but in a personal communication one of the authors informs me that it is possible to assemble a similar program at a more general level within the university.
On Sunday 4th May Vicky & I saw Berkeley Rep's production of a thought-provoking new play by Moisés Kaufman, Amanda Gronich, and Tectonic Theater, the team behind The Laramie Project:
about the reaction to the 1998 murder of gay University of Wyoming student Matthew Shepard in Laramie, Wyoming. The murder was denounced as a hate crime and brought attention to the lack of hate crime laws in various states, including Wyoming.
An example of verbatim theatre, the play draws on hundreds of interviews conducted by the theatre company with inhabitants of the town, company members' own journal entries, and published news reports.
There’s something awful about a lost picture. Maybe it’s because of a disparity between your original hope and the result: you made the photograph because you intended to keep it, and now that intention—artistic, memorial, historical—is fugitive, on the run toward ends other than your own. The picture, gone forever, possibly revived by strange eyes, will never again mean quite what you thought it would.
The play dramatizes the process archivists at the US Holocaust Memorial Museum went through to investigate an album of photographs taken at Auschwitz. Photographs from Auschwitz are extremely rare because the Nazis didn't want evidence of what happened there to survive.
Below the fold I discuss the play and some of the thoughts it provoked that are relevant to digital preservation.
Developing one of these plays is a painstaking process. The actors conduct extended interviews with the people they will represent, and the playwrights select and organize quotes from the interviews. In the case of London Road, they even set them to music.
In this case most of the participants were archivists at the US Holocaust Memorial Museum who, in 2006, received an offer to donate an album of photographs from Auschwitz. Initially skeptical, archivist Rebecca Erbelding rapidly established that the more than 100 images in the album documented the life of Karl-Friedrich Höcker during the period from May to December 1944 when he served as the adjutant to Richard Baer, the last commandant of Auschwitz. Thus the album became known as the Höcker Album.
The archivists were immediately struck by the lack of any images of camp inmates. Instead, many images showed camp staff relaxing at a resort called the Solahütte, which the inmates had constructed for the camp. Time off at the Solahütte was a reward for good performance. Höcker is seen at the Solahütte:
in the company of young women—stenographers and typists, trained at the SS school in Obernai, who were known generally as SS Helferinnen, the German word for (female) "helpers".
Many of the images showed Höcker with more senior SS officers including Rudolf Höss and Josef Mengele. Some showed Höcker's home and children, including a scene with the children floating in the home's pool in a boat the inmates made for them.
The survival of the album is remarkable. A US Army counter-intelligence officer was assigned to Frankfurt in the aftermath of the war. His story is that, unable to find an official billet, he occupied an abandoned apartment in whose trash he found the album. It wasn't until 2006, six decades later, that he offered it to the US Holocaust Memorial Museum.
One image shows a group of about 70 soldiers at the Solahütte celebrating. It was discovered that they were celebrating the conclusion of the operation led by Adolf Eichmann to exterminate the Hungarian Jews from Carpathian Ruthenia:
Between 15 May and 9 July 1944, over 434,000 Jews were deported on 147 trains, most of them to Auschwitz, where about 80 percent were gassed on arrival.
Despite the celebration, the operation was less than completely successful. Before it was stopped by Miklós Horthy, the Regent of Hungary, it had succeeded in transporting only about 434,000 of Hungary's roughly 825,000 Jews, and killing about 350,000 of them.
Strangely, the arrival of one of the trains is documented in the only other known album of photographs from Auschwitz, the Auschwitz Album. The images:
document the disembarkation of the Jewish prisoners from the train boxcars, followed by the selection process, performed by doctors of the SS and wardens of the camp, which separated those who were considered fit for work from those who were to be sent to the gas chambers. The photographers followed groups of those selected for work, and those selected for death, to a birch grove just outside the crematoria, where they were made to wait before being killed.
The original owner of that album, Lili Jacob (later Zelmanovic Meier), was deported with her family to Auschwitz in late May 1944 from Bilke (today: Bil'ki, Ukraine), a small town near Berehovo in Transcarpathian Rus, which was then part of Hungary. They arrived on May 26, 1944, the same day that professional SS photographers photographed the arrival of the train and the selection process. Richard Baer and Karl Höcker had arrived at Auschwitz mere days before this transport. After surviving Auschwitz, forced labor in Morchenstern (a Gross-Rosen subcamp), and transfer to Dora-Mittelbau, where she was liberated, Lili Jacob discovered an album containing these photographs in the drawer of a bedside table in an abandoned SS barracks while she was recovering from typhus. She:
first found a photograph of her rabbi but then also discovered a photo of herself, many of her neighbors, and relatives, including a famous shot of her two younger brothers Yisrael and Zelig Jacob.
She took the album with her as she immigrated to the United States. In 1983 she donated it to Yad Vashem, after which it was published.
The title of the play, "Here There Are Blueberries", is the caption of a series of images showing Höcker serving blueberries to a group of Helferinnen at the Solahütte.
Höcker served 18 months in a British POW camp before resuming his pre-war life as a bank cashier. In 1963 he was tried in Frankfurt and sentenced to 7 years:
Höcker denied having participated in the selection of victims at Birkenau or having ever personally executed a prisoner. He further denied any knowledge of the fate of the approximately 400,000 Hungarian Jews who were murdered at Auschwitz during his term of service at the camp. Höcker was shown to have knowledge of the genocidal activities at the camp, but could not be proved to have played a direct part in them. In post-war trials, Höcker denied his involvement in the selection process. While accounts from survivors and other SS officers all but placed him there, prosecutors could locate no conclusive evidence to prove the claim.
On 3 May 1989 a district court in the German city of Bielefeld sentenced Höcker to four years imprisonment for his involvement in gassing murders of prisoners, primarily Polish Jews, in the Majdanek concentration camp in Poland. Camp records showed that between May 1943 and May 1944 Höcker had acquired at least 3,610 kilograms of Zyklon B poisonous gas for use in Majdanek from the Hamburg firm of Tesch & Stabenow.
Obviously, many of the thoughts provoked by the play are relevant to current events, about how one would have behaved and how bureaucrats can compartmentalize their lives so as to claim ignorance of the activities they administer, as Höcker did in his first trial:
I only learned about the events in Birkenau…in the course of time I was there… and I had nothing to do with that. I had no ability to influence these events in any way…neither did I want them, nor carry them out. I didn’t hurt anybody… and neither did anyone die at Auschwitz because of me.
The other set of thoughts are relevant to digital preservation. Obviously, the negatives of the photographs in the two albums did not survive. Individual prints from the negatives did not survive. The images survive because in both cases a set of prints was selected and bound into an album, which protected them. Once the albums had been discovered, their survival for many decades was well within the capabilities of the officer and the survivor. "Benign neglect" was all that was needed. A few of the prints suffered visible water damage, but this didn't impair their value as historical documents.
Compared to WW2, many more images and videos documenting the events currently under way in Ukraine, Gaza, Myanmar, Syria, and many other places are being captured. But they are captured in digital form, and this makes their survival over enough decades to be used in war crimes trials and as the basis for histories unlikely. They may be collected into "albums" on physical media or in cloud services, but neither provides the protection of a physical album. The survival for many decades of such digital albums is well beyond the capabilities of those who took, or who found them. This fact should allow the perpetrators of today's atrocities to sleep much easier at night.
Before they were delivered to preservation experts the survivor and the officer had custody of their albums for four and six decades respectively. Four decades is way longer than the expected service life of any digital medium in common use; images and video on physical media require the custodian proactively to migrate them to new media at intervals. "Benign neglect" does not pay the rent for cloud storage, and trusting to the whims of the free cloud storage services is likely to be neglect but hardly benign.
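The kind of active custodianship a digital album requires, and a physical album gets for free, can be made concrete with fixity checking, a standard digital preservation practice: record cryptographic digests of each file once, then re-verify them at intervals to detect silent corruption or loss. The sketch below is illustrative only (the function and file names are mine, not from any particular preservation tool), but it shows why "benign neglect" fails: if nobody runs the verification step for forty years, nobody learns the bits have rotted.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large images/videos need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(album_dir: Path) -> dict[str, str]:
    """Record a fixity baseline: relative file path -> digest."""
    return {
        str(p.relative_to(album_dir)): sha256_of(p)
        for p in sorted(album_dir.rglob("*"))
        if p.is_file()
    }

def verify(album_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return the files whose bits have silently changed or gone missing."""
    damaged = []
    for rel, digest in manifest.items():
        p = album_dir / rel
        if not p.exists() or sha256_of(p) != digest:
            damaged.append(rel)
    return damaged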
To improve the chances of current-day albums analogous to the Auschwitz Album and the Höcker Album we need a consumer-grade long-lived storage medium that is cheap enough for everyday use. Alas, for the reasons I set out in Archival Storage, we are extremely unlikely to get it.
The potential uses of artificial intelligence (AI) for metadata workflows have grown rapidly. As a result, there’s a greater need for resources that support metadata managers in leveraging AI to enhance the capabilities of their teams. To address these opportunities for the profession, the OCLC Research Library Partnership (RLP) Metadata Managers Focus Group (MMFG) recently kicked off the Managing AI in metadata workflows working group. The primary goal of the working group is to engage our collective curiosity, identify key challenges, and empower metadata managers to integrate AI into their workflows with confidence.
Our call for participation has attracted contributors from the UK, the United States, Canada, and Australia. During our first meeting, we took time to learn how our contributors are currently looking at AI opportunities in their workflows, including:
How can AI make workflows more efficient and effective?
How can AI services help reduce backlogs of materials by creating brief records?
What are the best practices for AI to help libraries with non-Latin script materials?
How can AI be used to augment metadata workflows for institutional repositories, research data/information management ecosystems, and cultural heritage digital asset management platforms?
We discussed some of the challenges that Metadata Managers are currently facing. Broadly, our conversation touched on:
People: How to engage in change management within metadata organizations, from supporting existing staff to thinking about future competencies. This includes thinking about how AI can help staff navigate complex cataloging rules and best practices.
Economics: How to build financial support for AI into library budgets, including for services, training, and future staffing.
Metadata and platforms: How can metadata managers assess AI platforms and features, especially to understand how to apply them to specific parts of metadata workflows (e.g., generating records, quality control, entity management/authority control)?
Collections: How to learn about the different kinds of machine learning and AI, and which are best suited to different collection types. For example, computer vision can generate metadata for photographs, whereas subject analysis of Electronic Theses and Dissertations (ETDs) calls for different techniques.
Professional values and ethics: How can metadata managers explore the above questions/areas while championing professional values and ethics and honoring commitments to protect and steward our collections responsibly and sustainably?
These topics were used as a starting point for a deeper exploration in our three workstream groups, which are currently meeting on a regular cadence:
Primary cataloging workflows
Metadata for special/distinctive collections
Institutional repositories
Thanks to our working group members for carrying this important work forward!
Helen Baer, Colorado State University
Michael Bolam, University of Pittsburgh
Jenn Colt, Cornell University
Elly Cope, University of Leeds
Susan Dahl, University of Calgary
Michela Goodwin, National Library of Australia
Amanda Harlan, Nelson-Atkins Museum of Art
Miloche Kottman, University of Kansas
Chingmy Lam, University of Sydney
Yasha Razizadeh, New York University
Jill Reilly, National Archives and Records Administration
Mia Ridge, British Library
Tim Thompson, Yale University
Mary Beth Weber, Rutgers University
Cathy Weng, Princeton University
Helen Williams, London School of Economics
We expect work to conclude by the end of June, with additional blog posts about our findings to follow. Stay tuned!
The biennial NDSA Excellence Awards were established in 2012 to recognize and encourage exemplary achievement in the field of digital preservation stewardship at a level of national or international importance. Over the years many individuals, projects, and organizations have been honored for their meaningful contributions to the field of digital preservation.
The time has come again to recognize and celebrate the accomplishments of our colleagues! Nominations are now being accepted for the NDSA 2025 Excellence Awards.
Anyone, any institution, or any project acting in the context of the award categories (noted below) can be nominated for an award. No NDSA membership or affiliation is required. Self-nomination is accepted and encouraged, as are submissions reflecting the needs and accomplishments of historically marginalized and underrepresented communities.
We encourage you to help us highlight and reward distinctive approaches to digital preservation practice. Please submit nominations here: 2025 NDSA Excellence Awards Nominations form. Awards will be presented at the Digital Preservation 2025 event this fall.
Nominations are accepted in the following categories:
Individual Award: Recognizing those individuals making a significant contribution to the digital preservation community through advances in theory or practice.
Educator Award: Recognizing academics, trainers, and curricular endeavors promoting effective and inventive approaches to digital preservation education through academic programs, partnerships, professional development opportunities, and curriculum development.
Future Steward Award: Recognizing students and early-career professionals making an impact on advancing knowledge and practice of digital preservation stewardship.
Organization Award: Recognizing those organizations providing support, guidance, advocacy, or leadership for the digital preservation community.
Project Award: Recognizing those activities whose goals or outcomes make a significant contribution to the strategic or conceptual understanding necessary for successful digital preservation stewardship.
Sustainability Award: Recognizing those activities whose goals or outcomes make a significant contribution to operational trustworthiness, monitoring, maintenance, or intervention necessary for sustainable digital preservation stewardship.
If you need a little inspiration, check out our webpage for lists of past winners or this blog post on submitting a notable nomination. If you have any questions about the nomination form, please contact the Excellence Awards Working Group co-chairs.
This paper explores an ethics of care framework in academic libraries, specifically with the implementation of a professional development initiative for student employees. Using the Architecture Library at Texas Tech University as a case study, we examine how formal professional development opportunities align with care ethics principles by responding to students’ individual needs, fostering nurturing relationships, and contributing to the academic learning environment. Through an exploration of Noddings’ relational theory of care and Tronto’s phases and elements of care, the aim of this paper is to highlight the ability of higher education to engage in a practice of care towards student workers.
Academic libraries often embody a culture of care, whether through deliberate action or instinctive response. Librarians and library staff routinely provide support and assistance, often without explicitly labeling their actions as “care ethics” – they simply see a need and respond. This caring approach is fundamental to educational settings, where the goal is to empower learners and facilitate their growth.
Student workers in these environments occupy a unique position, straddling the roles of both learner and employee, and as we will later explain, both cared-for and ones-caring. Through their work in the library, they develop essential workplace skills and learn to navigate professional relationships. The supervisor (for the purposes of this paper, a person who may be a librarian or library staff) is responsible not only for overseeing the daily tasks, but also for supporting workers as they learn to navigate a workplace environment, often for the first time.
This paper will first provide some context for the ethics of care, focusing on the relational (one-caring/cared-for) approach of Nel Noddings and then examining the expanded ethics of care of Joan Tronto, which moves into the realm of organizational care. Second, we will provide a literature review of current practices of care in academic libraries and higher education. Then, we will discuss the actors in academic libraries broadly and finally share a practice of care enacted at the Texas Tech Architecture Library.
Ethics of Care Theories: An Overview
The ethics of care as a philosophical theory largely began with psychology researcher Carol Gilligan. Participating in a study on moral development with her mentor Lawrence Kohlberg, graduate student Gilligan pointed out that the research findings were largely biased against girls. The study operated under the theory that moral development moves from universal to principled thinking, and under that assumption, girls were “behind” boys in their moral thinking. Gilligan argued that girls are not morally stunted; they have a different perspective and an inter-relational way of approaching conflict. The question asked of children (“Should Heinz steal medicine for his wife?”) was heard differently: boys largely answered “yes, he should,” while girls largely answered “no, he needs to find another way to get the medicine; what if he’s arrested?” Gilligan argued that both responses reflect moral development, one from a perspective of justice and the other from a perspective of care, and that without the voices of girls in the study, the consideration of care went largely unheard (1982, 2011).
This intellectual disagreement with Kohlberg led to the publication of In A Different Voice: Psychological Theory and Women’s Development in 1982. Gilligan’s study on moral development has been criticized for seemingly contributing to stereotypes about gender, with critics arguing that it posits a strict demarcation between the behaviors of men and women and ignores the gendered socialization of boys and girls (Peirson-Hagger, 2023). As Gilligan has continued to write on the moral development and experiences of girls and women, her theory has been refined but still highlights the societal issue of gendering care as feminine, instead of as a human action. In her 2011 book Joining the Resistance, Gilligan clarifies:
Listening to women thus led me to make a distinction I have come to see as pivotal to understanding care ethics. Within a patriarchal frame, care is a feminine ethic. Within a democratic framework, care is a human ethic. (p. 22)
Gilligan argues that she is not making these statements about how all men act and how all women act. The issue is that in a patriarchal society care is gendered as feminine, while justice is seen as “aligned with reason, mind, and self—the attributes of ‘rational man’” (pp. 23-24). An ethic of care, for Gilligan, is democratizing—it requires interdependence and a responsibility to others.
Writing soon after Gilligan, Noddings builds on the ethics of care and applies it to an educational framework. Her early framework drew heavily on women’s traditional caregiving roles, which she described as examples of “natural” caring. She often illustrated her concepts using mother-child relationships, drawing criticism for this potentially limiting and essentialist perspective that seemed to require self-sacrifice for the child’s benefit. Her theories are largely framed as a relationship, and she later called her version of ethics of care “relational ethics” instead of “feminist ethics” (Noddings, 2013).
Like Gilligan, Noddings has refined her theory and reflected back on her works. In the 2013 edition of Caring, she clarifies her ethics of care as being chiefly concerned with “how, in general, we should meet and treat one another—with how to establish, maintain, and enhance caring relations” (p. x) after commenting that “hardly anyone has reacted positively to the word feminine” (p. xiii).
Noddings suggests in her 1984 work Caring: A Relational Approach to Ethics and Moral Education that teachers demonstrate caring tendencies that drive them to address students’ specific needs. While discussing teacher-student dynamics, she employs the terms “one-caring” and “cared-for” to characterize their interactions. In this relationship, the “one-caring” (teacher) becomes invested, or engrossed, in the development of the “cared-for” (student). Applying Noddings’ concepts to the library workplace environment, we can frame student workers as recipients of care (the cared-for) and supervisors as providers of care (the one-caring), much as Noddings attributes those roles to students and teachers. This perspective establishes the relational framework necessary for implementing a care-based practice.
On the tasks of the teacher/one-caring, Noddings says they must “stretch the student’s world by presenting an effective selection of that world with which she [the one-caring] is in contact and to work cooperatively with the student in his struggle towards competence in that world” (p. 167). Teachers/ones-caring use their knowledge and resources to address the questions and problems of the cared-for, a process familiar to librarians, who field reference questions and help varied patrons access and find materials on varied subjects. Those questions and problems are unique to each individual, and an effective one-caring is attentive to those differences when presenting solutions. The questions and problems of the student worker as cared-for may include how to polish skills necessary for their career goals and how to transfer those skills to other communities.
While one goal of ethics of care is to establish reciprocal relationships of care, it is not always the case that the cared-for will participate in caregiving to the one-caring. However, when student workers help their fellow students, they participate in caregiving, taking on a one-caring role themselves. The success lies not in their care being directed back to supervisors, but in their ability to continue this cycle of support within the academic community.
It is important to note there is a power imbalance in the relationship between a supervisor and a student worker that must be considered to properly enact an ethics of care, an ethic concerned with addressing vulnerabilities. Noddings says of this imbalance, “Social worker and client, physician and patient, counselor and student in their formal roles necessarily meet each other unequally” (p. 62). Crawley et al. (2008) comment in their essay that the recognition of power imbalances is necessary for a feminist approach to education: “one must consider not only the power relations among classroom actors (e.g., teachers and students) but also the power relations implicit in knowledge construction, ultimately working toward empowerment of students” (p. 3). Recognizing the vulnerability of the cared-for and viewing our actions from their perspective is the first step in addressing the power imbalance inherent in learning and working environments, and is also an embodiment of Noddings’ idea of “engrossment.”
An ethical model that frames interactions between a one-caring and a cared-for has received criticism for being narrow and chiefly concerned with caring for those physically close to you, rather than extending care to people with whom you may not interact; this is Joan Tronto’s critique of Noddings’ dyadic model (Tronto, 1993). Modern care ethicists have expanded the theory to include looking at care from various perspectives, viewing care as a gender-neutral activity, and extending care beyond the encounters of two people. Tronto expands the scope of care beyond relational interactions between educator and educated, beyond one-caring and cared-for, to institutions, bureaucracies, and governments. Care for people, rather than economics, should be the driving force for these organizations.
In her 1993 book Moral Boundaries, Tronto describes ethics of care as:
A species activity that includes everything that we do to maintain, continue, and repair our ‘world’ so that we can live in it as well as possible. That world includes our bodies, ourselves, and our environment, all of which we seek to interweave in a complex, life-sustaining web. (p. 103)
Tronto is often cited in works about using a practice of care in higher education, as the consideration of the institution’s responsibility of care to its workers and students arises. We will reference Tronto’s phases of care and elements of care as they pertain to the role of the librarian supervisor and the evolution of our professional development project.
Tronto’s phases of care:
Caring about – the act of noticing that care is needed in the first place
Taking care of – taking on the responsibility for caring
Care-giving – the work of giving care
Care-receiving – the response of being cared-for
Out of these phases of care arise the necessary ethical elements required for care: attentiveness, responsibility, competence, and responsiveness (Tronto, 1993, pp. 127-131). These elements are useful for determining how well a project adheres to an ethics of care, and we will reference them as we reflect on our project.
Practices of care regularly show up where there are elements of collaboration, consideration, and concern for vulnerabilities. It is not surprising that care is demonstrated in professions that have historically been feminized (education, librarianship, etc.), which may shed light on why care-giving is often wrongly relegated to “women’s work.” Though the ethics of care is often called a “feminist ethic,” it is important to note that a feminist ethic or practice may be enacted by anyone or any institution.
Ethics of Care in Libraries and Academia: A Review of the Literature
In addition to the literature of Noddings and Tronto, this paper and this project have been shaped by the study of ethics of care approaches in higher education and libraries, as well as by work on the importance of student training in academic libraries.
Ladenson (2017) describes using the feminist framework for reference services in libraries as a practice that is more than simply helping a student find a certain resource; it also involves digging a bit to identify why it is important for them to do so. Is it a passion project? A new curiosity? Why is this interesting? These extra questions invite students to reflect on their own research and begin to establish a back-and-forth between the researcher and the library/librarian. Bruce (2020) adds of a caring approach to librarianship: “These one-on-one sessions are not just about the exchange of information. Instead, they are a moment which adds to a student’s sense of belonging and care with regards to their academic and personal selves” (para. 8). The attentiveness and recognition given to each individual researcher and each question are elements that re-cast the interaction as a practice of care, embodying the ideas of Noddings as well as Tronto. Librarians may find themselves frequently engaging in care as they establish relationships with library users, guide people through the research process, and advocate for library services to assist users.
Beyond the library, an ethics of care framework is useful in considering the interactions and responsibilities of the university towards students and the interactions of faculty members with each other to cultivate a supportive learning community. “An Ethic of Care in Higher Education: Well-Being and Learning” (Keeling, 2014) highlights the importance of focusing on the entire student and addressing issues of access for individuals. On the other hand, Sai’s 2024 work highlights that the current practice of most institutions of higher education prioritizes faculty outputs and profit, to the point of neglecting faculty well-being and life outside the university. These papers act as a call for reimagining the culture of higher education as one that values all members of the academic community as whole people, and that values knowledge creation as more than metrics and outputs.
A similar autoethnographic paper focuses on the concept of “critical friendships” in academia. “Critical friendship: An alternative, ‘care-full’ way to play the academic game” (Sotiropoulou & Cranston, 2022) looks at collaborative relationships between academic colleagues that foster working environments supporting others across disciplines:
Finding the time to get to know each other better and to continuously invest in practicing our critical friendship was a strategy we utilized to deviate from the fast-paced and measurable mandates of the neoliberal academia and our way of prospering both personally and professionally as academics. (p. 1112)
This approach to collaborating with others, sharing feedback, actively listening, and taking time to meet and discuss life and careers was a tactic to push against a culture of academia that prioritizes hustle and churning out work. It is a mistake of the neoliberal academic institution to individualize work instead of seeing the work of academia as collaborative. Naylor (2023) writes in “A Feminist Ethic of Care in the Neoliberal University”:
To transform neoliberal academic spaces into spaces that are caring means recognizing that collective support within a department does not have to be an archipelago, but can be contiguous and form a web of reinforcement that does not have strict borders which isolate research from teaching and service. (para. 7)
Encouraging an environment concerned with care and collaboration in our working and learning environments is a way to push back against hierarchical systems of neoliberal academia that encourage competition and rugged individualism.
Student Workers (Cared-For)
On-campus jobs are an opportunity for students to work in environments that allow them to balance the needs of academic work, usually providing flexibility around classes, projects, and exams. Working in an academic library provides student workers a further opportunity to have access to collection materials, a closer connection to faculty, occasional “down time” to work on homework, and opportunities to cultivate transferable soft skills like teamwork and customer service. It is important to note the vulnerable position of student workers, working jobs that offer them flexibility and training but little pay, while primarily focusing on coursework for which they have paid thousands of dollars.
Student workers in academic libraries engage in customer service, organizational tasks, circulation duties, and the daily tasks necessary for the smooth running of an academic library. Their work brings them into contact with their peers and their professors at the library. Student workers have the opportunity to participate in supporting research, navigating the library collection, and using library technology like scanners and software. These tasks develop and polish skills like communication, critical thinking, and technological proficiency. Mitola et al. (2018) discuss academic library work as a high-impact practice, saying: “The work experiences of undergraduate students can also shape their college experiences and contribute to the development of skills employers seek in college graduates” (p. 352).
Student workers, however, are first and foremost students juggling academic and social obligations in addition to working. These competing responsibilities require flexibility and a willingness to understand each unique student’s situation, academic goals, and career aspirations on the part of the supervisor. The academic environment offers a workplace where student-workers are learning about the expectations of professional environments and developing necessary skills, and their supervisors are in a position of educating and training students in these skills.
The Role of Supervisors (Ones-Caring)
What is our responsibility to our student workers? How can we approach training and development in a way that is responsive to student workers? How can we prepare students for the working world outside of the university? Asking and exploring these questions represents Noddings’ sense of engrossment for the needs of the cared-for and aligns with Tronto’s framework, specifically embodying the elements of “attentiveness” and “responsibility.”
Library-specific training and the freedom to explore supplemental career-specific training fosters an environment of growth and supports the further career aspirations and goals of students. It also offers an opportunity to enact care with thoughtful feedback and support. To be able to enact any care successfully, there must already be a groundwork of support and trust established, which is the responsibility of the supervisor to cultivate.
The academic library serves as an environment for student workers to grow personally and professionally through high-impact practices that demonstrate “an affective orientation of care for student employees” (Vine, 2021, Conclusion), all while “respecting other’s positionalities, autonomy and embodied differences and working with them to improve the capacity of those cared for and about to make better decisions” (Sai et al., 2024, p. 533). By building programs and initiatives that are responsive to student needs, we can support students as they navigate what it means to be a member of a workplace and also provide an opportunity to explore their professional curiosities and talents.
Competence in supervising students, another of Tronto’s elements of care, requires that supervisors effectively communicate the goals of training or practices. When we discuss students engaging in professional development, we clarify that we intend this project to help develop students’ abilities to articulate their strengths, practicing the task of advocating for themselves in future workplaces. A supportive, caring work environment can reinforce to the students that their labor is valuable and appreciated, hopefully setting the bar for future supervisors and workplaces to meet.
Why Professional Development?
How is professional development an act of care? Our goal is not to oil the wheels of capitalism, ensuring that workplaces have well-trained cogs for their machine, but rather to prepare student workers for the transition from an academic environment to the workplace arena. With a focus on setting their own self-development path, a supportive space, and a concern for the costs (time and money) that later professional places may not provide, we want to give students a leg up as they are moving from being student workers to workers.
The National Association of Colleges and Employers (NACE) produces a list of competencies that we use to frame our professional development project. NACE derives these competencies from an annual survey asking companies what skills they find valuable in new employees out of college. It is important to note that these professional expectations come from employers and professional organizations, who represent The Market. The eight NACE competencies are self-development, communication, technology, equity and inclusion, leadership, critical thinking, professionalism, and teamwork. They also encompass behaviors that can be regarded as caring actions, with opportunities for the cared-for to engage in the actions of the ones-caring. The practice of these learning competencies may develop behaviors that ripple out to influence other communities.
We found the NACE framework on the Texas Tech University career center website, among many tips for students preparing and applying for jobs, going on interviews, and preparing to leave the work of being a student for the work of The Market—with its expectations of knowing the rules (rules often left unsaid by a dominant culture). NACE often works in partnership with university career centers, with the goal of preparing students for the transition into workplaces. The framework covers many overlapping skills that are useful in learning and working communities. After finding Franklin Ofsthun’s (2022) study on career-readiness for student workers, which also used the NACE framework, we determined it would be useful for our library as well.
The concepts of “professionalism” and success can vary widely based on location, gender expression, and industry. There are many spoken and unspoken rules workers navigate, and learning how to navigate them is a skill in itself. NACE defines professionalism on its webpage “What is Career Readiness” as “Knowing work environments differ greatly, understand and demonstrate effective work habits, and act in the interest of the larger community and workplace” (National Association of Colleges and Employers, n.d.). NACE writer Gray (2022) comments that while professionalism can change based on various factors (in-person vs. virtual interactions, industry norms, or geographic location), a common thread is to “show respect for others and make sure you contribute” (para. 17).
We do not want to surrender to market values, defining worth by marketability, because the human experience encompasses much more. We also want to respect the goals of students who may aspire to succeed by the definitions of The Market, while also helping student workers see that their successes and skills may be beneficial in enriching communities. Or as Beilin puts it: “We ought to encourage alternative definitions of success while at the same time ensure success in the existing system” (2016, p. 18).
Returning to Tronto’s definition of care, and speaking to her comment on “repairing our world,” mindfully engaging in promoting supportive and caring work cultures can be considered an attempt to respond to the harm of “hustle culture” and commonplace worker exploitation, especially of new workers starting out on their path. This also speaks to Noddings’ charge that the ones-caring respond to a problem of the cared-for, in this instance the problem of addressing the potential injustice student workers may find in their post-university workplaces. While our library might be just one small workplace among many, taking care to make it a positive one will hopefully have some ripple effects in the complex web and set a standard of a supportive, inclusive, and compassionate workplace.
Practice of Care at the Architecture Library
Now that the theories, supporting literature, and players have all been identified: what does a practical application of care look like? The Texas Tech Architecture Library has engaged in an initiative with its student workers that has embodied an ethics and practice of care for nearly two years at this point. Texas Tech is an R1 university that has held Hispanic-Serving Institution (HSI) status since 2019. Our architecture library is a branch library, embedded in the Huckabee College of Architecture, a program with over 700 undergraduate and graduate students. Our ones-caring at the library include three faculty librarians and a member of staff who is the direct student supervisor. The “we” throughout this paper refers to the public services librarian and the member of staff who serves as the direct student supervisor. We have, on average, seven to ten student workers in the role of the cared-for (and, as the job requires, participating in caregiving for library patrons). They are responsible for working at the circulation desk, assisting with scanning issues, pulling and shelving books, and participating in inventory projects. The job involves a lot of interaction with fellow students, members of architecture faculty and staff, and other library departments.
Noddings says we must address the problems of the cared-for. We do so by offering professional development opportunities to address the problems and expectations of transitioning into post-graduate life and answer the questions of “what skills do I have?” and “how do I leverage this work experience into future work experience?” We want student workers to develop competence to tackle and navigate future workplaces while also supporting their individual interests and building skills that transfer to other communities. During our mid-semester check-ins, we pose questions like “What does professionalism mean to you?” to facilitate discussion and talk about how these skills have been useful in other areas. One student worker commented in our mid-semester check-in that they were applying their practice of leadership to collaborate and communicate better in their student organizations.
The nature of the student assistant job requires skills that address these competencies, like customer service, communication, and time management: soft skills necessary for most workplaces. We wanted the students to consider how else they could grow their skills during their time at the Architecture Library. The inspiration for a professional development project initially came from helping student workers with their resumes and letters of reference. Students often listed their tasks (shelve and organize books, use scanner/printer, check in/out books) and we found ourselves recommending the mention of specific skills, saying “You didn’t JUST check out books! You provided good customer service, communicated library policies, and worked effectively on a library team.” We decided to be clearer about the skills they were engaging in and developing, so that they could recognize the value they brought to our workplace and better articulate their skill sets, strengths, and experience for future jobs. The project also came from a desire to expand the scope of skills that students practice, as a way of encouraging self-directed discovery and thinking about how their work in the library could inform their future careers. The NACE competencies were chosen a year into the project as a way to further structure the semester-long project.
In consideration of the students’ financial situations, these projects are to be done on work time, with resources freely available to them. Texas Tech has a subscription to a platform called Udemy that the students are encouraged to explore. This platform offers multiple types of online learning materials across many topics. Their access to these materials lasts only while they are students at Texas Tech University. In the future, professional development may have to be on their own time and own dime, but while the library can support this development we feel that we should.
This approach reflects an ethic of care through the practice of supporting students’ development and fostering skills relevant to their goals. Engaging in this practice through a lens of care means that we must view each student holistically and give support and consideration to their larger aspirations. The supervisor (the one-caring) applies Tronto’s elements of care (attentiveness, responsibility, competence, and responsiveness) in the development and implementation of the project, while the student worker (the cared-for) practices these elements while engaging in the project.
An outline of the project as it embodies elements of care:
Each student decides their professional development project based on their wants and needs for growth into their professional aspirations. This demonstrates an attentiveness by the supervisor to encourage students in self-development, and attentiveness by students to reflect on their goals.
The supervisor, as a means of offering guidance and embodying responsibility, provides a list of potential training materials that students can explore.
While the project is a requirement, it is weighted among other tasks, so students who do not complete a professional development project have other opportunities to succeed at the job and demonstrate competency in their work.
Students do this work during their working hours with tools available to them for free. This is an act of being attentive to the value of the students’ time, and to offer an opportunity to take advantage of resources made available by the university. This also requires responsibility from the student to prioritize this project as much as their other work tasks.
The supervisor conducts regular check-ins about the students’ professional development progress before end-of-semester evaluations to provide feedback or offer guidance. These check-ins create opportunities for the supervisor and student workers to be responsive to feedback as we finish the semester.
At the end of the semester, the supervisor sends out an anonymous survey to student workers for feedback and adapts projects based on that feedback. The nature of it being a semester-long project means that we can quickly adjust expectations as a way of being responsive to feedback.
We are now almost two years into this professional development initiative, and the projects that students have chosen have ranged from creating library signage in graphic design software to language learning on DuoLingo to learning industry software like Blender, Rhino, and Grasshopper. Given that many of our student workers are architecture students, it is not surprising that many have chosen to refine their skills in design software common to their major. This has also had the added benefit of deepening our student workers’ knowledge of this software, giving them the opportunity to step into the role of teacher/care-giver as they help our student patrons navigate these programs.
A particularly exciting project one student worker wanted to pursue was starting a book club at the library. This student had been a participating member of many student organizations and wanted to take the opportunity to plan and implement a program for the library. We discussed what a full expression of this project would look like (a process that hits many NACE competencies for the cared-for as well as Tronto’s ideas of attentiveness and responsiveness for the ones-caring). To fully support this project, we explored ways to fund meetings in a library system that does not offer much funding for programs. The public services librarian applied for and was awarded an internal grant for faculty-led book clubs, meant to increase participation for those interested in a free book, coffee, and scones. The student worker was responsible for choosing the text, creating signage, and leading discussion prompts. We offered support with the administrative aspects of the grant (ordering materials, reporting receipts, etc.), social media posts, and participation in the book group. This specific project required much more engagement in Tronto’s elements of care compared to student workers who chose to view and discuss training videos. But being attentive to the individual interests and strengths of this student was required of the ones-caring, a task we were happy to engage in. The results were that our student worker got to carry out a project interesting to them with our full support and care, and to give care to the learning community through thoughtful discussions.
We collect qualitative data during our in-person check-ins as well as the end-of-semester anonymous survey as we attempt to answer the questions: Are we cultivating a supportive workplace? Do the students feel confident in their skills?
At the end of the semester, we discuss how the library can support the career goals of student workers and perform an in-person evaluation that looks at all of their work over the semester. We discuss the semester’s work in terms of library-specific tasks like shelving, reporting reference interactions, teamwork, and professional development. Generally the feedback from students is positive, though the inherent power dynamic between student employee and supervisor must be considered when weighing in-person feedback, which is why we also provide an anonymous survey. We enjoy a high level of student retention semester to semester, and many student workers encourage their classmates and friends to apply when we have openings.
While we vary the end-of-semester survey each semester, questions we ask about their professional development have included:
Do you feel like your work in the library supports your future career goals? (Response Options: Yes, No, I Don’t Know)
Select the career-ready competencies you feel you have developed while working at the library (Multi-response option of the eight NACE competencies)
In what ways has your role in the library helped you develop new skills?
Student workers responded that they feel their work in the library supports their future career aspirations, and that they felt most confident in “communication” and “professionalism” among the NACE competencies. Students reported feeling very positive about their work and enjoy the fact that they can pursue projects that are interesting to them. They believe they communicate effectively with their team/colleagues and enjoy working with their coworkers.
Quotes from students, collected via the anonymous end-of-semester survey:
I’m proud of my work, it required a lot of communication, teamwork and patience
I hope to find a job in the future with a similar culture to the one here in the library
Everyone has been very helpful and encouraging as well as a good influence. I really enjoy my time at the library
Speaking with and helping patrons with different tasks in a field I am unfamiliar with has helped my work in my problem solving and critical thinking
I have definitely had the opportunity to develop customer service problem solving skills. I have improved at asking targeted questions to clarify patron issues and offer solutions
By creating content for the library I have been working on my graphic design skills
While we did not set out to conduct a project guided by the ethics of care, or a feminist approach to supervising students, our interest and engagement with our student workers led, perhaps Noddings would say “naturally,” to enacting one. This professional development initiative demonstrates that an ethics of care approach is not merely a theoretical construct, but a practical method of supporting student workers. Our students have grown in their abilities to communicate their skills and competencies. This practice enriches the cared-for and the workplace, creates meaningful work for the one-caring, and ideally extends out further to other communities.
Conclusion
While this project is one element of cultivating a caring and supportive workspace, it is not sufficient on its own for supervising student workers with a praxis of care. The working environment as a whole must operate as a space that seeks to offer care broadly, and this professional development project has been able to grow out of that established groundwork of trust.
This approach to workplace dynamics challenges the traditional transactional labor relationship, replacing it with a nurturing, collaborative one that empowers student workers. Embracing care-centric practices in academic institutions can create environments that foster meaningful learning, and teaching, experiences for the academic community, instead of treating its members as individual components of an institutional machine.
Tronto ends her work Moral Boundaries with a call to care:
To recognize the value of care calls into question the structure of values in our society. Care is not a parochial concern of women, a type of secondary moral question, or the work of the least well off in society. Care is a central concern of human life. It is time we began to change our political and social institutions to reflect this truth. (1993, p. 180)
We aim to reflect this truth in our small library workplace, where our faculty, staff, and students feel supported, purposeful, and seen.
Acknowledgements
The author would like to thank their peer-reviewers: Pam Lach, Brittany Paloma Fiedler, and Liz Vine as well as the editors of In the Library with a Lead Pipe for their feedback, guidance, and direction during this work. Their care means a lot!
Bibliography
Beilin, I. (2016). Student success and the neoliberal academic library. Canadian Journal of Academic Librarianship, 1(1), 10-23.
Benjamin, M., & McDevitt, T. (2018). The benefits and challenges of working in an academic library: A study of student library assistant experience. The Journal of Academic Librarianship, 44(2), 256–262. https://doi.org/10.1016/j.acalib.2018.01.002
Crawley, S. L., Lewis, J. E., & Mayberry, M. (2008). Introduction—Feminist pedagogies in action: Teaching beyond disciplines. Feminist Teacher, 19(1), 1–12. http://www.jstor.org/stable/40546070
Gilligan, C. (2003). In a different voice: Psychological theory and women’s development (38th print). Harvard University Press.
Gilligan, C. (2011). Joining the resistance. Polity Press.
Keeling, R. P. (2014). An ethic of care in higher education: Well-being and learning. Journal of College and Character, 15(3), 141–148. https://doi.org/10.1515/jcc-2014-0018
Ladenson, S. (2017). Feminist reference services: Transforming relationships through an ethic of care. In The feminist reference desk: Concepts, critiques, and conversations. Library Juice Press.
Mitola, R., Rinto, E., & Pattni, E. (2018). Student employment as a high-impact practice in academic libraries: A systematic review. The Journal of Academic Librarianship, 44(3), 352–373.
Noddings, N. (2013). Caring: A relational approach to ethics & moral education (2nd ed., updated). University of California Press.
Ofsthun, F. (2022). Just like the library: exploring the experiences of former library student assistants’ post-graduation careers and perceptions of job preparedness as impacted by library work [Doctoral dissertation, ProQuest Dissertations & Theses].
Sai, L., Gao, G., Mandalaki, E., Zhang, L. E., & Williams, J. (2024). Co-constructing new ways of working: Relationality and care in post-pandemic academia. Culture and Organization, 30(5), 523–538. https://doi.org/10.1080/14759551.2024.2323726
Sotiropoulou, P., & Cranston, S. (2022). Critical friendship: An alternative, ‘care-full’ way to play the academic game. Gender, Place & Culture, 30(8), 1104–1125. https://doi.org/10.1080/0966369X.2022.2069684
Stoddart, R., Pesek, J., & Thornhill, K. (2022). Assessing student employment in libraries for critical thinking & career readiness. In Library Assessment Conference Proceedings 2022.
Tronto, J. C. (1993). Moral boundaries: A political argument for an ethic of care. Routledge.
Vine, L. (2021). HIP check: Equity, learner-centered pedagogies, and student employment. In Ascending into an open future: The proceedings of the ACRL 2021 virtual conference (pp. 321-329). ACRL.
The contemplation on impermanence can help us live our life with the insight of impermanence so we can be free from many afflictions such as anger, fear, and delusion. It isn’t the idea or notion of impermanence, but the insight of impermanence that can free and save us. Impermanence is not a negative note in the song of life. If there were no impermanence, life would be impossible. Without impermanence how could your little girl grow up and become a young woman? Without impermanence how could you hope to transform your suffering? You can hope to transform your suffering because you know it is impermanent. So impermanence is something positive. We should say, “Long Live Impermanence!”
I think working in digital preservation, and as a memory worker in general, it’s easy to see impermanence as a, if not the, enemy. If you are thinking about a specific item in isolation, say a computer file, or an archival document, it kind of is. But if you focus your attention on the information artifact for a little bit you often come to discover that it is actually related to other artifacts and entities that may or may not still be available, and that it is already incomplete, in many ways. This incompleteness is what gives the artifact value, and makes it worth preserving, and is also why exact preservation of its current state isn’t always possible. Forever is a mental trap that causes anxiety and suffering.
This year, CAPWIC welcomed 193 attendees from universities, high schools, companies, and non-profit organizations. The majority were undergraduate students (41%), followed by graduate students (36%), college and university faculty (15%), industry professionals (4%), and high school students (3%). Eight Ph.D. students in Computer Science from ODU attended the CAPWIC 2025 conference. Among them, five students from ODU's Web Science and Digital Libraries (WS-DL) research group participated in person, presenting research shorts and posters.
Ph.D. students from ODU Computer Science at the CAPWIC 2025 conference at George Washington University
Dr. Brown shared how her background in computer science has informed her work across research, education, and policy, emphasizing the importance of innovation through collaboration and interdisciplinary engagement. She highlighted three key principles, namely policy, practice, and people, as the main operating principles in the working environment. Her keynote concluded with a powerful reflection: "Whose life do you want to be better because you are here?"
ACM @capwic 2025 conference kicked off with the inspiring keynote titled "Oh, the places you’ll go!", by Dr. Quincy K. Brown, the Director of Space STEM and Workforce Policy at the National Space Council. #CAPWIC2025 pic.twitter.com/gGM5kiRDO2
Dr. Quincy K. Brown gives the morning CAPWIC 2025 keynote
Parallel Sessions - Cybersecurity
Following the morning keynote, two parallel sessions were held: Cybersecurity and AI. In the Cybersecurity session, ODU's own CS PhD student and senior lecturer, Susan Zehra, presented “Mitigating Cyber Threats in V2V and V2I Networks: A Security-Centric Approach”. Her research focuses on Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication, which involves vehicles interacting with each other and with traffic control systems. Her method combines dynamic key management with RSUs (Roadside Units), public-key cryptography, and blockchain-based fallback mechanisms to counter cyber threats. As a result, her approach significantly reduced the breach rate for key exchange between vehicles while ensuring reliability and user anonymity.
The other two presentations in the session were: “Combining Open-Source Intelligence (OSINT) with AI for Threat Detection” by Jackline Fahmy (Marymount University) and “A Hierarchical Deep Reinforcement Learning Chatbot for Cybergrooming Prevention” by Heajun An (Trustworthy Cyberspace Lab - tClab at Virginia Tech). Fahmy's study explored how AI and NLP techniques can enhance threat detection across social media platforms and the Dark Web. Heajun An's research introduces an AI chatbot that adapts to different vulnerability levels in teenagers by using dynamic interactions to prevent cyber-grooming.
Student Research Posters
Kritika Garg from WS-DL presented her poster “Redirects Unraveled: From Lost Links to Rickrolls.” The research examined 11 million redirecting URIs to uncover patterns in web redirections and their implications on user experience and web performance. While half of these redirections successfully reached their intended targets, the other half led to various errors or inefficiencies, including some that exceeded recommended hop limits. Notably, the study revealed "sink" URIs, where multiple redirections converge, sometimes used for playful purposes such as Rickrolling. Additionally, it highlighted issues like "soft 404" error pages, causing unnecessary resource consumption. The research provides valuable insights for web developers and archivists aiming to optimize website efficiency and preserve long-term content accessibility.
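The chain-tracing idea behind the poster is easy to sketch. The toy below is my illustration, not the study's code: the URIs, the redirect map, and the hop limit of 20 are all invented, and a static dictionary stands in for live HTTP 3xx responses. It follows each chain, flags loops and over-long chains, and counts how many chains converge on the same final "sink" URI.

```python
from collections import Counter

MAX_HOPS = 20  # illustrative cap; real user agents limit redirect chains too

def trace(uri, redirects, max_hops=MAX_HOPS):
    """Follow redirects from `uri`; return (chain, status)."""
    chain = [uri]
    seen = {uri}
    while chain[-1] in redirects:
        nxt = redirects[chain[-1]]
        if nxt in seen:           # revisiting a URI means a redirect loop
            return chain, "loop"
        chain.append(nxt)
        seen.add(nxt)
        if len(chain) - 1 > max_hops:
            return chain, "too-many-hops"
    return chain, "ok"

def find_sinks(redirects):
    """Count how many redirecting URIs resolve to each final target."""
    targets = Counter()
    for uri in redirects:
        chain, status = trace(uri, redirects)
        if status == "ok":
            targets[chain[-1]] += 1
    return targets

# Invented example: three chains converge on one sink, two URIs form a loop.
redirects = {
    "http://a.example/": "http://b.example/",
    "http://b.example/": "http://final.example/",
    "http://c.example/": "http://final.example/",
    "http://loop1.example/": "http://loop2.example/",
    "http://loop2.example/": "http://loop1.example/",
}

chain, status = trace("http://a.example/", redirects)
sinks = find_sinks(redirects)
```

A real crawl would follow `Location` headers instead of a dictionary (e.g. Python's `requests` exposes the hops it followed as `response.history`), but the bookkeeping for loops, hop limits, and sinks looks the same.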
Thu Nguyen from the Bioinformatics and Parallel Computing group at ODU presented “Enhancement of Deep Learning for Segmentation of Protein Secondary Structures from Cryo-EM,” at the poster session. Their work enhances the efficiency of DeepSSETracer, a deep learning framework for cryo-electron microscopy segmentation, by partitioning large cryo-EM maps and merging the outputs. This approach reduces memory usage and processing time, enabling smoother handling of large datasets within the ChimeraX visualization tool.
The session on CS education was chaired by Briana Morrison and featured data science research on admissions biases and success rates in CS courses.
Anaya Prakash presents at CAPWIC 2025
Anaya Prakash of Virginia Tech presented “Who Gets In? The Role of AI in Shaping the Next Generation of Computer Scientists.” She analyzed a closed dataset of a large public university’s MS CS admissions and found patterns in the data, including that men from either the US or China were 2.5 times more likely to be admitted than applicants in other subgroups. She also found that attributes like age, which were not intended to be a factor in admissions, were key predictors in models such as Random Forest.
Nawar Wali presents at CAPWIC 2025
Next, Nawar Wali of Virginia Tech presented her work, “Machine Learning Insights into Academic Success in CS3: The Role of Mathematics and CS Coursework,” which examined how mathematics coursework correlates with success in Data Structures and Algorithms (CS3). She analyzed 10 years of Virginia Tech CS3 data, including CS and math courses with associated grades for 3900 students. She found that discrete structures and computer organization were the courses most highly correlated with CS3 success, and that students who passed linear algebra earlier had a higher success rate in CS3.
Victoria Wiegand from Villanova University presented “Data Collection Pipeline to Diversify AI Training Data.” Their work addresses the cultural biases present in AI vision-language models, which are often trained on predominantly Western data sources. As a result, these models perform poorly when interpreting or generating content from non-Western contexts, frequently misrepresenting communities with inaccurate or stereotypical imagery. To help bridge this digital divide, the researchers developed a low-cost, community-driven data collection pipeline. Partnering with a university service trip, they trained participants in ethical data collection and gathered images via a WhatsApp-linked web form. These images will be compiled into a publicly available dataset aimed at helping developers diversify model training data and improve cultural representation.
Victoria Wiegand presents at CAPWIC 2025
Hajra Klair from Virginia Tech presented “Agentic AI for the Rescue: Factual Summarisation of Crisis-Related Documents.” Their work presents a new approach to summarizing crisis-related documents using large language models. Evaluated on the CRISISFacts dataset covering 18 real-world events, the two-phase architecture first retrieves documents based on entity prominence and then generates summaries guided by structured, crisis-specific queries. This approach reduces hallucinations and enhances information coverage, aligning the summaries with the needs of emergency response officials.
Hajra Klair presents at CAPWIC 2025
Yeana Lee Bond from Virginia Tech presented “Driver Facial Expression Classification: A Comparative Study of Computer Vision Techniques.” Their work explores how well different machine learning models can detect driver emotions—happy, angry, and neutral—using facial expressions. They evaluated EfficientNet, Vision Transformer, and CNN-based models on a driver emotion dataset. EfficientNet stood out for its speed and high accuracy, while the study also highlighted how model design and data quality play a bigger role than just model size.
Yasasi from NIRDS Lab and WS-DL presented “Examining Visual Attention in Gaze-Driven VR Learning: An Eye-Tracking Study” at the Games and Virtual Reality session. Their work presents a framework for analyzing visual attention in a gaze-driven VR learning environment using a consumer-grade Meta Quest Pro VR headset with a built-in eye-tracker. Yasasi discussed how their study contributes by proposing a novel approach for integrating advanced eye-tracking technology into VR learning environments, specifically utilizing consumer-grade head-mounted displays.
Sanjana Kumari from Virginia Tech presented “Evaluating Children's Ability to Distinguish Between Traditional and AI-Generated Media”. Their study investigates whether children can distinguish between human-authored and AI-generated content through a structured intervention comprising surveys and an educational workshop. The authors note that while adults are increasingly adapting to tools like ChatGPT, we still lack a clear understanding of how children perceive and process these technologies.
Rebecca Ansell from Georgetown University presented “Assessing Public Perception of AI-Generated Social Media Content of the 2024 U.S. Presidential Debate”. Their study explored key research questions such as: Can humans distinguish between AI-generated and human-authored content on social media? And what characteristics make a social media post appear human? To investigate this, they collected a dataset of X posts and YouTube comments related to the 2024 presidential debate, and supplemented it with content generated using ChatGPT. Human annotators were then employed to label whether each piece of content was AI-generated or not. The researchers used the Bradley-Terry model to analyze the ratings and evaluate human perception patterns. The findings showed that annotators could generally differentiate between AI and human content. Interestingly, the study also examined how sentiment influenced perceived humanness: positively toned posts were more likely to be perceived as AI-generated, while negative or offensive content was often seen as human-authored. The results highlight the role of tone, civility, and emotional cues in shaping public perception.
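The Bradley-Terry model mentioned above turns pairwise judgments ("post A seems more human than post B") into a per-item score. A minimal fit using the standard minorization-maximization updates is sketched below on made-up counts; this is a generic illustration of the model, not the authors' code or data.

```python
# Minimal Bradley-Terry fit via the standard MM updates (Hunter, 2004).
# wins[i][j] = number of times item i was judged "more human" than j.
# Toy data only; not the study's code or dataset.

def bradley_terry(wins, iters=200):
    n = len(wins)
    p = [1.0] * n                      # initial strength scores
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])         # total wins for item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        total = sum(new_p)
        p = [x / total for x in new_p] # normalize each round for stability
    return p

# Item 0 is preferred most often, so it should earn the highest score.
wins = [[0, 8, 9],
        [2, 0, 6],
        [1, 4, 0]]
scores = bradley_terry(wins)
print(scores)
```

The resulting scores order items by how consistently annotators preferred them, which is how a Bradley-Terry analysis surfaces patterns like "negative posts read as more human."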
Himarsha from WS-DL presented “Infrastructure for Tracking Information Flow from Social Media to U.S. TV News”. Their work focused on understanding how social media content is amplified through mainstream media by expanding its reach to new audiences. They are using the data from Internet Archive’s TV News Archive to explore how social media content flows into TV news and the contexts in which it is incorporated.
In parallel to the Social Media session, the Human-Computer Interaction (HCI) session was held, featuring three short research presentations. It was chaired by Dr. Jin-Hee Cho from Virginia Tech.
The session began with Anika Islam from George Mason University, presenting their research titled “Leveraging Smartwatch Sensors for Detecting Off-Task Behaviors of Neurodivergent Individuals”. The study aims to improve workplace success for neurodivergent individuals by using smartwatch sensor data to identify off-task behaviors and deliver personalized interventions. In the current phase, data were collected from 25 neurodivergent young adults engaged in a manual task within a controlled lab environment. Anika outlined their future plans to apply machine learning techniques to analyze the data, with the goal of developing real-time, tailored interventions that improve productivity in the workplace.
The next presenter, Marissa Hirakawa, an undergraduate Computer Science student from Virginia Tech, presented “Usability Heuristics and Large Language Models: Enhancing University Website Evaluations.” Their study focused on exploring the use of large language models (LLMs) to support usability evaluations of university websites, based on Nielsen’s 10 usability heuristics. Their findings show that while LLMs can help uncover usability issues often missed in manual reviews, human verification remains essential as LLMs occasionally hallucinate usability issues. Their future directions include refining LLM evaluation processes and incorporating multimodal data such as user action logs and screenshots to enhance assessment accuracy.
Kumushini from NIRDS Lab and WS-DL presented “Advanced Gaze Measures for Analyzing Joint Visual Attention.” This research explored how user pairs coordinate their joint visual attention (JVA) by using egocentric and eye-tracking data. In their user study, participants engaged in a collaborative screen-based visual search task while wearing Project Aria smart glasses. Their findings suggest that users who maintained similar attention behaviors (ambient/focal) over time exhibited more frequent and sustained moments of joint attention compared to those with differing attention behaviors. In future work, they plan to further refine the methodology by integrating machine learning techniques to automatically identify and classify different patterns of ambient and focal visual attention during collaborative tasks.
The first presenter at #CAPWIC2025 HCI session, Anika Islam from @GeorgeMasonU is presenting their research titled "Leveraging Smartwatch Sensors For Detecting Off-Task Behaviors Of Neurodivergent Individuals".
In the machine learning session, Eleni Adam from the Bioinformatics Lab at ODU presented “Analysis of Subtelomere and Telomere Regions of Cancer Genomes on the Cloud.” Eleni examined subtelomeres in cancer patients, implementing a computational pipeline on ODU’s Wahab cluster to enable the analysis; even so, processing each patient’s DNA takes hours. Her ultimate goal is to understand cancer and the subtelomere’s role in it. Her work is available at https://github.com/eleniadam/storm.
Keynote #2, Awards, and Closing Remarks
Becky Robertson gave the closing keynote at CAPWIC 2025
Becky Robertson, Vice President at Booz Allen Hamilton, delivered the closing keynote. Her talk centered around the concept of inspiration. She engaged with the audience by asking about what or who inspires us personally. She encouraged us to channel that inspiration into meaningful actions, overcome challenges, and pursue our goals.
Following the keynote, it was time for the award ceremony and the closing remarks. Several awards were presented, including the Best Research Short Award, Honorable Mention Research Short Awards, Flash Talk Awards, and Best Poster Awards, which recognized outstanding contributions from both graduate and undergraduate participants.
Congrats to Thu Nguyen et al. from @oducs @ODUSCI for winning the Best Poster Award (Graduate Category) for their work: 'Enhancement of Deep Learning for Segmentation of Protein Secondary Structures from Cryo-EM'! 👏
Thu Nguyen from the Bioinformatics and Parallel Computing Group at ODU received the Best Poster Award in the Graduate Category
During the closing remarks session, the next year’s organizing committee announced that CAPWIC 2026 will be at Virginia Tech’s Innovation Campus in Alexandria, VA.
Wrap-up
For all of us, it was our first time attending the CAPWIC conference in person. CAPWIC 2025 provided an inspiring platform to exchange new ideas and showcase innovative research within the tech community, encouraging greater participation among women and minorities in computing. The CAPWIC 2025 conference was held in Washington, D.C., during the peak of the cherry blossoms. We had the opportunity to take part in the National Cherry Blossom Festival at the Tidal Basin, enjoying the beautiful sight of the city covered in pink and white blossoms. It was a memorable experience to see the capital come to life with the colors of spring.
Ph.D. students from WS-DL at Tidal Basin, Washington, D.C.
The kind folks at the Prosocial Design Network asked me to be a guest for April’s “pro-social,” a very low-key virtual gathering for folks interested in creating more inclusive digital spaces.
More about PDN:
The Prosocial Design Network connects research to practice toward a world in which online spaces are healthy, productive, respect human dignity, and improve society.
They shared the questions in advance, which I very much appreciated! Here are my prepared notes - we certainly didn’t cover it all during the call.
What principles should be front of mind in designing inclusive digital spaces, particularly social spaces?
First off, hire people with different lived experiences from yours. Hire trans people. Hire Black people. Hire disabled people. Hire disabled Black trans people. Let them cook. Listen to them. Otherwise you are, as my wife says, “Pissing into the wind.”
Prioritize accessibility. Ensure spaces are accessible for users on many devices, using different device settings, in different contexts in the real world, including with assistive technologies. Often accessibility is an afterthought. Shift left and allow it to drive your design and architecture decisions from the jump. For social apps, this includes setting smart defaults - e.g., requiring folks to add alt text if they’re uploading images.
Keep your tech stack light and boring. Design for a 4-year-old Android phone on a 3G connection, with bandwidth paid for by the megabyte. Bloatware takes longer to load and harms or disincentivizes participation from folks on slower connections or older tech.
Design for trust, privacy and safety. Design for people to be able to protect their privacy, control what they share and what they see.
Don’t ask for information you don’t need, and tell people why you’re asking for what you do need.
Make privacy and sharing settings crystal clear.
Remind folks that no site is 100% secure even if you’re encrypting every bit.
Provide feedback/reporting mechanisms.
Allow people to block/opt out of interacting with others or groups, or types of content.
Don’t overpromise! If you have gaps or areas still under development, name them.
Have good documentation and support. Don’t leave people wondering what to do.
Look to successful, intentionally-designed communities - like BlackSky - for cues about designing inclusive, safe spaces.
Allow people to define themselves. The way you do it ain’t the way everybody else does it.
Be wary of binary options anywhere people identify themselves - not just gender, but everything else. Are you technical or nontechnical? Employed or unemployed? Full-time or part-time? In all of these cases it’s not so clear.
Think in terms of checkboxes, not radios. Tagging, not categorizing.
Give people freedom in choosing avatars or profile images.
Give people freedom to change/update usernames and login email addresses without hassle.
Don’t make inferences about who people are or what they’d like based on their gender, race or other things that they choose to share with you.
Confront your own ideas about people having one “true identity” - like a real name policy, or assuming that everyone has the same interactions with everyone in their lives in every context. Yes, anonymity can be abused - we know this because 4chan exist(ed) - but let’s also remember that a pseudonym might be the way that a trans person tries on a new name for the first time.
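The "checkboxes, not radios; tagging, not categorizing" advice above can be sketched as a data model. This is a hypothetical illustration (the class and field names are mine, not from any real product): identity facets are an open set of tags plus free text, never a single mutually exclusive enum.

```python
# Hypothetical profile model illustrating "checkboxes, not radios":
# identity is an open set of tags plus free text, not one enum value.
# All names here are illustrative, not from any real system.
from dataclasses import dataclass, field

@dataclass
class Profile:
    display_name: str
    # Multi-select: a person can be both "employed" and "student",
    # both "technical" and "nontechnical", part-time and full-time.
    tags: set[str] = field(default_factory=set)
    # Free text beats any fixed vocabulary for self-description.
    self_description: str = ""

p = Profile(display_name="Sam")
p.tags |= {"part-time", "student", "technical"}
p.tags.add("caregiver")  # new facets need no schema change
print(sorted(p.tags))
```

Because tags are an open set, the system never forces a person into one box, and adding a new way to self-describe costs nothing.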
You may have noticed this isn’t necessarily specific to trans-inclusive design. That’s because this is the kind of work that, by considering folks in marginalized positions, benefits everyone. It’s the curb cut effect for accessibility AND privacy AND safety AND inclusion. By focusing our design on the margins we include everyone between them too.
Since you wrote your article in 2019, what failures do sites continue to make when it comes to trans-inclusive design?
The biggest fail I continue to see is that folks are asking for gender or sex information at all, because it is usually not needed. It usually means that this data is being brokered into a database somewhere and sold for money.
I don’t need to tell you my gender to book a hotel. Why are you asking for it?
The unnecessary collection of gender data is getting worse now that we are seeing a rollback of the progress in inclusive design made over the past few years. We’d been doing so well! The US Web Design System had a really thoughtful pattern for asking about gender that was starting to roll out to all these government forms. But now agencies are in the process of removing that inclusive pattern and replacing it with a binary option for sex.
These design-system changes are in addition to removing all references to being trans from websites, and no longer offering services or information for trans people. It’s a very literal erasure of trans identity. It’s really upsetting, scary, and for trans folks, it’s existential.
I encourage practitioners to plan ahead for the moment when you are asked to do something that you know is wrong. That day will come. What will you say? What will you say no to? What’s your red line?
What new concerns do you have with AI and do you have any advice for tech folk?
I have a lot of concerns with AI. I do think there are useful applications for the technology, but 99.99% of the applications out there are either actively predatory, passively harmful, gratuitous and mid, or all of the above. And they are all harming the environment and our health.
Garbage in, garbage out. AI is pattern recognition. And the patterns it’s trained on are filled with bias! Bias harms people who are in the minority. According to a recent study out of Stanford:
“synthetically generated texts from five of the most pervasive LMs …perpetuate harms of omission, subordination, and stereotyping for minoritized individuals with intersectional race, gender, and/or sexual orientation identities.” - Laissez-Faire Harms: Algorithmic Biases in Generative Language Models (2024)
…and this includes code. When AI is trained on design patterns or code that is widely popular, but that also includes a lot of code that’s inaccessible or unusable, the resulting code is also inaccessible or unusable. We should also be extremely wary of any AI tool that claims it can refactor a codebase written in a language that most modern coders are not using.
AI is a tool of capitalism and state violence. Generative AI is being used to consolidate, analyze, and generate information in a way that can be used to surveil, prosecute, incarcerate, and kill people.
AI is seen as a smart humanoid. People tend to believe algorithms more than each other as task complexity increases - but we also tend to view AI as human-like. We anthropomorphize AI tools by giving them human-like names or designing them as chat prompts (rather than command prompts or even search boxes), which leads us to believe that we are in fact talking with another living being rather than a computer. It also leads some folks to think that AI will become sentient. It won’t, actually - but it may as well have if enough humans believe that it has, which is perhaps worse.
AI is mid. And by that, I mean that what it produces is functionally a middle-of-the-road, average, non-“edge case” output. This flattens our differences and creates a “norm” which actually does not exist. Individual people aren’t “normal”, but AI sure likes to tell us that’s a thing, and that really harms people who are far from that norm. Saying that everyone is the same denies the fact that we are all weird as hell. It’s our differences that make us stronger, more creative, better.
Critique is painted as fear. Proponents of AI say that skeptics are “afraid” of AI or don’t understand it. I, for one, am not afraid of it - I’m frustrated by how folks are positioning it as the solution to all our problems. I do understand it! I know too much. Dismissing AI detractors as “fearful” allows proponents to dismiss valid critique outright rather than engage with it. It’s a strawman argument.
If you don’t need to use AI, don’t. Do something else. Turn off default settings that include AI. Switch your search engine to DuckDuckGo and turn off AI features. Turn off Apple intelligence. Turn off Google Gemini. Take a harm-reduction approach to your tech use. (FWIW, this is my approach to eating animal food products. I’m not vegan or even completely vegetarian, but I don’t build my food habits around animal products, which reduces how many animal products I consume.)
Don’t make AI your main thing. Charles Eames said, “Never delegate understanding.” Don’t rely on AI alone to make decisions about what’s true, certainly not for core parts of your work.
Understand the bias that ships with your LLM. Do everything you can to critically evaluate outputs for inaccessible, biased or otherwise harmful content. Right-size your models and turn down the “creativity” setting.
Advocate for sustainable, safe AI, including regulation and environmental mitigation measures. Individual choices get us down the road a piece, but what we really need is to mitigate the impacts at a high level.
Engage your discomfort. If someone critiques AI and it makes you uncomfortable, listen to understand and be open to changing your mind. Most of the folks who are warning about the harms of AI are minoritized people - Black and brown women, queer and trans people. Believe them!
Are there any questions you think researchers could help answer regarding trans-inclusive design?
This is an excellent question. Some of the things I’d ask folks to understand include…
What are ways we can design for trust and safety? How can we create digital spaces where people feel safe? What are some of the ways we can foster trustworthiness?
What would trans-informed design look like? How can we use the very concept of transness - boundary-crossing, liminality, non-binary thinking - to expand our thinking about how technologies can be used, and to what ends?
Oliver Haimson is studying this very thing, and his new book Trans Technologies is available for free, open access, from MIT Press.
How might trans-inclusive digital design change IRL service design? We’re already seeing this as part of our work in Civic Tech, moving from automation to true digital transformation. We all know that real-world constraints map to technological design choices. How then do we transform the tech stack and use that to change our very service delivery model?
On May 7, 2025, we held our fourth annual WS-DL Research Expo. We continued the same format as the prior years (2024, 2023, 2022 & 2021), with one student from each WS-DL professor giving a short overview of their research. Links to all the materials (slides, papers, software, data) are gathered in the GitHub repo, but repeated here are the links for the students and their presentations:
We were fortunate enough to welcome back some of our alumni, including: Chuck Cartledge (PhD, 2014), Gavindya Jayawardena (PhD, 2024), Mat Kelly (PhD, 2019), and Sawood Alam (PhD, 2020). We really appreciate the ongoing relationship we have with our alumni -- WSDL is for life!
If you were unable to attend, we recorded the students' presentations and have embedded the video below.
--Michael
The @WebSciDL 2025 Research Expo is happening now! @phonedude_mln initiated the session followed by faculty and alumni intros!
Collaboration is a topic of ongoing interest and need for libraries. It has long been an important area of inquiry at OCLC Research because of its fundamental role in effective library work. One participant in the RLP discussion groups that led to the report said,
“Within our professional competencies, there is. . . an ethical requirement for us to be thinking about the future. I don’t think I’d consider myself a good librarian if I wasn’t actually thinking about collaborations across boundaries.”
The Silos report, despite its punny name, delivers enduring value by offering a compelling framework for how collaborations mature. And while the report was focused on libraries, archives, and museums (LAMs), its findings and recommendations apply across many library activities.
Collaboration continuum
The Collaboration Continuum framework depicts collaborative activity across a spectrum, illustrating a gradual increase in interdependency and benefits. The framework is elegant in its simplicity, offering a simple yet compelling view of how and why collaborations flourish.
The Collaboration Continuum. Originally published in Beyond the Silos of the LAMs: Collaboration Among Libraries, Archives and Museums.
As collaborations move from left to right on this continuum, collaborative efforts require greater investments, risk-taking, and trust, while offering the potential for greater rewards for all participating partners. The initial stages (Contact, Cooperation, Coordination) are seen as additive, fostering working relationships that are layered on top of existing processes, without changes to institutional hierarchies or organizational structures. Cooperation and Coordination rely upon both informal and formal agreements between groups to achieve common goals.
But the fourth stage, Collaboration, offers “a new vision for a new way of doing things.” It involves fundamental change and transformation, which makes it a much more ambitious undertaking. Convergence represents a state where collaboration has matured to the level of infrastructure that is so ingrained that it may no longer even be recognized as a collaborative effort.
It’s not just about LAMS
While written to foster greater collaboration between libraries, archives, and museums, the Silos report is relevant to a much broader library audience. In fact, as academic libraries increasingly assume new research support responsibilities (such as research data management, ORCID adoption, and research impact services), collaboration with other campus units becomes imperative. This imperative stems from the complex research lifecycle, which spans multiple stakeholder groups where no single unit, including the library, can “own” research support. Instead, cross-unit collaboration is increasingly required, and the library must now work with unfamiliar partners such as research administration, faculty affairs, and campus communications.
The Collaboration Continuum offers a framework that can guide libraries as they develop research support capacity with campus partners in support of institutional goals. Building trust relationships is challenging in a decentralized university environment characterized by local autonomy and incessant leadership churn. More recent OCLC Research outputs, such as Social Interoperability in Research Support: Cross-Campus Partnerships and the University Research Enterprise, build upon the Silos report to offer strategies and tactics that librarians may apply to build social interoperability, “the creation and maintenance of working relationships across individuals and organizational units that promote collaboration, communication, and mutual understanding.”
Both the Silos and Social Interoperability reports inform current OCLC Research work as we observe libraries forging new partnerships with other units in the campus community. Many partnerships are ad hoc and experimental, falling in the Cooperation and Coordination sections of the Collaboration Continuum. But some collaborations are establishing more formalized operational structures, such as the University of Manchester Office of Open Research or Montana State University Research Alliance, where library expertise and capacities are combined with those of other campus units, moving these partnerships closer to the Collaboration segment of the Collaboration Continuum. These changes have implications for library strategies, organizational structures, and value proposition, which we are examining in the OCLC Research Library Beyond the Library project.
Building a pedestrian bridge in Dublin, Ohio, home to OCLC. Nheyob, CC BY-SA 4.0, via Wikimedia Commons
Collaboration catalysts
The Silos report also describes nine Collaboration Catalysts that can help partnerships flourish. This list can serve as a useful checklist for assessing readiness for moving further along the Collaboration Continuum, and the absence of catalysts can suggest project risk. I summarize these briefly here, but I encourage you to read the richer explanation and examples in the report.
Vision—A collaboration must be embedded in an overarching vision shared by all participants. This is core.
Mandate—A mandate, conveyed through strategic plans or high-level directives, can incentivize collaboration.
Incentives—Collaborations nurtured by incentive structures reward both individual and collective efforts.
Change agents—Collaborations require leadership from a trusted individual, department, or programmatic home base to provide stability and sustained stewardship.
Mooring—Collaborations thrive when they have an administrative home base from which they can operate, communicate, and incorporate their efforts into broader institutional goals. In practice, however, collaborations are often handshake agreements with individuals reporting to different units, which can threaten the partnership in a dynamic institutional environment.
Resources—Collaborations must be adequately resourced in order to succeed. This includes funding, human labor, expertise, and necessary infrastructure.
Flexibility—When professionals approach collaboration with open-mindedness, they can learn and embrace new ideas from other stakeholders.
External catalysts—Factors like peer pressure, funding requirements, and user needs can influence the decision to partner with others.
Trust—Trust is foundational to any collaborative relationship due to the resulting interdependencies.
Enduring relevance
Beyond the Silos of the LAMs is aging well and remains one of the greatest hits in the OCLC Research back catalog. The report offers timeless guidance for libraries, museums, and archives that extends to broader library audiences today.
I invite you to read the full report—available open access like all OCLC Research reports—and consider where your collaborations fall on the continuum and whether your partnerships have multiple collaboration catalysts in play, as the report suggests.
AI Nota Bene: I used AI tools to write this blog post. I found Claude to be useful as an editor and proofreader of my final draft, as I prompted it to recommend ways I could improve clarity and conciseness. I also prompted Claude to help me find a title for this essay. I incorporated many, but not all, of Claude’s suggestions.
LibraryThing is pleased to sit down this month with novelist Nancy Kricorian, whose work explores the experiences of the post-genocide Armenian diaspora. Her debut novel, Zabelle, published in 1998, has been translated into seven languages and adapted as a play. Her essays and poems have appeared in journals like The Los Angeles Review of Books Quarterly, Guernica, Parnassus, Minnesota Review, and The Mississippi Review. Kricorian has taught at Barnard, Columbia, Yale, and New York University, as well as with Teachers & Writers Collaborative in the New York City Public Schools, and she has been a mentor with We Are Not Numbers. She has been the recipient of a New York Foundation for the Arts Fellowship, a Gold Medal from the Writers Union of Armenia, and the Anahid Literary Award. Her newest book, The Burning Heart of the World, follows the story of an Armenian family caught up in the Lebanese Civil War, and was recently published by Red Hen Press. Kricorian sat down with Abigail to answer some questions about her new book.
The Burning Heart of the World was published to coincide with the fiftieth anniversary of the Lebanese Civil War and the one hundred and tenth anniversary of the Armenian Genocide, events which are central to the book’s story. How did the idea for linking these events, and the more recent trauma of 9/11 come to you? What insights can be gained from thinking about these terrible episodes of history in relation to one another?
I am interested in the way that mass trauma events inform and shape people’s life trajectories, and in the Armenian case the way that the genocide haunts families across generations. That haunting is often a silent or unspoken one, and all the more powerful for being so. In making these connections visible I hope to open spaces for repair and renewal. Sometimes going back to imagine and give shape to our forebears’ traumas is also a way of building strength to deal with our present ones.
This new book, and your work as a whole, addresses the experiences of the Armenian diaspora, of which you are a part. How has your own personal and familial history influenced your storytelling? Are there parts of The Burning Heart of the World that are based upon that history?
My first novel, Zabelle, was a fictionalized account of my grandmother’s life as a genocide survivor and immigrant bride. My next book, All the Light There Was, told the story of someone of my generation growing up in my hometown under the shadow of the unspoken familial and community experience of the Armenian genocide. All the Light There Was, which is set in Paris during World War II, went far beyond the scope of my personal and family history in a way that required extensive research, as did The Burning Heart of the World, but there are small details in both of those novels that are drawn from personal history as well as different elements of my main characters’ temperaments that are similar to mine.
Your story is told from the perspective of a young person living through these events, but chronicles their effect on multiple generations. Is this significant? Are there things that a youthful perspective allows you to do, that a more mature outlook might not?
I have had a long fascination with the bildungsroman, the novel of formation, which in its classical form is the story of the growth and character development of a young man. In college I took a course on the “female bildungsroman” in which we read The Mill on the Floss and Jane Eyre, among other texts, and learned that the novel of development for women traditionally ended in either death or marriage. In all four of my novels, I write from the point of view of girls as they make their way towards adulthood. With Vera in The Burning Heart of the World, I wanted to show the Lebanese Civil War from a young girl’s perspective as she moves through adolescence. I am interested in centering the experience of girls and women in my work, with a particular focus on the way they manage and care for their families in times of great violence.
Did you have to do any research, when writing your book? If so, what were some of the most interesting and/or memorable things you learned?
I want the reader to be immersed from the first page in the time and place I am writing about—to be able to see, smell, and hear the world that the characters inhabit. It takes deep research and knowledge to build that world, and my favorite part of that work is listening to people who lived through the time I’m writing about tell their stories. I collect anecdotes and details in the way that a magpie gathers material to build a nest. So, for The Burning Heart of the World, I read over 80 books, both fiction and non-fiction, and interviewed upwards of 40 people. I also made three trips to Beirut so that I could become familiar with the city and the neighborhood that Vera lived in.
Tell us a little bit about your writing process. Do you have a particular place you prefer to write, a specific way of mapping out your story? Does your work as a teacher influence how you yourself write?
My writing process varies from project to project. For the last two novels, I have sat cross-legged in my favorite armchair with my laptop. Sometimes I make up rules for myself—such as I have to write one page a day, or if I’m busy with other commitments, I tell myself I must write for fifteen minutes a day. If I sit down for fifteen minutes, it will often turn into an hour or two, and if it’s only fifteen minutes, the piece I’m working on will stay in the front of my mind as I’m walking the dog or going to the subway. I have not been teaching formal university classes much in the past ten years but have moved to a one-on-one mentoring model that I enjoy a great deal. The careful attention that I pay to my mentees’ writing has made me more attentive to my own.
What is next for you? Are there other books in the works that you can share with us?
I’m currently working on a series of essays about my family that I think will be a memoir in pieces. I have written one essay about my relationship to the Armenian language and my grandmother that’s called “Language Lessons,” and one about my father’s relationship to motor vehicles called “His Driving Life.” Next up is a piece about my Uncle Leo, who was an amazing character—as a teenager he was the Junior Yo-Yo Champion of New England and for many decades was a guitar player in an Irish wedding band, the only Armenian in the band but quite a rock star in Boston’s Irish community.
Tell us about your library. What’s on your own shelves?
In my study, I have shelves filled with books about Armenian history, culture, and literature. I particularly love and collect books of Armenian folk tales and proverbs. In the bedroom, we have all our novels, memoirs, and literary biographies. There is one shelf devoted to Marcel Proust, and another to Virginia Woolf. Poetry collections, photo and art books, and books about the history of New York City are in the living room.
What have you been reading lately, and what would you recommend to other readers?
I have the distinction, or dis-honor, of having all of my active federal research grants terminated by the current administration. None of the projects involved especially controversial research, but they were all funded by programs that have been effectively shut down. To add insult to injury, the termination letters each stated that our research project no longer “effectuates” the goals of the funding program and, in one case, "no longer serves the interest of the United States," which feels a bit harsh. Further, we were given no advance notice -- the terminations were effective on the same day we received notice (one of which arrived at 4:30pm on a Friday).
I apologize for the length of this post (I am a professor, after all), but I've broken things up into sections so you can skip around as desired.
Executive Summary: Academic research is essential for the advancement of technology and scientific/medical breakthroughs and is how we train the next generation of researchers. American research universities have become the envy of the world largely thanks to the support of the US federal government through the awarding of highly competitive research grants. I am greatly saddened by what the cuts already enacted, and those proposed for the future, will mean for basic research and research universities in the US.
Background on Academic Research Funding
I'm planning to share this with friends outside of academia, so here's some background on how academic research funding works.
Faculty Summer Salary. Most faculty at research universities like ODU have 9-month contracts with the university. We are expected to fund our summer salaries by obtaining research grants, most often from federal and state agencies and sometimes from private foundations or through industry contracts.
Graduate Student Stipends and Tuition. More importantly, we use research funding to provide stipends and tuition support throughout the year for the graduate students who are working with us on research. PhD students in STEM fields, like computer science, generally do not pay for graduate school themselves. They are employed as research assistants (paid by research grants) or teaching assistants (paid by state funds) and paid a relatively meager stipend with full tuition support. This not only provides critical support to advance research projects, but also provides hands-on research training that contributes to marketable skills for graduate students after graduation.
International Student Support. US students have the option to attend graduate school part-time while they work outside the university; however, international students cannot hold outside employment, so these assistantships are their only form of income while in school. Acceptance into PhD programs is highly competitive. Faculty are committing research funds and counting on the students to help advance the research projects, so we must be very selective. The international students whom we support are the best and brightest from their countries, and we hope to keep them in the US after graduation so that they can continue to contribute to America - through advancing research, developing innovative technologies, starting new businesses, or teaching the next generation.
Federal Research Funding. When a federal agency awards a research grant, those funds are then available to the principal investigator (PI) for the duration of the award period, subject to the approved budget and federal agency guidelines. It is not normal for federal agencies to terminate awarded research grant funding when there is a change in presidential administrations.
Basic Research. Many federal research agencies, and especially the National Science Foundation (NSF) and the National Institutes of Health (NIH), support basic research, which is work that may not have an immediate marketable outcome. This type of research is not likely to be performed by private companies as it will not immediately impact their bottom line. However, this basic research is foundational to scientific and medical breakthroughs, even if the long-term impact of basic research comes years after the funding. In my own research, a tool we built for archiving web pages directly from the web browser inspired the development of Webrecorder, which became the standard for high-fidelity web archiving. And my research on vehicular networks, funded 15 years ago, is continually being cited in current work on autonomous vehicles. The research that we perform and the tools we build are not meant to compete with commercial software, but are built to experiment, to figure out what might (and might not) be possible.
Indirect Costs. As you may have heard in connection with the proposed cuts to NIH indirect rates, indirect costs, aka overhead, are provided to the university performing the research and are used to pay for major facilities, administrative personnel, utilities, and many other supporting costs. Researchers are typically not allowed to request basic equipment, like computers, as part of the proposed research budget. However, we have to have these things to carry out the research. We also have to have the university infrastructure and administrative staff to hire graduate research assistants, process payments, and make sure that we're complying with the terms of our grants. All of these things are paid for through indirect costs.
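To make the arithmetic concrete, here is a deliberately simplified sketch of how an indirect rate changes the funds a university receives. The 50% "negotiated" rate below is a hypothetical example figure (actual negotiated rates vary by university), and real budgets apply the rate only to certain categories of direct costs, which is ignored here.

```python
# Simplified sketch: indirect (overhead) costs are computed as a
# percentage added on top of a grant's direct costs.
# The 50% rate is a hypothetical example; 15% is the proposed cap.

def total_award(direct_costs, indirect_rate):
    """Direct costs plus indirect costs, rounded to cents."""
    return round(direct_costs * (1 + indirect_rate), 2)

direct = 100_000  # e.g., student stipends, tuition, and travel

print(total_award(direct, 0.50))  # hypothetical negotiated rate
print(total_award(direct, 0.15))  # proposed capped rate
```

Under these example numbers, the same $100,000 of direct research costs would bring the university $15,000 instead of $50,000 to cover facilities and administration.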
Travel Support. Most research grants are allowed to fund travel to technical conferences, both inside the US and internationally. This type of travel support is essential for PhD students because conference travel is a required part of academic publishing. In computer science, if you don't pay the conference registration fees and travel to present your work, your paper will not be published in the proceedings and you can't count it as a publication on your CV. These publications are what demonstrate to potential employers that you have been performing quality research that has been deemed acceptable by your academic peers (i.e., peer-reviewed). In the "publish or perish" model, it takes not only good research, but also money, to publish. In our group, we require our traveling students to publish conference trip reports after they return, so that others who were not able to attend can at least benefit from some of the knowledge that was exchanged.
Context on Award Amounts. In my 20 years in academia, my collaborators and I have been awarded over $6.8M in research funding. Of that, $5.3M came from federal agencies (NSF, NEH, IMLS, Dept of Defense). A little over $1.4M came from private foundations, and the remaining $70k came from state or university funds. Without funding at about this level, I would not have obtained tenure or been promoted to Associate and then Full Professor. This is just what's expected of faculty at a research university.
Agencies and Programs
I am heartbroken that these federal agencies and programs have been shut down and that program managers who I've worked with over the years have essentially been fired (or, "put on administrative leave"). So, before I talk about my specific projects that were terminated, I want to tell you a little bit about these agencies and programs.
NEH. I didn't have current funding from them, but the National Endowment for the Humanities (NEH), specifically their Office of Digital Humanities, was instrumental in helping me build my research program in web archiving and train PhD students, some of whom have become university faculty themselves. The program managers at NEH were dedicated public servants who cared about the research and scholarship they were funding, and they funded some amazing projects. You can read about a bit of my work that was funded by NEH as well as summaries of those project directors' meetings at https://ws-dl.blogspot.com/search/label/NEH. I am deeply indebted to Brett Bobley, Jen Serventi, and Perry Collins from the Office of Digital Humanities for supporting my work. NEH has a website that highlights the impacts that its funding has made throughout the nation. See https://www.neh.gov/impact for an overview and https://www.neh.gov/impact/states for an interactive map to explore the work being funded in each state. NPR's article on the cuts, "Cultural groups across U.S. told that federal humanities grants are terminated", highlights the effects on libraries and museums around the country.
IMLS. The Institute of Museum and Library Services (IMLS), along with the NEH, provided grants to libraries and museums throughout the country. In 2024, IMLS awarded over $250 million to fund research, education, and preservation activities, some of which are described in this article on the impact of IMLS. While that sounds like a lot of money, it’s a tiny fraction of the US federal budget. Along with staff terminations, grants that had been awarded were terminated with little notice. IMLS funded some of my research in web archiving (most recently, our terminated National Leadership Grant, which I'll describe later), mainly used to provide stipends for my graduate student researchers. Through this, I have met amazing IMLS program officers, including Dr. Ashley Sands and Erin Barsan, who ensure that funding goes to worthwhile projects. Many IMLS staff members have degrees in library and information science and have dedicated their careers to supporting state and local library and museum services that help to educate people throughout the nation. As with NEH, you can explore the outstanding projects that IMLS funds through their interactive map at https://www.imls.gov/map.
DoD Minerva. The goal of the Department of Defense's Minerva Research Initiative (link is to the archived version of the page since the live page has been removed) was to "improve DoD's basic understanding of the social, cultural, behavioral, and political forces that shape regions of the world of strategic importance to the U.S." Dr. Nicholas Evans from UMass-Lowell wrote a great article about the importance and impact of the Minerva Initiative. Quoting from his article: "In launching the program, then-Secretary Robert Gates claimed that 'Too many mistakes have been made over the years because our government and military did not understand — or even seek to understand — the countries or cultures we were dealing with.' Minerva was designed to address the gap between operations and social science." Science reported on the cancellation of this initiative, "Pentagon abruptly ends all funding for social science research".
NSF. While I was working on this post, news came that the National Science Foundation (NSF) has stopped awarding new grants until further notice and that a 15% indirect cost cap has been implemented for new awards ("NSF stops awarding new grants and funding existing ones", "Implementation of Standard 15% Indirect Cost Rate"). I hadn't originally planned to talk about NSF, but I can easily say that without funding from this agency, I would not be a professor. During my senior year of undergrad, I was awarded an NSF Graduate Research Fellowship that paid for three years of study at the school of my choice. Because of this, I was able to attend UNC, one of the top graduate schools for computer science. Once I had been hired at ODU, I was awarded three NSF grants during my first five years. This not only allowed me to develop simulation tools to study web traffic, perform foundational research in vehicular networks, and explore how to re-purpose existing sensor networks during emergencies, but this track record of funding also paved the way for my promotion to Associate Professor with tenure in 2012.
Terminated Projects and Impact
IMLS National Leadership Grants
Technically, I had two IMLS grants that were terminated, but one was a planning grant for which we had already spent all the funds (so $0 was "saved" by terminating this award). The two grants were related, in that the planning grant allowed us to carry out a preliminary investigation that helped to frame our larger grant proposal.
Grant 1: "Saving Ads: Assessing and Improving Web Archives' Holdings of Online Advertisements", Mat Kelly (Drexel), Alex Poole (Drexel), Michele C. Weigle (ODU), Michael L. Nelson (ODU), Aug 2022 - Jul 2025 (terminated Apr 2025), IMLS National Leadership Grant/Planning LG-252362-OLS-22 (proposal PDF via IMLS), $149,479
Grant 2: "Preserving Personalized Advertisements for More Accurate Web Archives", Mat Kelly (Drexel, PI), Alex Poole (Drexel), Michele C. Weigle (ODU), Michael L. Nelson (ODU), Aug 2024 - Jul 2026 (terminated Apr 2025), IMLS National Leadership Grant LG-256695-OLS-24, $398,927.
The basis for this project was our observation that today's ads on the web are indicators of cultural significance, much like those from print media of the past (see below). However, major public web archives are failing to capture many embedded ads in their archived pages.
Our first step was to assess how well online advertisements are being archived in places like the Internet Archive's Wayback Machine. The planning grant enabled us to develop a dataset of current online advertisements and assess how well they had been or could be archived by various tools. We discovered that there were several challenges to archiving advertisements, some related to the dynamic nature of ads and some related to how online advertisements are delivered and embedded in webpages. The work that we did was relevant not only for ads, but also for similar types of dynamic elements in webpages. Our goal in the larger project was to investigate ways of saving personalized online ads, which are tailored to users based on their location, browsing history, or demographics. During the first year of the larger grant, we had continued our investigation of how well ads are currently archived and had started developing "personas" to represent different types of web users. Our plan was to use these personas to trigger the display of a diverse set of advertisements, which we could then attempt to archive with existing tools and, as needed, develop additional methods for archiving these personalized ads. Through this work, we hoped to improve archiving practices and to open up more historical digital content for researchers and the public.
Direct Impact of Termination: As noted above, the funds from the planning grant had already been spent when the termination notice was received, but we were only one year into the larger grant period. The larger grant was intended to support one PhD student at ODU for two years and one PhD student at Drexel for two years. It was also intended to support travel to research conferences to present the results of the work and a few weeks of faculty summer funding for the project PIs. Because the grant was terminated during its first year, we were only able to support one PhD student at ODU for one semester and one PhD student at Drexel for two quarters. The project faculty will not be funded this summer or next summer, and travel for PhD students to present our findings and allow our work to be published will not be supported.
DoD Minerva Research Initiative
"What's Missing? Innovating Interdisciplinary Methods for Hard-to-Reach Environments," Erika Frydenlund (ODU VMASC), Jose Padilla (ODU VMASC), Michele C. Weigle (ODU), Jennifer Fish (ODU), Michael L. Nelson (ODU), Michaela Hynie (York University, Canada), Hanne Haaland (Univ of Agder, Norway), Hege Wallevik (Univ of Agder, Norway), Katherine Palacio-Salgar (Universidad del Norte, Colombia), Jul 2022 - Jul 2025 (terminated Feb 2025), DoD Minerva Research Initiative, $1,618,699.
We were excited to be invited to join this interdisciplinary and international collaboration to study residents' perceptions of safety and security in hard-to-reach areas. This grant was particularly competitive: 400 white paper proposals were submitted, 42 teams were invited to submit full proposals, and only 15 projects were ultimately funded. Our study sites were two informal settlements, Khayelitsha Site-C near Cape Town, South Africa, and Villa Caracas, Barranquilla in Colombia. The overall goal of the project was to explore the limitations and potential knowledge gaps when only certain methodological or epistemological approaches are feasible in such settings. Each research team used a different methodology to carry out its study: visual sociology, institutional ethnography, citizen science, surveys, and web/social media analysis. In addition, another team performed a meta-analysis to study how the interdisciplinary teams collaborated. Our part of the project was to use public data sources, such as worldwide news databases and social media, to learn about the sites. We hope to still be able to produce a tech report describing our findings.
Direct Impact of Termination: This was a large interdisciplinary, multi-institution grant, so I can only speak to the impact of the termination on my research team. For us, since this grant was cancelled only a few months before its original end date, we were able to support one graduate student for the three-year period of the grant and to fund faculty summer stipends for each of the summers. The main impact of the termination was the loss of our student's funding for this summer and the loss of travel support that would have enabled all of the project partners to meet at our project wrap-up workshop.
See "Trust and Influence Program Review Meeting 2024 Trip Report", my PhD student's trip report summarizing her experience presenting our group's work at the Trust and Influence Program Review Meeting. Presumably, most of the other research projects described there have also been terminated.