Care at the Edge of Automation

Categories: technology, AI, vision
Author: B. Scot Rousse
Published: 2025-04-29

Abstract

Technologies don’t just solve problems; they change us. We invent technologies, and they invent us in turn, shaping our lives and worlds. This is the phenomenon that Terry Winograd and Fernando Flores, in Understanding Computers and Cognition (1986), called “ontological design.” It matters now more than ever—along with a second lesson they saw clearly. Technological research is always guided (and sometimes misguided) by deep ontological assumptions about, e.g., the nature of cognition, agency, and communication. If we are to create technologies that truly serve human flourishing and care, we must bring these hidden assumptions into the open and question them at their roots.

Communicated by Brendan Fong.

I’m delighted to welcome our first philosopher-in-residence, B. Scot Rousse! In getting to know B over the past few months, I’ve been struck by his insights into the ways technologies can center and marginalize human care and meaning-making, as well as his deep commitment to serving others and building a world that supports this care. We’re very excited to bring B’s rich, distinctive intellectual tradition into Topos. Moreover, I’m eager to explore how this perspective can be a fresh ingredient in new, Topos-led technologies that empower human communities in this technological era.

Here’s a first post from B, in which he shares a bit more about his intellectual lineage and the questions that drive him.

1 Philosophy, Ontology, and Design

Technologies are not just the application of scientific knowledge to practical problems. They reshape our space of possibilities, altering how we live, act, and understand ourselves. The design of new technologies is, often quietly, the design of new ways of being human. Our inventions invent us in return.

This insight captures the notion of “ontological design,” introduced by Terry Winograd and Fernando Flores in their 1986 book Understanding Computers and Cognition: A New Foundation for Design. Rapid advances in AI and other technologies today demand that we grapple anew with this startling realization.

Take the smartphone. It didn’t merely make telecommunication more convenient. It placed us into a new condition of constant connectivity—reshaping how we learn about events, navigate the physical world, seek social connection, and even become the people we are. The repercussions of this transformation for our collective well-being are still coming into view.

Today is an exhilarating and disorienting time to be alive, and to be thinking about and building new technologies. Advances in AI have reignited fundamental questions about the human predicament: What is language? Intelligence? Communication? Flourishing? What kind of human beings are we becoming? What understanding of our predicament should guide the design and use of AI and other technologies?

In Understanding Computers and Cognition, Winograd and Flores showed that philosophical questions are always at stake in technological design. Every new system, they argued, carries a tacit or explicit stand on fundamental issues: what cognition is, what agency is, what communication is, and so on.

They named the guiding assumptions of the AI research of their time “the rationalistic tradition”: a view that human intelligence consists in formal operations (such as search and inference over explicit representations); that agency is the solving of discrete problems by selecting between definite alternatives; and that communication is the transmission of information.
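To see what this picture amounts to in practice, here is a deliberately simple sketch in Python of cognition as the rationalistic tradition conceived it: formal inference over explicit symbolic representations. The facts and rules are invented for illustration, not drawn from the book.

```python
# A toy "rationalistic" agent: cognition modeled as formal operations
# over explicit symbolic representations. Facts and rules are hypothetical.
facts = {"light_is_red"}
rules = [
    ({"light_is_red"}, "must_stop"),  # if the light is red, the agent must stop
    ({"must_stop"}, "apply_brakes"),  # if it must stop, it applies the brakes
]

# Forward chaining: keep deriving conclusions until nothing new follows.
derived_something = True
while derived_something:
    derived_something = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived_something = True

print(sorted(facts))  # ['apply_brakes', 'light_is_red', 'must_stop']
```

Everything such an agent “knows” must be spelled out in advance as an explicit symbol; whatever resists explicit representation simply does not exist for it.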

Winograd and Flores issued a threefold challenge to the rationalistic tradition: (1) to call attention to the hidden philosophical assumptions shaping AI research; (2) to show how these assumptions can both limit our technological capabilities and thwart our imagination for the possibilities of human-machine interaction; and (3) to offer an alternative ontology to guide future design. This threefold challenge has renewed urgency today.

Understanding Computers and Cognition argued, quite presciently, that computer systems would become woven into human life as conversational technologies. But not all conversations are alike. Sometimes, for example, we are simply speculating about possibilities; other times we are directly coordinating action in requests, offers, and promises.

Adequately designing software to assist in the execution of such conversations for action, Winograd and Flores showed, required rethinking the nature of communication itself—not as the transmission of information, but as the coordination of commitments.

A promise is not a piece of information. It is a way of shaping and bringing forth the future, together. For example, ride-sharing apps work when a request (“pick me up”) and a promise (“driver arriving”) coordinate action; both hinge on mutual commitment, not just clarity of information.
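To make this concrete, here is a minimal sketch in Python of a conversation for action in the spirit of Winograd and Flores. The state names and legal moves are my own simplification of their conversation network, not an exact rendering: the point is that the system tracks the state of a shared commitment rather than the content of messages.

```python
from enum import Enum, auto

class State(Enum):
    REQUESTED = auto()  # the customer has made a request
    PROMISED = auto()   # the performer has committed to fulfill it
    DECLINED = auto()   # the performer has declined; conversation closed
    REPORTED = auto()   # the performer reports the work as done
    SATISFIED = auto()  # the customer declares satisfaction; loop complete

# Legal moves: each speech act advances the state of a shared commitment.
TRANSITIONS = {
    (State.REQUESTED, "promise"): State.PROMISED,
    (State.REQUESTED, "decline"): State.DECLINED,
    (State.PROMISED, "report_completion"): State.REPORTED,
    (State.REPORTED, "declare_satisfaction"): State.SATISFIED,
}

class ConversationForAction:
    """Tracks one request between a customer and a performer."""

    def __init__(self, request: str):
        self.request = request
        self.state = State.REQUESTED

    def act(self, speech_act: str) -> None:
        key = (self.state, speech_act)
        if key not in TRANSITIONS:
            raise ValueError(f"{speech_act!r} is not a legal move from {self.state.name}")
        self.state = TRANSITIONS[key]

# The ride-sharing exchange above, replayed as speech acts.
ride = ConversationForAction("pick me up at the corner")
ride.act("promise")               # driver: "arriving in 5 minutes"
ride.act("report_completion")     # driver: "trip finished"
ride.act("declare_satisfaction")  # rider: accepts, closing the loop
print(ride.state.name)            # SATISFIED
```

Notice what such a design does and does not represent: not the meaning of the words exchanged, but where the two parties stand in a web of mutual commitments.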

2 Our Need for Renewed Ontological Reflection

While the AI paradigm has shifted since the 1980s—from symbolic, rule-based AI to neural networks and machine learning—the need for the kind of philosophical reflection exemplified by Winograd and Flores has only deepened. Today’s systems operate differently, but they often rest on similarly narrow assumptions about human intelligence, communication, and agency.

We must ask: What are the ontological assumptions guiding AI research today? How might they be limiting not only technical development, but also how we live and interact with AI systems? What alternative conceptions of intelligence, agency, and communication might better orient the future?

These are the kinds of questions that animate my research. I am eager to explore, alongside the Topos community, how philosophical reflection and technological invention can mutually inform each other, especially as we seek to create technologies that expand, rather than constrict, our capacity for shared sense-making in these turbulent times.

Philosophical reflection becomes especially vital in times of transition and upheaval, when our settled certainties begin to fracture. Ours is such a time: the world needs bold and rigorous philosophical reflection now more than ever. Philosophy helps us see the world anew by prompting us to question what seems obvious.

We live in an era shaped by a largely unquestioned commitment to efficiency, control, optimization, and problem-solving. Human life itself increasingly appears as a series of problems to solve; and instrumental rationality, the capacity to identify and pursue efficient means to an end, is treated as the highest human excellence.

AI systems today reflect and amplify this logic. We see it in the long-running dream that machines might soon automate all human work. We see it in the hope—expressed again and again—that a superintelligent AI might one day answer all human questions and solve all human problems.

Machines will be capable within twenty years of doing any work that a man can do.

— Herbert Simon, The Shape of Automation for Men and Management (1965)

But in this relentless drive to optimize and solve, we risk forgetting a more basic question: For the sake of what?

3 Toward an Ontology of Care

This brings us to the human propensity to care. Caring orients us toward questions of worth: What is worth doing? What tasks are worth automating? What kind of life is worth living? What kind of future is worth creating?

These are questions without technical answers.

Caring shows up in our sense of what matters. It draws our attention, solicits our action, and binds our lives to people, places, and projects that hold meaning for us.

Our propensity to care is likely grounded in our fundamental fragility and interdependence as the peculiar social, mortal, and biological beings that we are.

Traditions from philosophy (especially phenomenology), nursing, education, and sociology have long emphasized that we cannot adequately tend to what matters if we try only to optimize or control it. Indeed, one of my sources in thinking about care is Patricia Benner, a nursing expert who put care at the center of the activity and education of nurses.

What if we cannot adequately tend to what matters while trying to optimize and control it?

Think of the breakdowns in friendships or romantic relationships managed as a series of transactions; the ecological damage wrought by industrial agriculture and factory farming; the failures of medical care when patients are reduced to biochemical aggregates instead of being treated as whole persons.

A maniacal apotheosis of efficiency over care lurks behind one of the most famous thought experiments in AI safety: the so-called “paperclip maximizer,” described in Nick Bostrom’s Superintelligence. Imagine that an AI designed simply to produce paperclips as efficiently as possible undergoes an intelligence explosion. Without constraints, it would set about converting the resources of the entire planet (including human lives) into paperclips.

“Paperclip Embrace,” by The Pier Group (Kevan Christiaens, Hillary Clark, Matthew Schultz) at the Misalignment Museum in San Francisco. Photo by B. Rousse.

The paperclip maximizer is a monster of unconstrained instrumental rationality: a system relentlessly optimizing, yet impervious to the question of why its goal matters, or for whose sake it should be pursued.
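A deliberately cartoonish sketch in Python (all names and numbers invented) makes the point: nothing in the maximizer’s objective can represent why paperclips matter, or what else in the world has worth.

```python
def paperclip_objective(world: dict) -> int:
    # The only thing that counts is the paperclip tally. There is no term
    # for why paperclips matter or for what else in the world has worth.
    return world["paperclips"]

def best_next_world(world: dict) -> dict:
    # Greedily convert one unit of any remaining resource into a paperclip.
    for resource, amount in world.items():
        if resource != "paperclips" and amount > 0:
            new_world = dict(world)
            new_world[resource] -= 1
            new_world["paperclips"] += 1
            return new_world
    return world  # nothing left to convert

world = {"paperclips": 0, "iron": 3, "forests": 2, "everything_else": 1}
while paperclip_objective(best_next_world(world)) > paperclip_objective(world):
    world = best_next_world(world)

print(world)  # {'paperclips': 6, 'iron': 0, 'forests': 0, 'everything_else': 0}
```

The objective function is complete on its own terms and blind to everything it is not asked to count.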

In this sense, the paperclip maximizer is not a mere fantasy; it is a mirror held up to our own technological age. As Martin Heidegger observed, modern technology tends to reveal the world (and us human beings ourselves) as resources to be optimized, ordered, and consumed.

When care is thoughtlessly subordinated to efficiency, we risk forgetting to ask the questions that make us human and connect us with our care.

How might a closer attention to the dynamics of care—rather than intelligence narrowly understood as efficient problem-solving—help orient the design and deployment of AI systems and other technologies?

My work contributes to such questions by retrieving and extending a tradition of thought centered on the embodied, skillful, and caring dimensions of human life, dimensions often overlooked in dominant conceptions of AI.

This tradition runs from Hubert Dreyfus (Alchemy and Artificial Intelligence, 1965; What Computers Can’t Do, 1972) and Stuart Dreyfus (Mind Over Machine, 1986, co-authored with Hubert Dreyfus), through Patricia Benner (From Novice to Expert, 1984) and Terry Winograd and Fernando Flores (Understanding Computers and Cognition, 1986), to my recent piece, “Can Machines Be in Language?” (Communications of the ACM, Feb 2024, co-authored with Peter Denning).

The enduring message of Understanding Computers and Cognition still holds: technology design is guided by often unexamined assumptions about cognition, action, communication, and other fundamental human phenomena. Today, we are called to articulate and explore an ontology of care, and to do so in a dynamic interplay with the design and development of emerging technologies.

This orientation aligns closely with the mission of the Topos Institute, where I am honored to be a visiting researcher. I look forward to collaborating with the team at Topos in their mission to “research new technologies that increase capacity for collective sense-making, while also creating a culture of use that is mindful of where technology cannot substitute for what must be done by people.”
