“Can I get an interview?” “Can I get a job when I graduate?”

Those questions came from students during a candid discussion about artificial intelligence, capturing the anxiety many young people feel today. As companies adopt AI-driven interview screeners, restructure their workforces, and redirect billions of dollars toward AI infrastructure, students are increasingly unsure of what the future of work will look like.

We had gathered people together at a coffee shop in Auburn, Alabama, for what we called an AI Café. The event was designed to confront concerns about AI directly, demystifying the technology while pushing back against the growing narrative of technological doom.

AI is reshaping society at breathtaking speed. Yet the trajectory of this transformation is being charted primarily by for-profit tech companies, whose priorities revolve around market dominance rather than public welfare. Many people feel that AI is something being done to them rather than developed with them.

As computer science and liberal arts faculty at Auburn University, we believe there is another path forward: one where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.

The AI Café Model

Last November, we ran two public AI Cafés in Auburn. These were informal, 90-minute conversations between faculty, students, and community members about their experiences with AI. In these conversational forums, participants sat in clusters, questions flowed in multiple directions, and lived experience carried as much weight as technical expertise. We avoided jargon and resisted attempts to “correct” misconceptions, welcoming whatever emotions emerged.

One ground rule proved crucial: keeping discussions in the present, asking participants where they encounter AI today. Without that focus, conversations could easily drift to sci-fi speculation.
Historical analogies—to the printing press, electricity, and smartphones—helped people contextualize their reactions. And we found that without shared definitions of AI, people talked past each other; we learned to ask participants to name specific tools they were concerned about.

[Photo: Organizers Xaq Frohlich, Cheryl Seals, and Joan Harrell (right) held their first AI Café in a welcoming coffee shop and bookstore. Credit: Well Red]

Most importantly, we approached these events not as experts enlightening the masses, but as community members navigating complex change together.

What We Learned by Listening

Participants arrived with significant frustration. They felt that commercial interests were driving AI development “without consideration of public needs,” as one attendee put it. This echoed deeper anxieties about technology, from social media algorithms that amplify division to devices that profit from “engagement” and replace meaningful face-to-face connection.

People aren’t simply “afraid of AI.” They’re weary of a pattern where powerful technologies reshape their lives while they have little say. Yet when given space to voice concerns without dismissal, something shifted. Participants didn’t want to stop AI development; they wanted to have a voice in it.

When we asked, “What would a human-centered AI future look like?” the conversation became constructive. People articulated priorities: fairness over efficiency, creativity over automation, dignity over convenience, community over individualism.

[Photo: The three organizers, all professors at Alabama’s Auburn University, say that including people from the liberal arts brought new perspectives to the discussions about AI. Credit: Well Red]

For us as organizers, the experience was transformative. Hearing how AI affected people’s work, their children’s education, and their trust in information prompted us to consider dimensions we hadn’t fully grasped. Perhaps most striking was the gratitude participants expressed for being heard.
It wasn’t about filling knowledge deficits; it was about mutual learning. The trust generated created a spillover effect, renewing faith that AI could serve the public interest if shaped through inclusive processes.

How to Start Your Own AI Café

The “deficit model” of science communication—where experts transmit knowledge to an uninformed public—has been discredited. Public resistance to emerging technologies reflects legitimate concerns about values, risks, and who controls decision-making. Our events point toward a better model. We urge engineering and liberal arts departments, professional societies, and community organizations worldwide to organize dialogues similar to our AI Cafés.

We found that a few simple design choices made these conversations far more productive. Informal and welcoming spaces such as coffee shops, libraries, and community centers helped participants feel comfortable (and serving food and drinks helped too!). Starting with small-group discussions, where people talked with neighbors,