Artificial Intelligence (AI)

Beyond the Hype: How We Built AI Tools That Actually Support Learning

November 12, 2025
by Weiying Li. What does genuine partnership look like when building AI for education? Working with middle school teachers and computer scientists, we co-designed AI dialogs in which teachers help refine what the AI recognizes as valuable thinking. Through iterative refinement, teachers identified precursor ideas and observations that predicted future learning, and they shaped the guidance the dialog offers. Our AI dialog sees learning the way teachers do, built through genuine collaboration in which model development, learning sciences theory, and teachers' classroom expertise work together from the start, not just at the end.

Lance Santana

Consulting Drop-In Hours: By appointment only

Consulting Areas: APIs, ArcGIS Desktop - Online or Pro, Bayesian Methods, Cluster Analysis, Data Visualization, Databases and SQL, Excel, Git or GitHub, Java, Machine Learning, Means Tests, Natural Language Processing (NLP), Python, Qualtrics, R, Regression Analysis, Research Planning, RStudio, Software Output Interpretation, SQL, Survey Design, Survey Sampling, Tableau, Text Analysis

Quick-tip: the fastest way to speak to a consultant is to first ...

Forecasting Social Outcomes with Deep Neural Networks

October 7, 2025
by Paige Park. Our capacity to accurately predict social outcomes is increasing. Deep neural networks and artificial intelligence are crucial technologies pushing this progress along. As these tools reshape how social prediction is done, social scientists should feel comfortable engaging with them and meaningfully contributing to the conversation. But many social scientists are still unfamiliar with and sometimes even skeptical of deep learning. This tutorial is designed to help close that knowledge gap. We’ll walk step-by-step through training a simple neural network for a social prediction task: forecasting population-level mortality rates.
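To make the tutorial's starting point concrete, here is a minimal sketch of training a small feedforward network on a continuous target, written in PyTorch. The random tensors standing in for features and mortality rates, the layer sizes, and the training settings are illustrative assumptions, not the tutorial's actual data or code.

```python
# A minimal sketch (not the tutorial's code) of training a small
# feedforward network to predict a continuous rate from tabular features.
import torch
import torch.nn as nn

# Placeholder data: 1,000 observations with 8 covariates and a continuous
# target standing in for a (log) mortality rate; replace with real data.
X = torch.randn(1000, 8)
y = torch.randn(1000, 1)

model = nn.Sequential(
    nn.Linear(8, 32),
    nn.ReLU(),
    nn.Linear(32, 16),
    nn.ReLU(),
    nn.Linear(16, 1),  # single continuous output: predicted rate
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    optimizer.zero_grad()
    pred = model(X)
    loss = loss_fn(pred, y)  # mean squared error on the rate target
    loss.backward()
    optimizer.step()

print(f"final training MSE: {loss.item():.4f}")
```

In the tutorial itself, the placeholder tensors would be replaced with prepared population-level features and observed mortality rates, with a held-out split used to check predictive accuracy.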

Predicting the Future: Harnessing the Power of Probabilistic Judgements Through Forecasting Tournaments

April 29, 2025
by Christian Caballero. From the threat of nuclear war to rogue superintelligent AI to future pandemics and climate catastrophes, the world faces risks that are both urgent and deeply uncertain. These risks are where traditional data-driven models fall short—there’s often no historical precedent, no baseline data, and no clear way to simulate a future world. In cases like this, how can we anticipate the future? Forecasting tournaments offer one answer, harnessing the wisdom of crowds to generate probabilistic estimates of uncertain future events. By incentivizing accuracy through structured competition and deliberation, these tournaments have produced aggregate predictions of future events that outperform well-calibrated statistical models and teams of experts. As they continue to develop and expand into more domains, they also raise urgent questions about bias, access, and whose knowledge gets to shape our collective sensemaking of the future.
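As a rough illustration of the aggregation step, the sketch below averages a handful of hypothetical probability forecasts, applies a common "extremizing" adjustment, and scores the result with the Brier score. The forecaster probabilities, the extremizing exponent, and the question's outcome are made-up placeholders, not results from any tournament.

```python
# A minimal sketch of aggregating crowd forecasts and scoring them.
import numpy as np

# Hypothetical probabilities from five forecasters for one binary question.
forecasts = np.array([0.60, 0.72, 0.55, 0.80, 0.65])

# Simple aggregate: the mean probability.
mean_forecast = forecasts.mean()

# A common refinement: "extremizing" pushes the aggregate away from 0.5,
# compensating for the way averaging pulls forecasts toward 0.5.
a = 2.0  # extremizing exponent (an assumed tuning choice)
odds = (mean_forecast / (1 - mean_forecast)) ** a
extremized = odds / (1 + odds)

# Brier score: squared error between forecast and outcome (0 or 1); lower is better.
outcome = 1  # hypothetical resolution of the question
brier = lambda p: (p - outcome) ** 2

print(f"mean forecast      : {mean_forecast:.3f}  Brier {brier(mean_forecast):.3f}")
print(f"extremized forecast: {extremized:.3f}  Brier {brier(extremized):.3f}")
```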

Navigating AI Tools in Open Source Contributions: A Guide to Authentic Development

December 17, 2024
by Sahiba Chopra. The rise of ChatGPT has transformed how developers approach their work - but it might be hurting your reputation in the open-source community. While AI can supercharge your productivity, knowing when not to use it is just as crucial as knowing how to use it effectively. This guide reveals the unspoken rules of AI usage in open source, helping you navigate the fine line between leveraging AI and maintaining authenticity. Learn when to embrace AI tools and when to rely on your own expertise, plus get practical tips for building trust in the open-source community.

Sharing Just Enough: The Magic Behind Gaining Privacy while Preserving Utility

April 15, 2025
by Sohail Khan. Netflix knows what you like, but does it need to know your politics too? We often face a frustrating choice: share our data and be tracked, or protect our privacy and lose personalization. But what if there was a third option? This article begins by introducing the concept of the privacy-utility trade-off, then explores the methods behind strategic data distortion, a technique that lets you subtly tweak your data to block sensitive inferences (like political views) while still maintaining useful recommendations. Finally, it looks ahead and advocates for a future where users, not platforms, shape the rules, reclaiming control of their own privacy.
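The article's own distortion method isn't reproduced here, but randomized response is a classic example of the same trade-off: each individual report is noisy enough to hide a sensitive attribute, yet the population-level rate (the "utility") can still be recovered. The attribute prevalence and truth-telling probability below are assumptions for illustration.

```python
# Randomized response: a classic illustration of the privacy-utility
# trade-off, not the specific distortion method the article describes.
import random

def randomized_response(true_value: bool, p_truth: float = 0.75) -> bool:
    """Report the true sensitive bit with probability p_truth,
    otherwise report a uniformly random bit."""
    if random.random() < p_truth:
        return true_value
    return random.random() < 0.5

# Hypothetical population: 30% hold the sensitive attribute.
random.seed(0)
population = [random.random() < 0.30 for _ in range(100_000)]
reports = [randomized_response(v) for v in population]

# Utility: the true rate can still be estimated from the noisy reports,
# even though no single report reveals an individual's attribute.
p_truth = 0.75
observed = sum(reports) / len(reports)
estimated = (observed - (1 - p_truth) * 0.5) / p_truth
print(f"observed noisy rate: {observed:.3f}")
print(f"de-biased estimate : {estimated:.3f}  (true rate 0.300)")
```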

Demystifying AI

May 5, 2025, 2:30pm
In this workshop, we provide a basic and relatively non-technical introduction to the foundational concepts underlying contemporary AI tools. First, we’ll cover the fundamentals of AI, Machine Learning, and Neural Networks/Deep Learning. Then, we’ll examine the capabilities and limitations of contemporary AI tools such as ChatGPT, Claude, and Perplexity, and outline best practices for the use of such tools.

LLMs for Exploratory Research

March 20, 2025, 10:00am
In a fast-evolving artificial intelligence landscape, large language models (LLMs) such as GPT have become a common buzzword. In the research community, their advantages and pitfalls are hotly debated. In this workshop, we will explore different chatbots powered by LLMs, beyond just ChatGPT. Our main goal will be to understand how LLMs can be used by researchers to conduct early-stage (or exploratory) research. Throughout the workshop, we will discuss best practices for prompt engineering and heuristics to evaluate the suitability of an LLM's output for our research purposes. Though the workshop primarily focuses on early-stage research, we will briefly discuss the use cases of LLMs in later stages of research, such as data analysis and writing.
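As a flavor of the kind of prompt structure the workshop discusses, here is a small sketch that sends an exploratory-research prompt to a chat-style LLM API via the openai Python client. The model name, the prompt wording, and the evaluation questions in the comments are assumptions for illustration, not workshop materials.

```python
# A sketch of an exploratory-research prompt sent to a chat-style LLM API.
# Assumes the `openai` Python client and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I am beginning exploratory research on remote work and urban housing "
    "markets. List 5 candidate research questions, and for each note one "
    "dataset type that could address it and one likely confounder."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever you use
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)

# Example heuristics for judging whether the output is usable
# (the workshop's own checklist may differ):
# - Can each claim be traced to a verifiable source?
# - Do the suggested datasets actually exist and are they accessible?
# - Does the answer change substantially when the prompt is rephrased?
```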

Claudia von Vacano, Ph.D.

Founding Executive Director, P.I., Research Director, FSRDC

Dr. Claudia von Vacano is the Founding Executive Director and Senior Research Associate of D-Lab and Digital Humanities at Berkeley and is on the boards of the Social Science Matrix and Berkeley Center for New Media. She has worked in policy and educational administration since 2000, and at the UC Office of the President and UC Berkeley since 2008. She received a Master’s degree from Stanford University in Learning, Design, and Technology. Her doctorate is in Policy, Organizations, Measurement, and Evaluation from UC Berkeley. Her expertise is in organizational theory and...

The Creation of Bad Students: AI Detection for Non-Native English Speakers

January 21, 2025
by Valeria Ramírez Castañeda. This blog explores how AI detection tools in academia perpetuate surveillance and punishment, disproportionately penalizing non-native English speakers (NNES). It critiques the rigid, culturally biased notions of originality and intellectual property, highlighting how NNES rely on AI to navigate the dominance of English in academic settings. Current educational practices often label AI use as dishonest, ignoring its potential to reduce global inequities. The post argues for a shift from punitive measures toward integrating AI as a tool for inclusivity, fostering diverse perspectives. By embracing AI, academia can prioritize collaboration and creativity over control and discipline.