Dive into the Future of AI with the LLM Working Group at D-Lab
For ten years, D-Lab has offered workshops and consulting in data-intensive social science, recognizing that today’s challenges require data-driven solutions.
Programming languages like Python and R have become vital tools in data analysis. In the past decade, we have taught thousands of learners to use these languages to automate repetitive tasks, scrape data from the web, and perform complex analyses, all within a single environment. For social scientists, this has opened up new possibilities for research: from measuring hate speech on social media to exploring racial biases in medical literature.
Our ten-year anniversary coincides with a remarkable shift in data science. We see up close how more and more of our learners are using increasingly capable Large Language Models like ChatGPT to accelerate and improve their research process. We applaud this democratization of data science as a potential force for social good. At the same time, outsourcing analysis to these tools means that digital literacy is no longer a luxury; it's a necessity.
Because LLMs like ChatGPT perform data analyses at vast scales, understanding the statistics and methodological choices underpinning these models becomes even more critical. LLMs can analyze large datasets quickly, but they can also amplify existing biases in the data or introduce new ones. As educators, we also face new questions about the use of these technologies in the classroom.
To explore these issues further with the Berkeley community, this semester we are launching the LLM Working Group: a community founded to facilitate conversations about Large Language Models (LLMs) and Generative AI (GenAI) within academia—from teaching, to researching LLMs, to using LLMs as part of the research toolkit.
This 4-part series will provide fundamental knowledge of LLMs, and generate conversation about the promises and challenges of LLMs in different facets of academic work. Sessions will be interactive, encouraging participants to share their experiences, pose questions, and collaboratively explore the challenges and potential of these technologies in their respective fields.
In our first session, Generative AI and the Digital Humanities, Tim Tangherlini, Greg Niemeyer, and Lisa Wymore will pose questions about the future of LLMs in the context of the Digital Humanities. We will discuss how LLMs may reshape Digital Humanities research, as well as their role in producing creative work (literature, video, music, and so on) and the concomitant issues of ownership, creativity, and originality that come with this production.
Other questions we will be addressing in the months to come include:
Who has access to AI tools, and whose futures are determined by them?
Is AI fundamentally derivative, or does it mirror the human experience?
Will LLMs fundamentally alter the value we place on memorizing knowledge and learning?
What kinds of approaches and methods can we try when using LLMs for research?
We encourage Berkeley community members to participate, regardless of their experience level with LLMs and GenAI. The LLM Working Group is a welcoming and supportive community for all.