Large Language Models (LLMs)

Human-Centered Design for Migrant Rights

October 29, 2024
by Victoria Hollingshead. In honor of the 2024 International Day of Care and Support, Victoria Hollingshead shares her recent work with the Center for Migrant Advocacy’s Direct Assistance Program and their innovative approach to supporting Overseas Filipino Workers (OFWs) using generative AI. OFWs, especially female domestic workers in the Gulf Cooperation Council (GCC), are vulnerable to exploitation by foreign employers and recruitment agencies while having limited access to legal support. Using a design thinking framework, Victoria and CMA’s Direct Assistance team co-designed a proof of concept to enhance legal and contract literacy among OFWs in the Kingdom of Saudi Arabia, a top destination country. The project shows promise in leveraging emerging technologies to empower OFWs, enhancing the Philippines' reputation as a migrant champion and supporting the nation's broader push for digital transformation.

Andrea Lukas

UTech Manager
Computer Science
Data Science

Hi everyone! I'm Andrea Lukas, a 3rd-year student majoring in Computer Science and Data Science at UC Berkeley. I'm passionate about UI/UX design and AI-centered human-computer interaction, and I'm actively involved in Computational Cognition research using Large Language Models (LLMs). As the UTech Manager at D-Lab, I'm excited to contribute to the team by optimizing operations and fostering collaboration.

Outside of my academic and professional work, I’m an active member of Berkeley's Dance Community, where I participate in various teams. I also enjoy discovering new matcha spots and...

Tom van Nuenen, Ph.D.

Data/Research Scientist, Senior Consultant, and Senior Instructor
D-Lab
Social Sciences
Digital Humanities

I work as a Lecturer, Data Scientist, and Senior Consultant at UC Berkeley's D-Lab. I lead the curriculum design for D-Lab’s data science workshop portfolio, as well as the Digital Humanities Summer Program at Berkeley.

Former research projects include a Research Associate position in the ‘Discovering and Attesting Digital Discrimination’ project at King’s College London (2019-2022) and a researcher-in-residence role for the UK’s National Research Centre on Privacy, Harm Reduction, and Adversarial Influence Online (2022). My research uses Natural Language Processing methods to
...

Leveraging Large Language Models for Analyzing Judicial Disparities in China

October 8, 2024
by Nanqin Ying. This study analyzes over 50 million judicial decisions from China’s Supreme People’s Court to examine disparities in legal representation and their impact on sentencing across provinces. Focusing on 290,000 drug-related cases, it employs large language models to differentiate between private attorneys and public defenders and to assess their sentencing outcomes. The methodology combines advanced text processing with statistical analysis, using clustering to categorize cases by province and representation, and regression models to isolate the effect of legal representation from factors such as drug quantity and regional policies. Findings reveal significant regional disparities in legal access driven by economic conditions, highlighting the need for reforms in China’s legal aid system to ensure equitable representation for marginalized groups and to promote transparent judicial data for systemic improvements.
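To make the described pipeline concrete, here is a minimal, hypothetical sketch of the two steps the summary mentions: using an LLM to label the type of legal representation in a case text, then fitting a regression that separates the representation effect from drug quantity and province. The column names, prompt, model choice, and file name are illustrative assumptions, not the study's actual code.

```python
# Hypothetical sketch: LLM labeling + regression with province controls.
# Assumes a CSV with columns: case_text, sentence_months, drug_grams, province.
import pandas as pd
import statsmodels.formula.api as smf
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def classify_representation(case_text: str) -> str:
    """Ask an LLM whether the case mentions a private attorney,
    a public defender, or no counsel."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer with exactly one label: private, public, none."},
            {"role": "user", "content": case_text[:4000]},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()


df = pd.read_csv("drug_cases.csv")  # hypothetical file
df["representation"] = df["case_text"].map(classify_representation)

# OLS with province fixed effects, isolating the representation effect
# from drug quantity and regional differences.
model = smf.ols(
    "sentence_months ~ C(representation) + drug_grams + C(province)",
    data=df,
).fit()
print(model.summary())
```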

LLM Working Group (March 2024)

March 18, 2024, 1:00pm
Teaching with LLMs: Emily Hellmich, Genevieve Smith, and Cheryl Berg will lead a dialogue on the potential of LLMs to reshape educational landscapes. The session will address challenges such as AI literacy, academic integrity, biases, hallucinations, and privacy issues, as well as opportunities such as accessibility and democratization.

LLM Working Group (February 2024)

February 26, 2024, 1:00pm
Generative AI and the Digital Humanities: Tim Tangherlini, Greg Niemeyer, and Lisa Wymore will share their experiences and pose questions about the future of LLMs in the context of the Digital Humanities. We will discuss DH research using LLMs, as well as the role of LLMs in producing creative work (literature, video, music, and so on) and the concomitant issues of ownership, creativity, and originality that come with this production.

GPT Fundamentals

April 17, 2024, 3:00pm
This workshop offers a general introduction to GPT (Generative Pretrained Transformer) models. We will explore how these models reflect and shape our cultural narratives and social interactions, as well as the drawbacks and constraints they have.

LLM Working Group (April 2024)

April 22, 2024, 1:00pm
Understanding LLMs: Tarun Gogineni, a member of the technical staff at OpenAI, will discuss state-of-the-art research on the inner workings and output of LLMs. Tarun works with John Schulman and Liam Fedus on RL and ChatGPT and is a core contributor to GPT-4 in the area of model creativity. Tarun is joined by Zainab Hossainzadeh, a linguist at Meta who currently works on LLMs.

LLM Working Group (May 2024)

May 6, 2024, 1:00pm
Researching with LLMs: Douglas Guilbeault and Chris Soria will delve into the use of LLMs as part of the researcher's toolkit. We will discuss the use of APIs, prompt engineering, and other techniques to integrate LLMs into research.
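As a taste of the prompt-engineering techniques this kind of session covers, here is a small, hypothetical example of a reusable few-shot template with a fixed answer format, so that responses can be parsed programmatically. The coding scheme and example passages are invented for illustration only.

```python
# Hypothetical few-shot prompt template for a research coding task.
FEW_SHOT = [
    ("The bill expands access to rural clinics.", "health"),
    ("The measure raises tariffs on imported steel.", "trade"),
]


def build_prompt(text: str) -> str:
    """Assemble a few-shot classification prompt with a strict output format."""
    lines = ["Label each passage with one topic word."]
    for passage, label in FEW_SHOT:
        lines.append(f"Passage: {passage}\nTopic: {label}")
    lines.append(f"Passage: {text}\nTopic:")
    return "\n\n".join(lines)


print(build_prompt("The ruling clarifies sentencing guidelines for drug offenses."))
```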

Conceptual Mirrors: Reflecting on LLMs' Interpretations of Ideas

April 23, 2024
by María Martín López. As large language models begin to ingrain themselves in our daily lives, we must leverage cognitive psychology to explore the understanding that these algorithms have of our world and the people they interact with. LLMs give us new insights into how conceptual representations are formed, given the limitations of the data modalities they have access to. Is language enough for these models to conceptualize the world? If so, what conceptualizations do they have of us?