“What you need to understand about Artificial Intelligence and Holocaust remembrance is that they are bound together by trust and understanding,” said IHRA Chair Lord Eric Pickles at the opening of the UK IHRA Presidency conference on AI.

“We don’t trust AI and AI doesn’t understand us.”  

There was laughter from the audience – but there was truth in the quip, too. The conference on AI in the Holocaust education, remembrance, and research sector, held in London on 1 December 2024, brought together experts from media studies, sociology, and Holocaust studies to unpack that truth and explore the challenges and opportunities AI presents for the future of Holocaust memory.

The fear: deepfakes, disinformation, and chatbots

Dr Victoria Grace Richardson-Walden of the Landecker Digital Memory Lab opened the session by encouraging participants to beware of the hype around AI. She outlined the various types of AI that are already woven into our daily lives – from Netflix recommendations to SatNav to email spam filters – and underlined that the technology can be used in both positive and negative ways.

Danny Morris from the Community Security Trust shared information about the guardrails that are in place to stop AI being used to generate inaccurate or offensive material related to the Holocaust. He also explained how bad actors are using descriptive prompts to bypass safety features. Morris stressed that antisemitism is not a new phenomenon, but that AI provides new ways to express that hatred. The audience was stunned into silence by an AI-generated video based on Mein Kampf, examples of chatbot discussions with Nazi perpetrators, and photos of a young Adolf Hitler with his arm draped around Anne Frank.

Noah Kravitz, creator of the NVIDIA AI Podcast, also underlined the danger of disinformation and stated that “AI has the potential to supercharge and transform anything that humans do. We cannot mitigate all harms. Bad actors will always look for ways to circumvent.”  

Dr Robert Williams speaks at the conference at Lancaster House. Photo: Grainge Photography Ltd ©

Turning towards more technical dangers, Dr Richardson-Walden explained that AI can only draw from the information it is trained on. This means that if AI models only have access to inaccurate sources or limited narratives, they will produce flawed outputs or reproduce the same well-known stories or facts over and over again. These information loops amplify some narratives while eroding the breadth and depth of the history of the Holocaust. The mass digitization of records and their integration into AI systems can go some way towards protecting the record of the Holocaust from erasure or distortion.

During a panel discussion moderated by IHRA delegate Martin Winstone, concerns were also raised over the unethical practices of commercial AI companies, which use low-paid manual labour to tag and moderate content, as well as over the environmental impact of AI.

Dr Victoria Grace Richardson-Walden speaks at the conference at Lancaster House. Photo: Grainge Photography Ltd ©

The opportunities: cataloguing, education, and uncovering victim stories

But there were hopeful stories too. Dr Yael Richler Friedman, Pedagogical Director of Yad Vashem’s International Institute for Holocaust Education, explained how AI had helped Yad Vashem identify the names of more than 400 previously unknown victims of the Holocaust. However, she also told an anecdote about how AI had mistaken the very common word “li” – which means ‘me’ in Hebrew – for a family name, drawing the conclusion that there were hundreds of additional victims with this ‘surname’. The story made a strong case for the need for human review and oversight in any AI-powered project. Shiran Mlamdovsky Somech of Generative AI for Good presented an AI-created telling of the Warsaw Ghetto Uprising.

Dr Robert Williams, Finci-Viterbi Executive Director of USC Shoah Foundation, spoke about how his organization is training Large Language Models to catalogue testimonies and carry out real-time analysis of content for moderation. He also highlighted that AI can be used to translate testimonies into multiple languages, facilitating access for people all over the world.

Clementine Smith of the Holocaust Education Trust spoke about their 360 Testimony project, which has two components: first, students engage with USC Shoah Foundation testimonies in which authentic pre-recorded answers are matched to real-time student questions by AI; then, a Virtual Reality headset transports learners to the present-day locations where the survivors’ stories unfolded. With only a few hundred thousand Holocaust survivors still with us worldwide, Smith stressed the value of seeking innovative, thoughtful ways to use technology to keep their memory alive for future generations.

IHRA delegate Martin Winstone moderates a panel discussion with Dr Yael Richler Friedman, Dr Rik Smit, and Dr Samuel Merrill at the conference. Photo: Grainge Photography Ltd ©

The future: approaching AI in partnership

Throughout the conference, there were calls for governments to improve the digital literacy of educators, students, and researchers to allow them to critically engage with AI tools and recognize AI-generated misinformation. 

Dr Rik Smit of the University of Groningen noted that part of the solution is transparency and ethics in design, and he advocated for AI companies to consult experts during the design process of generative AI tools. Dr Smit encouraged us to ask ourselves: what problem is this tool aiming to solve? And who is benefitting from it? He also cautioned that regulation is not always the answer, explaining that: “Big Tech loves regulation. It is a PR strategy: we are so big you need to regulate us.”

Dr Samuel Merrill from Umeå University’s Department of Sociology and Centre for Digital Social Research encouraged Holocaust experts not to shy away from partnership with experts outside their usual fields: “We are not going to learn each other’s languages without speaking to each other. There is a strong argument for getting computer scientists at the table with us,” he said.

Participants at the AI conference at Lancaster House in London. Photo: Grainge Photography Ltd ©

As the conference drew to a close, Advisor to the IHRA, Dr Robert Williams, painted a hopeful picture: “Imagine a student in a remote corner of the world where there is limited access to formal education. They have a computer and internet access. With the power of AI, they can learn about the complexities of the Holocaust and Jewish life in rich, imaginative, interactive ways. They can have access to authentic sources. They can watch testimonies in their own language. They can use chatbots or virtual learning assistants to provide personalized education in ways that were previously unimaginable.” 

“But if we continue to work in isolation,” he cautioned, “the battle is lost.”

The IHRA, with its unique network of experts and governments, was identified as an organization that could act as a convener for these different groups, developing guidance and strengthening government commitments to broad-scale digitization efforts.  

Though the speakers at the conference came from diverse fields, their message was unified: we cannot solve a global challenge by working within local limits and inside our own subject-matter bubbles. Historians, computer scientists, policymakers, and educators need to work together if we want to see AI applications that respect the complexities of Holocaust memory.