Hannah Claus wants to ensure that language technologies are accessible, fair and culturally sensitive across diverse communities worldwide.
We cannot just rely on the developers to build something that is ethical and of benefit to all of humanity... We need to include way more people in the process and to build a more AI-literate society which understands the risks and limitations of AI as well as its possibilities.
Hannah Claus
When Hannah Claus [2024] studied computer science at school, she soon realised that she was in a room full of white boys, looking at posters of white men. “I could not see myself in that,” she says. “I realised there were no role models to follow and that I had to become that myself. There are so many amazing black female scientists, but they don’t get credit because they question the system. That realisation inspired me to question every system I see, to ask why it is not built for me and see how we can rebuild it so people like me can be included.”
She’s starting with language and with connecting AI and robotics to the humanities, so that they talk to each other rather than existing in two separate worlds. For Hannah, many of the problems that we see or will see with AI, in particular the way it can exclude certain groups, stem from a lack of diversity – not just the exclusion of certain groups of people, but of certain ways of thinking and certain disciplines.
She wants to be the bridge between different cultures – both human and disciplinary – and she hopes that her PhD in Computation, Cognition and Language, which she starts in the autumn, will contribute to the development of responsible and inclusive AI systems, ensuring that language technologies are accessible, fair and culturally sensitive across diverse communities worldwide.
Language is an important basis of culture, and Hannah has been fascinated by languages for many years. She grew up speaking three – Amharic, German and English – and there are over 80 languages in Ethiopia, where her mother is from. Yet most AI language models are built on European languages, she says, and ChatGPT doesn’t work well in many others, particularly those written in a different script. Unless AI models become more inclusive, many parts of the world could be left behind, and Hannah is determined to do something about that.
Childhood
Hannah was born in Hamburg and grew up in Berlin; her father is German and her mother is an Ethiopian artist who specialises in free art, making sculptures and paintings out of many different materials. Hannah, who has a younger brother and travelled to Ethiopia frequently as a child, grew up in her mother’s art studio and was fascinated by the process of turning raw materials into art.
While she says she did not inherit her mother’s artistic genes, she attributes her interest in robots and machine design and in trying new things to her time spent observing her mother at work with wires and metal in the studio.
Around the age of 13 Hannah had to choose between Japanese, Latin, drama and computer science. She went for computer science, becoming the only girl in her class and the only person of colour. She says it was very isolating and she felt expectations for her were low. So she was determined to study hard and to prove she could excel at coding. At 15 she started going to computer science lectures at two different universities as part of a scholarship programme and continued to do so until she started her undergraduate studies. “It was as if someone had lit a flame. A whole world that I did not know existed came to light. I had the freedom to build new worlds and algorithms. I wanted to learn more,” she says.
What fascinated her was the speed of change and the amount of unexplored space there was in the subject. “It was like what sailors may have felt 500 years ago,” she says. “There is still so much to see and so much responsibility to build AI in a better way so it includes everyone.”
Undergraduate degree
Hannah realised over time that she didn’t want to study computer science and repeat what she already knew. So she chose AI and Robotics at the University of Bedfordshire, one of the few universities to offer the course at undergraduate level, which meant she could build new AI models and robots. She began her studies in 2019, a term before the Covid lockdown.
She had a bursary from the university to work during her degree at the German Aerospace Center (DLR), which works with the European Space Agency, so she travelled between the UK and Germany for a large part of the course once travel was possible. There she worked on developing an AI algorithm for a humanoid robot, Rollin’ Justin. The robot has human features such as a head, arms and a torso, and is designed to go to Mars, where it will be remotely controlled. Hannah’s work focused on programming the robot’s neural networks to distinguish between normal contact and a collision, such as being struck by a rock.
After finishing her degree, Hannah started her master’s in AI and Natural Language Processing at Queen Mary University of London. Her focus was on the languages used in computer vision, the field that extracts high-dimensional visual data from the real world in order to produce numerical or symbolic information. She started questioning the role of the languages used in robotics. Halfway through her course ChatGPT came out, and suddenly everyone started talking about languages and AI.
Hannah finished her master’s last summer, completing her thesis on an algorithm that sorts news stories to surface positive ones. She then started working at the Ada Lovelace Institute, where she focuses on the impact of AI on society. “It’s so important as so many of the algorithms we use are biased. It’s vital that we look at the datasets we are using. If they are based on the same kinds of people, we need to ask who is being excluded,” she says.
Hannah found the job through inclusive networks such as We and AI – an AI-literacy initiative for social inclusion – Women in AI and Black Women in AI, which raise awareness about the wider implications of bias. Hannah found there was a lot she could learn from the humanities about how governments, regulators and people deal with AI. She saw the need to reflect the vast richness of humanity in AI and to include information from as many different cultural and human perspectives as possible. “We cannot just rely on the developers to build something that is ethical and of benefit to all of humanity,” she says. “Developers are experts in their field, but we need to include way more people in the process and to build a more AI-literate society which understands the risks and limitations of AI as well as its possibilities.”
PhD
For her PhD in Computation, Cognition and Language, Hannah realised she needed to choose a topic that she is passionate about and that would remain sustainable over the next few years, given how fast AI is changing.
She says it was ‘mindblowing’ when she realised the language deficits of the ChatGPT model and how they restrict the amount and quality of information available to those who do not speak European languages. They also limit the research that is flagged and increase inequality between regions, making it harder for those in the Global South to keep up with advances in technology.
Hannah knows this first-hand, as she runs workshops on AI and AI bias with students in Ethiopia, where war has interrupted knowledge creation. In this she is following in the footsteps of her mother, who led workshops with next-generation artists at universities in Ethiopia.
She says her workshops are very much a two-way teaching process as she learns from the students’ different perspectives. “Every time I give talks they are so engaged and want to learn so much. They are eager to apply that to solve the problems that they see,” she says, citing medical problems as an example. “There’s a real sense of purpose. They want to learn about tools they can build and apply in their everyday lives because no-one else is building them for them,” says Hannah. “So many countries and cultures are ignored by AI.”