The use case will consider the possible ethical risks and impacts of Kari, an AI-based virtual agent (chatbot) developed for local government bodies in Norway, including, among others, the risks arising from its task of discriminating several thousand intents from users' free-text input.
The chatbot Kari is an AI-based virtual agent developed for local government bodies – the local municipalities – to answer citizens' questions about municipality services. It is currently available to the citizens of 80 Norwegian local municipalities – covering about 30% of the Norwegian population – and is in the process of being implemented in other European countries, including Sweden, Finland and Denmark. The chatbot's knowledge base contains over 6000 intents (answers to municipality questions), making it one of the largest in the world.
The chatbot Kari uses natural language to interact with users in a manner that resembles human-human dialogue. This creates situations where the user may reveal highly sensitive personal data, requiring the implementation of secure data management and anonymization procedures. Further, Kari regularly receives questions about personal problems and potential self-harm, such as loneliness, depression and anxiety. This is a new set of questions that requires new quality assurance services and policies to handle the information correctly and ethically. While Kari may ultimately represent an opportunity for the municipality to lower the threshold for accessing social and health services and to identify underlying or untreated needs in the population, it may also create social and ethical risks if implemented without sufficient sensitivity to these issues. This case study will thus leverage the ETAPAS framework to assess, among others, the possible risks arising from the chatbot's need to discriminate several thousand intents (i.e. answers to municipality questions) from users' free-text input.
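To make the risk concrete, the sketch below illustrates the general shape of intent discrimination from free text: a query is compared against example phrasings for each intent, and low-confidence matches are rejected rather than force-matched. This is a minimal, purely illustrative toy (bag-of-words cosine similarity over a hypothetical three-intent catalogue), not Kari's actual implementation; the intent names, example phrasings and threshold are all invented for illustration. It shows why the fallback case is ethically significant: a question about loneliness should be escalated to a human or a dedicated policy, not mapped to the nearest municipal service answer.

```python
from collections import Counter
import math

# Hypothetical toy intent catalogue. A real deployment like Kari's
# holds several thousand intents, each mapped to a curated answer.
INTENTS = {
    "garbage_collection": "when is garbage collected in my area",
    "school_enrollment": "how do i enroll my child in school",
    "building_permit": "how do i apply for a building permit",
}


def _vectorize(text):
    """Bag-of-words term counts for a lowercase, whitespace-split text."""
    return Counter(text.lower().split())


def _cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def classify(query, threshold=0.2):
    """Return the best-matching intent, or None if confidence is too low.

    The None branch is where quality-assurance policies attach:
    sensitive free-text input (e.g. about self-harm) that matches no
    service intent should be escalated, not answered mechanically.
    """
    qv = _vectorize(query)
    best_intent, best_score = None, 0.0
    for intent, example in INTENTS.items():
        score = _cosine(qv, _vectorize(example))
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None
```

For example, `classify("when is my garbage collected")` matches the garbage-collection intent, while `classify("I feel very lonely lately")` shares almost no vocabulary with any service intent and falls below the threshold, returning `None`.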