Agenda 2030 Graduate School blog

Lund University Agenda 2030 Graduate School is a global, cutting-edge research school and collaboration platform for issues related to societal challenges, sustainability and the 2030 Agenda. The 17 PhD students from all faculties at Lund University enrolled with the Agenda 2030 Graduate School relate their specific research topics to the Sustainable Development Goals. In this blog the PhD students of the Graduate School discuss topical research and societal issues related to the 2030 Agenda.

Should my financial adviser be a robot?

Robot looking into the camera. Photo.
Photo by Alex Knight on Unsplash.

Posted on 26 May 2021 by Juan Ocampo (Department of Business Administration).

The views expressed in this publication are those of the author and do not necessarily represent those of the Agenda 2030 Graduate School or Lund University. The present document is being issued without formal editing.

This post is part of a blog post series on AI and sustainability.

Artificial Intelligence in the field of Financial Inclusion

As we have been discussing throughout this series of posts on AI and sustainable development, algorithms are increasingly recognised as an important element of our society, and the financial world is no different. My research focuses on financial inclusion, where AI has mainly been applied to enhance communication and risk assessment. In this introductory post I will outline the general characteristics of these applications and then briefly explain how the machine learning behind chatbots works. In future posts I will cover the other applications, so stay in the loop! Let's start by summarising the three main use cases for financial inclusion: risk measurement, fraud detection, and customer service.

Usually a bank will offer you a loan based on your financial history, but what happens to people who have been left out of the financial system? How do they build a financial record if they were never allowed to start one in the first place? This is a challenge both for the lenders (usually banks) and for the people trying to get a loan. To address it, AI is being used to assess people's creditworthiness with alternative data, that is, information beyond conventional credit data such as your payment history or banking behaviour. Alternative data can include information from your transactions on e-platforms (e.g. Amazon), your emails, or even your movements (i.e. GPS location data)[1],[2]. All these data points are usually collected from your phone, mobile apps, and internet behaviour, and you give consent to their use when accepting the terms and conditions of the services. Scary as it sounds, the idea is that from these data points companies build credit-scoring models that estimate your likely repayment behaviour and suggest whether to approve a loan and for what amount.
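To make this tangible, here is a toy sketch of what an alternative-data scoring rule could look like. All feature names, weights, and the approval threshold are hypothetical, invented purely for illustration; real credit-scoring models are trained on large datasets rather than built from hand-picked weights.

```python
# Toy alternative-data credit scoring sketch. Feature names and weights are
# invented for illustration; real models learn these from data.

def credit_score(applicant: dict) -> float:
    """Combine hypothetical alternative-data signals into a 0-1 score."""
    # Each feature is assumed to be normalised to [0, 1] by the caller.
    weights = {
        "ecommerce_repayment_rate": 0.4,  # e.g. paid-on-time ratio on a platform
        "mobile_topup_regularity": 0.3,   # how regularly the phone is topped up
        "location_stability": 0.3,        # e.g. a consistent home/work GPS pattern
    }
    return sum(weights[k] * applicant.get(k, 0.0) for k in weights)

def loan_decision(applicant: dict, threshold: float = 0.6) -> dict:
    """Suggest approval when the score clears a (hypothetical) threshold."""
    score = credit_score(applicant)
    return {"score": round(score, 2), "approved": score >= threshold}

applicant = {
    "ecommerce_repayment_rate": 0.9,
    "mobile_topup_regularity": 0.8,
    "location_stability": 0.5,
}
print(loan_decision(applicant))  # → {'score': 0.75, 'approved': True}
```

Even this toy version makes the ethical stakes visible: whoever chooses the features and weights decides who gets a loan.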

A second use of AI is to automate fraud detection, which is related to a term known as Know-Your-Customer (KYC). In broad terms, these applications try to prevent money laundering and identity theft. Through them, financial institutions try to ensure that the information given when, for example, opening a bank account is true[3], for instance by checking different databases that store personal information, or by analysing your social media behaviour in an attempt to confirm your “trustworthiness” for future business transactions[4].
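A minimal, purely illustrative sketch of such an automated check might look like the following. It is rule-based rather than ML-based, and the watchlist and field names are made up; real KYC systems query official registries and sanctions databases.

```python
# Rule-based KYC sketch. The watchlist and field names are hypothetical.

SANCTIONS_WATCHLIST = {"jane roe", "john smith"}  # stand-in for a real sanctions list

def kyc_check(application: dict) -> dict:
    """Flag an account application for manual review if basic checks fail."""
    flags = []
    name = application["name"].strip().lower()
    # Check the declared name against a (hypothetical) sanctions watchlist.
    if name in SANCTIONS_WATCHLIST:
        flags.append("watchlist_match")
    # Cross-check the declared ID number against an assumed 10-digit format.
    if len(application.get("id_number", "")) != 10:
        flags.append("invalid_id_format")
    # A mismatch between declared and observed country of residence is suspicious.
    if application.get("declared_country") != application.get("ip_country"):
        flags.append("country_mismatch")
    return {"passed": not flags, "flags": flags}

print(kyc_check({"name": "Ada Lovelace", "id_number": "1234567890",
                 "declared_country": "SE", "ip_country": "SE"}))
# → {'passed': True, 'flags': []}
```

ML-based versions replace these hand-written rules with models trained on past fraud cases, but the basic shape, signals in, a pass/flag decision out, is the same.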

AI as financial advisor

A third way in which AI is being used as a financial inclusion tool is customer service. Chatbots can enhance customer support, provide financial advice, or even suggest financial products to users based on their financial behaviour. But how are chatbots relevant for financial inclusion? I have been struggling with that question. Even though there are many products for handling customer calls and helping customers to “self-serve”[5], the impact on people is rather functional, in the sense that they can be served faster and more cheaply (for the company, at least); and, as you might have experienced, chatbots do not always offer the best experience (yet). It remains questionable how much they improve people's lives or solve societal problems.

But we need to make proposals if we want to reclaim technology for good, and in this post I will mention some opportunities for using AI for an equal and resilient economy. Before I do, and following our initial post on more transparent and ethical AI[6], I think it is important to look ‘under the hood’ of chatbots and make them more transparent and accessible. The following section is a high-level overview of how the algorithms in chatbots work; it serves the purpose of demystifying algorithms and helping us think about how to make them work for people.

Introducing the chatbot

In this post I will explain customer service chatbots. In many industries, communicating with customers is key to providing a good experience, and chatbots are (sometimes) good and cheap solutions for handling customers. As in many machine learning applications, these chatbots are built on large text databases, in this case dialogues between people. To build a chatbot, the first step is to prepare the data, which basically means separating sentences (or words) and then linking them so that meaning can be ascribed. Most often this is a table with one column for the question or context and one column for the answer[7]. Then we need to train the machine learning model to accomplish the task, which for a chatbot can be either to respond in a ‘new’ way (i.e. generative) or to respond with one of the previously stored answers (i.e. selective)[8]. In the generative case, the model creates a response that is not already defined in the database, so it is basically (trying) to be intelligent. In the selective case, the machine identifies the meaning of the question and looks for the stored answer that best suits it. In both cases, the model can use a process called sequence-to-sequence[9],[10],[11]. This process (developed at Google) encodes the previous inputs (the encoder), iterates over potential answers (the decoder), and selects the answer that minimises the algorithm's loss function.
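To make the selective case concrete, here is a bare-bones sketch of my own (not taken from any real product): a tiny Q&A table and a scorer that picks the stored answer whose question shares the most words with the user's input. Real systems use learned sequence-to-sequence models over far larger dialogue datasets, but the table-plus-matcher structure is the same.

```python
# Minimal *selective* chatbot: it never generates text, it only selects
# the stored answer whose question best overlaps the user's input.
import re

# A toy stand-in for the question/answer table described above.
QA_TABLE = [
    ("what is an interest rate",
     "The price a lender charges for a loan, usually per year."),
    ("how do i open an account",
     "You need a valid ID and proof of address."),
    ("what is a credit score",
     "A number summarising how likely you are to repay a loan."),
]

def tokenize(text: str) -> set:
    """Lowercase the text and split it into a set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def answer(user_input: str) -> str:
    """Select the stored answer whose question shares the most words with the input."""
    words = tokenize(user_input)
    best_question, best_answer = max(
        QA_TABLE, key=lambda qa: len(words & tokenize(qa[0]))
    )
    return best_answer

print(answer("what's a credit score?"))
# → A number summarising how likely you are to repay a loan.
```

Word overlap is a crude stand-in for the encoder, and picking the maximum a crude stand-in for the decoder's loss minimisation; but it shows why a selective bot can only ever be as good, and as biased, as its table.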

For the sake of explanation I will build on two examples from daily life, so bear with me. Imagine you are playing with your dog (in this case the algorithm). You have trained him before, and he knows that if he makes you happy he will get a treat. The dog has been trained on a set of predefined instructions (i.e. the encoder) in which a certain order means a specific response. So you say “sit”, and based on his previous training he decides to sit (i.e. the decoder) and gets a treat! This is an example of the selective case, in which the algorithm gives a response based on predefined commands. Now imagine you are babysitting your beautiful niece. She is learning to talk, so she makes sounds to communicate with you. This is not the first time you have been with her, so you have kept a list of the sounds she makes while you are doing certain actions. In this case, you are the algorithm and the list of sounds and actions is the training dataset. Today, with a smile on her face, she says “bo” “gal”! You look at your list and see that the last time she said “bo” you were eating peas and carrots, and for “gal” you were eating mashed apples. Based on that information you decide to give her mashed carrots, and she starts to cry! This is a failure case of a generative response based on previous data. Maybe she wanted mashed peas or whole apples; as you see, generative algorithms are more complex, since the responses are not pre-set and more context is needed. Even though some would agree that keeping a baby happy is a benefit to humanity, it is not the type of purpose we want to discuss.

Chatbots with a purpose

Far from rigorous, the purpose of the previous explanation was to demystify AI. The more people understand that AI systems are algorithms developed by human beings with errors, biases, and interests, the quicker we can reclaim power over them. I want to stress the importance of the dataset that informs a chatbot. These robots need to be trained by people with an interest in the wellbeing of the user, and not only in the profit of the company that is ‘hiring’ the chatbot.

That said, I believe a meaningful and constructive application of chatbots for financial inclusion lies in basic financial education. Chatbots can be programmed as financial literacy tools through which people can ask questions and learn skills relevant to financial inclusion, for example financial terminology or basic product education. Imagine an application in which uneducated users could ask a chatbot about the legality of products they are being offered, or get decision-making support on questions they do not feel capable of dealing with, such as interest rate calculations, loan conditions, or the risks involved in loans. In other words, let's make chatbots trustworthy financial advisors!

There are three main benefits of this type of financial advisor that come to my mind:

  1. It balances information access between lenders and borrowers. People should be able to make informed decisions based on transparent, accessible and educational sources.
  2. It allows policymakers and regulators to identify opportunities to develop financial education policies based on the needs people express through the chatbots. In other words, policy built from the bottom up.
  3. It informs EdTech companies about potential educational needs and uncovers opportunities for developing products with a positive societal impact.
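As a rough sketch of the decision support imagined above, a literacy chatbot could answer “what will this loan actually cost me?” using the standard annuity formula. The loan terms in the example are invented, and a real advisor would of course need to handle fees, currencies and local regulation.

```python
# Sketch of a loan-cost explainer for a financial-literacy chatbot.
# Uses the standard annuity formula with monthly compounding.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Fixed monthly payment: P * r / (1 - (1 + r)^-n), with r the monthly rate."""
    r = annual_rate / 12
    if r == 0:
        return principal / months  # interest-free loan: just split the principal
    return principal * r / (1 - (1 + r) ** -months)

def explain_loan(principal: float, annual_rate: float, months: int) -> str:
    """Turn the raw numbers into the plain-language answer a chatbot would give."""
    pay = monthly_payment(principal, annual_rate, months)
    total = pay * months
    return (f"You would pay {pay:.2f} per month for {months} months, "
            f"{total:.2f} in total, of which {total - principal:.2f} is interest.")

# Example: a 10,000 loan at 24% annual interest over 12 months.
print(explain_loan(10_000, 0.24, 12))
```

Making the interest cost explicit in one sentence is exactly the kind of balancing of information between lender and borrower that point 1 above calls for.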

I end this post with some reflections on how AI is being used in the world of financial inclusion. At the moment, it seems that AI applications are solving the problems of businesses: they are made for financial institutions, with a major focus on increasing profit and decreasing risk. But perhaps it is important to find a balance, and ways in which users can also benefit from algorithms. How could AI benefit the user side? How can AI be used to incentivise productive development? If you have any examples or thoughts, please get in touch!

By the way, if you are interested in AI and its impacts on our society, I will be moderating a panel discussion on AI and the Future of Work[12]. We will discuss how current and future generations should prepare for a world where algorithms and humans become co-workers! Don't miss out, and register via this link[13].

[1] See for more information on alternative data:

[2] For more info on credit scoring:

[3] For more about AI and Financial services.

[4] Nir Kshetri (2021) The Role of Artificial Intelligence in Promoting Financial Inclusion in Developing Countries, Journal of Global Information Technology Management, 24:1, 1-6, DOI: 10.1080/1097198X.2021.1871273

[5] Some examples:

[6] Initial post of the series:

[7] Vinyals and Le (2015) A Neural Conversational Model.


[9] More on sequence-to-sequence: Sutskever, Vinyals and Le (2014) Sequence to Sequence Learning with Neural Networks.

[10] More on sequence-to-sequence: Vinyals and Le (2015) A Neural Conversational Model.

[11] More on sequence-to-sequence:

[12] More info on the event:

[13] Register to the event:
