
Danish banks put the brakes on the use of ChatGPT: “As a bank, we have a huge task ahead of us”

The new generative AI technologies like ChatGPT have crept into several financial institutions at lightning speed. Many banks have imposed restrictions, while also working to introduce even more artificial intelligence in their business processes.

3. Jul 2023

Jyske Bank, Nordea and Danske Bank have told Finansforbundet that they have taken action or introduced restrictions to minimise the risk of ChatGPT and similar AI tools being used inappropriately.

All three major banks have thus joined a general trend, in the financial sector and in several other Danish and international organisations, of restricting the use of the new publicly available AI tools.
The largest US bank, JPMorgan Chase, decided as early as February to limit their employees’ use of ChatGPT. Major banks such as Citigroup, Goldman Sachs, Bank of America and Deutsche Bank have since followed suit.

The three data centres Bankdata, BEC and SDC also report to Finansforbundet that they take the security risks very seriously and are working on solutions and guidelines.

All of this comes after repeated security concerns and the revelation that employees at Samsung had accidentally leaked confidential data to the American company OpenAI, which runs the popular ChatGPT.

“Common sense is a good guideline”

Peter Schleidt, Managing Director at Jyske Bank, confirms to Finansforbundet that the bank has restricted the use of ChatGPT. He also says that, generally, employees should be very careful when they access various online tools.

“The most important value at Jyske Bank is common sense. When you know the pitfalls of these tools, common sense is a good guideline for what you should and shouldn’t do”, he says.

Peter Schleidt also confirms that one of the biggest challenges of the rapid development within artificial intelligence will be the intensified security pressure.

“It is scary to think about the new threats of financial crime, cyber attacks and war that these new technologies can bring along. That is the biggest drawback I see right now,” he says and continues:
“It calls for further opportunities for us to collaborate in the sector and across authorities and financial organisations.”

Despite many concerns, he admits that the new technology has many exciting perspectives.
He is especially fond of ChatGPT for private purposes.

“I most recently used it to prioritise a list of white wines for me to taste when I was in Portugal during my Easter vacation. I understand that ChatGPT has passed the highest level of the sommelier exam. It definitely has perspectives,” he says.

“I am impressed, though it also makes mistakes,” he adds.

Peter Schleidt, Managing Director at Jyske Bank since 2017 and former CTO and CIO at Danske Bank. He was one of the first to spot the potential of MobilePay. He holds a Master’s in Civil Engineering from DTU (1989), where he was also taught artificial intelligence.

A huge task to manage the risks

At Nordea, the risks inherent in the tools are taken seriously enough that access to them has been limited for the time being. The restrictions will not be lifted before a solution to the challenges has been found, says Mattias Fras, Group Head of AI Hub at Nordea.

“We are currently working on a solution that will enable Nordea employees to use ChatGPT and other generative AI services in a secure and compliant way in accordance with our guidelines. Until the new solution is in place, access to ChatGPT on the OpenAI website and similar services is restricted”.

Nordea’s AI Hub was established in 2021 and works to increase the business value of artificial intelligence. Mattias Fras explains that the department follows AI developments closely and generally considers generative AI services to have great potential to improve work processes at Nordea.

He does emphasise, however, that many risks need to be handled in relation to security, norms, ethical standards and legislation, including upcoming EU rules.

“We have a huge task ahead of us as a bank in terms of handling these risks responsibly”, he replies and continues:

“It is important to balance the benefits and risks while ensuring that we have efficient controls in place. With the introduction of generative AI comes new and high demands.”

Technical control and concrete guidance

Danske Bank reports to Finansforbundet that new technologies, including generative AI tools such as ChatGPT, are central to the bank’s new Forward ‘28 strategy. Like other approaches, this strategy also factors in the risks related to using these tools.

Danske Bank says that a combination of technical controls and concrete guidance on how to use and not use generative AI tools is to enable its employees to use the tools ‘in a safe manner’.

“It gives our employees the opportunity to benefit from the use of these and other digital tools while we reduce the risks to an acceptable level,” reports Media Relations at Danske Bank.

Finansforbundet has also received a short reply from Ringkjøbing Landbobank, a smaller bank, saying that so far, no specific guidelines for ChatGPT use have been issued.

Mattias Fras, Group Head of AI Hub at Nordea.

Alarm bells ringing

Despite concerns and emerging restrictions, it seems that many have started using the tools for work. In a survey completed by Finansforbundet on LinkedIn, 22 per cent of nearly 1,500 respondents reported having used artificial intelligence like ChatGPT in their jobs.

But with strict legislation and stringent compliance requirements in the financial sector, you need to pay attention to what you use services like ChatGPT for.

ChatGPT is an advanced language model developed by the American company OpenAI. Trained on a wide range of sources, it can answer questions and formulate text based on a few text-based prompts.

But when you write a command in ChatGPT or many of the other publicly available tools, you are transferring information and data that could be disclosed to others. That is a serious matter if it concerns personal or confidential information.
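The point above can be made concrete: a prompt typed into a tool like ChatGPT leaves your machine as an ordinary request payload to the provider’s servers. A minimal sketch, with a payload shape loosely modelled on OpenAI’s public chat API (the function name and default model are illustrative assumptions, and no request is actually sent here):

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Build the JSON body that would be POSTed to the provider's API.

    Everything placed in `prompt` leaves your machine verbatim --
    which is why confidential or personal data should never go in it.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)
```

Whatever text the employee types ends up, unchanged, inside that transmitted body, which is the crux of the banks’ concern.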

In the case of ChatGPT, the matter may be even more serious for European businesses because the owner, OpenAI, is based in the USA, which in principle makes it a case of third-country disclosure, subject to even more stringent requirements.

Two partners at Plesner have explained to Finansforbundet how employees should go about it, and not least what the tools should not be used for. 

How to tackle artificial intelligence in the financial sector?

A new AI act is in the process of being adopted at EU level, but it will probably not take effect for another two to three years.

Likewise, the Danish financial sector is devising good data ethics practices for AI use.

This year in May, the Danish Financial Supervisory Authority submitted a draft practice document for consultation. Several organisations, including Finansforbundet, have submitted responses.

Artificial intelligence replaces jobs

Whereas several organisations have restricted the use of the freely available AI tools, there is nothing to stop the development of and investments in artificial intelligence.

Several of the biggest financial institutions are working on AI tools that may revolutionise some areas of the way we work.

Goldman Sachs has previously issued a report estimating that 300 million full-time jobs worldwide could be automated to some degree as a result of the latest AI wave.

The Danish Employers' Association for the Financial Sector (FA) has surveyed finance companies in Denmark; 64 per cent of them responded that they expect artificial intelligence to replace job functions currently carried out by humans within the next three years.

“This is not to say that 64 per cent of the companies expect the jobs to disappear. There is no way of telling how many jobs will be affected, but the job functions will change in step with development,” says Nadeem Farooq, Senior Vice President of FA, to Finanswatch (link in Danish).


More skills will be needed

Peter Schleidt, Jyske Bank, says that the bank has had a fair amount of success in bond trading as a result of fine-tuning an AI system.

Yet, he does not believe that AI is going to replace advisers, for example, any time soon.

“We will keep making progress with digital counselling – that’s for sure. But as human beings, we still want confirmation from human beings when it comes to major financial decisions. As far as I can see, we still need our skilled advisers,” he says and continues:

“But more and more well-defined tasks are being increasingly automated. This also means that productivity is increasing. But history has demonstrated an ever-increasing need for new and more complex jobs. Right now many are experiencing a lack of human resources. So, let’s wait and see what the future holds.”

What skills do you think you will require more of in the future at Jyske Bank?

“I am sure that competence requirements will increase no matter where you work. Academic, professional and collaborative skills will only become more important.”

Michael Budolfsen, Vice President at Finansforbundet, has urged that the rollout in the financial sector be carried out with ‘humans in control’.

"AI must supplement rather than replace employees, and upskilling and retraining are necessary for both managers and employees. Social dialogue is more important than ever if we are to continue creating value for employees, companies and society.”



How to approach ChatGPT – suggestions by Finansforbundet:

Company policy

Find out if your workplace has adopted a policy in the area.

Not so long ago, virtually no one had heard of ChatGPT, but with more and more people using ChatGPT in their jobs, many companies have followed up with rules, policies and guidelines. Even though you might have used ChatGPT for a long time, and your workplace has had no policy in the area, that may easily change.


Ethics

ChatGPT operates with built-in regulatory and ethical instructions and rules, but it does not ‘think’ ethically.

As a result, it is important that you pay attention to potential ethical dilemmas that may arise when using ChatGPT.

Bias and prejudice

Be critical of the replies given by ChatGPT.

ChatGPT is a product of the texts it has been fed, and the service may therefore easily, and almost invisibly, reproduce outdated understandings of gender, ethnicity and so on that are not representative of the way we think today. Be aware that ChatGPT has been developed from American perspectives on human beings, societies, genders, etc.


Missing sources

The replies given by ChatGPT are quite convincing, but you should be critical and double-check whatever information ChatGPT gives you.

One of the disadvantages of ChatGPT is that it provides no sources, which makes it difficult to follow up on its claims.

Be aware that ChatGPT only gives you one answer at a time. So, unlike Google, it does not give you a variety of answers.

That can make it more difficult to distinguish between human and AI-generated content. Watch out for informational pollution and spread of misinformation.

Data protection

ChatGPT has been criticised for its protection of user data. There have been examples of major data breaches in which the titles of users’ chat histories became visible to other users.

In addition, you should keep in mind that what you write is used actively to train the algorithm. Consequently, you should take care not to feed it any kind of information which is traceable to specific individuals, such as customers, or which may violate your own privacy rights (if, for example, you enter your medical record to help you understand it).
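The advice above can be sketched in code. A minimal, illustrative Python helper that strips Danish CPR numbers and email addresses from a prompt before it is sent to an external AI service; the patterns and placeholder tags are assumptions for the sake of the example, not any bank’s actual policy, and a real policy would need far broader rules:

```python
import re

# Danish CPR numbers: six digits (DDMMYY), optional hyphen, four digits.
CPR_RE = re.compile(r"\b\d{6}-?\d{4}\b")
# A deliberately simple email pattern -- good enough for illustration.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(prompt: str) -> str:
    """Replace obviously personal identifiers before the prompt leaves
    the machine. Anything not caught here would still be transferred."""
    prompt = CPR_RE.sub("[CPR REDACTED]", prompt)
    prompt = EMAIL_RE.sub("[EMAIL REDACTED]", prompt)
    return prompt
```

The design point is that redaction must happen on the employee’s side, before transmission; once the text reaches the provider, the bank has lost control of it.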

Under settings, you can actively prevent OpenAI from using your chat history to train the algorithm. This does not prevent your data from being transferred, but means that the data are deleted after 30 days. The setting must be changed on every device you use, as it does not carry over between your computer(s), phone, tablet, etc.

The paid version offers somewhat more security options.

Right of access

Under the GDPR, you have the right to be informed of what data OpenAI has collected about you and for what purpose.

To exercise this right, use ChatGPT’s export function under settings.

Technology dependence

It is important that you maintain your own professionalism and critical sense and do not become too dependent on the technology.

Always consider what your own suggestion would have been, and examine the output curiously and critically if the solution offered differs from what you would have proposed.

You remain the expert, and you are the one who knows all of the relevant surrounding circumstances that the service knows nothing about.
