
Can AI Replace Your Financial Adviser?

ChatGPT and its competitors have already achieved some impressive milestones — they can pass the bar exam for lawyers and help solve medical cases. So, are these AI tools now ready to replace your financial adviser? The advantages of AI advisers are obvious at first blush. Professional financial advice is costly and beyond the reach of many Americans. AI could drive those costs down and make smart, personalized guidance available to everyone, 24/7. AI can also expand the range of financial decisions covered by advisers and offer more holistic advice. These days, people don’t just need help mixing ETFs (exchange-traded funds) into a portfolio — they also have to make hard choices about savings, insurance, and debt management, among other things.

But while AI can do some things as well as a financial adviser, and sometimes can even perform better, it can’t replace human advisers. Yet.

To understand why, let’s look at five essential qualities for effective financial advice and see how AI currently stacks up, and what it will take for AI to get where it needs to go.

1. Debiasing

Let’s start with the bad news. One of the primary things a financial adviser brings to the table is debiasing, or helping clients avoid costly mistakes caused by behavioural tendencies. Consider the tendency of people to overweight short-term losses and invest too conservatively, even when their investment horizon is 30 years or longer. In one study I conducted with Richard Thaler, people who were shown a one-year chart of investment returns allocated 40 per cent of their portfolio to stocks, while those who were shown long-term charts allocated 90 per cent of their portfolio to stocks — even though both groups of investors were investing for the long term.

A good adviser can help people make financial decisions that align with their long-term goals. They steer clients away from short-term charts, or the latest market swings that constantly pop up on cellphones, and help clients choose investments that fit their actual time horizons.

Unfortunately, a working paper led by Yang Chen at Queen’s University in Canada showed that ChatGPT exhibits many of the same behavioural tendencies and biases that a good adviser tries to minimize. For example, humans tend to choose riskier options after experiencing losses, as they try to break even. In Las Vegas, this is known as doubling down. ChatGPT suffers from the same tendency, which could lead to costly mistakes. If an investor lost a lot of money after the crypto crash, ChatGPT might think they should buy even more crypto, doubling down on the risky asset.

And it gets worse. That’s because AI tools are also highly overconfident. It isn’t just that they get things wrong sometimes — it’s that they remain convinced they’re right even when they’re wrong.

This can amplify existing biases, as the software not only fails to self-correct, it can give human clients a false sense of security.

To improve the performance of AI advisers, we need to create metarules — rules that govern other rules — to help the software override these biases.

One possible approach is to have the AI, whenever it recommends a specific financial action, also review reasons why that action might be a mistake. It’s like an internal audit, forcing the software to consider what it might have missed.
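This internal-audit metarule can be sketched in code. The example below is a minimal illustration, not a real product: `ask_model` is a hypothetical stand-in for any LLM call, and its canned replies exist only so the sketch runs end to end.

```python
# Sketch of a "metarule" wrapper: before a recommendation is returned,
# the model is asked to argue against it, and the critique is attached.
# `ask_model` is a hypothetical placeholder for a real LLM API call.

def ask_model(prompt: str) -> str:
    # Stub standing in for the language model, with canned replies.
    if "argue against" in prompt:
        return "Risk: this doubles down on a recent loss."
    return "Buy more crypto to recover your losses."

def advise_with_audit(question: str) -> dict:
    recommendation = ask_model(question)
    # The metarule: force a second pass that hunts for mistakes
    # (loss chasing, overconfidence, mismatched time horizons).
    critique = ask_model(
        f"Recommendation: {recommendation}\n"
        "Now argue against it: list reasons it might be a mistake."
    )
    return {"recommendation": recommendation, "critique": critique}

result = advise_with_audit("I lost money in the crypto crash. What now?")
print(result["critique"])
```

The key design choice is that the critique is generated unconditionally, so the audit runs even when the model is confident.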

Metarules are often necessary because of the way these AI tools learn. Known as large language models, or LLMs, they are trained on massive data sets of text pulled from the internet. Because the internet often represents human nature in an unfiltered form, the software reflects many of our lesser impulses and tendencies.

The good news: by applying metarules, AIs are almost certainly easier to debias than humans. While we can’t directly edit the software running inside our heads, we can revise our AI models.

2. Empathy

The next key quality for an adviser is empathy. Consider an investor who’s nervous and anxious about market volatility. Research shows that the background mood of investors can have a powerful impact on their financial decisions, with fear driving risk avoidance and anger leading to more risk-taking. The role of a good adviser is to reassure and support clients during turmoil so that fear and other emotions won’t damage their long-term financial prospects.

The good news is that ChatGPT excels at empathy. One recent study compared the responses of ChatGPT and human doctors to the questions of real patients that had been posted on an online forum. The answers were evaluated by healthcare professionals, both in terms of quality of information and empathy.

The results were a resounding win for AI. The healthcare professionals were nearly four times more likely to say that the ChatGPT responses provided “good or very good” information. But they were nearly 10 times more likely to say that ChatGPT was empathetic. Specifically, 45 per cent of AI responses were rated as empathetic or very empathetic, compared with only 4.6 per cent of physician responses.

These results suggest that there are some critical financial adviser tasks that AI can already perform extremely well. While advisers don’t always have the time or ability to reassure clients during market corrections, AI technology can help them become more human, or at least scale their humanity. For instance, the next time there’s a major market drop, advisers don’t have to be limited to making a few calls to their wealthiest clients. Instead, AI can deliver empathetic responses tailored to each client. If a client, say, checks their portfolio daily, the AI can provide reassuring data about long-term market trends, as well as the costly impact of market timing.

3. Accuracy

Another important adviser quality is getting the facts right.

Even if AI can be debiased, it still needs to base its advice on accurate representations about investments, inflation, taxes, and more.

More bad news: The bots are currently very unreliable and make lots of mistakes. For instance, when I asked a leading AI tool to help me choose between Vanguard and Fidelity Nasdaq index funds, it came up with a very impressive answer focused on their long-term performance and expense ratios.

The only problem was that it used the wrong funds as the basis for its analysis, drawing numbers from a Vanguard S&P 500 fund and a Fidelity real-estate fund. It was both highly confident and completely inaccurate.

This problem can be largely solved with plug-ins or external tools that the AI calls upon to supplement its known weaknesses. When you ask Google a math question, it pulls up a calculator alongside the answer; AI tools should do the same thing.

In addition to using a calculator, AI advisers should be integrated with reliable financial databases, such as Morningstar, that can ensure that its models and recommendations are based on accurate representations of the financial world. “People too often think of language models as complete solutions to any problem, rather than as components in intelligent applications,” says Dan Goldstein, a senior principal researcher at Microsoft Research, specializing in AI and human-computer interaction. “The optimized systems and vast data stores of the financial world won’t be replaced by AI — they’ll be called upon by AI.”
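The tool-calling idea can be sketched simply: instead of trusting the model's memory, factual look-ups are routed to an external data source. The example below is an illustration only — the `FUND_DATABASE` dict stands in for a feed like Morningstar, and the tickers and expense ratios are assumptions for the sketch, not verified figures.

```python
# Sketch of the "plug-in" approach: fund facts come from an external
# database, not from the model's training data, so the comparison
# cannot silently use the wrong funds. All figures are illustrative.

FUND_DATABASE = {  # hypothetical stand-in for a Morningstar-style feed
    "VOO": {"expense_ratio": 0.03, "index": "S&P 500"},
    "ONEQ": {"expense_ratio": 0.21, "index": "Nasdaq Composite"},
}

def lookup_fund(ticker: str) -> dict:
    # The AI calls this tool; if the fund is unknown, it must refuse
    # to guess rather than hallucinate numbers.
    if ticker not in FUND_DATABASE:
        raise KeyError(f"Unknown fund {ticker}; refusing to guess.")
    return FUND_DATABASE[ticker]

def compare_expense_ratios(a: str, b: str) -> str:
    ra = lookup_fund(a)["expense_ratio"]
    rb = lookup_fund(b)["expense_ratio"]
    cheaper = a if ra < rb else b
    return f"{cheaper} has the lower expense ratio ({min(ra, rb):.2f}%)."

print(compare_expense_ratios("VOO", "ONEQ"))
```

The point of the `KeyError` branch is Goldstein's: the language model is a component that calls on the financial world's data stores, not a replacement for them.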

4. Best interest

Advisers must act in the best interest of their clients. They can’t, say, recommend a more expensive fund class just because it makes them more money. In theory, then, AI should be less prone to conflicts of interest. Unlike humans, ChatGPT isn’t trying to maximize its income.

But that’s just theory — we don’t really know how well AI will perform. One possibility is that it will have similar issues to humans. For instance, a study found that investors are more likely to buy mutual funds with higher marketing expenses, even when those expenses reduce their overall performance through higher fees. While these funds are likely worse investments, consumers are influenced by their advertising. AI could fall into the same trap, as funds that spend more on advertising could loom larger in the AI database.

Given this uncertainty, AI architects must audit the recommendations of the digital adviser. This is similar to a metarule, except instead of erasing bias it’s focused on erasing conflicts of interest.

Fortunately, AI is likely easier to monitor for conflicts of interest than a human adviser. If the software starts recommending investments with high fees or mortgages with high-interest rates when there are cheaper alternatives, the AI tools might even be able to auto-correct, like spell check fixing a typo.

Goldstein believes one key is emphasizing transparency. “When decisions are made behind closed doors, we can only wonder about some of these issues,” he says. “But when the inputs and outputs of every decision are logged, they can be put through checks that were never before possible.”
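Once every recommendation is logged, the auto-correct idea above can be expressed as a simple check. The sketch below is hypothetical: the fund names and fees are invented, and the "same index, lower fee" rule is a deliberately simplified proxy for a real conflict-of-interest audit.

```python
# Sketch of an automated conflict-of-interest audit: each logged
# recommendation is compared against cheaper funds tracking the same
# index and auto-corrected when a low-cost alternative exists —
# like spell check fixing a typo. Tickers and fees are invented.

CANDIDATES = [
    {"ticker": "FUNDX-A", "index": "S&P 500", "fee": 0.95},
    {"ticker": "FUNDX-I", "index": "S&P 500", "fee": 0.05},
]

def audit(recommended: dict, candidates: list) -> dict:
    same_index = [c for c in candidates if c["index"] == recommended["index"]]
    cheapest = min(same_index + [recommended], key=lambda c: c["fee"])
    # If a cheaper fund tracks the same index, substitute it.
    return cheapest

fixed = audit({"ticker": "FUNDX-A", "index": "S&P 500", "fee": 0.95}, CANDIDATES)
print(fixed["ticker"])
```

Because inputs and outputs are logged, every substitution the audit makes is itself inspectable — the transparency Goldstein describes.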

5. Consistency

Good financial advice should be consistent. That is, if the same client takes the same portfolio to different advisers, they should offer similar advice, focused on the same time-tested principles.

Research suggests, however, that advisers struggle to offer advice that consistently reflects the goals, circumstances, and preferences of their clients. One recent study showed that clients tend to invest in funds with different fees and risk profiles after their adviser dies or retires and they are placed with a new, randomly selected adviser. This isn’t because their investment preferences suddenly changed — it’s because the new adviser imposed his or her own beliefs on their portfolios. If the new adviser chose risky investments or expensive funds for his own personal portfolio, he assumed his clients would prefer that, too.

This should be a fixable problem. AI advice can achieve consistency by confirming that it delivers the same recommendations to clients with similar financial needs and preferences — much as Netflix recommends similar content to people with the same viewing history.
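A consistency audit of this kind is easy to sketch: group clients by the features that should drive advice and flag any group that received conflicting recommendations. The profile fields and allocations below are hypothetical, chosen only to make the check concrete.

```python
# Sketch of a consistency check: clients are bucketed by the profile
# features that should determine advice (horizon, risk tolerance),
# and any bucket containing more than one distinct recommendation
# is reported as an inconsistency. All data is illustrative.

from collections import defaultdict

def consistency_report(cases):
    buckets = defaultdict(set)
    for profile, advice in cases:
        buckets[profile].add(advice)
    # Same profile, different advice -> flagged for review.
    return {p: sorted(a) for p, a in buckets.items() if len(a) > 1}

cases = [
    (("30yr horizon", "high risk"), "90% stocks"),
    (("30yr horizon", "high risk"), "40% stocks"),  # inconsistent!
    (("5yr horizon", "low risk"), "30% stocks"),
]
print(consistency_report(cases))
```

Run over a log of recommendations, an empty report means the adviser is behaving like Netflix's recommender: identical histories, identical suggestions.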

What the future could look like

A lot of improvements are needed before AI can become an effective financial adviser. Nevertheless, AI will play an important role in the future of financial advice.

What might this future look like?

One potential model comes from the medical domain, where smart software and doctors have been working together for years as a hybrid team. In particular, doctors increasingly rely on AI tools to help them improve their quality of care, as these tools can generate a long list of possible diagnoses that can reduce misdiagnoses or shorten the time to make a diagnosis.

Of course, a human doctor is still required to filter the extended list of possible diagnoses generated by ChatGPT and select the best diagnosis. This suggests that AI can help us expand our thinking, even when it can’t actually find the answer by itself.

While there are no studies on the quality of hybrid financial advice, I speculate that the hybrid model will win, provided humans learn how to effectively collaborate with AI. One reason is a behavioural tendency known as algorithm aversion — people tend to reject automated software unless it’s nearly perfect.

This means that most clients will prefer financial advice from AI that is monitored by a professional, much as people expect a pilot to oversee the autopilot in the cockpit.

What’s more, a hybrid approach is also likely to dramatically increase access to advice. My hope is that human advisers will use AI to help them serve more people.

What about those Americans who still won’t be able to afford a human adviser? I believe AI can be used to deliver advice 24/7, provided we fix those critical issues involving accuracy and debiasing.

And if you’re a financial adviser, I wouldn’t worry about losing your job to ChatGPT. (Autopilots didn’t put pilots out of work.) Instead, I’d focus on how you can use the technology to deliver better advice to even more people.

In brief, whether AI can replace human financial advisers is a complex question. AI brings real advantages, including the potential to make financial advice more accessible and cost-effective, but it currently falls short on the qualities human advisers bring to the table: debiasing, empathy, accuracy, acting in clients’ best interest, and consistency. The most promising path forward is a hybrid approach, much like the one emerging in medicine, in which AI augments human expertise to improve the quality of financial advice and expand its availability. For human advisers, the opportunity is to leverage AI to provide better advice to a broader clientele, ultimately improving financial well-being for all.

Reference: The Wall Street Journal 30 October 2023

#TheGlobalNewLightOfMyanmar