As banks and other financial institutions such as insurance providers start to use artificial intelligence to handle tasks and deliver services, the inevitable question is how secure those services are.
With the likes of the Commonwealth Bank's chatbot Ceba giving customers access to more than 200 banking tasks, these technologies have the same authentication and authorisation in place as any online banking app. So where do the risks lie?
We spoke to One Identity's ANZ country manager Richard Cookes about those chatbots, security within financial institutions and the problem of identity theft.
"The banking sector in Australia – and it's not dissimilar in New Zealand – is highly controlled by a limited number of big banks. Those banks also have high investment in technology and they control up to 80% of the consumer market.
With this in mind, the maturity of banking technology such as biometric-enabled apps is much higher in this part of the world than in the United States, where many smaller banks don't have the funds to invest in technology, particularly when it comes to security.
As financial institutions across Australia start to pick up technologies such as bots, Cookes says it's important to note that even though bots may not seem secure, they are treated with the same level of security as any other function.
Customers approach banking services through channels such as the internet, the phone or in person, and will often flip between those channels.
"As you flip between channels you can also trigger requirements for authentication and authorisation. Depending on what you do, that authentication may be elevated as you move from general advice to more specific tasks such as banking.
Only a few years ago, banks struck problems when they implemented the now-common username-and-password logins for their websites. Cookes says most of those early systems were unsophisticated and had no automation.
Once people started cloning websites and tricking users into handing over their usernames and passwords, the banks knew they had to deal with it.
"It wasn't unusual for banks to be doing 'takedowns', sometimes hundreds in a month. The first thing the banks had to do was respond to that threat by putting in place two-factor authorisation by SMS or tokens," Cookes says.
These fundamental techniques still exist today, but AI and chatbots are a relatively new phenomenon.
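For context, SMS codes and hardware tokens are both forms of one-time passwords. The sketch below shows a time-based one-time password (TOTP) check in the style of RFC 6238, using only Python's standard library; the shared secret and the one-step drift allowance are illustrative assumptions.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at: float | None = None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password per RFC 6238 (SHA-1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Illustrative shared secret; a real token or authenticator app holds its own.
SECRET = "JBSWY3DPEHPK3PXP"

def verify(user_code: str) -> bool:
    """Accept the current code or the previous one to allow for clock drift."""
    now = time.time()
    return user_code in {totp(SECRET, at=now), totp(SECRET, at=now - 30)}

print(verify(totp(SECRET)))  # True: the code the 'token' shows right now
print(verify("000000"))      # almost certainly False
```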
"A chatbot is about smart workflow and smart AI. The same principle of security applies. As soon as that application gets to a point in the workflow that it's going to cause an action where the risk might be high, the bank's existing authorisation infrastructure will kick in.
Whether it's opening a new account or making a transaction, people might be concerned that there is a risk associated with using bots, but Cookes says it's really no different to using a website.
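As a rough sketch of the principle Cookes describes, the snippet below shows a chatbot turn that answers low-risk requests directly but hands anything higher-risk to a separate authorisation step before acting. The intent catalogue, risk labels and mock_authorise stand-in are all illustrative assumptions, not any vendor's actual workflow.

```python
from typing import Callable

# Illustrative intent catalogue; the risk labels are assumptions for this sketch.
INTENT_RISK = {
    "what are your branch hours": "low",
    "show my recent transactions": "medium",
    "transfer $500 to a new payee": "high",
    "open a new savings account": "high",
}

def chatbot_turn(message: str, authorise: Callable[[str], bool]) -> str:
    """Answer low-risk questions directly; hand anything else to the bank's
    existing authorisation infrastructure (represented here by `authorise`)."""
    risk = INTENT_RISK.get(message.lower(), "high")  # unknown requests treated as high risk
    if risk == "low":
        return "Happy to help with that right away."
    if not authorise(message):
        return "I can't complete that until you confirm it on your registered device."
    return f"Done: {message}"

# Stand-in for the bank's real authorisation step (push notification, token, biometrics).
def mock_authorise(action: str) -> bool:
    return "transfer" not in action.lower()  # pretend the transfer confirmation was declined

print(chatbot_turn("What are your branch hours", mock_authorise))
print(chatbot_turn("Transfer $500 to a new payee", mock_authorise))
```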
We still see a barrage of cloned bank websites to this very day, and hackers could do the same thing with chatbots.
They could use code to present their own fake chatbots, and as they gather information through those bots, it can lead to identity theft.
The victim still thinks they are talking to a genuine bank. They may be handing over information but it's unlikely people will face hacked bank accounts this way.
"The moment the user goes to make a transaction, it will be blocked. Unless you are the owner of the authorisation device, the bank will block it.
"The risk here is not that someone's going to use the bot to hack your account, the risk is the bot will potentially collect information about you and perform identity theft.
Cookes says that there's no question this kind of activity will be happening as part of identity theft.
"The reality is that it happened last time when websites came out and it will be happening now. People should be concerned about how much information they might give a fake bot.
If identity theft occurs, Cookes says banks are generally able to stop fraudulent transactions and recover money.
Users typically communicate with banks or financial institutions through a certain set of methods and if something seems out of the ordinary, banks are good at stopping it.
The problem arises when someone has created a fraudulent account using a stolen identity and made several transactions; in those cases it can be difficult to recover the money.
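As a crude illustration of the kind of "out of the ordinary" check described above, the sketch below flags transactions that break a few simple behavioural rules. The customer profile, thresholds and rules are assumptions made up for this example, not how any bank actually scores transactions.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    channel: str      # e.g. "app", "web", "chatbot", "branch"
    country: str

# Illustrative customer profile built from past behaviour (assumed values).
USUAL_CHANNELS = {"app", "branch"}
USUAL_COUNTRY = "AU"
TYPICAL_MAX_AMOUNT = 1_000.00

def looks_out_of_the_ordinary(txn: Transaction) -> list[str]:
    """Return the simple rules this transaction breaks; an empty list means it looks normal."""
    reasons = []
    if txn.channel not in USUAL_CHANNELS:
        reasons.append(f"unusual channel: {txn.channel}")
    if txn.country != USUAL_COUNTRY:
        reasons.append(f"unusual country: {txn.country}")
    if txn.amount > 3 * TYPICAL_MAX_AMOUNT:
        reasons.append(f"amount well above typical spend: ${txn.amount:,.2f}")
    return reasons

print(looks_out_of_the_ordinary(Transaction(250.00, "app", "AU")))    # [] -> looks normal
print(looks_out_of_the_ordinary(Transaction(4_500.00, "web", "RU")))  # flagged for review
```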
Cookes provides the example of how social engineering can make this a reality, particularly in a small country town.
"I happen to know that person X owns the local coffee shop and is two doors down from the local Westpac branch. I guarantee he banks with Westpac. I guarantee I could ring up the coffee shop and find out who the owner of the place is. As a hacker and I wanted to ring up someone, I could say I'm person X, I bank at Westpac and it's the branch in this location.
"Or, I could ring person X up and say I'm the Westpac branch in another location and you have you have an issue with your account; I need to validate some information.
This technique is far more efficient than a hacker trying their luck by quoting four different banks to guess who the victim banks with, and arousing suspicion in the process.
Add other publicly accessible information sources into the mix and it becomes a haven for potential identity thieves. Birthdays, names, locations: all accessible, and all answers to the questions banks ask when people reset their passwords.
"All the answers people freely post on social media or give out to websites are basically providing good answers to anyone who wants to reset a password.
"People shouldn't be worried about internet banking per se, they should be worried about what information they're giving people who can impersonate them. Impersonators can create or steal an account. Identity theft is the biggest risk. We need to be vigilant.