Mastercard just showed how a massive war chest can translate into huge first-mover advantage in AI. Here’s how companies can level the playing field – by using the right data security.
Mastercard, the second-largest payment-technology corporation worldwide, just announced the launch of its own proprietary artificial intelligence (AI) model to help thousands of banks in its network detect and root out fraudulent transactions.
It’s great to see prominent payment-technology companies making serious investments in fighting transaction fraud. In 2022, global credit and debit card fraud losses reached $34 billion. AI is a clear solution.
But will other financial-services companies be able to catch up to Mastercard’s AI savvy? Only if they can get a handle on the security risks inherent to AI.
The high costs of securing generative AI
Generative AI opens enormous opportunities for businesses – but it also risks exposing data and IP as information passes between the end user, the business, and the algorithm, which is often owned by a separate organization. Businesses face a real generative AI dilemma: embrace AI and accept real security risks, or stay on the sidelines and lose out on innovation and an AI competitive edge. Given the risks, many leading companies have decided to limit their generative AI use, among them Samsung, Apple, Bank of America, and JPMorgan.
By building its own AI solutions in-house, Mastercard has sealed its AI off from that exposure. It has also given itself an enormous AI competitive edge. Very few companies have the resources to invest in a proprietary AI model that scans roughly 125 billion transactions annually, plus the associated data. To pull it off, Mastercard has invested $7 billion in cybersecurity and AI technologies over the past five years – money that few other organizations can spare.
Making AI security cost-effective with PETs
So can rivals (or others) with smaller AI war chests ever keep up? The answer is yes – with the help of a class of security systems called privacy-enhancing technologies (PETs). PETs encrypt and protect IP and data before an AI model ever touches them, so everything that needs protection stays fully in-house and only virtually impenetrable, encrypted information is passed along. Companies can build their own AI models on top of pre-existing solutions – a far cheaper option – while keeping sensitive information as well guarded as a fully in-house AI.
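To make the idea concrete, here is a minimal sketch of one PET approach – homomorphic encryption – using the open-source TenSEAL library. The transaction features, weights, and fraud-scoring logic are illustrative placeholders, not a real fraud model; the point is simply that a model owner can score data it never sees in plaintext.

```python
import tenseal as ts

# The data owner creates an encryption context and keeps the secret key.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # needed for vector operations like dot products

# Sensitive transaction features are encrypted before they leave the business.
features = [420.50, 3.0, 0.87]  # e.g. amount, transaction velocity, risk signal
enc_features = ts.ckks_vector(context, features)

# The model owner computes a score on the encrypted vector without ever
# seeing the plaintext (a toy linear model stands in for a real one).
weights = [0.002, 0.4, 1.1]
enc_score = enc_features.dot(weights)

# Only the holder of the secret key can decrypt the result.
print("fraud score:", enc_score.decrypt()[0])
```

In a real deployment the secret key never leaves the data owner, so the model provider computes on ciphertext only; other PETs, such as secure multiparty computation, offer similar guarantees with different performance trade-offs.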
To learn more about how companies like Pyte are using PETs to make AI secure, read our analysis here.