Artificial Intelligence and Confidential Computing


With 90% of workloads running on a hosted cloud service in 2020, organizations rely on the cloud more than ever before. Today the cloud is used to process sensitive information, high-value applications, and proprietary data. And if the steep rise in cyberattacks in 2020 was any indication, cloud environments remain high-risk targets for hackers.

While most cloud providers offer encryption services for protecting data at rest (stored) and in transit (on the move), the processing servers in cloud environments and data centers remain highly vulnerable. Before data can be processed by an application, it is decrypted in memory, leaving its contents in the clear and exposed immediately before, during, and after runtime.

Exploits include memory dumps, root-user compromises, and attacks by malicious insiders. When confidential computing is combined with storage encryption, network encryption, and a proper Hardware Security Module (HSM) for key management, it can provide robust end-to-end data security in the cloud.

What Is Confidential Computing?

Confidential computing is a security technology designed to ensure that sensitive data in use remains safe, confidential, and accessible only to authorized users. It closes the gaps left by loosely defined security protocols, weak perimeter filtering, and inadequate isolation by protecting data while it is processed in an external, untrusted environment. By isolating sensitive data as it is being processed, confidential computing solves a host of security issues for IT and cybersecurity teams.

Approaches in Confidential Computing

Today, most confidential computing environments rely on a trusted execution environment (TEE): a secure enclave within a CPU. Generally speaking, there are two methods of creating a confidential computing environment. Each has its benefits, and each has its downsides.

The first approach leverages zero-trust algorithms such as Fully Homomorphic Encryption (FHE), which allow organizations to execute applications while the data remains encrypted. Since most data theft occurs while data is temporarily decrypted or stored as plain text, homomorphic encryption lets authorized users perform operations on data while it is encrypted, without decrypting it first. But while FHE protects data from exploitation while in use in the cloud, its heavy computational overhead makes it a poor fit for applications that require high-performance or accelerated processing, such as AI-based applications.
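Fully homomorphic schemes are mathematically involved, but the core idea of computing on ciphertexts can be illustrated with the simpler, additively homomorphic Paillier scheme. The sketch below is a toy illustration with deliberately tiny, insecure parameters (real deployments use primes of around 1024 bits or more):

```python
# Toy Paillier cryptosystem: multiplying two ciphertexts yields an
# encryption of the SUM of the plaintexts. Insecure toy parameters.
import math, random

p, q = 293, 433              # toy primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                    # standard simplified generator choice

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: the server adds without ever seeing 17 or 25.
c1, c2 = encrypt(17), encrypt(25)
c_sum = (c1 * c2) % n2
assert decrypt(c_sum) == 17 + 25
```

In a cloud setting, the untrusted server would hold only `c1`, `c2`, and `c_sum`; decryption happens back on the client. Fully homomorphic schemes extend this to multiplication and arbitrary circuits, which is exactly where the performance cost explodes.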

The second approach uses hardware-based memory encryption to isolate specific application code and data in memory. This can be done with technology like Intel's Software Guard Extensions (SGX), a set of security-related instruction codes built into some modern Intel central processing units (CPUs). But this method still has major problems: SGX runs on standard CPUs, which remain vulnerable to side-channel attacks and have serious limitations when processing high-throughput, AI-based applications.

The Risks of Confidential Computing and AI

As discussed, today's standard GPUs are designed to accelerate high-throughput applications such as AI, but they are not designed with security in mind and are highly vulnerable to cyberattacks. Meanwhile, a cyberattack targeting an application's proprietary algorithms can be devastating to a company, especially if those algorithms are a critical part of its IP. Unlike security vulnerabilities in traditional systems, the root cause of security weaknesses in Machine Learning (ML) systems sometimes lies in the lack of explainability of AI models.

This lack of explainability leaves openings for adversarial machine learning methods such as evasion, poisoning, and backdoor attacks. Such attacks can be designed and executed in a number of ways: stealing the algorithm itself, tampering with a model's outputs, or injecting malicious data during training in order to skew the model's inference. Attackers may also implant backdoors in models to launch targeted attacks, or extract sensitive private training data from query results.
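The poisoning attack mentioned above can be made concrete with a deliberately tiny, hypothetical example: a nearest-centroid classifier whose training set an attacker can contaminate. The data and model here are invented for illustration; real attacks target far larger models with far subtler perturbations.

```python
# Toy data-poisoning demo: injecting mislabeled training points
# shifts a class centroid and flips the model's prediction.

def centroid(points):
    return sum(points) / len(points)

def predict(x, training_data):
    # assign x to the class whose centroid is nearest
    return min(training_data,
               key=lambda label: abs(x - centroid(training_data[label])))

# Clean training set (hypothetical 1-D feature values)
clean = {"benign": [1.0, 2.0, 3.0], "malicious": [8.0, 9.0, 10.0]}
x = 4.0
assert predict(x, clean) == "benign"

# Attacker injects points labeled "benign" that drag its centroid away
poisoned = {"benign": clean["benign"] + [20.0, 22.0],
            "malicious": clean["malicious"]}
assert predict(x, poisoned) == "malicious"   # same input, flipped verdict
```

The same input is classified differently purely because the training data was tampered with, which is why integrity protection for training pipelines matters as much as protection for the deployed model.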

Regulatory Data Compliance 

On top of all this, organizations face the challenge of protecting internal company data. As mentioned, a data breach of this kind can be detrimental to a company, for both financial and reputational reasons. Today, many countries have data regulatory policies that affect the use of AI and the management of user data, such as the GDPR in the EU and HIPAA in the US.

These laws clearly outline the consequences for companies that fail to meet their standards of data protection, and if data is compromised, organizations often face lawsuits and exorbitant legal fees. With security breaches and ransomware attacks increasing sharply in recent years, many companies may pay a hefty ransom in 2021 to keep their sensitive data from being sold on the dark web.

Data breaches are more common and damaging than you may think. Research shows that while many companies understand the importance of data regulation, most fail to fully meet compliance standards. For example, in March 2020 an attack on T‑Mobile allowed hackers to access a number of customers' sensitive information through a T‑Mobile employee email account. The company now faces lawsuits over its mishandling of the breach, as well as large investments to mitigate the reputational fallout.

Conclusion

So what can companies do to protect their application data? Now more than ever, it is critical for organizations to adopt security solutions that provide a secure environment, enabling them to comply with regulation without compromising the performance of their AI and other technology. As mentioned earlier, when confidential computing is combined with the right tools, such as a resilient Hardware Security Module (HSM), it can provide robust end-to-end data security in the cloud for both AI and non-AI applications.

The HUB Vault HSM is a confidential computing platform designed to secure your most sensitive organizational applications and data while in use. The programmable, customizable MultiCore Vault HSM gives companies a secure, fast, and flexible environment for executing valuable applications such as AI training and inference, as well as general computing applications for telecom, financial institutions, healthcare, fintech, digital assets, and blockchain.

To learn more, contact us by email at info@hubsecurity.io.