Roseman Labs, a member of the Intel Liftoff Catalyst Track, allows users to conduct analysis across data sources without disclosing or sharing the underlying data. Its strong privacy and data security guarantees speed up the establishment of data-driven partnerships. The technology stands out for its scalability, versatility, and ease of use in connecting data sets from multiple sources and building secure, compliant AI models on top.
Setting New Benchmarks and Accelerating Encrypted Computing Innovation
In April 2024, Roseman Labs tested the correctness and scalability of its Engine (v1.10) against a number of publicly available implementations. For background on the benchmark configuration, we refer the reader to the benchmark whitepaper Roseman Labs published at the time. The results showed that, in terms of scalability and adaptability, Roseman Labs is among the industry leaders in encrypted computing.
There are clear commercial benefits to gleaning insights from disjointed silos of sensitive data. The conventional approach of centrally merging data sources into a single data set and then analysing it rarely wins the confidence of data owners and, where personal data is concerned, frequently violates data privacy regulations.
For precisely these kinds of multi-owner data analysis, the introduction of Privacy Enhancing Technologies (PETs) is revolutionary: encrypted computing technologies such as Secure Multi-Party Computation (MPC) and Fully Homomorphic Encryption (FHE) offer innovative, reliable, and legally compliant solutions. According to Gartner, 60% of large enterprises will employ one or more privacy-enhancing computation techniques in cloud computing, analytics, or business intelligence by 2025.
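To give a feel for the core idea behind MPC, here is a minimal additive secret-sharing sketch in plain Python. This is an illustration of the general technique only, not Roseman Labs' actual protocol: each party's value is split into random shares, any incomplete set of shares reveals nothing, yet the parties can jointly compute a sum.

```python
import secrets

P = 2**61 - 1  # arithmetic is done modulo this prime

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to value mod P.
    Any subset of fewer than n shares reveals nothing about value."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Illustrative scenario: three organisations each secret-share a
# sensitive count; each party sums the shares it holds locally, so
# only the grand total is ever reconstructed.
inputs = [1200, 845, 3030]
all_shares = [share(v, 3) for v in inputs]
local_sums = [sum(s[j] for s in all_shares) % P for j in range(3)]
total = reconstruct(local_sums)
print(total)  # 5075
```

Because addition commutes with sharing, no party ever sees another's input, only its own meaningless-looking shares.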
The team at Roseman Labs has developed a novel way to train and run AI/ML on data that is too private to be disclosed. More than 150 organisations in the public sector, healthcare, and financial services use their solution to address pressing issues.
The Roseman Labs platform lets you encrypt, connect, and analyse multiple data sets while protecting the privacy and commercial sensitivity of the underlying data. You can compile data from many organisations, perform detailed analysis on the records, and generate fresh insights, without ever being able to see the contributions of other participants. You obtain the insights you need while the data stays protected.
In partnership with Intel Liftoff, they have now replicated their secure table join, table filtering, and table group-by benchmarks ahead of the upcoming v1.14 Engine release, on servers with 6th Gen Intel Xeon 6980P processors (code-named Granite Rapids).
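For readers unfamiliar with the three benchmarked operations, the sketch below shows their familiar cleartext equivalents in plain Python. The table and column names are purely illustrative; in the Engine these operations run on secret-shared data, so no party ever sees the individual rows.

```python
# Cleartext analogues of the three benchmarked table operations.
claims = [
    {"patient_id": 1, "cost": 250},
    {"patient_id": 2, "cost": 900},
    {"patient_id": 1, "cost": 125},
]
patients = [
    {"patient_id": 1, "region": "north"},
    {"patient_id": 2, "region": "south"},
]

# Filter: keep only rows matching a predicate.
expensive = [r for r in claims if r["cost"] > 200]

# Join: match rows from two tables on a shared key column.
region_of = {p["patient_id"]: p["region"] for p in patients}
joined = [{**r, "region": region_of[r["patient_id"]]} for r in claims]

# Group-by: aggregate values per key.
totals: dict[str, int] = {}
for r in joined:
    totals[r["region"]] = totals.get(r["region"], 0) + r["cost"]
print(totals)  # {'north': 375, 'south': 900}
```

Performing these same operations under MPC, without decrypting the rows, is what makes the throughput figures below notable.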
The Result: 5x Faster and Redefining Scalability in Encrypted Computing
The graphs show how throughput, expressed in table rows per second, rises as the number of CPU cores used in the computation increases for commonly used table operations: filter and join in the left graph, and group-by in the right graph. They compare Engine versions v1.10 and v1.14 (April 2024 and December 2024 respectively).
The graphs display almost two-fold throughput gains for filtering and joins, and a five-fold boost for group-by. The revised benchmark shows the substantial performance effects of the major algorithmic changes made to the Engine in recent months, as well as Roseman Labs' dedication to consistently delivering observable product enhancements to customers.
These developments show how innovation translates into real benefit for clients. With performance gains of up to five times, Roseman Labs remains at the forefront of practical solutions for secure data analytics and artificial intelligence.
What’s next for Roseman Labs?
Concern over access to sensitive data will only increase. Gartner predicts that by 2027, 50% of AI models will be trained on domain-specific data, up from roughly 1% today. Gartner also expects 60% of large enterprises to employ one or more privacy-enhancing computation techniques in cloud computing, analytics, or business intelligence.
Roseman Labs' three-pillar strategy is driven by these trends:
- Allow customers to use their existing models with the platform through interoperability (e.g. via ONNX), and support secure inference and training of AI models on a variety of data modalities, further reducing clients' time-to-compliance.
- Increase the scalability of their encrypted data environment in terms of cost-effectiveness, data volumes, and runtime performance for private analytics on structured data.
- Facilitate integration with partners, technology integrators, and cloud marketplaces through self-service installations and product features.
About Intel Liftoff
Early-stage artificial intelligence and machine learning startups are eligible to apply for Intel Liftoff for Startups. Regardless of where you are in your entrepreneurial journey, this free virtual program helps you build and scale.