Intel Labs to Present 31 AI Research Projects at NeurIPS 2023
Intel Labs will present 31 research projects shaping the future of artificial intelligence innovation at NeurIPS 2023.
NeurIPS 2023 is the premier global conference for developers, researchers, and academics working in artificial intelligence and machine learning. Intel Labs will demonstrate some of its most significant, industry-first AI advancements at the event, which takes place in New Orleans from December 10 to 16.
At NeurIPS 2023, Intel Labs will discuss the company’s “AI Everywhere” vision with a diverse community of innovators and thought leaders, and showcase AI research at the forefront of the industry. Over the course of the conference, Intel Labs will deliver 31 presentations in total: 12 main-conference papers and 19 workshop papers, along with demonstrations at booth #405.
The research projects center on developing innovative models, techniques, and tools for applying artificial intelligence to scientific research. They span graph learning, multimodal generative AI, and AI algorithms and optimization technologies for a variety of use cases, including climate modeling, drug discovery, and materials science.
In addition, on December 15th Intel Labs will host the AI for Accelerated Materials Discovery (AI4Mat) Workshop, a venue for AI researchers and materials scientists to address open challenges in AI-driven materials discovery and development.
The research Intel Labs will present at NeurIPS 2023 falls into the following categories, each with significant findings:
The Application of AI to Science
Brain encoding models, developed in collaboration with researchers at the University of Texas at Austin, are based on multimodal transformers. They can predict brain responses, particularly in cortical regions that represent conceptual meaning, and provide insights into the brain’s capacity for multimodal processing.
ClimateSet is a large-scale climate model dataset for machine learning, produced in collaboration with the Quebec Artificial Intelligence Institute (Mila). It enables rapid forecasting of new climate change scenarios and provides a foundation for the machine learning (ML) community to build transformative climate-centric applications.
HoneyBee, a state-of-the-art LLM created in collaboration with Mila, helps researchers build an understanding of materials science more rapidly.
Multimodal Generative AI
COCO-Counterfactuals is a multimodal approach that generates synthetic counterfactual data to improve the performance of AI models on a wide variety of downstream tasks, such as image-text retrieval and image recognition. The technique helps mitigate spurious statistical biases in pre-trained multimodal models.
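The intuition behind counterfactual data can be sketched in a few lines. This is an illustrative toy example, not the COCO-Counterfactuals pipeline: a counterfactual caption keeps everything identical except one attribute, so a model trained on such pairs cannot lean on spurious correlations between that attribute and the rest of the caption. The function name is hypothetical.

```python
def make_counterfactual(caption: str, original: str, replacement: str) -> str:
    """Swap a single attribute word to create a minimally different caption."""
    return caption.replace(original, replacement, 1)

# The two captions differ only in the color attribute; paired with matching
# edited images, they form a counterfactual training pair.
factual = "a red car parked on the street"
counterfactual = make_counterfactual(factual, "red", "blue")
print(counterfactual)
```

In the real dataset the paired images are also edited so that only the targeted attribute changes, which is what makes the pair a true counterfactual rather than ordinary augmentation.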
The Latent Diffusion Model for 3D virtual reality (VR) is designed to facilitate the creation of 3D videos for AI applications.
CorresNeRF is an image-rendering approach that uses neural radiance fields to reconstruct a three-dimensional representation of a scene from two-dimensional photographs.
Enhancing AI Capabilities
DiffPack is a generative AI approach to protein modeling that helps ensure generated 3D structures accurately reflect the structural characteristics of real-world proteins.
InstaTune is a technique that builds a super-network during the fine-tuning stage, reducing the overall time and compute resources required for neural architecture search (NAS).
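The super-network idea can be sketched as follows. This is a minimal illustration of weight-sharing NAS in general, not InstaTune's actual algorithm; the search space, layer names, and widths are made-up assumptions. Each layer exposes several candidate sizes, and one sub-network is sampled per training step, so architecture search piggybacks on fine-tuning instead of requiring a separate search run.

```python
import random

# Hypothetical search space: each layer of the super-network offers
# several candidate widths; a sub-network picks one option per layer.
SEARCH_SPACE = {
    "ffn_1": [512, 768, 1024],
    "ffn_2": [512, 768, 1024],
    "heads": [8, 12],
}

def sample_subnetwork(space, rng=random):
    """Sample one architecture from the space by choosing a size per layer."""
    return {layer: rng.choice(options) for layer, options in space.items()}

# During fine-tuning, a fresh sub-network would be sampled each step and
# only its weights updated, training all candidates with shared parameters.
config = sample_subnetwork(SEARCH_SPACE)
print(config)
```

After fine-tuning, the best-performing sub-network can be extracted directly from the shared weights, which is where the time and compute savings come from.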
Graph Learning
A*Net is the industry’s first path-based method for knowledge graph reasoning on million-scale datasets. It scales to datasets previously beyond computational reach and improves the reasoning accuracy of large language models (LLMs).
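The general idea of path-based knowledge graph reasoning can be sketched briefly. This is a toy illustration of the family of methods, not A*Net's actual algorithm (which learns to prioritize important paths rather than enumerate all of them); the tiny graph is invented for the example.

```python
from collections import deque

# Toy knowledge graph as (head, relation, tail) triples -- made up for
# illustration only.
EDGES = [
    ("paris", "capital_of", "france"),
    ("france", "in_continent", "europe"),
    ("berlin", "capital_of", "germany"),
]

def paths_from(entity, max_hops=2):
    """Enumerate (relation_path, endpoint) pairs reachable within max_hops."""
    results = []
    queue = deque([(entity, [])])
    while queue:
        node, path = queue.popleft()
        if len(path) >= max_hops:
            continue
        for head, rel, tail in EDGES:
            if head == node:
                results.append((path + [rel], tail))
                queue.append((tail, path + [rel]))
    return results

# Answers to a query like (paris, ?, ?) are supported by explicit relation
# paths, e.g. "europe" is reached via ["capital_of", "in_continent"].
print(paths_from("paris"))
```

Because each answer comes with the path that supports it, path-based methods are more interpretable than embedding-only approaches; the research challenge A*Net addresses is doing this at million-scale without exhaustive enumeration.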
ULTRA is the industry’s first foundation model for knowledge graph reasoning, introducing a novel method for learning universal, transferable representations of graphs and the links between them.
Perfograph, a novel compiler graph-based program representation, increases the capacity of machine learning techniques to reason about programming languages. It can capture numerical information as well as composite data structures.
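The core idea of a graph-based program representation can be illustrated with a small sketch. This is a hypothetical simplification, not Perfograph's actual schema: statements become nodes, and data-flow dependencies become edges, so an ML model can reason over program structure rather than raw token sequences.

```python
# Toy program: each statement is a node, indexed by position.
program = ["x = 2", "y = x * 3", "z = x + y"]

def build_dataflow_graph(stmts):
    """Return edges (i, j): statement j reads a variable defined by i."""
    defs = {}      # variable name -> index of its defining statement
    edges = []
    for i, stmt in enumerate(stmts):
        target, expr = [part.strip() for part in stmt.split("=", 1)]
        for var, def_idx in defs.items():
            if var in expr.split():
                edges.append((def_idx, i))
        defs[target] = i
    return edges

# Edges show that statement 1 depends on x (from 0), and statement 2
# depends on both x (from 0) and y (from 1).
print(build_dataflow_graph(program))
```

Real compiler-based representations work on an intermediate representation and encode much richer information (control flow, types, and, in Perfograph's case, numerical values), but the graph-over-tokens principle is the same.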