Friday, March 28, 2025

Qiskit Code Assistant: Your Partner For Quantum Code Mastery

Building the most powerful quantum software or the largest fleet of utility-scale quantum processors is not enough to bring practical quantum computing to the world. IBM must also enable users to apply the technologies it develops effectively and efficiently. IBM is doing just that with the launch of Qiskit Code Assistant, which is currently available as a private preview through the IBM Quantum Premium Plan.

To help you develop better Qiskit code with less effort, Qiskit Code Assistant combines the pooled knowledge of Qiskit users across the quantum community with the advanced large language models (LLMs) of IBM Watsonx. Its ability to generate quantum code not only makes quantum computing more efficient and accessible, but also gives users a fresh, practical way to learn how to write Qiskit code.

IBM expects Qiskit Code Assistant to make quantum computing more accessible by helping users learn how to write better code, streamline their development process, optimise their quantum programs to produce better quantum circuits, and complete projects faster. IBM Quantum Premium Plan users who want to begin using it immediately can review the Qiskit Code Assistant documentation.

IBM first demonstrated Qiskit Code Assistant, along with other AI-powered quantum software features, in an earlier presentation. That talk also previewed the Qiskit HumanEval benchmark, which IBM created to evaluate the effectiveness of the generative AI models it trained to generate quantum code. Since then, testing with the Qiskit HumanEval benchmark has shown that the model behind Qiskit Code Assistant is the best of IBM's quantum code generation models at producing high-quality, usable Qiskit code.

IBM thinks that as the quantum software stack develops, classical AI tools like Qiskit Code Assistant will be essential. IBM wants to open source important parts of the Qiskit Code Assistant project in the future, including the Qiskit Granite model that serves as the foundation for Qiskit Code Assistant and the Qiskit HumanEval dataset. The hope is that this will inspire others in the quantum community to work with IBM to further improve these tools.

In the meantime, IBM has been working to improve Qiskit Code Assistant's usability, accessibility, and performance. Earlier, IBM worked with volunteers from across the company to evaluate the code assistant's security and safety, and asked IBM Quantum Challenge participants to test it out as part of the Challenge labs. Additionally, IBM published two papers outlining its work on the Qiskit HumanEval benchmarking method and the AI model that drives Qiskit Code Assistant.

How to begin with Qiskit Code Assistant

You can access Qiskit Code Assistant through your favourite user interface by integrating it with well-known coding environments such as Visual Studio Code (VS Code) and JupyterLab.

Once you have installed it in the environment of your choice, you can instruct the code assistant to produce Qiskit code in response to natural language prompts or function definitions. For instance: “#define a Bell circuit and run it on ibm_brisbane using the Qiskit Runtime Service.” Alternatively, you can enter your own rough or incomplete code and use the code assistant’s autocomplete feature to fill in the blanks or clean it up. In either scenario, with only a few keystrokes you can incorporate the code that Qiskit Code Assistant suggests into your current work.
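To give a concrete sense of what such a prompt might yield, here is a minimal sketch of the kind of code the assistant could produce for the Bell circuit prompt above. It is illustrative only: it assumes you have saved IBM Quantum credentials and have access to the ibm_brisbane backend, and the assistant's actual output may differ.

from qiskit import QuantumCircuit, transpile
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

# Define a two-qubit Bell circuit
bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)
bell.measure_all()

# Run it on ibm_brisbane using the Qiskit Runtime Service
service = QiskitRuntimeService()
backend = service.backend("ibm_brisbane")
isa_bell = transpile(bell, backend=backend)  # adapt the circuit to the device
sampler = Sampler(mode=backend)  # newer qiskit-ibm-runtime versions use mode=; older ones use backend=
job = sampler.run([isa_bell])
print(job.result()[0].data.meas.get_counts())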

Users of Visual Studio Code must first install the Qiskit Code Assistant VS Code extension, either through the “Extensions” view in VS Code or by searching for it on the Visual Studio Code Marketplace. Users of JupyterLab can run pip install qiskit_code_assistant_jupyterlab to install the Qiskit Code Assistant JupyterLab extension.

Qiskit Code Assistant will automatically try to authenticate you as an IBM Quantum service user when you install it. For information on how to authenticate manually, consult the documentation. New users will then see a popup with the End User License Agreement, which outlines important guidelines and limitations to be aware of when using the code assistant. You can start writing code as soon as you agree to the license terms.
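For illustration, here is one common way to store IBM Quantum credentials locally with qiskit-ibm-runtime. This is a sketch of a typical setup rather than the code assistant's exact authentication flow, so check the documentation for the mechanism the extensions actually use.

from qiskit_ibm_runtime import QiskitRuntimeService

# Save an IBM Quantum API token to local configuration so that Qiskit tools
# can authenticate without prompting; replace the placeholder with your token.
QiskitRuntimeService.save_account(
    channel="ibm_quantum",
    token="<YOUR_IBM_QUANTUM_TOKEN>",
    overwrite=True,
)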

Qiskit Code Assistant under the hood

In a paper released on arXiv, IBM discussed the design of the initial Qiskit Code Assistant model, its performance, and the particular difficulties the team faced in the realm of quantum code generation. But first, let’s go over some of the fundamentals of how LLMs like the one that powers Qiskit Code Assistant work.

Large language models are a subset of “generative AI,” the term for AI models that create text, graphics, and a wide range of other types of data through statistical data analysis. LLMs are trained on large datasets; to predict the next word in a text sequence, they use the words that come before it in the input sequence together with the model’s understanding of linguistic patterns in the training data. In essence, the model assigns a probability to each word that might come next, then outputs the word with the highest likelihood. Because of this, LLMs have rapidly become a potent tool for producing classical software code.
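As a toy illustration of that idea (not part of Qiskit Code Assistant, and with made-up scores), the snippet below turns a handful of candidate-word scores into probabilities and picks the most likely continuation.

import math

def next_word(scores: dict) -> str:
    # Softmax: convert raw scores into a probability distribution
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    probs = {word: e / total for word, e in exps.items()}
    # Greedy decoding: return the highest-probability candidate
    return max(probs, key=probs.get)

# Hypothetical scores for the words that might follow "qc = Quantum..."
print(next_word({"Circuit(2)": 3.2, "Register(2)": 1.1, "Banana()": -4.0}))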

However, generating quantum code has turned out to be a more difficult undertaking. For starters, developing a model that generates high-quality quantum code requires more than just training an LLM on code examples; it also requires a basic understanding of the context of quantum computing. Without that, the model may not be able to connect the user’s input with the intended result. For instance, if you ask an LLM to build the code for the Deutsch-Jozsa algorithm, the model needs background knowledge to link the request with the code it must create, because the phrase “Deutsch-Jozsa algorithm” typically doesn’t appear anywhere in that code.
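To illustrate the point, here is a minimal sketch of a Deutsch-Jozsa-style circuit in Qiskit with a trivially constant oracle. Notice that the algorithm’s name never appears in the code itself, which is exactly the gap the model has to bridge.

from qiskit import QuantumCircuit

n = 2  # number of input qubits
qc = QuantumCircuit(n + 1, n)
qc.x(n)                # prepare the ancilla in |1>
qc.h(range(n + 1))     # Hadamards on every qubit
# Oracle for a constant function: apply nothing at all
qc.h(range(n))         # Hadamards on the input register again
qc.measure(range(n), range(n))
print(qc.draw())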

The fact that quantum computing is a considerably narrower field than classical computing presents another difficulty for quantum code generation: there are very few training data and code samples available. At the same time, libraries are continuously being updated with new methods, and the field is developing even faster than many other areas of classical computing. As a result, models for generating quantum code must also be updated frequently, and IBM must exercise caution when using training data that is more than a few years old.

Because training LLMs to write code for quantum computing has proven so difficult, it has become a valuable benchmark for classical developers evaluating LLM performance on challenging coding tasks. For instance, a group of academics from Princeton and the University of Chicago reported last year that a staggering 13% of the SWE-bench benchmarking dataset consists of Qiskit GitHub issues.

The granite-8b-qiskit model that powers Qiskit Code Assistant was built on an IBM Granite Code model, one of several created by the IBM Watsonx team for code generation tasks. The “8b” in its name refers to the 8 billion parameters that shape granite-8b-qiskit’s code output. To further enhance the model’s performance, IBM extended its training with additional Qiskit data, including a range of Python scripts and Jupyter notebooks, as well as data from open-source GitHub repositories that contained the term “Qiskit.”

Qiskit HumanEval

In addition to developing Qiskit Code Assistant, IBM also needed a means of assessing its capabilities. Since there doesn’t appear to have been any previous research specifically on evaluating quantum code produced by LLMs, this created its own set of difficulties. To close this gap, IBM developed the Qiskit HumanEval dataset, IBM’s version of the well-known HumanEval dataset for assessing classical code produced by LLMs. Each task in Qiskit HumanEval is intended to assess how well LLMs perform at generating quantum code.

One benefit of working with code LLMs as opposed to natural language LLMs is that the former are designed to produce executable code rather than human prose. To assess a model’s performance, IBM can simply ask it to generate code for various tasks, run the resulting code, and observe how well it performs.
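As a simplified sketch of how such a functional benchmark can grade a model (an illustration using assumed task and function names, not IBM’s actual harness), one can execute the generated completion and check it against a reference test:

generated_code = """
def bell_circuit():
    from qiskit import QuantumCircuit
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    return qc
"""

def passes_task(code: str) -> bool:
    namespace = {}
    try:
        exec(code, namespace)              # run the model's completion
        qc = namespace["bell_circuit"]()   # call the function the task asks for
        gates = {instr.operation.name for instr in qc.data}
        return qc.num_qubits == 2 and gates == {"h", "cx"}  # reference check
    except Exception:
        return False                       # any failure counts as a miss

print(passes_task(generated_code))  # -> True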

To assess a model’s performance on various aspects of quantum programming, the Qiskit HumanEval dataset comprises over 150 unique tasks in eight categories. These comprise basic quantum programming activities in domains such as the creation of quantum circuits, circuit execution, state preparation and analysis, algorithm implementation, and more.

A group of real-world quantum computing specialists from inside and outside IBM, including Qiskit advocates, Qiskit community members, and members of IBM’s support and documentation teams, created the tests in Qiskit HumanEval. Together, the panel made sure that every task in the dataset was unique and fresh, and that there was a direct connection between the test or prompt and the task at hand.

IBM evaluated Qiskit Code Assistant’s performance against that of cutting-edge open source code LLMs such as CodeLlama, DeepSeek-Coder, and StarCoder. The comparison’s findings are displayed in the table below, which lists the percentage of benchmarking tests each LLM passed on the Qiskit HumanEval dataset of quantum coding tasks and on the standard HumanEval dataset of classical coding benchmarks. As you can see, the granite-8b-qiskit model behind Qiskit Code Assistant performed noticeably better than any other model on the quantum code generation tasks of Qiskit HumanEval.

Model                      HumanEval    Qiskit HumanEval
CODELLAMA-34B-PYTHON-HF    52.43%       26.73%
DEEPSEEK-CODER-33B-BASE    49.39%       39.6%
STARCODER2-15B             45.12%       37.62%
CODEGEMMA-7B               42.68%       24.75%
GRANITE-8B-CODE-BASE       39.02%       28.71%
GRANITE-8B-QISKIT          38.41%       46.53%