Friday, March 28, 2025

How To Enable Gemini Code Execution In Google AI Studio

Learn how to enable Gemini Code Execution in Google AI Studio for enhanced coding speed and efficiency.

Gemini models have access to a Python sandbox through code execution, which lets them run code and analyze the results. Enabling code execution allows Gemini models to carry out computations, evaluate complex data sets, and produce visualizations on the fly, all of which improves the quality of responses to user queries. With the Gemini 2.0 models, code execution is now widely available in Google AI Studio and the Gemini API.

Gemini Code execution as a tool

Gemini code execution can be enabled with a toggle in the “Tools” tab of Google AI Studio, or by passing it in the tools field of a Gemini API request.

from google import genai
from google.genai import types

client = genai.Client(api_key="GEMINI_API_KEY")

# Enable the code execution tool so the model can write and run Python.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="""
    What is the sum of the first 50 prime numbers?
    Generate and run code for the calculation.
    """,
    config=types.GenerateContentConfig(
        tools=[types.Tool(
            code_execution=types.ToolCodeExecution()
        )]
    ),
)
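
In the response, the model’s commentary, the generated Python, and the sandbox output come back as separate parts. Continuing from the example above, here is a minimal sketch of reading them back; the part field names (text, executable_code, code_execution_result) are taken from the google-genai SDK and are worth double-checking against its current version:

# Walk the returned parts: plain text, the generated code, and its output.
for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    if part.executable_code is not None:
        print("Generated code:\n", part.executable_code.code)
    if part.code_execution_result is not None:
        print("Execution output:\n", part.code_execution_result.output)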

When code execution is enabled as a tool, the model can run code up to five times per request without re-prompting, for up to 30 seconds at a time, inside the code execution sandbox. Libraries such as NumPy, Pandas, and Matplotlib (for drawing graphs) are part of the code execution environment. The Gemini API documentation lists all of the available libraries, and Google plans to expand the set of supported libraries quickly.

File I/O and Graph Output

With Gemini 2.0, the Gemini code execution tool has been upgraded to support file input into the code execution sandbox and Matplotlib-based graph and chart output. These enhancements open up a wider range of code execution use cases and allow you to do the following (a short sketch of the file-and-chart workflow appears after the list):

  • Analyze user-uploaded files to understand their contents.
  • Visualize data with Matplotlib-generated charts and graphs.
  • Examine local code files.
  • [Experimental] Use the Multimodal Live API to unlock real-time code execution.
  • [Experimental] Combine code execution with other tools, such as Grounding with Google Search.
  • [Experimental] Use code execution with Gemini 2.0 Thinking Mode.
  • And more
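
As promised above, here is a rough sketch of the file input and chart output flow. The CSV file name and prompt are placeholders, and the upload call and inline-image handling are assumptions based on the google-genai SDK that should be verified against the Gemini API documentation:

from google import genai
from google.genai import types

client = genai.Client(api_key="GEMINI_API_KEY")

# Upload a local CSV so the sandbox can work with it, then ask for a chart.
uploaded = client.files.upload(file="sales.csv")  # hypothetical file name

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        uploaded,
        "Load this CSV with pandas and plot the monthly totals as a bar chart.",
    ],
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())]
    ),
)

# Matplotlib charts are returned as inline image parts; save any that appear.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None and part.inline_data.mime_type == "image/png":
        with open("chart.png", "wb") as f:
            f.write(part.inline_data.data)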

Let’s look at two real-world examples of code execution in action:

Real-time data analysis and visualization with Gemini models

This example shows a live conversation with a Gemini model that combines voice input and code execution through the Multimodal Live API. After being asked to rank a list of Tom Cruise films by runtime, the model uses Matplotlib to visualize the results as a bar chart. Gemini generates the Python code needed for these tasks and updates the chart in response to follow-up requests (such as changing the bar colors).
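
A rough sketch of enabling code execution in a Live API session is shown below. The model name, config keys, and session methods follow the experimental Live API as documented around this release and should be treated as assumptions to check against the current SDK:

import asyncio
from google import genai

client = genai.Client(api_key="GEMINI_API_KEY")

# Text-only live session with the code execution tool switched on.
config = {"response_modalities": ["TEXT"], "tools": [{"code_execution": {}}]}

async def main():
    async with client.aio.live.connect(
        model="gemini-2.0-flash-exp", config=config
    ) as session:
        await session.send(
            input="Rank these Tom Cruise films by runtime and chart them.",
            end_of_turn=True,
        )
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())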

Combining the Thinking model and code execution to solve complex problems

This demonstration solves a classic optimization problem using the Gemini 2.0 Flash Thinking Experimental model and code execution. It asks Gemini to find the shortest route a salesman could take to visit five places in Spain and return to the starting point. Gemini computes the distances, iteratively debugs the Python code (fixing an initial library mistake), and then plots the best route on a Matplotlib graph.
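
As a sketch, the same generate_content pattern from earlier applies; the thinking model ID below and the five Spanish cities in the prompt are illustrative assumptions rather than the exact demo setup:

# Reuses the client and types imports from the first example.
response = client.models.generate_content(
    model="gemini-2.0-flash-thinking-exp",  # assumed experimental model ID
    contents=(
        "A salesman must visit Madrid, Barcelona, Valencia, Seville, and "
        "Bilbao once each and return to the start. Write and run Python to "
        "compare the possible routes and plot the shortest one."
    ),
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())]
    ),
)

for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)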

Get started with Gemini 2.0 Code Execution

Do you want to give it a try? Visit the GitHub repository or Colab notebook to experience code execution firsthand. There, you can examine various code execution scenarios and more.

Google invites you to join the Gemini API Developer Forum to discuss your code execution use cases and share feedback on how it could help you. In the near future, the team is looking into multi-tool use, broader library support, and support for additional input modalities, including PDFs.

In conclusion

Code execution in the Gemini 2.0 models is now generally available. With this improvement, the models can run code in a Python sandbox, enabling real-time computations, data analysis, and visualizations that improve response quality. Gemini code execution can be enabled through either the Gemini API or Google AI Studio. The tool has been upgraded to support file input, graph output, and other experimental capabilities, and it supports libraries such as NumPy and Matplotlib. The examples show how the Multimodal Live API and Thinking Mode can be used for real-time data analysis and problem solving. Google invites developers to try the tool and share feedback on future enhancements, such as expanded library support and multi-tool use.

Drakshi
Since June 2023, Drakshi has been writing articles on Artificial Intelligence for govindhtech. She holds a postgraduate degree in business administration and is an enthusiast of Artificial Intelligence.