fix: update model id to gemini-2.0-flash-exp #3


Merged: 1 commit, Apr 9, 2025
2 changes: 1 addition & 1 deletion README.md
@@ -42,7 +42,7 @@ from google.adk.tools import google_search

root_agent = Agent(
name="search_assistant",
-model="gemini-2.0-flash",
+model="gemini-2.0-flash-exp",
instruction="You are a helpful assistant. Answer user questions using Google Search when needed.",
description="An assistant that can search the web.",
tools=[google_search]
2 changes: 1 addition & 1 deletion docs/agents/custom-agents.md
@@ -220,7 +220,7 @@ These are standard `LlmAgent` definitions, responsible for specific tasks. Their
```python
# agent.py (LLM Agent Definitions part)

-GEMINI_FLASH = "gemini-1.5-flash" # Define model constant
+GEMINI_FLASH = "gemini-2.0-flash-exp" # Define model constant

story_generator = LlmAgent(
name="StoryGenerator", model=GEMINI_FLASH,
10 changes: 5 additions & 5 deletions docs/agents/llm-agents.md
@@ -14,12 +14,12 @@ First, you need to establish what the agent *is* and what it's *for*.

* **`description` (Optional, Recommended for Multi-Agent):** Provide a concise summary of the agent's capabilities. This description is primarily used by *other* LLM agents to determine if they should route a task to this agent. Make it specific enough to differentiate it from peers (e.g., "Handles inquiries about current billing statements," not just "Billing agent").

-* **`model` (Required):** Specify the underlying LLM that will power this agent's reasoning. This is a string identifier like `"gemini-2.0-flash-001"`. The choice of model impacts the agent's capabilities, cost, and performance. See the [Models](models.md) page for available options and considerations.
+* **`model` (Required):** Specify the underlying LLM that will power this agent's reasoning. This is a string identifier like `"gemini-2.0-flash-exp"`. The choice of model impacts the agent's capabilities, cost, and performance. See the [Models](models.md) page for available options and considerations.

```python
# Example: Defining the basic identity
capital_agent = LlmAgent(
-model="gemini-2.0-flash-001",
+model="gemini-2.0-flash-exp",
name="capital_agent",
description="Answers user questions about the capital city of a given country."
# instruction and tools will be added next
@@ -46,7 +46,7 @@ The `instruction` parameter is arguably the most critical for shaping an `LlmAge
```python
# Example: Adding instructions
capital_agent = LlmAgent(
-model="gemini-1.5-flash",
+model="gemini-2.0-flash-exp",
name="capital_agent",
description="Answers user questions about the capital city of a given country.",
instruction="""You are an agent that provides the capital city of a country.
@@ -84,7 +84,7 @@ def get_capital_city(country: str) -> str:

# Add the tool to the agent
capital_agent = LlmAgent(
-model="gemini-1.5-flash",
+model="gemini-2.0-flash-exp",
name="capital_agent",
description="Answers user questions about the capital city of a given country.",
instruction="""You are an agent that provides the capital city of a country... (previous instruction text)""",
@@ -171,7 +171,7 @@ Here's the complete basic `capital_agent`:

```python
# Full example code for the basic capital agent
---8<-- "examples/python/snippets/agents/llm-agent/capital-agent.py"
+--8<-- "examples/python/snippets/agents/llm-agent/capital_agent.py"
```

_(This example demonstrates the core concepts. More complex agents might incorporate schemas, context control, planning, etc.)_
2 changes: 1 addition & 1 deletion docs/agents/models.md
@@ -56,7 +56,7 @@ from google.adk.agents import LlmAgent
# --- Example using a stable Gemini Flash model ---
agent_gemini_flash = LlmAgent(
# Use the latest stable Flash model identifier
-model="gemini-2.0-flash-001",
+model="gemini-2.0-flash-exp",
name="gemini_flash_agent",
instruction="You are a fast and helpful Gemini assistant.",
# ... other agent parameters
12 changes: 6 additions & 6 deletions docs/agents/multi-agents.md
@@ -29,13 +29,13 @@ The foundation for structuring multi-agent systems is the parent-child relations
from google.adk.agents import LlmAgent, BaseAgent

# Define individual agents
-greeter = LlmAgent(name="Greeter", model="gemini-2.0-flash-001")
+greeter = LlmAgent(name="Greeter", model="gemini-2.0-flash-exp")
task_doer = BaseAgent(name="TaskExecutor") # Custom non-LLM agent

# Create parent agent and assign children via sub_agents
coordinator = LlmAgent(
name="Coordinator",
-model="gemini-2.0-flash-001",
+model="gemini-2.0-flash-exp",
description="I coordinate greetings and tasks.",
sub_agents=[ # Assign sub_agents here
greeter,
@@ -196,7 +196,7 @@ image_tool = AgentTool(agent=image_agent) # Wrap the agent
# Parent agent uses the AgentTool
artist_agent = LlmAgent(
name="Artist",
-model="gemini-1.5-flash",
+model="gemini-2.0-flash-exp",
instruction="Create a prompt and use the ImageGen tool to generate the image.",
tools=[image_tool] # Include the AgentTool
)
@@ -229,7 +229,7 @@ support_agent = LlmAgent(name="Support", description="Handles technical support

coordinator = LlmAgent(
name="HelpDeskCoordinator",
-model="gemini-1.5-flash",
+model="gemini-2.0-flash-exp",
instruction="Route user requests: Use Billing agent for payment issues, Support agent for technical problems.",
description="Main help desk router.",
# allow_transfer=True is often implicit with sub_agents in AutoFlow
@@ -317,15 +317,15 @@ summarizer = LlmAgent(name="Summarizer", description="Summarizes text.")
# Mid-level agent combining tools
research_assistant = LlmAgent(
name="ResearchAssistant",
-model="gemini-1.5-flash",
+model="gemini-2.0-flash-exp",
description="Finds and summarizes information on a topic.",
tools=[AgentTool(agent=web_searcher), AgentTool(agent=summarizer)]
)

# High-level agent delegating research
report_writer = LlmAgent(
name="ReportWriter",
-model="gemini-1.5-flash",
+model="gemini-2.0-flash-exp",
instruction="Write a report on topic X. Use the ResearchAssistant to gather information.",
tools=[AgentTool(agent=research_assistant)]
# Alternatively, could use LLM Transfer if research_assistant is a sub_agent
2 changes: 1 addition & 1 deletion docs/agents/workflow-agents/loop-agents.md
@@ -42,5 +42,5 @@ In this setup, the `LoopAgent` would manage the iterative process. The `CriticA
#### Full code

```py
---8<-- "examples/python/snippets/agents/workflow-agents/loop-agent-doc-improv-agent.py"
+--8<-- "examples/python/snippets/agents/workflow-agents/loop_agent_doc_improv_agent.py"
```
4 changes: 2 additions & 2 deletions docs/callbacks/index.md
@@ -36,7 +36,7 @@ def my_before_model_logic(
# --- Register it during Agent creation ---
my_agent = LlmAgent(
name="MyCallbackAgent",
-model="gemini-2.0-flash", # Or your desired model
+model="gemini-2.0-flash-exp", # Or your desired model
instruction="Be helpful.",
# Other agent parameters...
before_model_callback=my_before_model_logic # Pass the function here
@@ -132,7 +132,7 @@ def block_forbidden_input(
# Agent definition using the callback
guardrail_agent = LlmAgent(
name="GuardrailAgent",
-model="gemini-2.0-flash",
+model="gemini-2.0-flash-exp",
instruction="Answer user questions.",
before_model_callback=block_forbidden_input
)
2 changes: 1 addition & 1 deletion docs/get-started/running-the-agent.md
@@ -125,7 +125,7 @@ from agents.sessions import InMemorySessionService

# Step 1: Define your agent:
root_agent = Agent(name="my_agent",
-model="gemini-2.0-flash",
+model="gemini-2.0-flash-exp",
instruction="Answer questions.")

# Step 2: Initiate Session
2 changes: 1 addition & 1 deletion docs/guides/responsible-agents.md
@@ -156,7 +156,7 @@ def validate_tool_params(

# Hypothetical Agent setup
root_agent = LlmAgent( # Use specific agent type
-model='gemini-1.5-flash-001',
+model='gemini-2.0-flash-exp',
name='root_agent',
instruction="...",
before_tool_callback=validate_tool_params, # Assign the callback
4 changes: 2 additions & 2 deletions docs/runtime/artifacts.md
@@ -106,7 +106,7 @@ from google.adk.agents import LlmAgent # Any agent
from google.adk.sessions import InMemorySessionService

# Example: Configuring the Runner with an Artifact Service
-my_agent = LlmAgent(name="artifact_user_agent", model="gemini-2.0-flash")
+my_agent = LlmAgent(name="artifact_user_agent", model="gemini-2.0-flash-exp")
artifact_service = InMemoryArtifactService() # Choose an implementation
session_service = InMemorySessionService()

@@ -206,7 +206,7 @@ from google.adk.agents import LlmAgent
from google.adk.sessions import InMemorySessionService

# Your agent definition
-agent = LlmAgent(name="my_agent", model="gemini-2.0-flash")
+agent = LlmAgent(name="my_agent", model="gemini-2.0-flash-exp")

# Instantiate the desired artifact service
artifact_service = InMemoryArtifactService()
2 changes: 1 addition & 1 deletion docs/sessions/memory.md
@@ -83,7 +83,7 @@ from google.genai.types import Content, Part
# --- Constants ---
APP_NAME = "memory_example_app"
USER_ID = "mem_user"
-MODEL = "gemini-1.5-flash" # Use a valid model
+MODEL = "gemini-2.0-flash-exp" # Use a valid model

# --- Agent Definitions ---
# Agent 1: Simple agent to capture information
2 changes: 1 addition & 1 deletion docs/sessions/state.md
@@ -81,7 +81,7 @@ from google.genai.types import Content, Part
# Define agent with output_key
greeting_agent = LlmAgent(
name="Greeter",
-model="gemini-1.5-flash", # Use a valid model
+model="gemini-2.0-flash-exp", # Use a valid model
instruction="Generate a short, friendly greeting.",
output_key="last_greeting" # Save response to state['last_greeting']
)
2 changes: 1 addition & 1 deletion docs/tools/google-cloud-tools.md
@@ -83,7 +83,7 @@ Note: this tutorial includes an agent creation. If you already have an agent, yo
from .tools import sample_toolset

root_agent = LlmAgent(
-model='gemini-2.0-flash',
+model='gemini-2.0-flash-exp',
name='enterprise_assistant',
instruction='Help user, leverage the tools you have access to',
tools=sample_toolset.get_tools(),)
2 changes: 1 addition & 1 deletion docs/tools/mcp-tools.md
@@ -234,7 +234,7 @@ async def get_agent_async():
tools, exit_stack = await get_tools_async()
print(f"Fetched {len(tools)} tools from MCP server.")
root_agent = LlmAgent(
-model='gemini-1.5-flash', # Adjust if needed
+model='gemini-2.0-flash-exp', # Adjust if needed
name='maps_assistant',
instruction='Help user with mapping and directions using available tools.',
tools=tools,
2 changes: 1 addition & 1 deletion docs/tools/openapi-tools.md
@@ -70,7 +70,7 @@ Follow these steps to integrate an OpenAPI spec into your agent:

my_agent = LlmAgent(
name="api_interacting_agent",
-model="gemini-2.0-flash-001", # Or your preferred model
+model="gemini-2.0-flash-exp", # Or your preferred model
tools=api_tools, # Pass the list of generated tools
# ... other agent config ...
)
4 changes: 2 additions & 2 deletions docs/tools/third-party-tools.md
@@ -55,7 +55,7 @@ ADK provides the `LangchainTool` wrapper to integrate tools from the LangChain e
# Define the ADK agent, including the wrapped tool
my_agent = Agent(
name="langchain_tool_agent",
-model="gemini-2.0-flash",
+model="gemini-2.0-flash-exp",
description="Agent to answer questions using TavilySearch.",
instruction="I can answer your questions by searching the internet. Just ask me anything!",
tools=[adk_tavily_tool] # Add the wrapped tool here
@@ -125,7 +125,7 @@ ADK provides the `CrewaiTool` wrapper to integrate tools from the CrewAI library
# Define the ADK agent
my_agent = Agent(
name="crewai_search_agent",
-model="gemini-2.0-flash",
+model="gemini-2.0-flash-exp",
description="Agent to find recent news using the Serper search tool.",
instruction="I can find the latest news for you. What topic are you interested in?",
tools=[adk_serper_tool] # Add the wrapped tool here
12 changes: 8 additions & 4 deletions examples/python/snippets/agents/custom-agent/storyflow-agent.py
@@ -2,7 +2,7 @@
from typing import AsyncGenerator
from typing_extensions import override

-from google.adk.agents import Agent, LlmAgent, BaseAgent, LoopAgent, SequentialAgent
+from google.adk.agents import LlmAgent, BaseAgent, LoopAgent, SequentialAgent
from google.adk.agents.invocation_context import InvocationContext
from google.genai import types
from google.adk.sessions import InMemorySessionService
@@ -14,7 +14,7 @@
APP_NAME = "story_app"
USER_ID = "12345"
SESSION_ID = "123344"
-GEMINI_2_FLASH = "gemini-2.0-flash-001"
+GEMINI_2_FLASH = "gemini-2.0-flash-exp"

# --- Configure Logging ---
logging.basicConfig(level=logging.INFO)
@@ -240,7 +240,9 @@ def call_agent(user_input_topic: str):
Sends a new topic to the agent (overwriting the initial one if needed)
and runs the workflow.
"""
-current_session = session_service.get_session(APP_NAME, USER_ID, SESSION_ID)
+current_session = session_service.get_session(app_name=APP_NAME,
+    user_id=USER_ID,
+    session_id=SESSION_ID)
if not current_session:
logger.error("Session not found!")
return
@@ -260,7 +262,9 @@ def call_agent(user_input_topic: str):
print("\n--- Agent Interaction Result ---")
print("Agent Final Response: ", final_response)

-final_session = session_service.get_session(APP_NAME, USER_ID, SESSION_ID)
+final_session = session_service.get_session(app_name=APP_NAME,
+    user_id=USER_ID,
+    session_id=SESSION_ID)
print("Final Session State:")
import json
print(json.dumps(final_session.state, indent=2))
@@ -1,5 +1,4 @@
# --- Full example code demonstrating LlmAgent with Tools vs. Output Schema ---
-import asyncio
import json # Needed for pretty printing dicts

from google.adk.agents import LlmAgent
@@ -13,7 +12,7 @@
USER_ID = "test_user_456"
SESSION_ID_TOOL_AGENT = "session_tool_agent_xyz"
SESSION_ID_SCHEMA_AGENT = "session_schema_agent_xyz"
-MODEL_NAME = "gemini-2.0-flash-001"
+MODEL_NAME = "gemini-2.0-flash-exp"

# --- 2. Define Schemas ---

@@ -117,7 +116,9 @@ async def call_agent_and_print(

print(f"<<< Agent '{agent_instance.name}' Response: {final_response_content}")

-current_session = session_service.get_session(APP_NAME, USER_ID, session_id)
+current_session = session_service.get_session(app_name=APP_NAME,
+    user_id=USER_ID,
+    session_id=session_id)
stored_output = current_session.state.get(agent_instance.output_key)

# Pretty print if the stored output looks like JSON (likely from output_schema)
@@ -8,7 +8,7 @@
APP_NAME = "doc_writing_app"
USER_ID = "dev_user_01"
SESSION_ID = "session_01"
-GEMINI_MODEL = "gemini-2.0-flash-001"
+GEMINI_MODEL = "gemini-2.0-flash-exp"

# --- State Keys ---
STATE_INITIAL_TOPIC = "quantum physics"
@@ -55,12 +55,12 @@

# Agent Interaction
def call_agent(query):
-content = types.Content(role='user', parts=[types.Part(text=query)])
-events = runner.run(user_id=USER_ID, session_id=SESSION_ID, new_message=content)
+    content = types.Content(role='user', parts=[types.Part(text=query)])
+    events = runner.run(user_id=USER_ID, session_id=SESSION_ID, new_message=content)

-for event in events:
-if event.is_final_response():
-final_response = event.content.parts[0].text
-print("Agent Response: ", final_response)
+    for event in events:
+        if event.is_final_response():
+            final_response = event.content.parts[0].text
+            print("Agent Response: ", final_response)

-call_agent("execute")
+call_agent("execute")
@@ -8,7 +8,7 @@
APP_NAME = "parallel_research_app"
USER_ID = "research_user_01"
SESSION_ID = "parallel_research_session"
-GEMINI_MODEL = "gemini-2.0-flash-001"
+GEMINI_MODEL = "gemini-2.0-flash-exp"

# --- Define Researcher Sub-Agents ---

@@ -8,7 +8,7 @@
APP_NAME = "code_pipeline_app"
USER_ID = "dev_user_01"
SESSION_ID = "pipeline_session_01"
-GEMINI_MODEL = "gemini-2.0-flash-001"
+GEMINI_MODEL = "gemini-2.0-flash-exp"

# --- 1. Define Sub-Agents for Each Pipeline Stage ---

@@ -77,12 +77,12 @@

# Agent Interaction
def call_agent(query):
-content = types.Content(role='user', parts=[types.Part(text=query)])
-events = runner.run(user_id=USER_ID, session_id=SESSION_ID, new_message=content)
+    content = types.Content(role='user', parts=[types.Part(text=query)])
+    events = runner.run(user_id=USER_ID, session_id=SESSION_ID, new_message=content)

-for event in events:
-if event.is_final_response():
-final_response = event.content.parts[0].text
-print("Agent Response: ", final_response)
+    for event in events:
+        if event.is_final_response():
+            final_response = event.content.parts[0].text
+            print("Agent Response: ", final_response)

-call_agent("perform math addition")
+call_agent("perform math addition")
@@ -5,7 +5,7 @@
# A unique name for the agent.
name="basic_search_agent",
# The Large Language Model (LLM) that agent will use.
-model="gemini-2.0-flash",
+model="gemini-2.0-flash-exp",
# A short description of the agent's purpose.
description="Agent to answer questions using Google Search.",
# Instructions to set the agent's behavior.
@@ -57,7 +57,7 @@ def get_current_time(city: str) -> dict:

root_agent = Agent(
name="weather_time_agent",
-model="gemini-2.0-flash",
+model="gemini-2.0-flash-exp",
description=(
"Agent to answer questions about the time and weather in a city."
),
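Taken together, the diff swaps four older Gemini model ids (`gemini-2.0-flash`, `gemini-2.0-flash-001`, `gemini-1.5-flash`, `gemini-1.5-flash-001`) for `gemini-2.0-flash-exp`. A minimal sketch of that substitution as a script (a hypothetical helper for illustration, not part of this PR; the id list is read off the hunks above):

```python
import re

# Longest alternatives first, plus a trailing (?!-) guard, so that
# "gemini-2.0-flash" never matches inside a longer id such as
# "gemini-2.0-flash-001" or an already-updated "gemini-2.0-flash-exp".
_OLD_ID = re.compile(
    r"gemini-(?:2\.0-flash-001|1\.5-flash-001|2\.0-flash|1\.5-flash)(?!-)"
)

def update_model_id(line: str) -> str:
    """Rewrite any old Gemini model id in a line of docs or code."""
    return _OLD_ID.sub("gemini-2.0-flash-exp", line)

print(update_model_id('model="gemini-2.0-flash-001",'))
# prints: model="gemini-2.0-flash-exp",
```

Because the regex refuses to match an id followed by `-`, running the helper twice over the same file is a no-op, which is the property you want from a bulk doc rewrite like this one.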