feat: Add Gemini API integration #650

Merged · 44 commits · Jan 18, 2025

Commits
1939b6d
feat: Add Gemini API integration
devin-ai-integration[bot] Jan 17, 2025
9e4f471
fix: Pass session correctly to track LLM events in Gemini provider
devin-ai-integration[bot] Jan 17, 2025
b95fe6e
feat: Add Gemini integration with example notebook
devin-ai-integration[bot] Jan 17, 2025
72e985a
fix: Add null checks and improve test coverage for Gemini provider
devin-ai-integration[bot] Jan 17, 2025
6df9b7e
style: Add blank lines between test functions
devin-ai-integration[bot] Jan 17, 2025
200dcf1
test: Improve test coverage for Gemini provider
devin-ai-integration[bot] Jan 17, 2025
cd31098
style: Fix formatting in test_gemini.py
devin-ai-integration[bot] Jan 17, 2025
fef63a9
test: Add comprehensive test coverage for edge cases and error handling
devin-ai-integration[bot] Jan 17, 2025
10900f5
test: Add graceful API key handling and skip tests when key is missing
devin-ai-integration[bot] Jan 17, 2025
4b96b0f
style: Fix formatting issues in test files
devin-ai-integration[bot] Jan 17, 2025
062f82d
style: Remove trailing whitespace in test_gemini.py
devin-ai-integration[bot] Jan 17, 2025
d418202
test: Add coverage for error handling, edge cases, and argument handl…
devin-ai-integration[bot] Jan 17, 2025
a9cea74
test: Add streaming exception handling test coverage
devin-ai-integration[bot] Jan 17, 2025
11c7343
style: Apply ruff auto-formatting to test_gemini.py
devin-ai-integration[bot] Jan 17, 2025
4f0b0fe
test: Fix type errors and improve test coverage for Gemini provider
devin-ai-integration[bot] Jan 17, 2025
1a6e1ca
test: Add comprehensive error handling test coverage for Gemini provider
devin-ai-integration[bot] Jan 17, 2025
9efc0f1
style: Apply ruff-format fixes to test_gemini.py
devin-ai-integration[bot] Jan 17, 2025
071a610
fix: Configure Gemini API key before model initialization
devin-ai-integration[bot] Jan 17, 2025
970c318
fix: Update GeminiProvider to properly handle instance methods
devin-ai-integration[bot] Jan 17, 2025
18143b5
fix: Use provider instance in closure for proper method binding
devin-ai-integration[bot] Jan 17, 2025
a27b2e4
fix: Use class-level storage for original method
devin-ai-integration[bot] Jan 17, 2025
aed3a1b
fix: Use module-level storage for original method
devin-ai-integration[bot] Jan 17, 2025
8297371
style: Apply ruff-format fixes to Gemini integration
devin-ai-integration[bot] Jan 17, 2025
9c9af3a
fix: Move Gemini tests to unit test directory for proper coverage rep…
devin-ai-integration[bot] Jan 17, 2025
bff477c
fix: Update Gemini provider to properly handle prompt extraction and …
devin-ai-integration[bot] Jan 17, 2025
f8fd56d
test: Add comprehensive test coverage for Gemini provider session han…
devin-ai-integration[bot] Jan 17, 2025
59db821
style: Apply ruff-format fixes to test files
devin-ai-integration[bot] Jan 17, 2025
f163e23
fix: Pass LlmTracker client to GeminiProvider constructor
devin-ai-integration[bot] Jan 17, 2025
6d7ee0f
remove extra files
areibman Jan 17, 2025
6e4d965
fix: Improve code efficiency and error handling in Gemini provider
devin-ai-integration[bot] Jan 17, 2025
54a9d36
chore: Clean up test files and merge remote changes
devin-ai-integration[bot] Jan 17, 2025
c845a34
test: Add comprehensive test coverage for Gemini provider
devin-ai-integration[bot] Jan 17, 2025
973e59f
fix: Set None as default values and improve test coverage
devin-ai-integration[bot] Jan 17, 2025
481a8d7
build: Add google-generativeai as test dependency
devin-ai-integration[bot] Jan 17, 2025
0871398
docs: Update examples and README for Gemini integration
devin-ai-integration[bot] Jan 17, 2025
cddab5b
add gemini logo image
the-praxs Jan 18, 2025
681cd18
add gemini to examples
the-praxs Jan 18, 2025
9e8e85e
add gemini to docs
the-praxs Jan 18, 2025
e75fa84
refactor handle_response method
the-praxs Jan 18, 2025
86dec80
cleanup gemini tracking code
the-praxs Jan 18, 2025
3384b2d
delete unit test for gemini
the-praxs Jan 18, 2025
392677a
rename and clean gemini example notebook
the-praxs Jan 18, 2025
38e2621
ruff
the-praxs Jan 18, 2025
9e3393d
update docs
the-praxs Jan 18, 2025
194 changes: 194 additions & 0 deletions agentops/llms/providers/gemini.py
@@ -0,0 +1,194 @@
from typing import Optional, Any, Dict, Union

from agentops.llms.providers.base import BaseProvider
from agentops.event import LLMEvent, ErrorEvent
from agentops.session import Session
from agentops.helpers import get_ISO_time, check_call_stack_for_agent_id
from agentops.log_config import logger
from agentops.singleton import singleton


@singleton
class GeminiProvider(BaseProvider):
    """Provider for Google's Gemini API.

    This provider is automatically detected and initialized when agentops.init()
    is called and the google.generativeai package is imported. No manual
    initialization is required."""

    original_generate_content = None
    original_generate_content_async = None

    def __init__(self, client=None):
        """Initialize the Gemini provider.

        Args:
            client: Optional client instance. If not provided, will be set during override.
        """
        super().__init__(client)
        self._provider_name = "Gemini"

    def handle_response(self, response, kwargs, init_timestamp, session: Optional[Session] = None):
        """Handle responses from Gemini API for both sync and streaming modes.

        Args:
            response: The response from the Gemini API
            kwargs: The keyword arguments passed to generate_content
            init_timestamp: The timestamp when the request was initiated
            session: Optional AgentOps session for recording events

        Returns:
            For sync responses: The original response object
            For streaming responses: A generator yielding response chunks
        """
        llm_event = LLMEvent(init_timestamp=init_timestamp, params=kwargs)
        if session is not None:
            llm_event.session_id = session.session_id

        accumulated_content = ""

        def handle_stream_chunk(chunk):
            nonlocal llm_event, accumulated_content
            try:
                if llm_event.returns is None:
                    llm_event.returns = chunk
                    llm_event.agent_id = check_call_stack_for_agent_id()
                    llm_event.model = getattr(chunk, "model", None) or "gemini-1.5-flash"
                    llm_event.prompt = kwargs.get("prompt", kwargs.get("contents", None)) or []

                # Accumulate text from chunk
                if hasattr(chunk, "text") and chunk.text:
                    accumulated_content += chunk.text

                # Extract token counts if available
                if hasattr(chunk, "usage_metadata"):
                    llm_event.prompt_tokens = getattr(chunk.usage_metadata, "prompt_token_count", None)
                    llm_event.completion_tokens = getattr(chunk.usage_metadata, "candidates_token_count", None)

                # If this is the last chunk
                if hasattr(chunk, "finish_reason") and chunk.finish_reason:
                    llm_event.completion = accumulated_content
                    llm_event.end_timestamp = get_ISO_time()
                    self._safe_record(session, llm_event)

            except Exception as e:
                self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
                logger.warning(
                    f"Unable to parse chunk for Gemini LLM call. Error: {str(e)}\n"
                    f"Response: {chunk}\n"
                    f"Arguments: {kwargs}\n"
                )

        # For streaming responses
        if kwargs.get("stream", False):

            def generator():
                for chunk in response:
                    handle_stream_chunk(chunk)
                    yield chunk

            return generator()

        # For synchronous responses
        try:
            llm_event.returns = response
            llm_event.agent_id = check_call_stack_for_agent_id()
            llm_event.prompt = kwargs.get("prompt", kwargs.get("contents", None)) or []
            llm_event.completion = response.text
            llm_event.model = getattr(response, "model", None) or "gemini-1.5-flash"

            # Extract token counts from usage metadata if available
            if hasattr(response, "usage_metadata"):
                llm_event.prompt_tokens = getattr(response.usage_metadata, "prompt_token_count", None)
                llm_event.completion_tokens = getattr(response.usage_metadata, "candidates_token_count", None)

            llm_event.end_timestamp = get_ISO_time()
            self._safe_record(session, llm_event)
        except Exception as e:
            self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
            logger.warning(
                f"Unable to parse response for Gemini LLM call. Error: {str(e)}\n"
                f"Response: {response}\n"
                f"Arguments: {kwargs}\n"
            )

        return response

    def override(self):
        """Override Gemini's generate_content method to track LLM events."""
        self._override_gemini_generate_content()
        self._override_gemini_generate_content_async()

    def _override_gemini_generate_content(self):
        """Override synchronous generate_content method"""
        import google.generativeai as genai

        # Store original method if not already stored
        if self.original_generate_content is None:
            self.original_generate_content = genai.GenerativeModel.generate_content

        provider = self  # Store provider instance for closure

        def patched_function(model_self, *args, **kwargs):
            init_timestamp = get_ISO_time()
            session = kwargs.pop("session", None)

            # Handle positional prompt argument
            event_kwargs = kwargs.copy()
            if args and len(args) > 0:
                prompt = args[0]
                if "contents" not in kwargs:
                    kwargs["contents"] = prompt
                event_kwargs["prompt"] = prompt
                args = args[1:]

            result = provider.original_generate_content(model_self, *args, **kwargs)
            return provider.handle_response(result, event_kwargs, init_timestamp, session=session)

        # Override the method at class level
        genai.GenerativeModel.generate_content = patched_function

    def _override_gemini_generate_content_async(self):
        """Override asynchronous generate_content method"""
        import google.generativeai as genai

        # Store original async method if not already stored
        if self.original_generate_content_async is None:
            self.original_generate_content_async = genai.GenerativeModel.generate_content_async

        provider = self  # Store provider instance for closure

        async def patched_function(model_self, *args, **kwargs):
            init_timestamp = get_ISO_time()
            session = kwargs.pop("session", None)

            # Handle positional prompt argument
            event_kwargs = kwargs.copy()
            if args and len(args) > 0:
                prompt = args[0]
                if "contents" not in kwargs:
                    kwargs["contents"] = prompt
                event_kwargs["prompt"] = prompt
                args = args[1:]

            result = await provider.original_generate_content_async(model_self, *args, **kwargs)
            return provider.handle_response(result, event_kwargs, init_timestamp, session=session)

        # Override the async method at class level
        genai.GenerativeModel.generate_content_async = patched_function

    def undo_override(self):
        """Restore original Gemini methods.

        Note:
            This method is called automatically by AgentOps during cleanup.
            Users should not call this method directly."""
        import google.generativeai as genai

        if self.original_generate_content is not None:
            genai.GenerativeModel.generate_content = self.original_generate_content
            self.original_generate_content = None

        if self.original_generate_content_async is not None:
            genai.GenerativeModel.generate_content_async = self.original_generate_content_async
            self.original_generate_content_async = None
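Taken together, these overrides mean callers can pass an AgentOps `session` directly into `generate_content`; the patched function pops the kwarg before delegating to the original method, and streaming responses are wrapped in a generator that records the event once a chunk carries a `finish_reason`. A minimal usage sketch (placeholder keys; assumes AgentOps' public `init()`/`start_session()` helpers, which are not part of this diff):

```python
import agentops
import google.generativeai as genai

# Importing google.generativeai alongside agentops lets the tracker
# detect the package and install GeminiProvider automatically.
agentops.init("<AGENTOPS_API_KEY>")
genai.configure(api_key="<GEMINI_API_KEY>")

session = agentops.start_session()
model = genai.GenerativeModel("gemini-1.5-flash")

# The patched generate_content pops `session` and records an LLMEvent on it.
response = model.generate_content("Say hello", session=session)
print(response.text)

# Streaming goes through the wrapping generator; the event is recorded
# once a chunk reports a finish_reason.
for chunk in model.generate_content("Count to 3", stream=True, session=session):
    print(chunk.text, end="")

session.end_session("Success")
```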
14 changes: 14 additions & 0 deletions agentops/llms/tracker.py
@@ -16,6 +16,7 @@
from .providers.ai21 import AI21Provider
from .providers.llama_stack_client import LlamaStackClientProvider
from .providers.taskweaver import TaskWeaverProvider
from .providers.gemini import GeminiProvider

original_func = {}
original_create = None
@@ -24,6 +25,9 @@

class LlmTracker:
    SUPPORTED_APIS = {
        "google.generativeai": {
            "0.1.0": ("GenerativeModel.generate_content", "GenerativeModel.generate_content_stream"),
        },
        "litellm": {"1.3.1": ("openai_chat_completions.completion",)},
        "openai": {
            "1.0.0": (
@@ -210,6 +214,15 @@
            else:
                logger.warning(f"Only TaskWeaver>=0.0.1 supported. v{module_version} found.")

        if api == "google.generativeai":
            module_version = version(api)

            if Version(module_version) >= parse("0.1.0"):
                provider = GeminiProvider(self.client)
                provider.override()
            else:
                logger.warning(f"Only google.generativeai>=0.1.0 supported. v{module_version} found.")

    def stop_instrumenting(self):
        OpenAiProvider(self.client).undo_override()
        GroqProvider(self.client).undo_override()
@@ -221,3 +234,4 @@
        AI21Provider(self.client).undo_override()
        LlamaStackClientProvider(self.client).undo_override()
        TaskWeaverProvider(self.client).undo_override()
        GeminiProvider(self.client).undo_override()
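The hunk above gates the override on the installed package version; `version`, `Version`, and `parse` are imported near the top of tracker.py and are not shown in this diff. A standalone sketch of the same check, assuming the standard `importlib.metadata`/`packaging` imports:

```python
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version, parse

def gemini_supported(min_version: str = "0.1.0") -> bool:
    """Mirror the tracker's gate: only instrument google-generativeai >= 0.1.0."""
    try:
        installed = version("google-generativeai")
    except PackageNotFoundError:
        # Package not installed; there is nothing to instrument.
        return False
    return Version(installed) >= parse(min_version)
```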
Binary file added docs/images/external/deepmind/gemini-logo.png
1 change: 1 addition & 0 deletions docs/mint.json
@@ -93,6 +93,7 @@
        "v1/integrations/camel",
        "v1/integrations/cohere",
        "v1/integrations/crewai",
        "v1/integrations/gemini",
        "v1/integrations/groq",
        "v1/integrations/langchain",
        "v1/integrations/llama_stack",
4 changes: 4 additions & 0 deletions docs/v1/examples/examples.mdx
@@ -57,6 +57,10 @@ mode: "wide"
    Ultra-fast LLM inference with Groq Cloud
  </Card>

  <Card title="Gemini" icon={<img src="https://www.github.com/agentops-ai/agentops/blob/main/docs/images/external/deepmind/gemini-logo.png?raw=true" alt="Gemini" />} iconType="image" href="/v1/integrations/gemini">
    Explore Google DeepMind's Gemini with observation via AgentOps
  </Card>

  <Card title="LangChain" icon={<img src="https://www.github.com/agentops-ai/agentops/blob/main/docs/images/external/langchain/langchain-logo.png?raw=true" alt="LangChain" />} iconType="image" href="/v1/examples/langchain">
    Jupyter Notebook with a sample LangChain integration
  </Card>
118 changes: 118 additions & 0 deletions docs/v1/integrations/gemini.mdx
@@ -0,0 +1,118 @@
---
title: Gemini
description: "Explore Google DeepMind's Gemini with observation via AgentOps"
---

import CodeTooltip from '/snippets/add-code-tooltip.mdx'
import EnvTooltip from '/snippets/add-env-tooltip.mdx'

[Gemini (Google Generative AI)](https://ai.google.dev/gemini-api/docs/quickstart) is Google DeepMind's family of multimodal models, available through the `google-generativeai` SDK.
Explore the [Gemini API](https://ai.google.dev/docs) for more information.

<Note>
  `google-generativeai>=0.1.0` is currently supported.
</Note>

<Steps>
  <Step title="Install the AgentOps SDK">
    <CodeGroup>
      ```bash pip
      pip install agentops
      ```
      ```bash poetry
      poetry add agentops
      ```
    </CodeGroup>
  </Step>
  <Step title="Install the Gemini SDK">
    <Note>
      `google-generativeai>=0.1.0` is required for Gemini integration.
    </Note>
    <CodeGroup>
      ```bash pip
      pip install google-generativeai
      ```
      ```bash poetry
      poetry add google-generativeai
      ```
    </CodeGroup>
  </Step>
  <Step title="Add 3 lines of code">
    <CodeTooltip/>
    <CodeGroup>
      ```python python
      import google.generativeai as genai
      import agentops

      agentops.init(<INSERT YOUR API KEY HERE>)
      model = genai.GenerativeModel("gemini-1.5-flash")
      ...
      # End of program (e.g. main.py)
      agentops.end_session("Success") # Success|Fail|Indeterminate
      ```
    </CodeGroup>
    <EnvTooltip />
    <CodeGroup>
      ```python .env
      AGENTOPS_API_KEY=<YOUR API KEY>
      GEMINI_API_KEY=<YOUR GEMINI API KEY>
      ```
    </CodeGroup>
    Read more about environment variables in [Advanced Configuration](/v1/usage/advanced-configuration)
  </Step>
  <Step title="Run your Agent">
    Execute your program and visit [app.agentops.ai/drilldown](https://app.agentops.ai/drilldown) to observe your Agent! 🕵️
    <Tip>
      After your run, AgentOps prints a clickable URL to the console that links directly to your session in the Dashboard.
    </Tip>
    <div/>
    <Frame type="glass" caption="Clickable link to session">
      <img height="200" src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/link-to-session.gif?raw=true" />
    </Frame>
  </Step>
</Steps>

## Full Examples

<CodeGroup>
```python sync
import google.generativeai as genai
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
"Write a haiku about AI and humans working together"
)

print(response.text)
agentops.end_session('Success')
```

```python stream
import google.generativeai as genai
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
"Write a haiku about AI and humans working together",
stream=True
)

for chunk in response:
print(chunk.text, end="")

agentops.end_session('Success')
```
</CodeGroup>
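
The PR also patches `GenerativeModel.generate_content_async`, so async calls are tracked the same way. A minimal sketch under the same setup as the examples above (added here for illustration; not part of the merged docs page):

```python
import asyncio

import google.generativeai as genai
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)
model = genai.GenerativeModel("gemini-1.5-flash")

async def main():
    # The patched generate_content_async records the LLM event when the call completes
    response = await model.generate_content_async(
        "Write a haiku about AI and humans working together"
    )
    print(response.text)

asyncio.run(main())
agentops.end_session('Success')
```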

You can find more examples in the [Gemini Examples](/v1/examples/gemini_examples) section.

<script type="module" src="/scripts/github_stars.js"></script>
<script type="module" src="/scripts/scroll-img-fadein-animation.js"></script>
<script type="module" src="/scripts/button_heartbeat_animation.js"></script>
<script type="css" src="/styles/styles.css"></script>
<script type="module" src="/scripts/adjust_api_dynamically.js"></script>