This project explores how AI chat applications handle memory through two different implementations: a traditional chat with persistent memory and an experimental chat where memories can be manipulated in real time.
- `memory-chat`: A traditional chat application with persistent memory storage
- `false-memory-chat`: An experimental chat where memory can be altered during conversations
This project is optimized for GitHub Codespaces, providing a pre-configured environment with all necessary dependencies.
- Clone the repository
- Install Python dependencies: `pip install`
This project integrates with GitHub Models, making it immediately usable for GitHub users:
- No API key required
- Authentication handled through GitHub
- Automatic endpoint configuration
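The "no API key required" behavior presumably relies on the `GITHUB_TOKEN` environment variable that Codespaces injects automatically. A minimal sketch of how the credential might be picked up (the function name is illustrative, not the project's actual code):

```python
import os

def github_token() -> str:
    """Read the GitHub-provided credential from the environment.

    Assumption: in GitHub Codespaces a GITHUB_TOKEN environment variable
    is injected automatically, so no separate API key is configured.
    """
    token = os.environ.get("GITHUB_TOKEN", "")
    if not token:
        raise RuntimeError("GITHUB_TOKEN not set; are you running in Codespaces?")
    return token
```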
The `false-memory-chat` demonstrates how AI memory can be manipulated in real time, allowing exploration of how context affects AI responses.
```
python false_memory_chat.py
```
- Start the chat application
- Open `chat_memory.json` in your preferred editor
- Modify the conversation history (add or change messages while the chat is running):

```json
[
  {"role": "user", "content": "What's your favorite color?"},
  {"role": "assistant", "content": "I love blue!"}
]
```

- Changes take effect in the next interaction
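The live-edit behavior described above can be achieved by re-reading the memory file on every turn rather than keeping the history only in process memory. A minimal sketch, assuming the history is stored as a plain JSON list of messages (the helper names are illustrative, not necessarily the project's actual implementation):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("chat_memory.json")  # file name taken from the steps above

def load_history(path: Path = MEMORY_FILE) -> list:
    """Re-read the memory file so external edits take effect on the next turn."""
    if not path.exists():
        return []
    return json.loads(path.read_text(encoding="utf-8"))

def save_history(history: list, path: Path = MEMORY_FILE) -> None:
    """Write the full conversation back out after each exchange."""
    path.write_text(json.dumps(history, indent=2), encoding="utf-8")
```

Because the file is reloaded before each model call, any edits made while the chat is running are silently absorbed into the model's context on the next interaction.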
- `history`: Display the current chat history
- `count`: Show the number of messages in memory
- `exit`: Close the application
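The three built-in commands above could be handled by a small dispatcher that runs before user input is sent to the model. A hedged sketch (names and output formats are illustrative, not the project's actual code):

```python
from typing import Optional

def handle_command(cmd: str, history: list) -> Optional[str]:
    """Intercept built-in commands; return None for ordinary chat input."""
    if cmd == "history":
        # One line per stored message, e.g. "user: hello"
        return "\n".join(f"{m['role']}: {m['content']}" for m in history)
    if cmd == "count":
        return f"messages in memory: {len(history)}"
    if cmd == "exit":
        raise SystemExit
    return None  # not a command; treat as a normal chat message
```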
The `memory-chat` demonstrates a conventional chat memory implementation, where conversation history is maintained internally during the session.
```
python memory_chat.py
```
```python
base_url = "https://models.inference.ai.azure.com"
model_name = "gpt-4o-mini"
```
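With that configuration, a single in-session turn amounts to appending the user's message, calling the model with the full history, and remembering the reply. The sketch below shows only the memory handling; `ask_model` is a hypothetical stand-in for the real chat-completions call, which is not reproduced here:

```python
from typing import Callable

def chat_turn(history: list, user_input: str,
              ask_model: Callable[[list], str]) -> str:
    """Append the user message, call the model, and remember the reply.

    `ask_model` stands in for the actual API call, e.g. something like
    client.chat.completions.create(model=model_name, messages=history)
    against the base_url configured above (an assumption, not verified
    against the project's source).
    """
    history.append({"role": "user", "content": user_input})
    reply = ask_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because `history` lives only in the Python process, the memory vanishes when the session ends, in contrast to the file-backed `false-memory-chat`.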
Contributions are welcome! Feel free to:
- Submit bug reports
- Propose new features
- Create pull requests
This project is open source and available under the GNU GPL 3.0 License.