This open notebook chronicles the ongoing development of Milo, a long-term agent project, detailing its evolution alongside other agent experiments and the tools I'm creating. I also dive into broader subjects related to AI architecture.
Telegram for Agent Communication
Setting up Telegram as the communication layer for Milo using BotFather and pyTelegramBotAPI, with message polling and deduplication.
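The deduplication idea can be sketched in a few lines. This is a hedged illustration, not the post's actual code: `handle_once` and `seen_ids` are hypothetical names, and pyTelegramBotAPI normally tracks update offsets for you, but an explicit seen-set guards against reprocessing messages after a restart or an overlapping poll.

```python
# Hypothetical dedup helper: process each Telegram message ID at most once.
seen_ids = set()

def handle_once(message_id, handler):
    """Call handler only the first time a given message_id is seen."""
    if message_id in seen_ids:
        return False          # duplicate: skip silently
    seen_ids.add(message_id)  # remember before handling
    handler(message_id)
    return True
```

In a real bot, `seen_ids` would be persisted (e.g. to disk) so deduplication survives restarts.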
PyTorch for Training Neural Networks
What PyTorch actually gives you — tensors, layers, automatic gradients, and optimizers — explained plainly.
Setting up a tokenizer
An LLM never sees text, only integers: its inputs and outputs are sequences of token IDs. A tokenizer is what converts strings of text into these IDs and back.
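The text-to-IDs round trip can be shown with a toy character-level tokenizer. This is only an illustration of the principle (real LLM tokenizers use subword schemes like BPE); all names here are made up for the sketch.

```python
# Toy character-level tokenizer: text -> integer IDs -> text.
text = "hello"
vocab = sorted(set(text))                     # unique characters: ['e','h','l','o']
stoi = {ch: i for i, ch in enumerate(vocab)}  # string -> integer ID
itos = {i: ch for ch, i in stoi.items()}      # integer ID -> string

ids = [stoi[ch] for ch in text]               # encode: "hello" -> [1, 0, 2, 2, 3]
decoded = "".join(itos[i] for i in ids)       # decode: IDs -> "hello"
```

The model itself only ever operates on `ids`; `decoded` is what you get back after generation.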
Building an LLM from scratch
My pipeline plan for learning the inner workings of a large language model by building one from scratch.
Raspberry Pi Setup with Living Memory (Flat-File Approach)
How I configured a Raspberry Pi 5 with a flat-file living memory system using LangChain and Gemini to let an autonomous agent run continuously.
Building a Persistent AI Agent (Milo) with Self-Awareness
How I architected an AI agent that runs continuously on a Raspberry Pi, talks over Telegram, grounds replies in semantic memory (ChromaDB + RAG), and maintains internal state (hormones, self-model, dreams, and tinkering) so it behaves like a single ongoing presence rather than stateless chat.