I ran into this curated collection while following agentic experiments. The repo pulls together practical LLM apps across RAG, AI Agents, Multi-agent Teams, MCP, Voice Agents, and more. What caught my attention is a recent claim that a Chinese model beat Claude Sonnet 4.5 and GPT-5.2 on the OpenHands agentic coding benchmark, and many linked projects point to open-weights models. That makes the...
How to Build RAG Apps with BRAG LangChain Notebooks
BRAG LangChain provides five notebooks that progress from basic RAG setups to advanced multi-query, routing, indexing, and reranking techniques. This compact, hands-on guide helps developers learn and build real RAG apps without hunting through scattered examples. For a broader view of how retrieval systems fit into agentic workflows, see how to use Chroma Context-1 for local agentic search. Notebook...
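The advanced notebooks center on ideas like multi-query retrieval, where several rephrasings of one question are retrieved separately and their rankings fused. As a rough illustration of that pattern (a toy keyword retriever, not the notebooks' actual LangChain code), here is a minimal multi-query retriever that merges per-variant rankings with reciprocal rank fusion:

```python
# Toy multi-query retrieval sketch: the corpus, scoring, and query variants
# are invented for illustration; real pipelines use embeddings and an LLM
# to generate the variants.
from collections import defaultdict

DOCS = {
    "d1": "vector databases store embeddings for similarity search",
    "d2": "reranking reorders retrieved documents by relevance",
    "d3": "routing sends a query to the most suitable index",
}

def keyword_score(query: str, text: str) -> int:
    # Toy relevance: count query terms appearing in the document.
    return sum(1 for term in query.lower().split() if term in text.lower())

def retrieve(query: str) -> list[str]:
    # Rank every document for a single query variant.
    return sorted(DOCS, key=lambda d: keyword_score(query, DOCS[d]), reverse=True)

def multi_query_retrieve(variants: list[str], k: int = 2) -> list[str]:
    # Reciprocal rank fusion: each variant votes 1/(60 + rank) for its hits,
    # so documents ranked highly by several variants rise to the top.
    scores: dict[str, float] = defaultdict(float)
    for q in variants:
        for rank, doc_id in enumerate(retrieve(q)):
            scores[doc_id] += 1.0 / (60 + rank)
    return sorted(scores, key=lambda d: scores[d], reverse=True)[:k]

variants = [
    "how does reranking work",
    "reorder retrieved documents by relevance",
]
print(multi_query_retrieve(variants))  # → ['d2', 'd1']
```

The constant 60 in the fusion formula is the conventional RRF damping term; the notebooks' own implementations may differ.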
How to Use Activepieces for No-Code AI Agent Automation
Activepieces is an open-source alternative to Zapier that provides a visual canvas for building AI agent workflows and exposes hundreds of MCP servers as ready-made connectors. The promise is simple: build automations with a type-safe pieces framework, reuse templates, and run everything locally or self-hosted. For another approach to agent-driven automation, see how to automate desktop tasks with UI...
GitAgent — Git-native Standard for AI Agents (Docker for Agents)
GitAgent is an open-source specification and CLI tool that introduces a framework-agnostic format for defining AI agents. By treating an agent as a structured directory within a Git repository, GitAgent decouples agent logic from execution environments, allowing developers to define once and deploy across LangChain, AutoGen, CrewAI, OpenAI Assistants, or Claude Code. Component-Based Architecture...
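The core idea is that an agent becomes a plain directory of files a Git repository can track, and any framework can load. As a hypothetical sketch only (the file names and manifest fields below are my assumptions, not the actual GitAgent specification), a framework-agnostic loader might look like this:

```python
# Hypothetical agent-as-directory sketch. The layout (agent.json, prompt.md,
# tools/*.json) and all field names are invented for illustration.
import json
import tempfile
from pathlib import Path

def write_demo_agent(root: Path) -> None:
    # A Git-trackable agent directory: manifest + prompt + tool definitions.
    (root / "tools").mkdir(parents=True)
    (root / "agent.json").write_text(json.dumps({
        "name": "demo-agent",
        "model": "any-open-weights-model",
        "entry_prompt": "prompt.md",
    }))
    (root / "prompt.md").write_text("You are a helpful assistant.")
    (root / "tools" / "search.json").write_text(json.dumps({"name": "search"}))

def load_agent(root: Path) -> dict:
    # Framework-agnostic load: parse the manifest, then resolve the files
    # it references. A LangChain or CrewAI adapter would consume this dict.
    manifest = json.loads((root / "agent.json").read_text())
    manifest["prompt"] = (root / manifest["entry_prompt"]).read_text()
    manifest["tools"] = [
        json.loads(p.read_text()) for p in sorted((root / "tools").glob("*.json"))
    ]
    return manifest

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    write_demo_agent(root)
    agent = load_agent(root)
    print(agent["name"], len(agent["tools"]))  # → demo-agent 1
```

Because the definition is just files, the usual Git workflow (branches, diffs, pull requests, tags) applies to agent versions for free, which is what the "Docker for Agents" analogy is getting at.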
How to Use Chroma Context-1 for Local Agentic Search
Chroma just open-sourced Context-1, a 20B-parameter agentic search model designed to retrieve supporting documents for complex, multi-hop queries. The model is intended as a retrieval subagent that decomposes queries, iteratively searches a corpus, and edits its own context to free capacity for further exploration. The result is retrieval performance comparable to frontier LLMs at much lower cost...
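The decompose-search-edit loop described above can be sketched in a few lines. Everything here is a toy stand-in (the corpus, the pre-split hops, and the keyword scorer are my assumptions), not Context-1's actual behavior, but it shows the control flow: one search per hop, with the context pruned to a budget so later hops have room to explore.

```python
# Toy iterative retrieval subagent: per-hop search plus context editing.
# Corpus, hop decomposition, and scoring are invented for illustration.
CORPUS = [
    "Marie Curie won the Nobel Prize in Physics in 1903.",
    "Marie Curie also won the Nobel Prize in Chemistry in 1911.",
    "The Nobel Prize in Physics is awarded in Stockholm.",
]

def search(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Toy lexical scoring: rank documents by shared words with the query.
    terms = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(terms & set(d.lower().split())))[:k]

def retrieve_multi_hop(hops: list[str], corpus: list[str],
                       context_budget: int = 2) -> list[str]:
    context: list[str] = []
    for hop in hops:
        context += search(hop, corpus)
        # "Edit" the context: keep only the most recent budget-many documents
        # so each new hop still has capacity to add evidence.
        context = context[-context_budget:]
    return context

hops = [
    "Who won the Nobel Prize in Physics in 1903?",
    "Where is the Nobel Prize in Physics awarded?",
]
print(retrieve_multi_hop(hops, CORPUS))
```

A real subagent would have the model itself generate the hops and decide which context entries to drop; the fixed budget here just makes the "edits its own context" step concrete.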
