aifeed.dev the frontpage of AI

AgentSwarms: free hands-on playground to learn agentic AI

agentswarms.fyi is an interactive, browser-first sandbox built for learning and experimenting with multi-agent architectures. It rests on a simple premise: complex AI routing needs to be seen, not just read about. Instead of fighting boilerplate code, you drop into a visual node-graph IDE where you can wire up Orchestrator Agents, give them specific tools, and watch the context payloads flow between them in real time. And because the barrier to entry should be zero, the entire platform runs directly in your browser. Here is how we are changing the way developers and founders learn agentic AI.

1. Visualizing the "Black Box"

When you use a framework to build a multi-agent system, the state management happens behind the scenes. If your agent gets stuck in an infinite hallucination loop, debugging it in a terminal is miserable. With our visual canvas, you can watch the exact execution path: you see the moment an Orchestrator decides to route a task to a Sub-Agent, and you can inspect the exact JSON payload that gets passed back. It turns abstract architecture into a tangible, observable machine.

2. Real Data, Zero Setup (Thanks, DuckDB)

Toy datasets don't teach you how to build enterprise AI. If an LLM can fit the entire dataset into its context window, it isn't using tools; it's just reading. To teach agents how to use tools properly, you need messy, heavy data. We integrated DuckDB WASM directly into the browser, so you can load one of our pre-configured labs (like a 200-row B2B SaaS CRM file) and watch your SQL Agent write and execute raw analytical queries locally, in milliseconds, without ever having to provision a cloud database.

3. The Multimodal Engine: Text, Generation, and Vision

This is the feature I am most excited about. Text-to-text routing is becoming standard, but multimodal orchestration is where the magic happens. We recently launched our Image Playground.
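As a brief aside, the SQL Agent loop from the DuckDB lab can be sketched outside the browser too. The sketch below is a minimal stand-in: it uses Python's built-in sqlite3 instead of DuckDB WASM, the CRM table and column names are invented for illustration, and the "agent-written" query is hard-coded where the platform would have an LLM emit it.

```python
import sqlite3

# Stand-in for the in-browser lab: a tiny in-memory CRM table.
# Schema and data are illustrative, not the platform's actual dataset.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE crm (account TEXT, stage TEXT, arr REAL)")
con.executemany(
    "INSERT INTO crm VALUES (?, ?, ?)",
    [("Acme", "closed_won", 120_000.0),
     ("Globex", "negotiation", 80_000.0),
     ("Initech", "closed_won", 45_000.0)],
)

# On the platform, an LLM would emit this string; here it is canned.
agent_sql = """
    SELECT stage, COUNT(*) AS deals, SUM(arr) AS total_arr
    FROM crm GROUP BY stage ORDER BY total_arr DESC
"""
for stage, deals, total_arr in con.execute(agent_sql):
    print(f"{stage}: {deals} deal(s), ${total_arr:,.0f} ARR")
# Prints:
# closed_won: 2 deal(s), $165,000 ARR
# negotiation: 1 deal(s), $80,000 ARR
```

The point of the pattern is that the agent never sees the raw rows, only the query results, which is what forces real tool use once the data outgrows the context window.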
You can now chain LLMs to generate, critique, and iterate on visual assets autonomously:

1. Have a Copywriter Agent draft an ad brief.
2. Route that text to an Art Director Agent to generate an image.
3. Route both the text and the new image to a Vision Agent to verify that the generated image actually matches the brand guidelines, and force a re-roll if it doesn't.

Wiring this up in code is a routing nightmare. On agentswarms.fyi, you just connect the nodes and watch the AI critique its own artwork.

I built this platform because I believe hands-on, interactive play is the only way to truly internalize how agentic AI works. Whether you are a developer trying to master LangGraph, a RevOps manager wanting to automate CRM analysis, or a founder exploring AI-assisted QA, you need a safe sandbox to test your logic.

The beta is officially live. I would love for you to jump in, try one of the pre-loaded Swarm Templates, and tell me exactly what breaks.
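The generate-critique-reroll loop described above can be sketched in a few lines. Everything here is a hypothetical stub: each function stands in for an LLM or image-model call that, on the platform, would be a node wired on the canvas, and the random pass/fail check stands in for a real vision critique.

```python
import random

def copywriter_agent(product: str) -> str:
    """Draft an ad brief (stub for a text-generation call)."""
    return f"Bold, minimalist ad for {product}; brand color: teal."

def art_director_agent(brief: str, seed: int) -> str:
    """Generate an image from the brief (stub returns an image ID)."""
    return f"image_{seed}"

def vision_agent(brief: str, image_id: str) -> bool:
    """Judge the image against the brief (stub: random verdict)."""
    return random.random() > 0.5

def run_swarm(product: str, max_rolls: int = 5) -> str:
    brief = copywriter_agent(product)
    for seed in range(max_rolls):       # re-roll until the Vision Agent approves
        image_id = art_director_agent(brief, seed)
        if vision_agent(brief, image_id):
            return image_id
    return image_id                     # give up after max_rolls attempts

print(run_swarm("solar-powered backpack"))
```

Even in stub form, the shape of the problem shows through: the routing logic (who gets which payload, when to loop, when to stop) is the part the visual canvas replaces.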
