n8n-nodes-n8ntools-agno

paulocmbraga · 1.3k downloads · MIT · 1.2.4 · TypeScript support: included

N8N Tools AI Agent powered by the Agno framework - ultra-fast multi-agent systems with advanced reasoning, memory, and multi-modal capabilities

Keywords: n8n, n8n-community-node-package, n8ntools, agno, ai, agent, multi-agent, reasoning, memory, multi-modal, anthropic, openai, claude, gemini, performance, ultra-fast


N8N Tools - Agno Agent

Ultra-fast AI Agent powered by the Agno framework - multi-agent systems with ~3μs agent instantiation, advanced reasoning, memory, and multi-modal capabilities.

🚀 Features

🎯 Agent Levels (Progressive Complexity)

  • ⚡ Level 1: Basic Agent - Tools + instructions with ultra-fast execution
  • 📚 Level 2: Knowledge Agent - Knowledge base + storage capabilities
  • 🧮 Level 3: Reasoning Agent - Memory + advanced reasoning capabilities
  • 👥 Level 4: Team Agent - Collaborative multi-agent team systems
  • 🚀 Level 5: Workflow Agent - State management + automation workflows
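
Each level corresponds to an agentType value in the node's configuration. A minimal Level 1 sketch, using the field names from the examples later in this readme (note: "basicAgent", "reasoningAgent", and "teamAgent" appear in those examples; the remaining level names follow the same pattern but are assumptions):

```json
{
  "agentType": "basicAgent",
  "instructions": "You are a concise assistant.",
  "message": "Summarize the incoming workflow data",
  "agnoNativeTools": ["reasoning"]
}
```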

⚡ Performance & Capabilities

  • 🏆 Ultra-fast Performance: ~3μs agent instantiation (powered by Agno)
  • 🌐 23+ Model Providers: Anthropic, OpenAI, Google, Groq, Perplexity, Ollama, and more
  • 🎬 Multi-Modal: Native support for text, images, audio, and video
  • 🧠 Advanced Reasoning: Built-in reasoning tools and logic capabilities
  • 💾 Smart Memory: Conversation context and persistent long-term memory
  • 🛠️ Native Tools: 10+ ultra-fast tool categories optimized for performance
  • 👥 Team Collaboration: Multi-agent systems with different collaboration strategies
  • 🔄 Real-time Streaming: Live response streaming capabilities
  • 🛡️ Fallback Protection: Automatic fallback to secondary models

📦 Installation

  1. Install the package:

    npm install n8n-nodes-n8ntools-agno
  2. Restart N8N so it loads the newly installed node.

  3. Configure your N8N Tools API credentials in N8N.

🔧 Configuration

🔌 Required Inputs

  • 🤖 AI Language Model (Required): Connect any N8N LLM node (Claude, OpenAI, Gemini, etc.)
  • 📊 Main Data (Standard): Regular workflow data flow

🔑 Required Credentials

  • 🔐 N8N Tools API: the N8N Tools API credentials configured in N8N (see Installation, step 3)

💡 Simplified Architecture

The Agno Agent has 2 clean inputs:

  1. 🤖 AI Language Model (Required)

    • Connect Claude, OpenAI, Gemini, Groq, or any LLM node
    • Automatically inherits model configuration (temperature, max tokens, etc.)
    • Supports 23+ model providers for maximum flexibility
  2. 📊 Main Data (Standard)

    • Regular data flow from previous nodes
    • Used as context and input for agent processing

🛠️ Native Tools (Built-in)

Unlike traditional agents, which require separate tool node connections, Agno uses native ultra-fast tools selected from a dropdown:

  • 🧮 Reasoning Tools - Advanced logic and chain-of-thought processing
  • 🔍 Web Search - Search engines and web information retrieval
  • 📊 Data Analysis - Statistical analysis and data processing
  • 💰 Finance Tools - Stock prices and market analysis (YFinance)
  • 📧 Email Tools - Send emails and manage communications
  • 🌐 HTTP Request - Make API calls and HTTP requests
  • 📁 File System - Cloud storage operations (S3, R2, MinIO, GCS)
  • 🗃️ Database - SQL queries and database operations
  • 🧠 Knowledge - Knowledge base search and retrieval
  • 🔄 Shell Tools - Execute shell commands and scripts

🎯 Agent Types

⚡ Basic Agent (Level 1)

Simple agent with tools and instructions - Ultra-fast execution for straightforward tasks.

📚 Knowledge Agent (Level 2)

Agent with knowledge base integration and storage capabilities for information retrieval.

🧮 Reasoning Agent (Level 3)

Advanced agent with memory and reasoning capabilities for complex problem-solving.

👥 Team Agent (Level 4)

Collaborative multi-agent team system with different strategies:

  • Sequential: Agents work in sequence
  • Parallel: Agents work simultaneously
  • Hierarchical: Manager-worker structure
  • Democratic: Consensus-based decisions

🚀 Workflow Agent (Level 5)

Agentic workflow with state management and automation:

  • Linear: Step-by-step execution
  • Branching: Conditional paths
  • Loop: Iterative processing
  • State Machine: Complex state management
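
A workflow agent configuration might look like the sketch below. The workflowConfig block and its field names (workflowType, steps) are assumptions modeled on the teamConfig example elsewhere in this readme, not confirmed node parameters:

```json
{
  "agentType": "workflowAgent",
  "workflowConfig": {
    "workflowType": "stateMachine",
    "steps": ["ingest", "validate", "report"]
  }
}
```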

☁️ Cloud Storage Support

File System Tools support multiple cloud storage providers:

  • Amazon S3 - Native AWS S3 storage
  • Cloudflare R2 - Cost-effective S3-compatible storage
  • MinIO - Self-hosted S3-compatible storage
  • Google Cloud Storage - GCS with S3-compatible API

🎨 Examples

⚡ Simple Setup (Current Architecture)

{
  "workflow": "Ultra-fast Agno Agent",
  "nodes": {
    "llm": "Claude 3.5 Sonnet node → AI Language Model input",
    "agno_agent": {
      "agentType": "reasoningAgent",
      "instructions": "You are a research assistant with web search and reasoning capabilities.",
      "message": "Research quantum computing developments and analyze the data",
      "agnoNativeTools": ["reasoning", "webSearch", "dataAnalysis"],
      "enableMemory": true,
      "memoryKey": "quantum_research"
    }
  }
}

🏢 Enterprise File Processing

{
  "agentType": "basicAgent",
  "instructions": "Process documents from cloud storage and analyze content.",
  "message": "Extract insights from uploaded reports",
  "agnoNativeTools": ["fileSystem", "dataAnalysis", "reasoning"],
  "toolsConfig": {
    "fileSystemConfig": {
      "storageType": "s3",
      "s3BucketName": "company-documents",
      "s3Region": "us-east-1"
    }
  }
}

Team Agent Configuration

{
  "agentType": "teamAgent",
  "teamConfig": {
    "teamSize": 3,
    "collaborationStrategy": "sequential",
    "specialistRoles": ["researcher", "contentWriter", "designer"]
  }
}

Advanced Options

{
  "advancedOptions": {
    "temperature": 0.7,
    "maxTokens": 4000,
    "enableMemory": true,
    "memoryKey": "conversation_1",
    "enableStreaming": true,
    "outputFormat": "markdown",
    "fallbackModel": "gpt-4o-mini"
  }
}

🔗 Model Providers

Supported Providers

  • Anthropic: Claude Sonnet, Haiku, Opus
  • OpenAI: GPT-4, GPT-3.5, o1 models
  • Google: Gemini Pro, Flash, Ultra
  • Groq: Ultra-fast inference
  • Perplexity: Search-powered responses
  • Ollama: Local models (Llama, Mistral, etc.)
  • 20+ Others: Including Azure OpenAI, AWS Bedrock, Hugging Face, etc.

📊 Response Format

{
  "response": "Agent's response text",
  "usage": {
    "promptTokens": 150,
    "completionTokens": 300,
    "totalTokens": 450
  },
  "model": "claude-3-5-sonnet-20241022",
  "agentType": "basicAgent", 
  "executionTime": 1250,
  "reasoning": ["Step 1: Analysis", "Step 2: Synthesis"],
  "toolsUsed": ["reasoning", "dataAnalysis"],
  "memoryKey": "conversation_1"
}
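
A downstream n8n Code node can consume this structure directly. A minimal sketch, assuming the response format shown above (summarizeAgentRun is an illustrative helper, not part of the package):

```javascript
// Sketch of logic for a downstream n8n Code node that consumes
// the Agno Agent output. Field names match the response format above.
function summarizeAgentRun(response) {
  const { promptTokens, completionTokens, totalTokens } = response.usage;
  // Sanity-check that the reported total matches the parts.
  const consistent = promptTokens + completionTokens === totalTokens;
  return {
    model: response.model,
    tokens: totalTokens,
    consistent,
    tools: (response.toolsUsed || []).join(", "),
  };
}

const summary = summarizeAgentRun({
  response: "Agent's response text",
  usage: { promptTokens: 150, completionTokens: 300, totalTokens: 450 },
  model: "claude-3-5-sonnet-20241022",
  toolsUsed: ["reasoning", "dataAnalysis"],
});
```

In a real workflow the object passed in would come from the Agno Agent node's output item rather than a literal.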

🚀 Performance

  • Agent Instantiation: ~3 microseconds (Agno framework optimization)
  • Memory Usage: ~6.5KiB per agent instance
  • Concurrent Agents: Supports hundreds of simultaneous agents
  • Response Time: Optimized for sub-second responses

🆚 vs N8N LangChain Agent

Feature          | N8N LangChain Agent                 | N8N Tools Agno Agent
-----------------|-------------------------------------|----------------------------
Performance      | Standard (~200ms)                   | ⚡ Ultra-fast (~3μs)
Model Providers  | 15+                                 | 🌐 23+
Multi-Modal      | Limited                             | 🎬 Native support
Memory           | External connection required        | 💾 Built-in + External
Team Agents      | —                                   | 👥 Yes (Level 4)
Reasoning        | Basic                               | 🧮 Advanced built-in
Tool System      | N8N tool connections                | 🛠️ Native ultra-fast tools
Architecture     | 4 inputs (LLM, Memory, Tools, Main) | 🔌 2 inputs (LLM, Main)
Workflow Agents  | —                                   | 🚀 Yes (Level 5)
Cloud Storage    | —                                   | ☁️ S3, R2, MinIO, GCS
Cost Model       | Free                                | 💰 Performance tiers

🔌 Architecture Comparison

N8N LangChain Agent (Complex):

  • Inputs: LLM + Memory + Tools + Main Data (4 connections)
  • Processing: LangChain framework (slower)
  • Setup: Multiple node connections required

N8N Tools Agno Agent (Simplified):

  • Inputs: LLM + Main Data (2 connections)
  • Processing: Agno framework (~3μs execution)
  • Setup: Simple connection, native tools built-in
  • Memory: Built-in + configurable persistence
  • Tools: Native ultra-fast tools via dropdown

📚 Documentation

🤝 Support

📄 License

MIT License - see LICENSE file for details.


Powered by Agno Framework | Created by N8N Tools