# Council: AI Agent Platform for Scalable Control
## Project Overview
| GitHub Stats | Value |
| --- | --- |
| Stars | 814 |
| Forks | 33 |
| Language | Python |
| Created | 2023-07-10 |
| License | Apache License 2.0 |
## Introduction
Council is an open-source platform designed for the rapid development and robust deployment of customized generative AI applications. It utilizes teams of agents built in Python (and soon in Rust) to extend the Large Language Model (LLM) tool ecosystem. Council enhances AI agent management by providing advanced control and scalable oversight through features like Controllers, Filters, Evaluators, and Budgets. This enables automated routing between agents, along with comparison, evaluation, and selection of the best results for tasks. The platform integrates with various LLMs and popular libraries such as LangChain, making it a valuable tool for those looking to create sophisticated AI applications with predictable behavior.
## Key Features
- Sophisticated Agents: Enables the creation of reliable agents that can handle complex tasks, including exploring alternatives, creating subgoals, and evaluating quality under budget constraints.
- Data Scientist Friendly: Provides a Python library, a local development environment, and integration with popular frameworks.
- Seamless Production Deployments: Facilitates easy packaging, deployment, and monitoring at scale on multiple platforms via Kubernetes integration.
- Ecosystem Connectivity: Connects with a growing AI agent ecosystem, integrating with LangChain, LlamaIndex, and leading AI models.
- Scalable Oversight: Includes tooling for managing, versioning, monitoring, evaluating, and controlling deployed agents.
## Main Capabilities
- Control Flow: Uses Controllers, Filters, Evaluators, and Budgets to manage agent behavior, including automated routing between agents and evaluating results.
- Agent Composition: Agents can be recursively nested within other agents as AgentChains, allowing complex task execution.
- Skill Integration: Skills can wrap various tasks, including calls to public language model APIs or local models, and provide interfaces to knowledge bases or code generation.
- State Management: Provides native objects for managing agent, chain, and skill context, including message history and intermediate results.
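The control flow described above can be pictured with a small, plain-Python sketch. This is not Council's API (`run`, `Scored`, and the toy chains are illustrative names); it only mirrors the loop of a controller routing to chains, an evaluator scoring each result, and a filter keeping the best answer.

```python
# Illustrative sketch of controller -> chains -> evaluator -> filter,
# in plain Python rather than Council's own classes.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Scored:
    chain: str
    message: str
    score: float

def run(prompt: str,
        chains: Dict[str, Callable[[str], str]],
        controller: Callable[[str], List[str]],
        evaluator: Callable[[str], float],
        threshold: float) -> str:
    # Controller: decide which chains are relevant to this prompt.
    selected = controller(prompt)
    # Execute each selected chain, then score its answer.
    results = [Scored(name, chains[name](prompt), 0.0) for name in selected]
    for r in results:
        r.score = evaluator(r.message)
    # Filter: drop low-scoring results and return the best survivor.
    kept = [r for r in results if r.score >= threshold]
    best = max(kept, key=lambda r: r.score)
    return best.message

chains = {
    "greet": lambda p: "hello, world",
    "math": lambda p: "2 + 2 = 4",
}
answer = run(
    "say hello",
    chains,
    controller=lambda p: ["greet", "math"],  # naive: route to every chain
    evaluator=lambda m: 1.0 if "hello" in m else 0.2,
    threshold=0.5,
)
print(answer)  # hello, world
```

In Council itself these roles are played by real Controller, Evaluator, and Filter objects, with Budgets bounding the work each step may do.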
## Key Concepts
- Agent: Encapsulates end-to-end application logic.
- Controller: Determines user intent and routes prompts to appropriate chains.
- Skill: Services that receive input and return output, such as language model calls or knowledge base interfaces.
- Chain: Directed graphs of skills with a single entry point for execution.
- Evaluator: Assesses the quality of skills or chains at runtime.
- Filter: Filters responses given back to the controller.
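To make the Chain concept concrete, here is a minimal plain-Python sketch (again illustrative names, not Council's API) of skills executing in order against a shared context that records message history:

```python
# A "chain" as an ordered list of skills sharing one context dict,
# loosely mirroring Council's chain/skill context objects.
from typing import Callable, List

Skill = Callable[[dict], str]

def run_chain(skills: List[Skill], user_message: str) -> dict:
    # The context carries the conversation and intermediate results.
    context = {"history": [("user", user_message)]}
    for skill in skills:
        reply = skill(context)
        context["history"].append(("skill", reply))
    return context

def upper_skill(ctx: dict) -> str:
    # Read the latest message and return a transformed result.
    return ctx["history"][-1][1].upper()

def exclaim_skill(ctx: dict) -> str:
    return ctx["history"][-1][1] + "!"

ctx = run_chain([upper_skill, exclaim_skill], "hello world")
print(ctx["history"][-1][1])  # HELLO WORLD!
```

Each skill sees what earlier skills produced, which is what lets chains act as directed graphs of composable steps.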
Council aims to enhance the LLM tool ecosystem by providing advanced control and scalable oversight for AI agents, making it a powerful tool for developing and deploying sophisticated AI applications.
## Real-World Applications
#### Developing Custom AI Agents
- Create Sophisticated Agents: Use Council to build agents that can handle complex tasks by iterating over subgoals, evaluating quality, and managing budget constraints.
```python
import dotenv

from council.agents import Agent
from council.chains import Chain
from council.controllers import LLMController
from council.evaluators import LLMEvaluator
from council.filters import BasicFilter
from council.llm import OpenAILLM
from council.skills import LLMSkill

# Set up API keys (expects OPENAI_API_KEY in a .env file) and the LLM instance
dotenv.load_dotenv()
openai_llm = OpenAILLM.from_env()

# Define skills and chains
hw_skill = LLMSkill(llm=openai_llm, system_prompt="You are responding to every prompt with a short poem titled hello world")
hw_chain = Chain(name="Hello World", description="Answers with a poem titled Hello World", runners=[hw_skill])

# Create the controller and evaluator
controller = LLMController(llm=openai_llm, chains=[hw_chain], response_threshold=5)
evaluator = LLMEvaluator(llm=openai_llm)

# Finalize the agent setup
agent = Agent(controller=controller, evaluator=evaluator, filter=BasicFilter())

# Execute the agent
result = agent.execute_from_user_message("hello world?!")
print(result.best_message.message)
```
#### Seamless Deployment
- **Deploy Agents at Scale**: Use Council's integration with Kubernetes to package and deploy agents on multiple platforms.
```bash
# Install Council
pip install council-ai
```

Set your API keys in a `.env` file and load them in your application (for example with `dotenv.load_dotenv()` in Python), then define and deploy your agent; refer to the detailed documentation for deployment steps.
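One common way to package an agent for Kubernetes is a container image. The sketch below is hypothetical, not an official Council artifact: the `app.py` entry point and the base-image choice are assumptions.

```dockerfile
# Hypothetical container sketch for a Council agent; app.py is an assumed
# entry point that builds and runs your agent.
FROM python:3.11-slim
WORKDIR /app
RUN pip install --no-cache-dir council-ai
COPY app.py .
# Inject keys such as OPENAI_API_KEY at deploy time (e.g. via a
# Kubernetes Secret); never bake them into the image.
CMD ["python", "app.py"]
```

A Kubernetes Deployment can then reference this image and surface the secret as environment variables.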
#### Integrating with AI Ecosystem
- Connect with Popular Libraries: Integrate Council with libraries like LangChain, LlamaIndex, and leading AI models to leverage a broader ecosystem.
```python
from council.chains import Chain
from council.controllers import LLMController
from council.skills import LLMSkill

# Example: a chain intended to produce LangChain-style responses.
# Assumes an `openai_llm` instance created earlier with OpenAILLM.from_env().
langchain_skill = LLMSkill(llm=openai_llm, system_prompt="Use LangChain to generate responses")
langchain_chain = Chain(name="LangChain Agent", description="Uses LangChain for responses", runners=[langchain_skill])

# Integrate with Council
controller = LLMController(llm=openai_llm, chains=[langchain_chain], response_threshold=5)
```
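More generally, an external library's callable can be adapted into a skill-shaped function. The sketch below is plain Python with a stand-in for a LangChain runnable; `as_skill` and `fake_langchain` are illustrative names, not Council or LangChain APIs.

```python
# Adapter pattern: wrap a text-in/text-out function from any library
# so it matches a simple skill signature (context dict in, reply out).
from typing import Callable

def as_skill(external_fn: Callable[[str], str]) -> Callable[[dict], str]:
    def skill(ctx: dict) -> str:
        # Feed the latest message in the history to the external library.
        prompt = ctx["history"][-1][1]
        return external_fn(prompt)
    return skill

def fake_langchain(prompt: str) -> str:
    # Stand-in for e.g. a LangChain chain's invoke/run method.
    return f"[langchain] {prompt}"

skill = as_skill(fake_langchain)
print(skill({"history": [("user", "hi")]}))  # [langchain] hi
```

The same wrapper shape works for LlamaIndex query engines or any other text-producing component you want to route through an agent.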
## Conclusion
#### Key Points
- **Advanced AI Agent Control**: Council enables sophisticated control and scalable oversight for AI agents, allowing predictable behavior through Controllers, Filters, Evaluators, and Budgets.
- **Integration and Connectivity**: Supports multiple Large Language Models (LLMs) and integrates with popular libraries like LangChain, facilitating a robust AI agent ecosystem.
- **Scalable Deployments**: Facilitates easy packaging, deployment, and monitoring of agents on multiple platforms via Kubernetes integration.
- **Enterprise-Grade Monitoring**: Aims to provide advanced quality control and monitoring features in future releases.
- **Community and Development**: Open-source, with active development and a welcoming community for contributions.
#### Future Potential
- **Enhanced Enterprise Features**: Upcoming releases will include enterprise-grade monitoring and quality control.
- **Expanded Ecosystem**: Continued integration with more AI models and libraries to enhance the platform's capabilities.
- **Performance Optimization**: Future support for Rust to meet performance-critical application needs.
For further insights, explore the original [**chain-ml/council**](https://github.com/chain-ml/council) repository.
## Attributions
Content derived from the [**chain-ml/council**](https://github.com/chain-ml/council) repository on GitHub. Original materials are licensed under their respective terms.