
Beginning with AutoGen — Automated multi-agent | by Ravindra Elicherla | Nov 2023


“Let the agents talk to each other and solve tasks without human interaction”

What is AutoGen?

AutoGen was announced by Microsoft on September 25th, 2023. While the use cases are still at an early stage, as per Doug Burger, Technical Fellow at Microsoft, “AutoGen is poised to fundamentally transform and extend what large language models are capable of. This is one of the most exciting developments I have seen in AI recently.”

AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.

Image source: https://github.com/microsoft/autogen

This enables building next-gen LLM applications based on multi-agent conversations. What does this mean? We can make agent1 talk to agent2 and get the work done. It is like two robots talking to each other, just like humans, and completing the work they were allotted. Imagine you have a maid and a cook at home. How much easier would your life be if they talked to each other, finished the cooking, and still kept the kitchen tidy?
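The maid-and-cook picture can be sketched in plain Python. The `ToyAgent` class below is invented purely for illustration (it is not the AutoGen API): each agent can receive a message and reply to the sender, and a `None` reply ends the exchange.

```python
class ToyAgent:
    """Minimal sketch of a conversable agent: it can send and
    receive messages. Invented for illustration, not AutoGen's API."""

    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn      # maps an incoming message to a reply (or None)
        self.transcript = []          # messages this agent has sent

    def send(self, message, recipient):
        self.transcript.append((self.name, message))
        recipient.receive(message, self)

    def receive(self, message, sender):
        reply = self.reply_fn(message)
        if reply is not None:         # a None reply ends the conversation
            self.send(reply, sender)

cook = ToyAgent("cook", lambda msg: "dinner is ready" if "cook" in msg else None)
maid = ToyAgent("maid", lambda msg: "kitchen is tidy" if "ready" in msg else None)

maid.send("please cook dinner", cook)
```

The conversation stops by itself after three messages, because the cook has no reply to "kitchen is tidy" — the same idea AutoGen generalizes with LLM-driven agents and termination messages.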

AutoGen agents have two key characteristics:

  • Conversable: Agents in AutoGen are conversable, which means that any agent can send and receive messages from other agents to initiate or continue a conversation.
  • Customizable: Agents in AutoGen can be customized to integrate LLMs, humans, tools, or a combination of them.

In this blog, let's create a multi-agent bot. Start by creating and activating a virtual environment:

python3 -m venv autogentest
source autogentest/bin/activate

You can also use Conda to create a virtual environment.

Let's install pyautogen.

pip install pyautogen

This package enables next-gen LLM applications via a multi-agent conversation framework. pyautogen requires an OpenAI API key.

1. Import AutoGen
import autogen

2. Create a config list

config_list = [
    {
        "model": "gpt-3.5-turbo-16k",
        "api_key": "sk-asv4bePDDgxxxxxxxxxxxxxxxxxxxxx",  # your OpenAI API key
    }
]
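Hardcoding the key in source code is risky (it is easy to leak via version control). A common alternative — a sketch, assuming you have exported `OPENAI_API_KEY` in your shell — is to read it from the environment:

```python
import os

# Read the key from the environment instead of committing it to source.
# Assumes you ran e.g. `export OPENAI_API_KEY=sk-...` beforehand.
config_list = [
    {
        "model": "gpt-3.5-turbo-16k",
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
    }
]
```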

3. LLM configuration

llm_config = {
    "request_timeout": 600,
    "seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

The seed is used for caching and reproducibility.

Temperature controls sampling randomness: 0 is low on creativity and 1 is high. If we need deterministic answers, setting it to 0 is a good choice.
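What temperature does can be illustrated with a toy softmax (a sketch of the general sampling idea, not OpenAI's internals): logits are divided by the temperature before being turned into probabilities, so low values sharpen the distribution and high values flatten it. Temperature 0 corresponds in practice to greedy (argmax) decoding, since dividing by zero is undefined.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution toward the top logit;
    # higher temperature flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, 0.1)    # nearly all mass on the top logit
flat = softmax_with_temperature(logits, 10.0)    # close to uniform
```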

4. Define the assistant

assistant = autogen.AssistantAgent(
    name="Assistant",
    llm_config=llm_config,
    system_message="I'm Assistant",
)

5. Create the user proxy. This is a user agent and works as a proxy for the user.

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    llm_config=llm_config,
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # set to True or an image name like "python:3" to use Docker
    },
    system_message="Reply TERMINATE if the task has been solved to your full satisfaction. Otherwise, reply CONTINUE if the task is not solved yet.",
)

The plan is to have no human input; the conversation is terminated when the proxy user says “TERMINATE”.
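The termination check is just a plain Python lambda, so it can be exercised without running any agents. A quick sanity check of its behavior:

```python
# Same predicate as passed to is_termination_msg above: a message dict ends
# the chat only if its content, after stripping trailing whitespace,
# ends with "TERMINATE".
is_termination_msg = lambda x: x.get("content", "").rstrip().endswith("TERMINATE")

print(is_termination_msg({"content": "All done. TERMINATE"}))     # True
print(is_termination_msg({"content": "Not done yet, CONTINUE"}))  # False
print(is_termination_msg({}))  # False: a missing "content" key defaults to ""
```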

6. We are now done with the setup. One last step is to ask the agent to do something useful. I asked the agent to summarize the ICC World Cup match that India won a couple of minutes earlier (on November 5th) against South Africa.

user_proxy.initiate_chat(
    assistant,
    message="""Give me a summary of this news item https://timesofindia.indiatimes.com/sports/cricket/icc-world-cup/news/how-indias-juggernaut-continued-at-world-cup-with-rout-of-south-africa/articleshow/104991030.cms""",
)

7. Now run the program

python autogentest.py

The agent did fairly well; below is the output.

Use case: this agent can be used to create news shorts for quick consumption of news.

Ref:

  1. https://www.microsoft.com/en-us/research/blog/autogen-enabling-next-generation-large-language-model-applications/
  2. https://github.com/microsoft/autogen

This story is published on Generative AI. Connect with us on LinkedIn and follow Zeniteq to stay in the loop with the latest AI stories. Let's shape the future of AI together!
