A Coding Guide to Design and Orchestrate Advanced ReAct-Based Multi-Agent Workflows with AgentScope and OpenAI

In this tutorial, we build an advanced multi-agent incident response system using AgentScope. We orchestrate multiple ReAct agents, each with a clearly defined role such as routing, triage, analysis, writing, and review, and connect them through structured routing and a shared message hub. By integrating OpenAI models, lightweight tool calling, and a simple internal runbook, we demonstrate how complex, real-world agentic workflows can be composed in pure Python without heavy infrastructure or brittle glue code. Check out the FULL CODES here.

!pip -q install "agentscope>=0.1.5" pydantic nest_asyncio

import os, json, re
from getpass import getpass
from typing import Literal
from pydantic import BaseModel, Field
import nest_asyncio
nest_asyncio.apply()

from agentscope.agent import ReActAgent
from agentscope.message import Msg, TextBlock
from agentscope.model import OpenAIChatModel
from agentscope.formatter import OpenAIChatFormatter
from agentscope.memory import InMemoryMemory
from agentscope.tool import Toolkit, ToolResponse, execute_python_code
from agentscope.pipeline import MsgHub, sequential_pipeline

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("Enter OPENAI_API_KEY (hidden): ")

OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")

We set up the execution environment and install all required dependencies so the tutorial runs reliably on Google Colab. We securely load the OpenAI API key and initialize the core AgentScope components that will be shared across all agents.

RUNBOOK = [
    {"id": "P0", "title": "Severity Policy", "text": "P0 critical outage, P1 major degradation, P2 minor issue"},
    {"id": "IR1", "title": "Incident Triage Checklist", "text": "Assess blast radius, timeline, deployments, errors, mitigation"},
    {"id": "SEC7", "title": "Phishing Escalation", "text": "Disable account, reset sessions, block sender, preserve evidence"},
]

def _score(q, d):
    q = set(re.findall(r"[a-z0-9]+", q.lower()))
    d = re.findall(r"[a-z0-9]+", d.lower())
    return sum(1 for w in d if w in q) / max(1, len(d))

async def search_runbook(query: str, top_k: int = 2) -> ToolResponse:
    """Search the internal runbook and return the top matching entries."""
    # Join title and text with a space so boundary words don't fuse during scoring.
    ranked = sorted(RUNBOOK, key=lambda r: _score(query, r["title"] + " " + r["text"]), reverse=True)[: max(1, int(top_k))]
    text = "\n\n".join(f"[{r['id']}] {r['title']}\n{r['text']}" for r in ranked)
    return ToolResponse(content=[TextBlock(type="text", text=text)])

toolkit = Toolkit()
toolkit.register_tool_function(search_runbook)
toolkit.register_tool_function(execute_python_code)

We define a lightweight internal runbook and implement a simple relevance-based search tool over it. We register this function along with a Python execution tool, enabling agents to retrieve policy knowledge or compute results dynamically. It demonstrates how we augment agents with external capabilities beyond pure language reasoning.
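Before wiring the tool into an agent, it helps to see the scoring heuristic in isolation. Below is a minimal standalone sketch that re-declares the runbook and the token-overlap score locally so it runs on its own:

```python
import re

RUNBOOK = [
    {"id": "P0", "title": "Severity Policy", "text": "P0 critical outage, P1 major degradation, P2 minor issue"},
    {"id": "IR1", "title": "Incident Triage Checklist", "text": "Assess blast radius, timeline, deployments, errors, mitigation"},
    {"id": "SEC7", "title": "Phishing Escalation", "text": "Disable account, reset sessions, block sender, preserve evidence"},
]

def score(query: str, doc: str) -> float:
    # Fraction of document tokens that also appear in the query.
    q = set(re.findall(r"[a-z0-9]+", query.lower()))
    d = re.findall(r"[a-z0-9]+", doc.lower())
    return sum(1 for w in d if w in q) / max(1, len(d))

def top_entry(query: str) -> str:
    # Rank runbook entries by score and return the best entry's id.
    ranked = sorted(RUNBOOK, key=lambda r: score(query, r["title"] + " " + r["text"]), reverse=True)
    return ranked[0]["id"]
```

A phishing-related query surfaces the SEC7 entry because its tokens ("phishing", "preserve", "evidence") overlap the query, while the other entries score zero.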

def make_model():
    return OpenAIChatModel(
        model_name=OPENAI_MODEL,
        api_key=os.environ["OPENAI_API_KEY"],
        generate_kwargs={"temperature": 0.2},
    )

class Route(BaseModel):
    lane: Literal["triage", "analysis", "report", "unknown"] = Field(...)
    goal: str = Field(...)

router = ReActAgent(
    name="Router",
    sys_prompt="Route the request to triage, analysis, or report and output structured JSON only.",
    model=make_model(),
    formatter=OpenAIChatFormatter(),
    memory=InMemoryMemory(),
)

triager = ReActAgent(
    name="Triager",
    sys_prompt="Classify severity and immediate actions using runbook search when useful.",
    model=make_model(),
    formatter=OpenAIChatFormatter(),
    memory=InMemoryMemory(),
    toolkit=toolkit,
)

analyst = ReActAgent(
    name="Analyst",
    sys_prompt="Analyze logs and compute summaries using python tool when helpful.",
    model=make_model(),
    formatter=OpenAIChatFormatter(),
    memory=InMemoryMemory(),
    toolkit=toolkit,
)

writer = ReActAgent(
    name="Writer",
    sys_prompt="Write a concise incident report with clear structure.",
    model=make_model(),
    formatter=OpenAIChatFormatter(),
    memory=InMemoryMemory(),
)

reviewer = ReActAgent(
    name="Reviewer",
    sys_prompt="Critique and improve the report with concrete fixes.",
    model=make_model(),
    formatter=OpenAIChatFormatter(),
    memory=InMemoryMemory(),
)

We construct multiple specialized ReAct agents and a structured router that decides how each user request should be handled. We assign clear responsibilities to the triage, analysis, writing, and review agents, ensuring separation of concerns.
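The value of the structured `Route` schema is that malformed router output fails validation before it can derail the pipeline. A minimal standalone sketch (re-declaring the model locally) shows Pydantic accepting a well-formed response and rejecting an out-of-vocabulary lane:

```python
from typing import Literal
from pydantic import BaseModel, Field, ValidationError

class Route(BaseModel):
    # Only these four lanes are legal router outputs.
    lane: Literal["triage", "analysis", "report", "unknown"] = Field(...)
    goal: str = Field(...)

# A well-formed router response parses cleanly...
ok = Route.model_validate({"lane": "triage", "goal": "classify severity"})

# ...while an out-of-vocabulary lane raises ValidationError before
# it ever reaches the dispatch logic.
try:
    Route.model_validate({"lane": "escalate", "goal": "page on-call"})
    rejected = False
except ValidationError:
    rejected = True
```

Constraining the lane to a `Literal` keeps the downstream `if/elif` dispatch exhaustive: any value the model invents falls back to the `unknown` branch or fails validation outright.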

LOGS = """timestamp,service,status,latency_ms,error
2025-12-18T12:00:00Z,checkout,200,180,false
2025-12-18T12:00:05Z,checkout,500,900,true
2025-12-18T12:00:10Z,auth,200,120,false
2025-12-18T12:00:12Z,checkout,502,1100,true
2025-12-18T12:00:20Z,search,200,140,false
2025-12-18T12:00:25Z,checkout,500,950,true
"""

def msg_text(m: Msg) -> str:
    blocks = m.get_content_blocks("text")
    if blocks is None:
        return ""
    if isinstance(blocks, str):
        return blocks
    if isinstance(blocks, list):
        return "\n".join(str(x) for x in blocks)
    return str(blocks)

We introduce sample log data and a utility function that normalizes agent outputs into clean text. We ensure that downstream agents can safely consume and refine earlier responses without format issues. It focuses on making inter-agent communication robust and predictable.
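To make the Analyst's job concrete, here is the kind of computation it might run through the Python execution tool: a standalone sketch that parses the CSV sample with the standard library and derives a per-service error rate (the function name `error_rates` is our own, not part of the tutorial's code):

```python
import csv
import io

LOGS = """timestamp,service,status,latency_ms,error
2025-12-18T12:00:00Z,checkout,200,180,false
2025-12-18T12:00:05Z,checkout,500,900,true
2025-12-18T12:00:10Z,auth,200,120,false
2025-12-18T12:00:12Z,checkout,502,1100,true
2025-12-18T12:00:20Z,search,200,140,false
2025-12-18T12:00:25Z,checkout,500,950,true
"""

def error_rates(raw: str) -> dict:
    # Per-service error rate: errored rows / total rows.
    stats = {}
    for row in csv.DictReader(io.StringIO(raw)):
        total, errs = stats.get(row["service"], (0, 0))
        stats[row["service"]] = (total + 1, errs + (row["error"] == "true"))
    return {svc: errs / total for svc, (total, errs) in stats.items()}
```

On this sample, checkout shows a 75% error rate (3 of 4 requests) while auth and search are clean, which is exactly the signal a triage agent needs to flag checkout as the blast radius.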

async def run_demo(user_request: str):
    route_msg = await router(Msg("user", user_request, "user"), structured_model=Route)
    lane = (route_msg.metadata or {}).get("lane", "unknown")

    if lane == "triage":
        first = await triager(Msg("user", user_request, "user"))
    elif lane == "analysis":
        first = await analyst(Msg("user", user_request + "\n\nLogs:\n" + LOGS, "user"))
    elif lane == "report":
        draft = await writer(Msg("user", user_request, "user"))
        first = await reviewer(Msg("user", "Review and improve:\n\n" + msg_text(draft), "user"))
    else:
        first = Msg("system", "Could not route request.", "system")

    async with MsgHub(
        participants=[triager, analyst, writer, reviewer],
        announcement=Msg("Host", "Refine the final answer collaboratively.", "assistant"),
    ):
        await sequential_pipeline([triager, analyst, writer, reviewer])

    return {"route": route_msg.metadata, "initial_output": msg_text(first)}

result = await run_demo(
    "We see repeated 5xx errors in checkout. Classify severity, analyze logs, and produce an incident report."
)
print(json.dumps(result, indent=2))

We orchestrate the full workflow by routing the request, executing the appropriate agent, and running a collaborative refinement loop using a message hub. We coordinate multiple agents in sequence to improve the final output before returning it to the user. It brings together all earlier components into a cohesive, end-to-end agentic pipeline.
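One detail worth noting: the top-level `await run_demo(...)` works in Colab and Jupyter because the notebook already runs an event loop (which is why we applied `nest_asyncio` at the start). In a plain Python script there is no running loop, so the same coroutine would be driven with `asyncio.run` instead. A tiny sketch of the difference, using a stand-in coroutine:

```python
import asyncio

async def main():
    # Stand-in for run_demo: any awaitable entry point.
    return "done"

# In a notebook cell you can simply write:  result = await main()
# In a plain script, create and drive the loop explicitly:
result = asyncio.run(main())
```

Keeping the workflow inside a single async entry point like `run_demo` means the same code moves from a Colab cell to a script or service by changing only this one invocation line.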

In conclusion, we showed how AgentScope enables us to design robust, modular, and collaborative agent systems that go beyond single-prompt interactions. We routed tasks dynamically, invoked tools only when needed, and refined outputs through multi-agent coordination, all within a clean and reproducible Colab setup. This pattern illustrates how we can scale from simple agent experiments to production-style reasoning pipelines while maintaining clarity, control, and extensibility in our agentic AI applications.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
