Skip to content

UN-3266 [FEAT] Async Executor Backend for Prompt Studio#1849

Open
harini-venkataraman wants to merge 91 commits intomainfrom
feat/async-prompt-service-v2
Open

UN-3266 [FEAT] Async Executor Backend for Prompt Studio#1849
harini-venkataraman wants to merge 91 commits intomainfrom
feat/async-prompt-service-v2

Conversation

@harini-venkataraman
Copy link
Contributor

@harini-venkataraman harini-venkataraman commented Mar 11, 2026

What

Introduces a pluggable executor system that replaces Docker-container-based tool execution with Celery worker tasks, and migrates the Prompt Studio IDE to an async execution model using Socket.IO for result delivery. Gated behind the async_prompt_execution feature flag for safe rollout.

Why

The existing architecture has several limitations:

  • Prompt Studio IDE executions block HTTP connections — Django workers are tied up waiting for LLM responses (up to minutes per prompt), limiting concurrency and causing timeouts
  • Docker-container-based tool execution requires spinning up containers per workflow step, adding overhead and complicating deployments
  • No real-time feedback — the frontend polls for results, wasting resources and providing poor UX
  • Tight coupling between prompt-service HTTP calls and the Django backend makes it hard to scale execution independently

How

Backend (65 files)

  • Async Prompt Studio views: index_document, fetch_response, single_pass_extraction now return HTTP 202 (accepted) with a task_id instead of blocking. Gated by async_prompt_execution feature flag — old sync path preserved as fallback
  • Celery callback tasks (backend/prompt_studio/prompt_studio_core_v2/tasks.py): ide_index_complete, ide_prompt_complete, ide_prompt_error etc. run on prompt_studio_callback queue, perform ORM writes via OutputManagerHelper, and emit prompt_studio_result Socket.IO events
  • Worker dispatch Celery app (backend/backend/worker_celery.py): A second Celery app instance that coexists with Django's Celery app, configured to route tasks to executor workers
  • prompt_studio_helper.py rewrite: Removed PromptTool HTTP calls entirely. New build_index_payload(), build_fetch_response_payload(), build_single_pass_payload() methods construct ExecutionContext objects with all ORM data pre-loaded
  • Removed: backend/backend/workers/, file_execution_tasks.py, celery_task.py (old in-process workers)

Workers (70 files, ~19,500 new lines)

  • Executor Worker (workers/executor/): New WorkerType.EXECUTOR Celery worker with LegacyExecutor handling all operations: extract, index, answer_prompt, single_pass_extraction, summarize, agentic_extraction, structure_pipeline
  • Pluggable Executor Framework: BaseExecutorExecutorRegistry (class-decorator self-registration) → ExecutionOrchestratorExecutionDispatcher (Celery send_task)
  • ExecutorToolShim: Lightweight stand-in for BaseTool that satisfies SDK1 adapter interfaces without Docker context
  • Structure tool task (workers/file_processing/structure_tool_task.py): Celery-native replacement for Docker-based StructureTool.run() with profile overrides, smart table detection, and output file management
  • 26 test files (~10,000+ lines): Comprehensive coverage from unit tests through full Celery eager-mode integration tests

SDK1 (22 files)

  • Execution framework (unstract/sdk1/src/unstract/sdk1/execution/): ExecutionContext, ExecutionResult (serializable DTOs for Celery JSON transport), ExecutionDispatcher (dispatch() + dispatch_with_callback()), BaseExecutor, ExecutorRegistry

Frontend (275 files)

  • Async prompt execution: usePromptStudioSocket hook listens for prompt_studio_result Socket.IO events. usePromptRun rewritten from polling to fire-and-forget. PromptRun.jsx conditionally renders async or sync path based on feature flag
  • CRA → Vite migration: Build tooling migrated to Vite + Bun with Biome linter replacing ESLint
  • Dashboard metrics UI: New metrics dashboard with charts, LLM usage table, and recent activity
  • Card-based layouts: New card grid views for pipelines and API deployments

Docker / Infrastructure

  • Added: worker-executor-v2, worker-prompt-studio-callback, worker-metrics
  • Promoted: All workers-v2 services from opt-in (profiles: [workers-v2]) to default

Architecture Change

BEFORE:  FE → Django (blocks) → PromptTool HTTP → prompt-service → LLM
AFTER:   FE → Django (HTTP 202) → ExecutionDispatcher → Executor Worker → LLM
              ↑ Socket.IO result    (Celery send_task)    (LegacyExecutor)

Can this PR break any existing features? If yes, please list possible items. If no, please explain why.

Yes, potential breaking changes — mitigated by feature flag:
Prompt Studio IDE async path — gated by async_prompt_execution feature flag. When flag is OFF (default), all 3 endpoints (index_document, fetch_response, single_pass_extraction) use the old sync path returning HTTP 200. No behavior change for existing users.

Review Guidelines

This PR touches 441 files across backend, frontend, workers, and SDK1. Below is a structured review path to navigate it efficiently.

Code Structure Overview

unstract/sdk1/src/unstract/sdk1/execution/   ← Core abstractions (review FIRST)
    context.py          ExecutionContext dataclass (the universal payload)
    result.py           ExecutionResult dataclass (success/failure container)
    executor.py         BaseExecutor ABC (the executor contract)
    registry.py         ExecutorRegistry (class-decorator self-registration)
    dispatcher.py       ExecutionDispatcher (Celery send_task, 3 dispatch modes)
    orchestrator.py     ExecutionOrchestrator (worker-side: find executor → execute)

workers/executor/                            ← Executor worker (review SECOND)
    worker.py           Celery app entry point
    tasks.py            Single task: execute_extraction (deserialize → orchestrate → return)
    executor_tool_shim.py   BaseTool substitute for worker context
    executors/
        legacy_executor.py  Main executor: 7 operations via _OPERATION_MAP strategy pattern
        answer_prompt.py    Prompt answering pipeline (retrieve → LLM → postprocess)
        index.py            Document indexing (vectorDB operations)
        retrieval.py        RetrievalService + 7 retriever strategies
        variable_replacement.py, postprocessor.py, json_repair_helper.py, usage.py

backend/prompt_studio/prompt_studio_core_v2/ ← Django async wiring (review THIRD)
    views.py            3 endpoints return HTTP 202 (gated by feature flag)
    prompt_studio_helper.py   build_*_payload() methods construct ExecutionContext
    tasks.py            Celery callbacks: ORM writes + Socket.IO emission

frontend/src/                                ← Frontend async path (review FOURTH)
    hooks/usePromptRun.js           Fire-and-forget POST + 5-min timeout safety net
    hooks/usePromptStudioSocket.js  Socket.IO listener for prompt_studio_result
    components/.../PromptRun.jsx    Headless queue manager (dequeues + calls runPrompt)

Recommended Review Order

Review in dependency order — each layer builds on the previous:

Step Area Key Files What to Look For
1 SDK1 Execution Framework execution/context.py, result.py, dispatcher.py, registry.py Contract stability: are to_dict()/from_dict() round-trips correct? Is the Operation enum complete? Queue naming (celery_executor_{name}).
2 Executor Worker Entry executor/tasks.py, executor/worker.py Single entry point execute_extraction: retry policy, error handling, log correlation.
3 LegacyExecutor Core executors/legacy_executor.py (focus on _OPERATION_MAP + execute()) Strategy pattern routing. Unsupported operation handling. Error wrapping.
4 LegacyExecutor Handlers answer_prompt.py, index.py, retrieval.py Parameter contracts: do the keys in executor_params match what build_*_payload() sends? Lazy import pattern (_get_prompt_deps(), _get_indexing_deps()).
5 Backend Views (async path) views.py lines 351–583 Feature flag gating. 202 vs 200 response. dispatch_with_callback usage with correct callback task names and queue.
6 Backend Payload Builders prompt_studio_helper.py (build_index_payload, build_fetch_response_payload, build_single_pass_payload) ORM data loading. Are all required params packed into executor_params? Key compatibility with executor handlers.
7 Backend Callbacks tasks.py (callback tasks) ide_prompt_complete: ORM writes via OutputManagerHelper. Socket.IO emission shape. Error callback cleanup. State store setup/teardown.
8 Frontend usePromptRun.js, usePromptStudioSocket.js, PromptRun.jsx Socket event shape matches backend _emit_result(). Timeout handling. Status cleanup on failure.
9 Docker/Infra docker/docker-compose.yaml New services: worker-executor-v2, worker-prompt-studio-callback. Removed old workers. Queue bindings.
10 Tests workers/tests/test_sanity_phase*.py Integration tests validate end-to-end Celery chains in eager mode.

Data Flow (End-to-End)

User clicks "Run" in Prompt Studio IDE
  │
  ▼
[Frontend] PromptRun.jsx dequeues → usePromptRun.runPromptApi()
  │  POST /fetch_response/{tool_id}  (fire-and-forget)
  ▼
[Django View] views.py:fetch_response()
  │  if feature_flag ON → build_fetch_response_payload() → dispatch_with_callback()
  │  Returns HTTP 202 {task_id, run_id, status: "accepted"}
  ▼
[RabbitMQ] → celery_executor_legacy queue
  ▼
[Executor Worker] tasks.py:execute_extraction()
  │  ExecutionOrchestrator → ExecutorRegistry.get("legacy") → LegacyExecutor
  │  → _handle_answer_prompt() → RetrievalService → LLM call → postprocess
  │  Returns ExecutionResult.to_dict()
  ▼
[Celery link callback] → prompt_studio_callback queue
  ▼
[Django Callback Worker] tasks.py:ide_prompt_complete()
  │  OutputManagerHelper.handle_prompt_output_update() (ORM write)
  │  _emit_result() → Socket.IO "prompt_studio_result" event
  ▼
[Frontend] usePromptStudioSocket.onResult()
  │  handleCompleted("fetch_response", result)
  │  → updatePromptOutputState(data) → clears spinner
  ▼
User sees result in UI

Known Code Duplication

Where What's Duplicated Severity Notes
views.py — 3 view actions Dispatch pattern: build_payload → get_dispatcher → dispatch_with_callback → return 202 Low Each view has different ORM/param resolution before the common dispatch. Could be a helper but manageable at 3 instances.
tasks.py — callback tasks ide_index_complete and ide_prompt_complete follow same structure: extract kwargs → setup state → check result → ORM work → emit → cleanup Low Different ORM logic per callback type. Acceptable for 2 callbacks; monitor if more are added.
tasks.py — legacy tasks run_index_document, run_fetch_response, run_single_pass_extraction kept alongside new callback tasks Intentional Legacy tasks retained for backward compatibility during feature flag rollout. Can be removed once flag is permanently ON.

Files Safe to Skim

  • workers/tests/ — 24 test files, ~10,000 lines. Well-structured but high volume. Focus on test_sanity_phase2.py (full Celery chain) and test_sanity_phase4.py (IDE payload compatibility) as representative examples.
  • workers/executor/executors/retrievers/ — 7 retriever implementations. All follow the same pattern. Reviewing one (simple.py) covers the pattern.
  • Architecture docs at repo root (architecture-*.md, phase*.md) — Reference material, not code.

Relevant Docs

  • Architecture: architecture-executor-system.md, architecture-flow-diagram.md, architecture-sequence-diagrams.md in repo root
  • Migration phases: architecture-migration-phases.md
  • Rollout: rollout-plan.md

Related Issues or PRs

  • Async Prompt Studio Execution epic

Dependencies Versions / Env Variables

New env variables:

Variable Purpose Default
FLIPT_SERVICE_AVAILABLE Enable Flipt feature flag service false

Notes on Testing

  • Workers: cd workers && uv run pytest -v — 490+ tests (444 in workers/tests/ + extras)
  • SDK1: cd unstract/sdk1 && uv run pytest -v — 146+ tests
  • Backend callbacks: cd backend && python -m pytest prompt_studio/prompt_studio_core_v2/test_tasks.py -v
  • Manual testing: Enable flag in Flipt (async_prompt_execution=true), trigger prompt runs in IDE, verify Socket.IO events deliver results via Network → WS → Messages tab
  • Feature flag OFF: Verify all sync paths still work identically to main branch

Screenshots

N/A (primarily backend/worker architecture change; frontend UX unchanged when feature flag is off)

Checklist

I have read and understood the Contribution Guidelines.

harini-venkataraman and others added 30 commits February 19, 2026 20:39
Conflicts resolved:
- docker-compose.yaml: Use main's dedicated dashboard_metric_events queue for worker-metrics
- PromptCard.jsx: Keep tool_id matching condition from our async socket feature
- PromptRun.jsx: Merge useEffect import from main with our branch
- ToolIde.jsx: Keep fire-and-forget socket approach (spinner waits for socket event)
- SocketMessages.js: Keep both session-store and socket-custom-tool imports + updateCusToolMessages dep
- SocketContext.js: Keep simpler path-based socket connection approach
- usePromptRun.js: Keep Celery fire-and-forget with socket delivery over polling
- setupProxy.js: Accept main's deletion (migrated to Vite)
# Check if highlight data should be removed using configuration registry
# Ensure workflow identification keys are always in item metadata
organization = api.organization if api else None
org_id = str(organization.organization_id) if organization else ""
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I don’t think this should be allowed when the organization is missing. Also how it works with an empty org_id?
cc: @vishnuszipstack

) -> None:
"""Inject per-model usage breakdown into item['result']['metadata']."""
inner_result = item.get("result")
if not isinstance(inner_result, dict):
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

NIT: improve/enhance class ExecutionResponse by adding a dto for result

)
return APIExecutionResponseSerializer(result).data

@staticmethod
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Hope there is no structure change of result here .. Can you please add the model/sample in descreption. or along the class ExecutionResponse

_worker_app: Celery | None = None


class _WorkerDispatchCelery(Celery):
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Why celery here ? We already moved it from backend . What this methods do here?

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@muhammad-ali-e The backend Celery worker handles fire-and-forget callback tasks that run after the executor worker finishes. Here's the flow:

Backend dispatches task → Executor Worker (does the heavy lifting)
↓ (Celery link/link_error)
Backend Callback Task (lightweight)
├── ORM writes (persist results to DB)
└── WebSocket push (notify frontend in real-time)

Why these run on the backend (not the executor worker):

  • They need Django ORM access (database models, services) — the executor worker doesn't have Django loaded
  • They need the Socket.IO emitter to push real-time updates to the frontend
  • They're lightweight — just DB writes + WebSocket emit, no heavy computation
  • Keeps the executor worker stateless and focused on execution only

the action.
"""
profile_manager_owner = profile_manager.created_by
if profile_manager_owner is None:
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

is this created_by a default value , right? when will it be None?

@athul-rs athul-rs self-requested a review March 18, 2026 04:51
Comment on lines +61 to +63
global _worker_app
if _worker_app is not None:
return _worker_app
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Unsynchronised singleton initialisation — race condition under concurrent requests

get_worker_celery_app() uses the classic double-check-without-lock pattern:

if _worker_app is not None:
    return _worker_app

Under gunicorn with threaded workers (or any multi-threaded Django deployment), two threads can simultaneously see _worker_app is None and both proceed to create a new _WorkerDispatchCelery instance. The second assignment overwrites the first (last-writer-wins), so each thread may end up holding a reference to a different object than what ends up in the module global. This is benign in practice because both instances are configured identically, but it is wasteful and could cause subtle issues if Celery connection pools are per-instance.

The idiomatic Python fix is to use a module-level lock:

import threading
_worker_app: Celery | None = None
_worker_app_lock = threading.Lock()

def get_worker_celery_app() -> Celery:
    global _worker_app
    if _worker_app is not None:
        return _worker_app
    with _worker_app_lock:
        if _worker_app is None:   # re-check inside lock
            ...
            _worker_app = app
    return _worker_app
Prompt To Fix With AI
This is a comment left during a code review.
Path: backend/backend/worker_celery.py
Line: 61-63

Comment:
**Unsynchronised singleton initialisation — race condition under concurrent requests**

`get_worker_celery_app()` uses the classic double-check-without-lock pattern:

```python
if _worker_app is not None:
    return _worker_app
```

Under gunicorn with threaded workers (or any multi-threaded Django deployment), two threads can simultaneously see `_worker_app is None` and both proceed to create a new `_WorkerDispatchCelery` instance. The second assignment overwrites the first (last-writer-wins), so each thread may end up holding a reference to a *different* object than what ends up in the module global. This is benign in practice because both instances are configured identically, but it is wasteful and could cause subtle issues if Celery connection pools are per-instance.

The idiomatic Python fix is to use a module-level lock:

```python
import threading
_worker_app: Celery | None = None
_worker_app_lock = threading.Lock()

def get_worker_celery_app() -> Celery:
    global _worker_app
    if _worker_app is not None:
        return _worker_app
    with _worker_app_lock:
        if _worker_app is None:   # re-check inside lock
            ...
            _worker_app = app
    return _worker_app
```

How can I resolve this? If you propose a fix, please make it concise.

@athul-rs
Copy link
Contributor

Code review

Found 1 issue:

  1. Missing feature flag gate on async endpoints. The PR description states that all three endpoints (index_document, fetch_response, single_pass_extraction) are gated behind the async_prompt_execution Flipt feature flag, with the old sync path as fallback when the flag is OFF. However, the actual code contains no feature flag check — all three endpoints unconditionally dispatch to Celery and return HTTP 202. The old sync path is fully replaced. This means merging this PR immediately switches all users to the async path with no rollback mechanism, contradicting the stated safety guarantee of "When flag is OFF (default), all 3 endpoints use the old sync path returning HTTP 200."

@action(detail=True, methods=["post"])
def index_document(self, request: HttpRequest, pk: Any = None) -> Response:
"""API Entry point method to index input file.
Builds the full execution payload (ORM work), then fires a
single executor task with Celery link/link_error callbacks.
The backend worker slot is freed immediately.
Args:
request (HttpRequest)
Raises:
IndexingError
ValidationError
Returns:
Response
"""
tool = self.get_object()
serializer = PromptStudioIndexSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
document_id: str = serializer.validated_data.get(ToolStudioPromptKeys.DOCUMENT_ID)
document: DocumentManager = DocumentManager.objects.get(pk=document_id)
file_name: str = document.document_name
run_id = CommonUtils.generate_uuid()
context, cb_kwargs = PromptStudioHelper.build_index_payload(
tool_id=str(tool.tool_id),
file_name=file_name,
org_id=UserSessionUtils.get_organization_id(request),
user_id=tool.created_by.user_id,
document_id=document_id,
run_id=run_id,
)
dispatcher = PromptStudioHelper._get_dispatcher()
# Pre-generate task ID so callbacks can reference it
import uuid as _uuid
executor_task_id = str(_uuid.uuid4())
cb_kwargs["executor_task_id"] = executor_task_id
task = dispatcher.dispatch_with_callback(
context,
on_success=signature(
"ide_index_complete",
kwargs={"callback_kwargs": cb_kwargs},
queue="prompt_studio_callback",
),
on_error=signature(
"ide_index_error",
kwargs={"callback_kwargs": cb_kwargs},
queue="prompt_studio_callback",
),
task_id=executor_task_id,
)
return Response(
{"task_id": task.id, "run_id": run_id, "status": "accepted"},
status=status.HTTP_202_ACCEPTED,
)

@action(detail=True, methods=["post"])
def fetch_response(self, request: HttpRequest, pk: Any = None) -> Response:
"""API Entry point method to fetch response to prompt.
Builds the full execution payload (ORM work), then fires a
single executor task with Celery link/link_error callbacks.
Args:
request (HttpRequest)
Returns:
Response
"""
custom_tool = self.get_object()
document_id: str = request.data.get(ToolStudioPromptKeys.DOCUMENT_ID)
prompt_id: str = request.data.get(ToolStudioPromptKeys.ID)
run_id: str = request.data.get(ToolStudioPromptKeys.RUN_ID)
profile_manager_id: str = request.data.get(
ToolStudioPromptKeys.PROFILE_MANAGER_ID
)
if not run_id:
run_id = CommonUtils.generate_uuid()
org_id = UserSessionUtils.get_organization_id(request)
user_id = custom_tool.created_by.user_id
# Resolve prompt
prompt = ToolStudioPrompt.objects.get(pk=prompt_id)
# Build file path
doc_path = PromptStudioFileHelper.get_or_create_prompt_studio_subdirectory(
org_id,
is_create=False,
user_id=user_id,
tool_id=str(custom_tool.tool_id),
)
document: DocumentManager = DocumentManager.objects.get(pk=document_id)
doc_path = str(Path(doc_path) / document.document_name)
context, cb_kwargs = PromptStudioHelper.build_fetch_response_payload(
tool=custom_tool,
doc_path=doc_path,
doc_name=document.document_name,
prompt=prompt,
org_id=org_id,
user_id=user_id,
document_id=document_id,
run_id=run_id,
profile_manager_id=profile_manager_id,
)
# If document is being indexed, return pending status
if context is None:
return Response(cb_kwargs, status=status.HTTP_200_OK)
dispatcher = PromptStudioHelper._get_dispatcher()
import uuid as _uuid
executor_task_id = str(_uuid.uuid4())
cb_kwargs["executor_task_id"] = executor_task_id
task = dispatcher.dispatch_with_callback(
context,
on_success=signature(
"ide_prompt_complete",
kwargs={"callback_kwargs": cb_kwargs},
queue="prompt_studio_callback",
),
on_error=signature(
"ide_prompt_error",
kwargs={"callback_kwargs": cb_kwargs},
queue="prompt_studio_callback",
),
task_id=executor_task_id,
)
return Response(
{"task_id": task.id, "run_id": run_id, "status": "accepted"},
status=status.HTTP_202_ACCEPTED,
)

@action(detail=True, methods=["post"])
def single_pass_extraction(self, request: HttpRequest, pk: uuid) -> Response:
"""API Entry point method for single pass extraction.
Builds the full execution payload (ORM work), then fires a
single executor task with Celery link/link_error callbacks.
Args:
request (HttpRequest)
pk: Primary key of the CustomTool
Returns:
Response
"""
custom_tool = self.get_object()
document_id: str = request.data.get(ToolStudioPromptKeys.DOCUMENT_ID)
run_id: str = request.data.get(ToolStudioPromptKeys.RUN_ID)
if not run_id:
run_id = CommonUtils.generate_uuid()
org_id = UserSessionUtils.get_organization_id(request)
user_id = custom_tool.created_by.user_id
# Build file path
doc_path = PromptStudioFileHelper.get_or_create_prompt_studio_subdirectory(
org_id,
is_create=False,
user_id=user_id,
tool_id=str(custom_tool.tool_id),
)
document: DocumentManager = DocumentManager.objects.get(pk=document_id)
doc_path = str(Path(doc_path) / document.document_name)
# Fetch prompts eligible for single-pass extraction.
# Mirrors the filtering in _execute_prompts_in_single_pass:
# only active, non-NOTES, non-TABLE/RECORD prompts.
prompts = list(
ToolStudioPrompt.objects.filter(tool_id=custom_tool.tool_id).order_by(
"sequence_number"
)
)
prompts = [
p
for p in prompts
if p.prompt_type != ToolStudioPromptKeys.NOTES
and p.active
and p.enforce_type != ToolStudioPromptKeys.TABLE
and p.enforce_type != ToolStudioPromptKeys.RECORD
]
if not prompts:
return Response(
{"error": "No active prompts found for single pass extraction."},
status=status.HTTP_400_BAD_REQUEST,
)
context, cb_kwargs = PromptStudioHelper.build_single_pass_payload(
tool=custom_tool,
doc_path=doc_path,
doc_name=document.document_name,
prompts=prompts,
org_id=org_id,
document_id=document_id,
run_id=run_id,
)
dispatcher = PromptStudioHelper._get_dispatcher()
import uuid as _uuid
executor_task_id = str(_uuid.uuid4())
cb_kwargs["executor_task_id"] = executor_task_id
task = dispatcher.dispatch_with_callback(
context,
on_success=signature(
"ide_prompt_complete",
kwargs={"callback_kwargs": cb_kwargs},
queue="prompt_studio_callback",
),
on_error=signature(
"ide_prompt_error",
kwargs={"callback_kwargs": cb_kwargs},
queue="prompt_studio_callback",
),
task_id=executor_task_id,
)
return Response(
{"task_id": task.id, "run_id": run_id, "status": "accepted"},
status=status.HTTP_202_ACCEPTED,
)

🤖 Generated with Claude Code

- If this code review was useful, please react with 👍. Otherwise, react with 👎.

Comment on lines +44 to +76
def _is_safe_public_url(?url=https%3A%2F%2Fgithub.com%2FZipstack%2Funstract%2Fpull%2F1849%2Furl%3A+str) -> bool:
"""Validate webhook URL for SSRF protection.

Only allows HTTPS and blocks private/loopback/internal addresses.
"""
try:
p = urlparse(url)
if p.scheme not in ("https",):
return False
host = p.hostname or ""
if host in ("localhost",):
return False

addrs = _resolve_host_addresses(host)
if not addrs:
return False

for addr in addrs:
try:
ip = ipaddress.ip_address(addr)
except ValueError:
return False
if (
ip.is_private
or ip.is_loopback
or ip.is_link_local
or ip.is_reserved
or ip.is_multicast
):
return False
return True
except Exception:
return False
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 DNS rebinding (TOCTOU) bypasses SSRF protection

_is_safe_public_url resolves the webhook hostname via DNS at validation time, but the actual HTTP request to the webhook (via postprocess_data) happens afterwards. An attacker who controls a DNS server can perform a DNS rebinding attack:

  1. During validation: attacker.com1.2.3.4 (a public IP) — validation passes.
  2. Attacker flips DNS TTL to 0 and rebinds attacker.com169.254.169.254 (AWS metadata), 10.0.0.1 (internal service), etc.
  3. During the actual HTTP request: the OS re-resolves attacker.com and connects to the internal address.

The current is_private / is_loopback / etc. checks are bypassed entirely because they only guard the validation-time resolution. This is a real risk in a multi-tenant environment where arbitrary webhook URLs can be registered.

The standard mitigation is to make the HTTP request through a socket-level wrapper that re-validates the resolved IP at connection time — for example by overriding the socket's connect() to check the destination address immediately before the connection is established, or by using a library such as urllib3-pyOpenSSL with a custom ProxyManager, or setting a fixed DNS-resolved IP on the requests session host header.

Prompt To Fix With AI
This is a comment left during a code review.
Path: workers/executor/executors/answer_prompt.py
Line: 44-76

Comment:
**DNS rebinding (TOCTOU) bypasses SSRF protection**

`_is_safe_public_url` resolves the webhook hostname via DNS at *validation time*, but the actual HTTP request to the webhook (via `postprocess_data`) happens *afterwards*. An attacker who controls a DNS server can perform a DNS rebinding attack:

1. During validation: `attacker.com``1.2.3.4` (a public IP) — validation passes.
2. Attacker flips DNS TTL to 0 and rebinds `attacker.com``169.254.169.254` (AWS metadata), `10.0.0.1` (internal service), etc.
3. During the actual HTTP request: the OS re-resolves `attacker.com` and connects to the internal address.

The current `is_private` / `is_loopback` / etc. checks are bypassed entirely because they only guard the validation-time resolution. This is a real risk in a multi-tenant environment where arbitrary webhook URLs can be registered.

The standard mitigation is to make the HTTP request through a socket-level wrapper that re-validates the resolved IP *at connection time* — for example by overriding the socket's `connect()` to check the destination address immediately before the connection is established, or by using a library such as [urllib3-pyOpenSSL](https://github.com/urllib3/urllib3) with a custom `ProxyManager`, or setting a fixed DNS-resolved IP on the `requests` session host header.

How can I resolve this? If you propose a fix, please make it concise.

harini-venkataraman and others added 2 commits March 19, 2026 14:39
Signed-off-by: harini-venkataraman <115449948+harini-venkataraman@users.noreply.github.com>
@harini-venkataraman
Copy link
Contributor Author

@claude review

Comment on lines +392 to +395
DocumentIndexingService.set_document_indexing(
org_id=org_id, user_id=user_id, doc_id_key=doc_id_key
)

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 set_document_indexing not rolled back on broker failure

DocumentIndexingService.set_document_indexing(...) is called at the end of build_index_payload (before dispatch_with_callback is called in the view). If dispatch_with_callback subsequently raises — for example, because the broker is unavailable, the Celery app is not configured, or any other exception — the document is permanently left in the "indexing in progress" state. The ide_index_error errback is never invoked because no task was dispatched, so there is no mechanism to clear the stuck flag.

The view code does not wrap dispatch_with_callback in a try/except that would call DocumentIndexingService.remove_document_indexing(...) on failure. The result is that the user sees an infinite "indexing in progress" indicator and cannot re-index the document without manual DB intervention.

To fix this, either:

  1. Move set_document_indexing to after successful dispatch (wrap the dispatch, set the flag only on success), or
  2. Add a try/except around dispatch_with_callback in the view that calls remove_document_indexing on error.
Prompt To Fix With AI
This is a comment left during a code review.
Path: backend/prompt_studio/prompt_studio_core_v2/prompt_studio_helper.py
Line: 392-395

Comment:
**`set_document_indexing` not rolled back on broker failure**

`DocumentIndexingService.set_document_indexing(...)` is called at the end of `build_index_payload` (before `dispatch_with_callback` is called in the view). If `dispatch_with_callback` subsequently raises — for example, because the broker is unavailable, the Celery app is not configured, or any other exception — the document is permanently left in the "indexing in progress" state. The `ide_index_error` errback is never invoked because no task was dispatched, so there is no mechanism to clear the stuck flag.

The view code does not wrap `dispatch_with_callback` in a try/except that would call `DocumentIndexingService.remove_document_indexing(...)` on failure. The result is that the user sees an infinite "indexing in progress" indicator and cannot re-index the document without manual DB intervention.

To fix this, either:
1. Move `set_document_indexing` to after successful dispatch (wrap the dispatch, set the flag only on success), or
2. Add a `try/except` around `dispatch_with_callback` in the view that calls `remove_document_indexing` on error.

How can I resolve this? If you propose a fix, please make it concise.

Comment on lines +855 to +865
cb_kwargs = {
"log_events_id": log_events_id,
"request_id": request_id,
"org_id": org_id,
"operation": "single_pass_extraction",
"run_id": run_id,
"document_id": document_id,
"tool_id": tool_id,
"prompt_ids": [str(p.prompt_id) for p in prompts],
"is_single_pass": True,
}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 Missing profile_manager_id in single_pass_extraction cb_kwargs

build_single_pass_payload does not include profile_manager_id in its cb_kwargs (lines 855–865). When ide_prompt_complete processes this callback it reads:

profile_manager_id = cb.get("profile_manager_id")  # always None for single-pass

and passes profile_manager_id=None to OutputManagerHelper.handle_prompt_output_update. Depending on how that helper uses the field, single-pass outputs may not be correctly associated with the profile manager, producing a different storage behavior than the fetch_response path (which always passes the explicit profile_manager_id).

More concretely, when ide_prompt_error fires for a single-pass failure, the emitted error event also lacks profile_manager_id. The frontend's handleFailed falls through to the broad clearPromptStatusById(promptId) fallback, which clears ALL doc/profile status combinations for those prompts — not just the one that was actually running. This means an error in one single-pass run would cancel the loading spinners for unrelated concurrent runs.

Consider adding the default profile's profile_id to cb_kwargs:

cb_kwargs = {
    ...
    "profile_manager_id": str(default_profile.profile_id),
    ...
}
Prompt To Fix With AI
This is a comment left during a code review.
Path: backend/prompt_studio/prompt_studio_core_v2/prompt_studio_helper.py
Line: 855-865

Comment:
**Missing `profile_manager_id` in `single_pass_extraction` `cb_kwargs`**

`build_single_pass_payload` does not include `profile_manager_id` in its `cb_kwargs` (lines 855–865). When `ide_prompt_complete` processes this callback it reads:

```python
profile_manager_id = cb.get("profile_manager_id")  # always None for single-pass
```

and passes `profile_manager_id=None` to `OutputManagerHelper.handle_prompt_output_update`. Depending on how that helper uses the field, single-pass outputs may not be correctly associated with the profile manager, producing a different storage behavior than the `fetch_response` path (which always passes the explicit `profile_manager_id`).

More concretely, when `ide_prompt_error` fires for a single-pass failure, the emitted error event also lacks `profile_manager_id`. The frontend's `handleFailed` falls through to the broad `clearPromptStatusById(promptId)` fallback, which clears ALL doc/profile status combinations for those prompts — not just the one that was actually running. This means an error in one single-pass run would cancel the loading spinners for unrelated concurrent runs.

Consider adding the default profile's `profile_id` to `cb_kwargs`:

```python
cb_kwargs = {
    ...
    "profile_manager_id": str(default_profile.profile_id),
    ...
}
```

How can I resolve this? If you propose a fix, please make it concise.

Comment on lines +131 to +149
const onResult = useCallback(
(payload) => {
try {
const msg = payload?.data || payload;
const { status, operation, result, error, ...extra } = msg;

if (status === "completed") {
handleCompleted(operation, result);
} else if (status === "failed") {
handleFailed(operation, error, extra);
}
} catch (err) {
setAlertDetails(
handleException(err, "Failed to process prompt studio result"),
);
}
},
[handleCompleted, handleFailed, setAlertDetails, handleException],
);
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 Socket result event not scoped to the current tool — multi-tab state corruption

prompt_studio_result events are emitted to the log_events_id Socket.IO room, which is per-user-session, not per-tool or per-tab. If a user has two Prompt Studio tools open simultaneously in separate tabs (both sharing the same Socket.IO connection and log_events_id), a result from Tool A's execution will be received and processed by Tab B's usePromptStudioSocket listener as well.

In handleCompleted("fetch_response", result):

updatePromptOutputState(data, false);  // writes Tool A's outputs into Tab B's store
clearResultStatuses(data);             // tries to clear statuses using Tool A's prompt IDs

updatePromptOutputState in Tab B would overwrite prompt output state with data belonging to Tool A's prompts. This can cause phantom outputs to appear under the wrong tool and leave Tab B in an inconsistent state.

The socket event payload (_emit_result in tasks.py) does not include a tool_id field, so the frontend has no way to discard irrelevant events. Consider adding tool_id (or custom_tool_id) to the emitted payload and filtering it in onResult:

const onResult = useCallback((payload) => {
  const msg = payload?.data || payload;
  if (msg.tool_id && msg.tool_id !== details?.tool_id) return; // ignore events for other tools
  ...
}, [..., details?.tool_id]);
Prompt To Fix With AI
This is a comment left during a code review.
Path: frontend/src/hooks/usePromptStudioSocket.js
Line: 131-149

Comment:
**Socket result event not scoped to the current tool — multi-tab state corruption**

`prompt_studio_result` events are emitted to the `log_events_id` Socket.IO room, which is per-user-session, not per-tool or per-tab. If a user has two Prompt Studio tools open simultaneously in separate tabs (both sharing the same Socket.IO connection and `log_events_id`), a result from Tool A's execution will be received and processed by Tab B's `usePromptStudioSocket` listener as well.

In `handleCompleted("fetch_response", result)`:
```js
updatePromptOutputState(data, false);  // writes Tool A's outputs into Tab B's store
clearResultStatuses(data);             // tries to clear statuses using Tool A's prompt IDs
```

`updatePromptOutputState` in Tab B would overwrite prompt output state with data belonging to Tool A's prompts. This can cause phantom outputs to appear under the wrong tool and leave Tab B in an inconsistent state.

The socket event payload (`_emit_result` in `tasks.py`) does not include a `tool_id` field, so the frontend has no way to discard irrelevant events. Consider adding `tool_id` (or `custom_tool_id`) to the emitted payload and filtering it in `onResult`:

```js
const onResult = useCallback((payload) => {
  const msg = payload?.data || payload;
  if (msg.tool_id && msg.tool_id !== details?.tool_id) return; // ignore events for other tools
  ...
}, [..., details?.tool_id]);
```

How can I resolve this? If you propose a fix, please make it concise.

Comment on lines +28 to +44
const clearResultStatuses = useCallback(
(data) => {
if (!Array.isArray(data)) {
return;
}
data.forEach((item) => {
const promptId = item?.prompt_id;
const docId = item?.document_manager;
const profileId = item?.profile_manager;
if (promptId && docId && profileId) {
const statusKey = generateApiRunStatusId(docId, profileId);
removePromptStatus(promptId, statusKey);
}
});
},
[removePromptStatus],
);
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 clearResultStatuses spinner-clearing may permanently fail

clearResultStatuses derives the status key from item.profile_manager on the result data items. The status was originally stored using a profileId taken directly from the queue item string — a raw UUID string. For clearResultStatuses to match and call removePromptStatus, item.profile_manager in the result data must be the exact same UUID string.

If OutputManagerHelper.handle_prompt_output_update returns serialized objects where profile_manager is an integer PK, a nested object, or null, the condition if (promptId && docId && profileId) will be false, removePromptStatus will never be called, and the loading spinner for the prompt will remain active forever. The user would be unable to re-run the prompt without a page refresh.

The old polling path avoided this by explicitly removing the status with the IDs already available in the callback closure. The new socket path has no such explicit fallback.

Consider including prompt_ids, document_id, and profile_manager_id in the socket event payload (they are already present in cb_kwargs) so the frontend can always do a direct cleanup regardless of the result data format, rather than relying on parsing the ORM-serialized result items.

Prompt To Fix With AI
This is a comment left during a code review.
Path: frontend/src/hooks/usePromptStudioSocket.js
Line: 28-44

Comment:
**`clearResultStatuses` spinner-clearing may permanently fail**

`clearResultStatuses` derives the status key from `item.profile_manager` on the result data items. The status was originally stored using a `profileId` taken directly from the queue item string — a raw UUID string. For `clearResultStatuses` to match and call `removePromptStatus`, `item.profile_manager` in the result data must be the exact same UUID string.

If `OutputManagerHelper.handle_prompt_output_update` returns serialized objects where `profile_manager` is an integer PK, a nested object, or `null`, the condition `if (promptId && docId && profileId)` will be false, `removePromptStatus` will never be called, and the loading spinner for the prompt will remain active forever. The user would be unable to re-run the prompt without a page refresh.

The old polling path avoided this by explicitly removing the status with the IDs already available in the callback closure. The new socket path has no such explicit fallback.

Consider including `prompt_ids`, `document_id`, and `profile_manager_id` in the socket event payload (they are already present in `cb_kwargs`) so the frontend can always do a direct cleanup regardless of the result data format, rather than relying on parsing the ORM-serialized result items.

How can I resolve this? If you propose a fix, please make it concise.

harini-venkataraman and others added 2 commits March 19, 2026 15:17
…t drift

- Remove redundant inline `import uuid as _uuid` in views.py (use module-level uuid)
- URL-encode DB_USER in worker_celery.py result backend connection string
- Remove misleading task_queues=[Queue("executor")] from dispatch-only Celery app
- Remove dead `if not tool:` guards after objects.get() (already raises DoesNotExist)
- Move profile_manager/default_profile null checks before first dereference
- Reorder ProfileManager.objects.get before mark_document_indexed in tasks.py
- Handle ProfileManager.DoesNotExist as warning, not hard failure
- Wrap PostHog analytics in try/catch so failures don't block prompt execution
- Handle pending-indexing 200 response in usePromptRun.js (clear RUNNING status)
- Reset formData when metadata is missing in ConfigureDs.jsx
- Fix test_should_skip_extraction tests: function now takes 1 arg (outputs only)
- Fix agentic routing tests: mock X2Text.process, remove stale platform_helper kwarg

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Comment on lines +56 to +70
const statusKey = generateApiRunStatusId(docId, profileId);
removePromptStatus(promptId, statusKey);
setAlertDetails({
type: "info",
content:
res?.data?.message || "Document is being indexed. Please wait.",
});
return;
}

// Timeout safety net: clear stale status if socket event never arrives.
setTimeout(() => {
const statusKey = generateApiRunStatusId(docId, profileId);
const current = usePromptRunStatusStore.getState().promptRunStatus;
if (
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 5-minute timeout can falsely cancel a later re-run of the same prompt

The setTimeout closure captures promptId, docId, and profileId from run N. When it fires 5 minutes later it reads the current store state and checks whether current?.[promptId]?.[statusKey] === PROMPT_RUN_API_STATUSES.RUNNING. If the user triggered another run of the same (promptId, docId, profileId) combination (run N+1) within that 5-minute window, the timer from run N will see run N+1's RUNNING state, clear it, and display a spurious "timed out" warning — even though run N+1 may still be processing and will later receive a valid socket event. The result is a permanently stuck spinner for run N+1 (status cleared by the stale timer) while the socket result arrives and tries to call clearResultStatuses on an already-removed entry.

Mitigation: tag each dispatch with a unique runId and include it in the status store key, so the timeout only clears the specific run it corresponds to:

const runNonce = generateUUID();
addPromptStatus(promptId, statusKey, PROMPT_RUN_API_STATUSES.RUNNING, runNonce);

setTimeout(() => {
  const current = usePromptRunStatusStore.getState().promptRunStatus;
  if (current?.[promptId]?.[statusKey]?.nonce === runNonce) {
    removePromptStatus(promptId, statusKey);
    setAlertDetails({ type: "warning", content: "Prompt execution timed out. Please try again." });
  }
}, SOCKET_TIMEOUT_MS);
Prompt To Fix With AI
This is a comment left during a code review.
Path: frontend/src/hooks/usePromptRun.js
Line: 56-70

Comment:
**5-minute timeout can falsely cancel a later re-run of the same prompt**

The `setTimeout` closure captures `promptId`, `docId`, and `profileId` from run N. When it fires 5 minutes later it reads the *current* store state and checks whether `current?.[promptId]?.[statusKey] === PROMPT_RUN_API_STATUSES.RUNNING`. If the user triggered another run of the same `(promptId, docId, profileId)` combination (run N+1) within that 5-minute window, the timer from run N will see run N+1's `RUNNING` state, clear it, and display a spurious "timed out" warning — even though run N+1 may still be processing and will later receive a valid socket event. The result is a permanently stuck spinner for run N+1 (status cleared by the stale timer) while the socket result arrives and tries to call `clearResultStatuses` on an already-removed entry.

Mitigation: tag each dispatch with a unique `runId` and include it in the status store key, so the timeout only clears the *specific* run it corresponds to:

```js
const runNonce = generateUUID();
addPromptStatus(promptId, statusKey, PROMPT_RUN_API_STATUSES.RUNNING, runNonce);

setTimeout(() => {
  const current = usePromptRunStatusStore.getState().promptRunStatus;
  if (current?.[promptId]?.[statusKey]?.nonce === runNonce) {
    removePromptStatus(promptId, statusKey);
    setAlertDetails({ type: "warning", content: "Prompt execution timed out. Please try again." });
  }
}, SOCKET_TIMEOUT_MS);
```

How can I resolve this? If you propose a fix, please make it concise.

Comment on lines +297 to +302
)
return str(platform_key.key)

# ------------------------------------------------------------------
# Phase 5B — Payload builders for fire-and-forget dispatch
# ------------------------------------------------------------------
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 default_profile dereferenced before null guard in build_index_payload

build_index_payload calls ProfileManager.get_default_llm_profile(tool), then immediately passes the result to validate_adapter_status and validate_profile_manager_owner_access without any null check. If no default LLM profile is configured for the tool, get_default_llm_profile returns None and both validators will raise AttributeError deep inside the helper, surfacing as an opaque 500 error instead of the intended DefaultProfileError.

The same defensiveness present in build_single_pass_payload (if not default_profile: raise DefaultProfileError()) should be applied here:

default_profile = ProfileManager.get_default_llm_profile(tool)
if not default_profile:
    raise DefaultProfileError()

PromptStudioHelper.validate_adapter_status(default_profile)
PromptStudioHelper.validate_profile_manager_owner_access(default_profile)
Prompt To Fix With AI
This is a comment left during a code review.
Path: backend/prompt_studio/prompt_studio_core_v2/prompt_studio_helper.py
Line: 297-302

Comment:
**`default_profile` dereferenced before null guard in `build_index_payload`**

`build_index_payload` calls `ProfileManager.get_default_llm_profile(tool)`, then immediately passes the result to `validate_adapter_status` and `validate_profile_manager_owner_access` without any null check. If no default LLM profile is configured for the tool, `get_default_llm_profile` returns `None` and both validators will raise `AttributeError` deep inside the helper, surfacing as an opaque 500 error instead of the intended `DefaultProfileError`.

The same defensiveness present in `build_single_pass_payload` (`if not default_profile: raise DefaultProfileError()`) should be applied here:

```python
default_profile = ProfileManager.get_default_llm_profile(tool)
if not default_profile:
    raise DefaultProfileError()

PromptStudioHelper.validate_adapter_status(default_profile)
PromptStudioHelper.validate_profile_manager_owner_access(default_profile)
```

How can I resolve this? If you propose a fix, please make it concise.

Comment on lines 430 to +443

Raises:
FilenameMissingError: _description_
Args:
request (HttpRequest)

Returns:
Response
"""
custom_tool = self.get_object()
tool_id: str = str(custom_tool.tool_id)
document_id: str = request.data.get(ToolStudioPromptKeys.DOCUMENT_ID)
id: str = request.data.get(ToolStudioPromptKeys.ID)
prompt_id: str = request.data.get(ToolStudioPromptKeys.ID)
run_id: str = request.data.get(ToolStudioPromptKeys.RUN_ID)
profile_manager: str = request.data.get(ToolStudioPromptKeys.PROFILE_MANAGER_ID)
profile_manager_id: str = request.data.get(
ToolStudioPromptKeys.PROFILE_MANAGER_ID
)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 HubSpot first-run analytics event silently dropped in async path

The old sync fetch_response path tracked output_count_before and called notify_hubspot_event(user, "PROMPT_RUN", is_first_for_org=..., ...) to fire a business analytics event on the first prompt run for an organisation. The new async path removes both the count query and the notification call entirely with no comment or TODO.

If this is intentional (e.g., to be re-added once the async path is stable), a comment noting this would prevent it from being permanently lost. If it is unintentional, first-run HubSpot events will silently stop firing for any organisation that has async_prompt_execution enabled, skewing adoption metrics.

Prompt To Fix With AI
This is a comment left during a code review.
Path: backend/prompt_studio/prompt_studio_core_v2/views.py
Line: 430-443

Comment:
**HubSpot first-run analytics event silently dropped in async path**

The old sync `fetch_response` path tracked `output_count_before` and called `notify_hubspot_event(user, "PROMPT_RUN", is_first_for_org=..., ...)` to fire a business analytics event on the first prompt run for an organisation. The new async path removes both the count query and the notification call entirely with no comment or TODO.

If this is intentional (e.g., to be re-added once the async path is stable), a comment noting this would prevent it from being permanently lost. If it is unintentional, first-run HubSpot events will silently stop firing for any organisation that has `async_prompt_execution` enabled, skewing adoption metrics.

How can I resolve this? If you propose a fix, please make it concise.

@github-actions
Copy link
Contributor

Frontend Lint Report (Biome)

All checks passed! No linting or formatting issues found.

@github-actions
Copy link
Contributor

Test Results

Summary
  • Runner Tests: 11 passed, 0 failed (11 total)
  • SDK1 Tests: 142 passed, 0 failed (142 total)

Runner Tests - Full Report
filepath function $$\textcolor{#23d18b}{\tt{passed}}$$ SUBTOTAL
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_logs}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_cleanup}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_cleanup\_skip}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_client\_init}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_get\_image\_exists}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_get\_image}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_get\_container\_run\_config}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_get\_container\_run\_config\_without\_mount}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_run\_container}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_get\_image\_for\_sidecar}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{runner/src/unstract/runner/clients/test\_docker.py}}$$ $$\textcolor{#23d18b}{\tt{test\_sidecar\_container}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{TOTAL}}$$ $$\textcolor{#23d18b}{\tt{11}}$$ $$\textcolor{#23d18b}{\tt{11}}$$
SDK1 Tests - Full Report
filepath function $$\textcolor{#23d18b}{\tt{passed}}$$ SUBTOTAL
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_round\_trip\_serialization}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_json\_serializable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_enum\_values\_normalized}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_string\_values\_accepted}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_auto\_generates\_request\_id}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_explicit\_request\_id\_preserved}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_optional\_organization\_id}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_empty\_executor\_params\_default}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_complex\_executor\_params}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_validation\_rejects\_empty\_required\_fields}}$$ $$\textcolor{#23d18b}{\tt{4}}$$ $$\textcolor{#23d18b}{\tt{4}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_all\_operations\_accepted}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionContext.test\_from\_dict\_missing\_optional\_fields}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_success\_round\_trip}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_failure\_round\_trip}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_json\_serializable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_failure\_requires\_error\_message}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_success\_allows\_no\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_failure\_factory}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_failure\_factory\_no\_metadata}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_error\_not\_in\_success\_dict}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_error\_in\_failure\_dict}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_default\_empty\_dicts}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_from\_dict\_missing\_optional\_fields}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_response\_contract\_extract}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_response\_contract\_index}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionResult.test\_response\_contract\_answer\_prompt}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestBaseExecutor.test\_cannot\_instantiate\_abstract}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestBaseExecutor.test\_concrete\_subclass\_works}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestBaseExecutor.test\_execute\_returns\_result}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_register\_and\_get}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_get\_returns\_fresh\_instance}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_register\_as\_decorator}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_list\_executors}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_list\_executors\_empty}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_get\_unknown\_raises\_key\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_get\_unknown\_lists\_available}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_duplicate\_name\_raises\_value\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_register\_non\_subclass\_raises\_type\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_register\_non\_class\_raises\_type\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_clear}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorRegistry.test\_execute\_through\_registry}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionOrchestrator.test\_dispatches\_to\_correct\_executor}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionOrchestrator.test\_unknown\_executor\_returns\_failure}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionOrchestrator.test\_executor\_exception\_returns\_failure}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionOrchestrator.test\_exception\_result\_has\_elapsed\_metadata}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionOrchestrator.test\_successful\_result\_passed\_through}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionOrchestrator.test\_executor\_returning\_failure\_is\_not\_wrapped}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_sends\_task\_and\_returns\_result}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_uses\_default\_timeout}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_timeout\_from\_env}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_explicit\_timeout\_overrides\_env}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_timeout\_returns\_failure}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_generic\_exception\_returns\_failure}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_async\_returns\_task\_id}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_no\_app\_raises\_value\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_async\_no\_app\_raises\_value\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_failure\_result\_from\_executor}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_context\_serialized\_correctly}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_sends\_link\_and\_link\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_success\_only}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_error\_only}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_no\_callbacks}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_returns\_async\_result}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_no\_app\_raises\_value\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_context\_serialized}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_custom\_task\_id}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_no\_task\_id\_omits\_kwarg}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_platform\_api\_key\_returned}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_platform\_api\_key\_missing\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_other\_env\_var\_from\_environ}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_missing\_env\_var\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_empty\_env\_var\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_stream\_log\_routes\_to\_logging}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_stream\_log\_respects\_level}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_stream\_error\_and\_exit\_raises\_sdk\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_stream\_error\_and\_exit\_wraps\_original}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_success\_on\_first\_attempt}}$$ $$\textcolor{#23d18b}{\tt{2}}$$ $$\textcolor{#23d18b}{\tt{2}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_retry\_on\_connection\_error}}$$ $$\textcolor{#23d18b}{\tt{2}}$$ $$\textcolor{#23d18b}{\tt{2}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_non\_retryable\_http\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_retryable\_http\_errors}}$$ $$\textcolor{#23d18b}{\tt{3}}$$ $$\textcolor{#23d18b}{\tt{3}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_post\_method\_retry}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_retry\_logging}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_prompt.py}}$$ $$\textcolor{#23d18b}{\tt{TestPromptToolRetry.test\_success\_on\_first\_attempt}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_prompt.py}}$$ $$\textcolor{#23d18b}{\tt{TestPromptToolRetry.test\_retry\_on\_errors}}$$ $$\textcolor{#23d18b}{\tt{2}}$$ $$\textcolor{#23d18b}{\tt{2}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_prompt.py}}$$ $$\textcolor{#23d18b}{\tt{TestPromptToolRetry.test\_wrapper\_methods\_retry}}$$ $$\textcolor{#23d18b}{\tt{4}}$$ $$\textcolor{#23d18b}{\tt{4}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_connection\_error\_is\_retryable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_timeout\_is\_retryable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_http\_error\_retryable\_status\_codes}}$$ $$\textcolor{#23d18b}{\tt{3}}$$ $$\textcolor{#23d18b}{\tt{3}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_http\_error\_non\_retryable\_status\_codes}}$$ $$\textcolor{#23d18b}{\tt{5}}$$ $$\textcolor{#23d18b}{\tt{5}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_http\_error\_without\_response}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_os\_error\_retryable\_errno}}$$ $$\textcolor{#23d18b}{\tt{5}}$$ $$\textcolor{#23d18b}{\tt{5}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_os\_error\_non\_retryable\_errno}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_other\_exception\_not\_retryable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_exponential\_backoff\_without\_jitter}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_exponential\_backoff\_with\_jitter}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_max\_delay\_cap}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_max\_delay\_cap\_with\_jitter}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_successful\_call\_first\_attempt}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_retry\_after\_transient\_failure}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_max\_retries\_exceeded}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_retry\_with\_custom\_predicate}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_no\_retry\_with\_predicate\_false}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_exception\_not\_in\_tuple\_not\_retried}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_default\_configuration}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_environment\_variable\_configuration}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_invalid\_max\_retries}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_invalid\_base\_delay}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_invalid\_multiplier}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_jitter\_values}}$$ $$\textcolor{#23d18b}{\tt{2}}$$ $$\textcolor{#23d18b}{\tt{2}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_custom\_exceptions\_only}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_custom\_predicate\_only}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_both\_exceptions\_and\_predicate}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_exceptions\_match\_but\_predicate\_false}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_retry\_platform\_service\_call\_exists}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_retry\_prompt\_service\_call\_exists}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_platform\_service\_decorator\_retries\_on\_connection\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_prompt\_service\_decorator\_retries\_on\_timeout}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryLogging.test\_warning\_logged\_on\_retry}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryLogging.test\_info\_logged\_on\_success\_after\_retry}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryLogging.test\_exception\_logged\_on\_giving\_up}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{TOTAL}}$$ $$\textcolor{#23d18b}{\tt{142}}$$ $$\textcolor{#23d18b}{\tt{142}}$$

Comment on lines +484 to +491
dp = ProfileManager.get_default_llm_profile(tool)
monitor_llm = str(dp.llm.id)
if challenge_llm_instance:
challenge_llm = str(challenge_llm_instance.id)
else:
dp = ProfileManager.get_default_llm_profile(tool)
challenge_llm = str(dp.llm.id)
return monitor_llm, challenge_llm
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 AttributeError on None default profile in _resolve_llm_ids

When tool.monitor_llm or tool.challenge_llm is None (not explicitly set), the code falls through to ProfileManager.get_default_llm_profile(tool). If that also returns None (no default profile configured), the next line str(dp.llm.id) immediately raises AttributeError: 'NoneType' object has no attribute 'llm'. This surfaces as an opaque 500 error rather than the expected DefaultProfileError.

This can happen when a user creates a Prompt Studio tool, sets a per-prompt profile manager, but has never configured a default tool-level profile AND has not set explicit monitor_llm/challenge_llm adapters.

Note that _resolve_llm_ids is called in build_fetch_response_payload before the if not profile_manager: raise DefaultProfileError() guard (line 536), so a missing default profile causes an AttributeError that bypasses the intended error handling entirely.

Suggested change
dp = ProfileManager.get_default_llm_profile(tool)
monitor_llm = str(dp.llm.id)
if challenge_llm_instance:
challenge_llm = str(challenge_llm_instance.id)
else:
dp = ProfileManager.get_default_llm_profile(tool)
challenge_llm = str(dp.llm.id)
return monitor_llm, challenge_llm
if monitor_llm_instance:
monitor_llm = str(monitor_llm_instance.id)
else:
dp = ProfileManager.get_default_llm_profile(tool)
if not dp:
raise DefaultProfileError()
monitor_llm = str(dp.llm.id)
if challenge_llm_instance:
challenge_llm = str(challenge_llm_instance.id)
else:
dp = ProfileManager.get_default_llm_profile(tool)
if not dp:
raise DefaultProfileError()
challenge_llm = str(dp.llm.id)
Prompt To Fix With AI
This is a comment left during a code review.
Path: backend/prompt_studio/prompt_studio_core_v2/prompt_studio_helper.py
Line: 484-491

Comment:
**`AttributeError` on `None` default profile in `_resolve_llm_ids`**

When `tool.monitor_llm` or `tool.challenge_llm` is `None` (not explicitly set), the code falls through to `ProfileManager.get_default_llm_profile(tool)`. If that also returns `None` (no default profile configured), the next line `str(dp.llm.id)` immediately raises `AttributeError: 'NoneType' object has no attribute 'llm'`. This surfaces as an opaque 500 error rather than the expected `DefaultProfileError`.

This can happen when a user creates a Prompt Studio tool, sets a per-prompt profile manager, but has never configured a default tool-level profile AND has not set explicit `monitor_llm`/`challenge_llm` adapters.

Note that `_resolve_llm_ids` is called in `build_fetch_response_payload` **before** the `if not profile_manager: raise DefaultProfileError()` guard (line 536), so a missing default profile causes an `AttributeError` that bypasses the intended error handling entirely.

```suggestion
        if monitor_llm_instance:
            monitor_llm = str(monitor_llm_instance.id)
        else:
            dp = ProfileManager.get_default_llm_profile(tool)
            if not dp:
                raise DefaultProfileError()
            monitor_llm = str(dp.llm.id)
        if challenge_llm_instance:
            challenge_llm = str(challenge_llm_instance.id)
        else:
            dp = ProfileManager.get_default_llm_profile(tool)
            if not dp:
                raise DefaultProfileError()
            challenge_llm = str(dp.llm.id)
```

How can I resolve this? If you propose a fix, please make it concise.

Add PSKeys.LLM_USAGE_REASON to usage_kwargs in _handle_summarize() so
summarization costs appear under summarize_llm in API response metadata.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Comment on lines +766 to +773
PromptStudioHelper.dynamic_extractor(
profile_manager=default_profile,
file_path=doc_path,
org_id=org_id,
document_id=document_id,
run_id=run_id,
enable_highlight=tool.enable_highlight,
)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 dynamic_extractor blocks Django worker in async path

build_single_pass_payload calls PromptStudioHelper.dynamic_extractor(...) synchronously before returning the ExecutionContext. Because build_single_pass_payload is called directly from the single_pass_extraction view (inside the HTTP request–response cycle), this blocking document-extraction call — which can be a long-running x2text adapter operation on a cache miss — ties up a Django worker thread for potentially minutes, directly contradicting the PR's stated goal of freeing Django workers immediately.

The comment # Extract (blocking, usually cached) acknowledges this, but "usually cached" is not a correctness guarantee: first-run documents, cache invalidations, or simply a cache miss will exhibit the same blocking behavior the async architecture was designed to eliminate.

The extraction step should either be moved into the executor worker (as part of the Celery task), or preceded by a cache check that returns an early HTTP 202 / queued status if the extracted text is not already available, rather than extracting inline in the view's call stack.

Prompt To Fix With AI
This is a comment left during a code review.
Path: backend/prompt_studio/prompt_studio_core_v2/prompt_studio_helper.py
Line: 766-773

Comment:
**`dynamic_extractor` blocks Django worker in async path**

`build_single_pass_payload` calls `PromptStudioHelper.dynamic_extractor(...)` synchronously before returning the `ExecutionContext`. Because `build_single_pass_payload` is called directly from the `single_pass_extraction` view (inside the HTTP request–response cycle), this blocking document-extraction call — which can be a long-running x2text adapter operation on a cache miss — ties up a Django worker thread for potentially minutes, directly contradicting the PR's stated goal of freeing Django workers immediately.

The comment `# Extract (blocking, usually cached)` acknowledges this, but "usually cached" is not a correctness guarantee: first-run documents, cache invalidations, or simply a cache miss will exhibit the same blocking behavior the async architecture was designed to eliminate.

The extraction step should either be moved into the executor worker (as part of the Celery task), or preceded by a cache check that returns an early `HTTP 202` / queued status if the extracted text is not already available, rather than extracting inline in the view's call stack.

How can I resolve this? If you propose a fix, please make it concise.

harini-venkataraman and others added 2 commits March 23, 2026 21:43
- Route _handle_structure_pipeline to _handle_single_pass_extraction when
  is_single_pass=True (was always calling _handle_answer_prompt)
- Delegate _handle_single_pass_extraction to cloud plugin via ExecutorRegistry,
  falling back to _handle_answer_prompt if plugin not installed

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@sonarqubecloud
Copy link

Quality Gate Failed Quality Gate failed

Failed conditions
6.9% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud

Comment on lines +592 to +620
@action(detail=True, methods=["get"])
def task_status(
self, request: HttpRequest, pk: Any = None, task_id: str = None
) -> Response:
"""Poll the status of an async Prompt Studio task.

Task IDs now point to executor worker tasks dispatched via the
worker-v2 Celery app. Both apps share the same PostgreSQL
result backend, so we use the worker app to look up results.

Args:
request (HttpRequest)
pk: Primary key of the CustomTool (for permission check)
task_id: Celery task ID returned by the 202 response

Returns:
Response with {task_id, status} and optionally result or error
"""
from celery.result import AsyncResult

from backend.worker_celery import get_worker_celery_app

# Verify the user has access to this tool (triggers permission check)
self.get_object()

result = AsyncResult(task_id, app=get_worker_celery_app())
if not result.ready():
return Response({"task_id": task_id, "status": "processing"})
if result.successful():
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 task_status returns raw executor result — data shape differs from socket event

The endpoint looks up the executor Celery task (on the celery_executor_legacy queue) and returns its raw result. However, the actual prompt output that the frontend cares about is produced by the callback task (ide_prompt_complete) — which calls OutputManagerHelper.handle_prompt_output_update() and then emits the processed data via Socket.IO.

The task_status endpoint therefore returns the unprocessed ExecutionResult.to_dict() from the executor worker (containing raw LLM output and metadata), while the socket event delivers the ORM-serialised PromptStudioOutputManager records that the frontend can actually render.

If any client uses task_status as a polling fallback expecting the same data shape as the socket event (e.g., for recovery after a missed socket connection), it will receive incompatible data, silently rendering nothing or throwing a client-side error.

Consider either:

  • Tracking the callback task ID instead of the executor task ID, so the polling endpoint returns the ORM-processed result, or
  • Clearly documenting in the response that result is a raw executor payload and not the rendered output, and ensuring the frontend never tries to use it for output rendering.
Prompt To Fix With AI
This is a comment left during a code review.
Path: backend/prompt_studio/prompt_studio_core_v2/views.py
Line: 592-620

Comment:
**`task_status` returns raw executor result — data shape differs from socket event**

The endpoint looks up the executor Celery task (on the `celery_executor_legacy` queue) and returns its raw `result`. However, the actual prompt output that the frontend cares about is produced by the **callback task** (`ide_prompt_complete`) — which calls `OutputManagerHelper.handle_prompt_output_update()` and then emits the processed data via Socket.IO.

The `task_status` endpoint therefore returns the unprocessed `ExecutionResult.to_dict()` from the executor worker (containing raw LLM output and metadata), while the socket event delivers the ORM-serialised `PromptStudioOutputManager` records that the frontend can actually render.

If any client uses `task_status` as a polling fallback expecting the same data shape as the socket event (e.g., for recovery after a missed socket connection), it will receive incompatible data, silently rendering nothing or throwing a client-side error.

Consider either:
- Tracking the **callback task ID** instead of the executor task ID, so the polling endpoint returns the ORM-processed result, or
- Clearly documenting in the response that `result` is a raw executor payload and not the rendered output, and ensuring the frontend never tries to use it for output rendering.

How can I resolve this? If you propose a fix, please make it concise.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants