Alliedium is an artifact-based test automation environment. It employs AI test automation agents designed to work effectively with humans, and vice versa.

What "Artifact-Based" Means for Test Automation

Traditional test automation tools produce logs. Alliedium produces artifacts — human-reviewable deliverables that let you understand, verify, and trust what the AI agent did.

📐
Test Plans
Before writing a single line of code, agents generate a structured test architecture — a 3-layer design with actions, steps, and test cases. You review, edit, and approve the plan before the agent proceeds.
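The 3-layer design can be pictured as plain data: actions compose into steps, steps compose into test cases. This is an illustrative sketch only; the class and field names are assumptions, not Alliedium's actual plan schema.

```python
from dataclasses import dataclass, field

@dataclass
class Action:              # layer 1: an atomic GUI interaction
    name: str              # e.g. "click", "type"
    target: str            # the element the action touches

@dataclass
class Step:                # layer 2: ordered actions with a stated intent
    description: str
    actions: list[Action] = field(default_factory=list)

@dataclass
class TestCase:            # layer 3: the reviewable unit of the plan
    title: str
    steps: list[Step] = field(default_factory=list)

# A reviewer can read (and edit) a plan like this before any test code exists.
login = TestCase(
    title="User can log in",
    steps=[Step(
        description="Submit credentials",
        actions=[Action("type", "#email"),
                 Action("type", "#password"),
                 Action("click", "#submit")],
    )],
)
```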
📸
Visual Logs
Every test step is captured as an annotated screenshot with highlighted elements, timing data, and system metrics. You see exactly what the agent saw and did — not just whether a step passed or failed.
🎬
Browser Recordings
Agents produce walkthrough videos of test execution. Review a 30-second clip instead of reading 200 lines of log output.
🧩
Fixture Artifacts
When the agent encounters an unknown GUI element, it pauses and asks for your input. You select the element in Inspector; the agent downloads the fixture and continues. The artifact documents the decision.
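A fixture artifact of this kind is essentially a small record of the human's selection, kept so the decision is documented and replayable. The field names and JSON shape below are hypothetical, not Alliedium's actual fixture format.

```python
import json

# Hypothetical fixture artifact recording which element the human
# selected in Inspector after the agent paused on an unknown element.
fixture = {
    "element": "checkout-button",
    "selector": "button[data-testid='checkout']",
    "selected_by": "human-via-inspector",
    "reason": "agent could not identify the element automatically",
}

# Serialized, the record doubles as the reviewable artifact.
artifact = json.dumps(fixture, indent=2)
print(artifact)
```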
✅
Validation Reports
A dedicated validator agent checks every test file against your original requirements and reports a clear X/Y verdict — not a wall of green checkmarks.
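An X/Y verdict boils down to counting which of the original requirements the tests actually cover. A minimal sketch, assuming a simple requirement/coverage model (a real validator agent would derive coverage by analyzing the test files):

```python
def verdict(requirements: list[str], covered: set[str]) -> str:
    """Summarize validation as an X/Y verdict rather than a wall of checkmarks.

    Hypothetical helper: `covered` stands in for whatever coverage
    analysis the validator performs against the test files.
    """
    met = [r for r in requirements if r in covered]
    missing = [r for r in requirements if r not in covered]
    summary = f"{len(met)}/{len(requirements)} requirements covered"
    if missing:
        summary += "; missing: " + ", ".join(missing)
    return summary

print(verdict(["login", "checkout", "refund"], {"login", "checkout"}))
# → 2/3 requirements covered; missing: refund
```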

How It Works

Alliedium ships as a containerized environment with seven specialized AI agents — each producing its own artifacts:

Designer · Executor · Validator · Debugger · Explorer · Issue Analyzer · Desktop Manager

A companion VS Code extension — Ally Test Monitor — renders these artifacts in real time: a live execution tree, annotated screenshots, and one-click report access.

The agents don't just run tests. They pause on failures, surface the relevant artifact (screenshot, element state, execution trace), and propose a fix. You approve, adjust, or redirect — then the agent resumes.
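The pause, propose, and resume loop amounts to simple control flow around a human decision. The types and decision strings below are illustrative assumptions, not Alliedium's API:

```python
from dataclasses import dataclass

@dataclass
class Failure:
    step: str           # which test step failed
    artifact: str       # e.g. path to the annotated screenshot
    proposed_fix: str   # the agent's suggested remedy

def handle_failure(failure: Failure, human_decision: str) -> str:
    """Hypothetical human-in-the-loop handler: the agent surfaces the
    artifact, then acts on the human's decision before resuming."""
    if human_decision == "approve":
        return f"applied fix '{failure.proposed_fix}', resuming at {failure.step}"
    if human_decision == "redirect":
        return f"skipping {failure.step}, awaiting new instructions"
    # default: stay paused so the human can inspect the artifact
    return f"paused at {failure.step}; artifact at {failure.artifact}"

f = Failure("checkout-step-3", "shots/step3.png", "update selector #pay-btn")
print(handle_failure(f, "approve"))
```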

This is artifact-driven trust: every decision is documented, every action is verifiable, every result is reviewable.

Mission-Based Delegation for QA

Tell the agent what to test, not how to test it. Describe a user journey:

"Test the checkout flow from cart to confirmation."

The agent then:

  • Designs the test architecture
  • Creates the test files
  • Validates them against your requirements
  • Collects GUI fixtures interactively via Inspector
  • Executes the tests
  • Delivers a portfolio of artifacts you can review in under a minute
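The delegation steps above can be sketched as a linear pipeline. This function is purely illustrative: in practice each stage would be a specialized agent emitting its own artifact, with human approval gates between stages.

```python
def run_mission(mission: str) -> list[str]:
    """Hypothetical mission pipeline mirroring the delegation steps."""
    stages = [
        "design test architecture",
        "create test files",
        "validate against requirements",
        "collect GUI fixtures via Inspector",
        "execute tests",
        "deliver artifact portfolio",
    ]
    # Each entry stands in for the artifact a stage would produce.
    return [f"[{mission}] {stage}" for stage in stages]

for line in run_mission("Test the checkout flow from cart to confirmation"):
    print(line)
```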

This is not code completion for test scripts. This is autonomous test creation, execution, and verification — with a human in the loop at every decision point that matters.

Why This Matters for Enterprise QA Teams

The traditional objection to AI-generated tests is: "How do I know what it did?" Artifact-based automation answers that directly. Every artifact is a reviewable record. Your QA lead can approve a test plan before a single line is committed. Your compliance team can audit the execution record. Your developers can see the annotated screenshot of exactly what failed and why.

AI automation doesn't have to be a black box. With the right artifact layer, it becomes more auditable than a human-written test suite.