AI-Driven Automation
For Every Platform
# No more coding
# No more flowcharts
With our product ANYONE can automate
Picture a high-performance android—capable, flexible, and built for any task. It moves with robo-legs, works with robo-arms, and thinks with a robo-mind (AI):
Need the whole system? It's yours. Need only a single part? Plug in a robo-arm and keep moving. Customize your solution, your way. Full flexibility, zero limitations.
You're a test automation engineer
- Robust test scenarios
- Access to all the different types of UI (not just web)
- Cross-reference between web/desktop/mobile – all within the same control tree/test
You're a QA engineer (limited coding skills)
- Write automation tests in simple text
- Plus everything you would get as an automation engineer
You're on the business side
- Finally, you can inspect, review, and understand everything the engineers write, and even easily contribute if you want
You already have an automation team, your product testing is already automated, it works, and all is fine.
<Learn more>
You don't have an automation team and want to automate.
<Learn more>
You are in the UAT phase and want to contribute automated scenarios.
<Learn more>
We've built a suite of tools – every piece is great on its own, but they really shine as a team. No lock-in, no fluff – just grab what works for you.
Driver + Inspector
Everyone knows Selenium for web automation. We make it just as simple but much more universal – work with Windows desktop, Java Swing, Web, macOS* and Mobile*, all from the same framework. Access to pretty much everything your OS can see with a standard Selenium-like interface – we did our best to make the transition seamless.
- Windows apps
- Web*
- Java Swing
- Java (Oracle Forms)
You get a driver to deploy and a client:
- Java client
- C# client
- Python client
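To give a feel for the Selenium-like interface, here is a minimal Python sketch of a desktop session. The package name, endpoint, capabilities, and locator strategies below are illustrative assumptions for this sketch, not the shipped API.

```python
# Minimal sketch of a Selenium-style session against a Windows app.
# The package name, endpoint, capabilities, and locator strategies are
# assumptions made for illustration only.
from alliedium_driver import DesktopDriver  # hypothetical client package

driver = DesktopDriver(
    remote_url="http://localhost:4723",  # assumed driver endpoint
    capabilities={"platform": "windows", "app": r"C:\Apps\Invoicing\app.exe"},
)

# Familiar Selenium-like calls: find elements, interact, assert.
driver.find_element("id", "username").send_keys("qa_user")
driver.find_element("id", "password").send_keys("secret")
driver.find_element("name", "Sign in").click()

assert driver.find_element("name", "Dashboard").is_displayed()
driver.quit()
```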
For the web, you probably inspect pages to find your XPath. We give you a similar tool for Windows desktop: just click an element and get its metadata (even with Page Object Model (POM) generation).
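As an illustration, a page object produced by the Inspector for a login window might look roughly like the sketch below; the class name, locator strings, and methods are assumptions, not actual generated output.

```python
# Illustrative shape of a generated page object for a Windows dialog.
# All identifiers here are hypothetical.
class LoginWindow:
    USERNAME = ("automation_id", "txtUserName")
    PASSWORD = ("automation_id", "txtPassword")
    SIGN_IN = ("name", "Sign in")

    def __init__(self, driver):
        self.driver = driver

    def sign_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SIGN_IN).click()
```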
Available standalone or as part of the full solution.


LocAItor
If you do a lot of automation, you already know that references to UI elements tend to degrade. Each new version of the UI may move, change, or rename something, and your XPaths are ruined. Your automation breaks and you waste time fixing references.
LocAItor comes to the rescue. It provides a proxy layer of permanent LocAItor Smart IDs. Use them as anchors and forget about UI changes.
We use a smart combination of search techniques — from LLM embeddings to image recognition — to ensure you always refer to the same element.
Get LocAItor + Driver: it pairs perfectly with our Driver and is the default combo we recommend.
LocAItor + Selenium: instantly transform your legacy automation into a robust solution.
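To make the idea concrete, here is a minimal sketch contrasting a raw XPath with a LocAItor Smart ID, assuming a driver session like the one sketched earlier; the smart_id locator strategy and the ID format are assumptions for illustration.

```python
# Fragile: this XPath breaks as soon as the control tree shifts.
submit = driver.find_element("xpath", "//div[3]/form/div[2]/button[1]")

# Resilient: the Smart ID stays stable; LocAItor resolves it to the
# current element behind the scenes (embeddings + image recognition).
submit = driver.find_element("smart_id", "checkout.submit_order")
submit.click()
```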
The hosted solution requires a moderate GPU with at least 1 GB VRAM (we've optimized it to ease the burden).
Test Lab Management
It's like a Docker orchestrator, but with a clean UI and smart logic:
• Optimized image size and snapshot-based instant startup
• API-based install, update, and launch of apps
• Capture test results and screenshots via API
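As a rough sketch of what calling the Test Lab Management API from a pipeline step could look like (the base URL, endpoints, and payload fields are assumptions for illustration, not the documented API):

```python
# Hypothetical REST calls: spin up an isolated environment from a snapshot,
# publish results, and destroy the environment when done.
import requests

LAB = "https://testlab.example.local/api/v1"  # assumed base URL

env = requests.post(f"{LAB}/environments", json={
    "image": "win11-invoicing-snapshot",      # snapshot-based instant startup
    "apps": ["invoicing-client==2.4.1"],      # API-based app install
}).json()

try:
    # ... run tests against env["address"] here ...
    requests.post(f"{LAB}/environments/{env['id']}/results", json={
        "status": "passed",
        "screenshots": ["login.png", "dashboard.png"],
    })
finally:
    requests.delete(f"{LAB}/environments/{env['id']}")  # tear down on completion
```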
Environments with a high level of isolation: reduce security risks while keeping full control over data flow and AI agent permissions within your secured network.
Integrate into your Jenkins pipeline to spin up isolated test environments on demand — and destroy them when done. We’ve built a UI on top of it for full transparency and centralized management.
Your DevOps team will love it — it’s efficient, secure, and seamless.
AutoMagic is powered by LLMs.
We use MCP for LLM integration with full flexibility:
– local models
– cloud models
Or deploy an MCP server to connect to your SRM-approved providers like OpenAI's ChatGPT.
AutoMagic
We use local or cloud LLMs to parse your manual tests (steps, instructions, etc.) and convert them into automation-ready code.
Choose your technology: Java, C#, Python — or decide later using JSON format.
Turning natural text into executable code requires detailed instructions. “Make a pizza” won’t cut it — “Make dough from 2 spoons of flour...” will.
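For example, a sufficiently detailed manual step could be converted into something like the sketch below. The generated code is purely illustrative: the real output depends on your chosen technology, and the locators and driver API shown here are assumptions.

```python
# Manual step (as it might read in Zephyr or TestRail):
#   "Log in as qa_user, create an invoice for 'Acme Corp' with amount
#    120.50, and verify it appears in the invoice list."
def test_create_invoice(driver):
    driver.find_element("id", "username").send_keys("qa_user")
    driver.find_element("id", "password").send_keys("secret")
    driver.find_element("name", "Sign in").click()

    driver.find_element("name", "New Invoice").click()
    driver.find_element("id", "customer").send_keys("Acme Corp")
    driver.find_element("id", "amount").send_keys("120.50")
    driver.find_element("name", "Save").click()

    assert "Acme Corp" in driver.find_element("id", "invoice_list").text
```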
Use Alliedium Studio to debug, preview steps, and guide the generation process. Rephrase, override, or fine-tune instructions with full control.
Set breakpoints, record, and replay tests to pinpoint issues and edit them.
Integrate with test management tools like:
– Jira (Zephyr)
– TestRail
– More coming soon
- Exposing data to the cloud?
- SRM conflicts over access configs?
- Token bills keeping you up at night?
We give you LOCAL models that perform just as well!
Build an on-premises server inside your secure test lab.
No usage fees (okay, except electricity 😉).
Models run on 80GB VRAM with affordable hardware like Nvidia H100.
Blessed by SRM to use cloud LLMs in your projects?
Maybe you’re using a custom proxy for privacy and ethical compliance?
Perfect! Get best-in-class text-to-code conversions with full-scale cloud models.
Not as paranoid as we are? Okay with storing data in the cloud?
Or maybe you just don’t want to deal with infrastructure?
We’ll host your test lab + models and handle everything — so you can focus on automation.
<Our services>
With years of experience in QA, we want to share a vision of the ideal AI-powered automation workflow.
The traditional automation setup might look like this:
1. Manual tests are stored in Zephyr or TestRail.
2. Automation engineers convert them into automated tests.
3. Test code is version-controlled in Git.
4. Jenkins Pipelines trigger test execution.
Now, let’s add some Alliedium magic:
1. Replace manual automation with AutoMagic.
– Our LocAItor + Drivers parse your app, and the LLM generates the automation code.
2. Jenkins launches test plans directly from TestRail and converts them into automation on the fly.
– Pipelines dynamically spin up and destroy environments via our TestLab Management.
– Debug and fine-tune AI automation in Alliedium Studio.
We give you full flexibility in what you procure, depending on your needs and the skills your team has:
If you have legacy automation tests written in Selenium, want to expand your test scope beyond web apps (to include desktop apps, Java, mobile*, or macOS*), and have a team proficient in automation, you may want to simply replace your Selenium WebDriver with ours and get access to plenty more controls.
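A sketch of the drop-in idea: point your existing Selenium Remote session at our driver endpoint. The URL and the alliedium:app capability are assumptions for illustration; the Selenium calls themselves are standard.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()                 # reuse your existing options object
options.set_capability("alliedium:app", r"C:\Apps\Invoicing\app.exe")  # assumed capability

driver = webdriver.Remote(
    command_executor="http://localhost:4723",       # our driver instead of a Selenium Grid
    options=options,
)
driver.find_element(By.NAME, "Sign in").click()     # same API, now against a desktop app
driver.quit()
```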
Our smart layer for element search. Instead of relying on fragile XPaths, we use LLM-created embeddings that contain element metadata (name, location, neighbors, hierarchy, etc.). This resilient method maintains element recognition even after UI changes, achieving up to 95% reliability over two years of updates.
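For illustration, the metadata behind one such embedding might look roughly like the record below; the field names are assumptions meant to mirror the list above, not the product's internal schema.

```python
# Hypothetical metadata record for a single UI element; an LLM embedding of
# this record (plus visual context) is what gets matched on later runs.
element_metadata = {
    "name": "Sign in",
    "control_type": "Button",
    "location": {"x": 812, "y": 640, "width": 96, "height": 32},
    "neighbors": ["Username", "Password", "Forgot password?"],
    "hierarchy": ["LoginWindow", "CredentialsPanel", "Sign in"],
}
```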
AI text-to-code is the coolest part – it translates human-readable test steps into executable automation. Use your manual scripts directly and let our engine convert and run them automatically.
A powerful debugging companion for AutoMagic. While AI handles most of the heavy lifting, sometimes manual tuning is needed. Studio helps monitor, adjust and train the AI, ensuring consistent improvement and adaptability with local LLMs.
A key advantage of our product is that it can run entirely within your security perimeter. No data, UI, IP, or source code ever leaves your organization. The default delivery mode is on-premises, and pricing includes licenses + your hardware.
US $35.00 per month
US $65.00 per month
US $125.00 per month
US $150.00 per month


Case 1
All running on the local host
Min reqs: 8 vCPU @ 2.5GHz, 16GB RAM
Recommended reqs: 8 vCPU @ 2.5GHz, 16GB RAM

Case 2
Dev tools on the local host; the app in scope runs in Docker on the local host.
Min reqs: 8 vCPU @ 2.5GHz, 16GB RAM
Recommended reqs: 8 vCPU @ 2.5GHz, 16GB RAM
*Assuming Docker with Windows @ 4 vCPU + 8GB RAM

Case 3
Dev tools on the local host; the app in scope runs in Docker on a remote host/server.
Min reqs: 8 vCPU @ 2.5GHz, 16GB RAM
Recommended reqs: 16 vCPU @ 2.5GHz, 16GB RAM
There are three elements you need to consider when deploying Alliedium AI automation:
Host components: Driver / LocAItor
Case 1:
You plan to use your team's workstations to execute tests. → Then you don't need separate infrastructure; just deploy Drivers and LocAItors on the workstations.
Case 2:
You already have a lab with the required number of stations that host your applications and can be used for parallel runs. → Then you don't need separate infrastructure; just deploy Drivers and LocAItors to these stations.
Case 3:
You want a dedicated lab to execute your automation and want it to be flexible, configurable, scalable, and deployable on demand.
Use our Test Lab Management to deploy easily managed Windows Docker containers with your apps. Each such container will host your app along with our Driver and LocAItor agents.
Recommended reqs for 10 virtual stations: 48 vCPU, 256GB RAM, 1TB+ of disk space
Host components: LocAItor / Licensing Server / HUB / MCP Server
Mandatory component.
Recommended reqs: 16 vCPU, 32GB RAM, GPU with 32GB VRAM
Host components: AutoMagic / Alliedium Studio
You can skip this if you plan to use MCP with your own LLM (cloud or local), like OpenAI GPT or Anthropic Claude.
Only required if you want to use on-premises LLM models.
Recommended reqs: 48 vCPU, 128GB RAM, Nvidia H100 with 80GB VRAM
Our solution assumes the following “consumers” for on-premises hosting:
1. LLM
We use LLMs in the following scenarios:
a) Converting human-readable text to code.
b) LocAItor Smart Search.
Depending on your organization's policy, you might run either a local model or a cloud-based LLM. By default, we operate with small models that fit in 32GB VRAM, but larger models may also be used.
a) Our product is optimized to invoke the LLM only once when converting text to code. If you have thousands of test cases, we recommend scheduling overnight batch conversions to avoid overloading.
b) Smart Search invokes embedding generation and lookup on every automation run. For < 10 parallel runs, we recommend an Nvidia RTX 5090 GPU.
Recommended minimum for < 10 parallel runs with < 5000 test cases per run: Intel Core i9, 32GB RAM, and RTX 5090 (32GB VRAM).
2. Test Lab
If you already have a test lab and plan to deploy drivers there – no extra hardware needed. If not, and you consider using our Docker infrastructure:
– Each image for test execution takes 50GB of disk space (depending on OS and app size).
– LLM and LocAItor components require an additional ~5GB.
We recommend 2–4 Intel Core i9 CPUs, 256GB RAM, and >1TB of disk space for an environment hosting up to 10 parallel test executions.
We can deploy everything to AWS (or your preferred cloud provider), fully configure it, and let you enjoy a hassle-free setup. Reach out to us, and we'll build an optimal solution together.
Alliedium Studio
AutoMagic
Windows + Web Driver
Java Swing Driver
3rd Party LLM
MCP Server
LocAItor Smart Search
Selenium Adapter
Oracle Forms Driver
OpenAI, Anthropic