Stabilized flaky automation suites
Diagnose root causes — selectors, timing, test data, environment — and rebuild trust in the signal.
QA Automation Consulting · Web · API · Mobile · Performance · AI
I help engineering teams design, build, stabilize, and scale test automation across UI, API, mobile, performance, and CI/CD pipelines.
Senior QA Automation Consultant · ISTQB Certified · Founder of TestBlocks · Independent senior engineer — no agency overhead, no junior handoffs.
These are the patterns I see in almost every engagement. None are unfixable — but they don't fix themselves either.
Manual QA eats the clock every sprint. Releases slip or ship without coverage.
Re-runs are routine, the suite is red more than green, and engineers stopped trusting the signal.
Tests are pasted-together scripts. Adding a new flow takes days; maintaining old ones takes more.
Automation runs on someone's laptop. Pipelines don't gate on quality. Bugs reach production.
UI tests don't catch backend regressions. Contract breaks are found by users.
There's no load testing strategy. Outages and slowdowns surface during launches and traffic spikes.
Manual QA wants to grow into automation, but there's no senior in-house to guide framework decisions.
The way I work is the difference. You're hiring an engineer, not a vendor — and that changes how every part of the engagement runs.
You work with the engineer doing the work — not a PM, not a junior, not a slide deck. The person scoping is the person writing the code.
I write production-grade framework code, review PRs, and ship. The deliverable is real software, not an architecture diagram and a goodbye.
Audits land in 1–2 weeks. Framework builds run on weekly check-ins. You always know what's coming next and what's done.
Engagements end with documented code in your repository, a handover session, and a team that can keep extending the framework. No vendor lock-in.
Alongside consulting, I am building TestBlocks — an AI-assisted testing platform that helps teams import existing test assets, generate baseline UI/API coverage, run tests locally or in CI, and maintain automation with AI support.
That product work keeps my consulting close to real QA problems: slow regression, flaky tests, weak coverage, poor handover, and automation frameworks that are hard to scale.
I build automation systems hands-on: framework architecture, Playwright/Cypress/Selenium implementation, Rest Assured API suites, CI/CD integration, reporting, test data strategy, and team handover.
Public examples and templates are being prepared on GitHub to show the same engineering standards I use in client work.
Diagnose root causes — selectors, timing, test data, environment — and rebuild trust in the signal.
Playwright, Selenium, Cypress, Rest Assured — modular, reviewed, owned by the team after handover.
Workshops, pair-programming, and code review that turn manual QA engineers into automation contributors.
Engagements grouped by the business outcome they deliver — not just the tools involved. Click the one closest to your situation; the contact form will know what you meant.
Production-grade Playwright, Selenium, Cypress, WebdriverIO, and Appium frameworks engineered like real software — modular, reviewed, owned by your team.
Deliverables
Outcome
Your team owns a stable, extensible automation framework wired into delivery — not a brittle script collection.
Reliable, environment-aware API suites that catch contract breaks before production — built with Rest Assured, Postman, Supertest, or SoapUI.
Deliverables
Outcome
Backend regressions are caught at commit time, not in production — and the API layer is covered independently of the UI.
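To make "catching a contract break" concrete, here is a minimal sketch of a response-shape check; the `assertShape` helper and the field names are illustrative, not part of any specific deliverable:

```typescript
// Illustrative sketch: fail fast when an API response drops or retypes a field.
// The helper name and contract shape are hypothetical examples.
type FieldType = "string" | "number" | "boolean";

function assertShape(
  body: Record<string, unknown>,
  contract: Record<string, FieldType>
): void {
  for (const [field, expected] of Object.entries(contract)) {
    const actual = typeof body[field];
    if (actual !== expected) {
      // A missing field shows up here as "undefined"
      throw new Error(`Contract break: "${field}" is ${actual}, expected ${expected}`);
    }
  }
}

// Example: a user payload that must always carry a numeric id and a string name
assertShape({ id: 42, name: "Ana" }, { id: "number", name: "string" });
```

In a real suite this kind of check sits behind Supertest or Rest Assured assertions; the point is that the API layer fails at commit time, with no UI involved.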
Load, stress, spike, and smoke testing with a strategy and reports that engineers and stakeholders can both act on.
Deliverables
Outcome
Performance risks are surfaced before launches and load events — not discovered during them.
Strategy, architecture review, and stabilization for teams who already have automation but aren't getting value from it.
Deliverables
Outcome
A clear, defensible plan for what to fix first — and the engineering depth to execute it.
Bring AI into your QA process where it actually saves time — test design, coverage analysis, and authoring acceleration through TestBlocks.
Deliverables
Outcome
Your QA team moves faster on the parts that benefit from AI — without overpromising what AI can replace.
Automation goes beyond testing. I build internal tools and scripts that take repetitive web, mobile, and desktop work off your team's plate.
Deliverables
Outcome
Repetitive operational work runs reliably in the background — your team gets that time back.
Upskill your manual QA engineers into automation engineers, or coach an existing team on best practices, framework design, and CI integration.
Deliverables
Outcome
Manual QA engineers become automation contributors — and your team stops depending on external help to extend the framework.
Tooling is a means to an end. Below is what I work with day-to-day — selected based on your stack, team, risk profile, and where you actually want to be in 12 months.
Every engagement follows the same six steps. You always know what's happening this week and what you're getting at the end.
30 minutes. Your product, stack, release cadence, and where QA hurts most. No deck, no pitch.
Tests, framework, CI/CD, and process — reviewed and written up with prioritized findings.
A concrete proposal: tooling, structure, scope, timeline. Tied to outcomes, not buzzwords.
Production-grade automation, written like real software — modular, reviewed, in your repository.
Pipelines, gates, dashboards, Allure / Qase / TestRail. Failures point to the broken line.
Your team owns it. Documented, taught, and supported as long as you need.
Engagements end with real artifacts your team owns — not slide decks. Below is what typically lands in your repo and your pipelines after a build or audit.
Production-grade framework code, modular and reviewed, committed directly to your Git repository.
Written explanation of structure, layering, conventions, and extension points — for your engineers, not just for you.
Tests wired into GitHub Actions, Jenkins, GitLab, Azure DevOps, or AWS — with parallelization, sharding, and gating.
Allure, HTML reports, or test-management dashboards — pointed at the right environments and the right people.
How tests get the data they need, how environments are switched, how secrets are handled. Documented and repeatable.
Environment-aware API tests covering critical endpoints — Rest Assured, Postman, Supertest, or SoapUI.
Smoke load tests in CI, baseline metrics, and a written report on where the system bends and where it breaks.
Live walkthrough of the framework with the engineers who will own it. Recorded if useful.
Where useful, structured notes and exercises for upskilling manual QA into automation contributors.
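The test data and environment handling above is mostly plumbing, but the shape is worth showing. A minimal sketch of an environment-aware config layer; the environment names, URLs, and the `QA_ENV` convention are illustrative assumptions, not a fixed standard:

```typescript
// Illustrative sketch of environment-aware configuration.
// Environment names and URLs are placeholder assumptions.
interface EnvConfig {
  baseUrl: string;
  apiUrl: string;
}

const environments: Record<string, EnvConfig> = {
  staging: {
    baseUrl: "https://staging.example.com",
    apiUrl: "https://api.staging.example.com",
  },
  prod: {
    baseUrl: "https://example.com",
    apiUrl: "https://api.example.com",
  },
};

// In practice the caller would pass process.env.QA_ENV (or similar);
// secrets stay out of this file and come from the CI secret store.
function loadConfig(envName: string = "staging"): EnvConfig {
  const config = environments[envName];
  if (!config) {
    throw new Error(
      `Unknown environment "${envName}" — expected one of: ${Object.keys(environments).join(", ")}`
    );
  }
  return config;
}
```

Switching environments then means changing one variable, not editing tests, and a typo fails loudly instead of running the suite against the wrong target.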
No lock-in
The framework is delivered to your repository, documented, and structured so your team can continue extending it. No vendor lock-in, no recurring license, no hostage code.
Engagements vary, but these are the outcomes I aim for from the first week onwards. They map to release confidence, engineering velocity, and reduced QA cost.
Replace manual cycles with pipelines that run on every commit and gate every release.
Diagnose root causes — selectors, timing, test data, environment — and rebuild trust in the signal.
Code that's readable, layered, and reviewed. No QA-only silos that decay the moment someone leaves.
GitHub Actions, Jenkins, Allure, Qase, TestRail. Failures point to the broken line, not 'something broke'.
Deploys stop being stressful. Hotfix rate drops. Engineering and product start trusting the green check again.
Workshops, pair-programming, and documentation that turn manual QA engineers into automation engineers.
Free resource
A practical checklist to evaluate your current automation framework, CI/CD setup, API coverage, flaky tests, reporting, test data, and release readiness.
No email gate, no fluff. Score yourself honestly — the gaps are where the biggest wins are.
30-minute call, no slides, no pitch. I'll walk through your stack and where it hurts — you walk away with a clearer picture either way.
Concrete deliverables, not a generic slide deck. The goal is a practical technical report your team can use to improve automation immediately.
No fixed list prices — every engagement is scoped to the team, stack, and what success looks like. Quotes back within 48 hours of the discovery call.
Best for
Teams with existing automation that is flaky, slow, or hard to maintain.
Includes
Outcome
A clear picture of what's broken and a defensible plan for fixing it — technical and business-friendly.
Best for
Teams starting from scratch or migrating to Playwright, Selenium, Cypress, or API automation.
Includes
Outcome
A production-grade framework in your repository, structured so your team can keep extending it.
Best for
Teams without dedicated automation hires who need senior support every month.
Includes
Outcome
A senior automation engineer in your sprint cadence — without the cost of a full-time hire.
Best for
Operations or product teams with repetitive browser, data, admin, or reporting work outside testing.
Includes
Outcome
Repetitive operational work runs reliably without human intervention — your team gets the time back.
Not sure which fits? Most engagements start with a 30-minute discovery call. I'll help you figure out the right shape.
This work is most effective when the team wants senior automation ownership, clear architecture, and practical delivery improvements — not just more test scripts.
AI can accelerate QA work, but it should not replace engineering judgment.
In client engagements, I use AI carefully for test design support, coverage analysis, automation planning, documentation, and workflow acceleration. Generated output still needs review, prioritization, and ownership by experienced engineers and QA professionals.
The TestBlocks connection
Building TestBlocks — an AI-assisted testing platform — keeps me close to what AI can and can't do in real QA work. Lessons from product feed directly into how I scope AI use in client engagements.
Aleksandar Stojanovic — Senior QA Automation Consultant and founder of TestBlocks. ISTQB Certified, with 7+ years designing, building, stabilizing, and scaling automation systems across UI, API, mobile, performance, and CI/CD.
My work combines hands-on framework engineering, test architecture, QA process improvement, and practical AI-assisted testing workflows.
Based in Serbia, working remotely with engineering teams in Europe and the US. Engagements are remote, async-friendly, and structured around clear weekly deliverables.
I work as an individual senior consultant — no agency overhead, no junior handoffs, no PMs translating between you and the person doing the work. The person scoping is the person writing the code.
If your question isn't here, send it via the contact form below — I usually reply within 24 hours.
Yes. I can review what you have, fix architecture and stability issues, expand coverage, and integrate it with CI/CD without scrapping the work already done.
Yes. Most engagements start with a clean architecture: framework structure, test data strategy, reporting, CI/CD, and documentation, all production-grade from day one.
Yes — full migrations or incremental ones where the new framework runs alongside the old until parity is reached, so the team isn't blocked.
Yes. Stabilization is one of the highest-ROI engagements I run. The fix is rarely 'add a sleep' — it's diagnosing selectors, test data, environment, and architecture, then patching the root cause at the right layer.
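To show what "patching at the right layer" can look like, here is a minimal sketch of the kind of polling wait that replaces a fixed sleep. The `waitFor` helper and its options are illustrative; mature tools ship equivalents built in (Playwright's `expect(locator).toBeVisible()` auto-waits, for example):

```typescript
// Illustrative sketch: poll a condition until it holds or a deadline passes,
// instead of sleeping a fixed amount and hoping. Names are hypothetical.
interface WaitOptions {
  timeoutMs?: number;  // total budget before giving up
  intervalMs?: number; // pause between polls
}

async function waitFor(
  condition: () => boolean | Promise<boolean>,
  { timeoutMs = 5000, intervalMs = 100 }: WaitOptions = {}
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return; // succeed as soon as the app is ready
    await new Promise<void>((resolve) => setTimeout(resolve, intervalMs));
  }
  // A clear failure beats a silent sleep that sometimes wasn't long enough
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}
```

The difference from a sleep: the test proceeds the moment the condition holds, and when it never holds you get a timeout error pointing at the real problem rather than an arbitrary assertion failure further down.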
Yes — GitHub Actions, Jenkins, GitLab CI, Azure DevOps, and AWS pipelines, with parallelization, sharding, reporting, and gating.
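As a flavor of what that wiring looks like, a trimmed GitHub Actions job that shards a Playwright suite across four parallel runners. The job name, Node version, and shard count are illustrative assumptions; the `--shard` flag is standard Playwright CLI:

```yaml
jobs:
  e2e:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false        # let every shard finish so you see all failures
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      # Each runner executes one quarter of the suite
      - run: npx playwright test --shard=${{ matrix.shard }}/4
```

Gating then falls out naturally: make the merge rule require this job, and a red shard blocks the release instead of being discovered after it.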
Both. Fixed-scope projects, audits, and ongoing monthly engagements. Remote, async-friendly, with clear weekly deliverables. Contracts and IP terms standard.
Yes. Workshops, 1:1 mentoring, pair-programming, and code review. Most teams want a mix: a working framework plus the skills to own it long-term.
Yes. I can act as a fractional senior QA automation engineer — building the foundation and the framework so your engineering team has a reliable test layer while you hire.
Yes — browser, web, mobile, and desktop automation for repetitive operational work. Data extraction, form filling, admin workflows, internal tools, scheduled jobs.
Always. Every engagement ends with documentation in your repository, a handover session, and (where useful) team training notes. The framework is structured so your team can keep extending it without me.
Calm, professional, no pressure. A simple sequence so you know what to expect.
I review your current QA process, automation setup, and release pain points before the call.
On the call, I identify the highest-impact automation opportunities for your stack.
You leave with a clear recommendation: audit, framework build, migration, ongoing support — or no engagement if it's not a fit.
The first call is focused on understanding scope and identifying whether I can help. No pressure, no hard sell.
Tell me what you're working on. Discovery call within 24 hours of your message — I read every email myself.