What if your QA engineer never slept?
I've worked in startups and big tech. The most common bottleneck? QA. One team I know ditched the traditional approach and runs an agent that acts like an engineer, 24/7. It's synthetic, learns from bug history, and can gate PRs. Wild idea, or future standard?
QA receives whatever gets merged and whatever they decide to deploy to test; they cannot block PRs. It would be nice, though, to make some checks block merge, i.e. required workflows.
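For anyone who hasn't wired this up: the gate itself can be as simple as a CI script that exits non-zero, and marking that check as required in branch protection is what actually blocks the merge. A rough sketch of that kind of check, with a purely made-up "risky paths" heuristic standing in for whatever the agent would really compute:

```python
#!/usr/bin/env python3
"""Toy merge gate: exits non-zero so a required status check fails.

Assumes it runs in CI with the PR branch checked out. The keyword
heuristic below is illustrative only, not a real risk model.
"""
import subprocess
import sys

# Hypothetical paths/keywords that past incidents were traced back to.
RISKY_PATTERNS = ["concurrency", "cache_invalidation", "payment_retry"]


def changed_files(base_ref: str = "origin/main") -> list[str]:
    """List files changed relative to the target branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base_ref}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def main() -> int:
    hits = [
        path for path in changed_files()
        if any(pattern in path for pattern in RISKY_PATTERNS)
    ]
    if hits:
        print("Blocking merge: changes touch known-risky areas:")
        for path in hits:
            print(f"  - {path}")
        return 1  # non-zero exit -> required check fails -> merge blocked
    print("No known-risky areas touched.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```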
Learning from bugs is amazing. Connect it to production support tickets so code changes get linked to real incidents; when that linking is done manually by whoever is on-call, there's no historical context beyond what they happen to remember.
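Roughly what I mean by linking, assuming tickets show up in commit messages under some ID convention like INC-123 (the regex, commit window, and naming here are invented for illustration):

```python
import re
import subprocess
from collections import defaultdict

# Hypothetical ticket-ID convention, e.g. "INC-1234" in commit subjects.
TICKET_RE = re.compile(r"\b(INC-\d+)\b")


def incidents_by_commit(max_commits: int = 500) -> dict[str, list[str]]:
    """Map incident/ticket IDs to the commits whose messages mention them."""
    log = subprocess.run(
        ["git", "log", f"--max-count={max_commits}", "--pretty=%H%x09%s"],
        capture_output=True, text=True, check=True,
    ).stdout

    index: dict[str, list[str]] = defaultdict(list)
    for line in log.splitlines():
        sha, _, subject = line.partition("\t")
        for ticket in TICKET_RE.findall(subject):
            index[ticket].append(sha)
    return index


if __name__ == "__main__":
    for ticket, shas in incidents_by_commit().items():
        print(ticket, "->", ", ".join(s[:8] for s in shas))
```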
Automate estimation with "this story reminds me of stories A, B, and C, which were estimated at X points and took Y days." A link would let folks drill down into code metrics, artifact versions, etc.
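The similarity part doesn't need to be fancy; even plain TF-IDF over story text gets you "reminds me of A, B, C." A toy sketch using scikit-learn, with the history data entirely invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented history: (story text, points estimated, days actually taken).
HISTORY = [
    ("Add retry logic to payment webhook handler", 5, 4),
    ("Migrate user sessions to Redis", 8, 9),
    ("Fix race condition in export job scheduler", 5, 7),
    ("Add CSV export to billing dashboard", 3, 2),
]


def similar_stories(new_story: str, top_k: int = 3):
    """Return the top_k most similar past stories with their estimates."""
    texts = [text for text, _, _ in HISTORY] + [new_story]
    tfidf = TfidfVectorizer().fit_transform(texts)
    scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    ranked = sorted(zip(scores, HISTORY), key=lambda x: x[0], reverse=True)
    return [(float(score), story) for score, story in ranked[:top_k]]


if __name__ == "__main__":
    for score, (text, points, days) in similar_stories(
        "Handle concurrent exports in the billing job"
    ):
        print(f"{score:.2f}  {points} pts / {days} days  {text}")
```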
A QA agent would be remarkable in that it holds a complete timeline for everything and can be queried in chat.
Completely agree. Linking incidents back to code changes is one of the most valuable things a team can do, but it's rarely done well. In this case, the agent actually learns from that full timeline: production incidents, support tickets, commit diffs. It surfaces patterns you'd never catch manually, like an issue that only appears under high concurrency.
Also yes on chat querying. One of the most useful parts was letting PMs ask questions like “Has this bug happened since April?” and getting a full trace across releases. The idea of automating grooming using historical story similarity is spot on too. This could easily save teams hours per sprint.
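Conceptually, a question like "has this bug happened since April?" reduces to a lookup over a bug timeline keyed by some stable fingerprint. This is a toy sketch of that idea, not how their system is actually stored; the event shape and data are invented:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class BugEvent:
    signature: str   # stable fingerprint, e.g. a normalized stack-trace hash
    release: str
    seen_on: date


# Invented timeline the chat layer would query against.
TIMELINE = [
    BugEvent("export-job-deadlock", "v2.3.0", date(2024, 3, 18)),
    BugEvent("export-job-deadlock", "v2.5.1", date(2024, 6, 2)),
    BugEvent("login-redirect-loop", "v2.4.0", date(2024, 4, 29)),
]


def occurrences_since(signature: str, since: date) -> list[BugEvent]:
    """Answer 'has this bug happened since <date>?' with the supporting trace."""
    return [e for e in TIMELINE if e.signature == signature and e.seen_on >= since]


if __name__ == "__main__":
    hits = occurrences_since("export-job-deadlock", date(2024, 4, 1))
    if hits:
        for event in hits:
            print(f"Yes: seen in {event.release} on {event.seen_on}")
    else:
        print("No occurrences since that date.")
```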
I think it's an interesting idea, especially if it's just running on production or staging and constantly trying new flows and testing edge cases. I would be curious about (1) the quality of testing compared to an actual human and (2) the cost involved. Obviously, compared to a human salary, the cost could get quite high before it became an impediment (also depending on quality). But running an agent 24/7, it seems like costs could certainly pile up.
Really good points. On quality, it's not replacing human insight, but it is exceptional at pattern recognition and coverage at scale. It catches edge cases that tend to get missed and never forgets past regressions. The best results I've seen come from pairing the agent with human QA. The agent does ambient monitoring and flags suspicious behavior. Humans then dig deeper.
Cost-wise, it’s surprisingly reasonable. The version I saw ran in containers that spun up based on commit activity or deploy frequency. So if no one is pushing code, it's idle. But during launches or busy dev cycles, it ramps up. Much cheaper than staffing a full team to maintain 24/7 vigilance.
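To give a feel for the "idle unless people are pushing" behavior, here's a rough sketch of the scaling decision. I didn't see their actual orchestration layer, so the part that turns a worker count into containers (Kubernetes, ECS, plain Docker) is left abstract, and the thresholds are made up:

```python
import subprocess
from datetime import datetime, timedelta, timezone


def commits_in_last(hours: int = 2, branch: str = "origin/main") -> int:
    """Count commits on the branch within the last N hours."""
    since = (datetime.now(timezone.utc) - timedelta(hours=hours)).isoformat()
    out = subprocess.run(
        ["git", "rev-list", "--count", f"--since={since}", branch],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())


def desired_workers(recent_commits: int) -> int:
    """Scale to zero when quiet, ramp up during busy dev cycles."""
    if recent_commits == 0:
        return 0          # idle: no one is pushing code
    if recent_commits < 5:
        return 1
    return min(4, recent_commits // 5 + 1)


if __name__ == "__main__":
    n = commits_in_last()
    print(f"{n} recent commits -> {desired_workers(n)} agent worker(s)")
    # A real setup would hand this number to whatever actually runs the
    # containers; that layer isn't specified here.
```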
If your QA staff are no better than an "AI" agent, dump them and hire better QA staff.
I hear you, and to be clear, this isn't about replacing talented QA teams. It's about offloading the repetitive and pattern-based parts of QA so human testers can focus on more strategic, exploratory, and usability-driven work.
In the case I saw, the agent handled things like regression patterns, diff analysis, and known-risk detection across thousands of past issues. The QA team actually became more valuable because they weren’t stuck rerunning the same test plan for the fifth time that week. It was augmentation, not replacement.
That said, I totally agree: if a team is just rubber-stamping PRs, the issue isn't automation, it's expectations and leadership.
Since you know someone who actually does this, maybe you can clue us in on how well it works.
Fair point. The team I know using this runs a SaaS platform with around 25 engineers. Before the agent, their QA team was stuck doing triage on weekends after bugs hit prod. Now, the agent blocks PRs that resemble patterns from past bugs—things like changes to concurrency-heavy areas that previously caused memory leaks.
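To make "resembles patterns from past bugs" concrete, the simplest version is just similarity between a new diff and diffs that previously caused incidents. This is a toy illustration of that idea, not their actual model; the corpus, tokenization, and threshold are all invented:

```python
import re

# Invented corpus: diff text that was later traced back to incidents.
PAST_BUG_DIFFS = [
    "threading.Lock() removed from export_job worker pool resize",
    "cache invalidation skipped when session_store writes overlap",
]

TOKEN_RE = re.compile(r"[a-zA-Z_]+")


def tokens(text: str) -> set[str]:
    """Crude tokenizer: lowercase identifiers and words."""
    return set(TOKEN_RE.findall(text.lower()))


def risk_score(new_diff: str) -> float:
    """Jaccard overlap between the new diff and the closest past-bug diff."""
    new_tokens = tokens(new_diff)
    best = 0.0
    for old in PAST_BUG_DIFFS:
        old_tokens = tokens(old)
        union = new_tokens | old_tokens
        if union:
            best = max(best, len(new_tokens & old_tokens) / len(union))
    return best


if __name__ == "__main__":
    diff = "resize worker pool without threading.Lock in export_job"
    score = risk_score(diff)
    print(f"risk={score:.2f}", "-> hold for review" if score > 0.3 else "-> pass")
```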
It hasn’t replaced QA, but it shifted their role. Now they spend more time analyzing what the agent flags instead of rerunning test plans. It’s not perfect but it’s made a big difference in stability and team morale.
Also in the process of building: Actory AI https://actoryv3.vercel.app/