Test automation is widely recognized as essential to modern delivery; it enables faster feedback, supports CI/CD practices, and increases release confidence. Yet in many organizations, automation growth lags behind development velocity. The reason is rarely a lack of intent: it is the effort required to convert validated manual tests into automation scripts.
Most teams already have structured manual test cases inside Jira. These tests represent validated functionality, documented business logic, and risk mitigation. However, the step that consistently slows progress is converting those structured manual tests into automation scripts compatible with frameworks like Playwright or Selenium. This often happens because automation engineers are in short supply within many teams. As a result, manual test cases need to be handed over, reviewed, and rewritten before they can be automated, creating a bottleneck in the overall testing process.
Xray’s AI Test Script Generation addresses this exact point of friction. Instead of treating manual validation and automation as separate tracks, it connects them directly within Jira, transforming validated test cases into framework-ready automation scripts while preserving traceability and workflow continuity.
Let’s explore how the feature works in practice, why it changes automation scalability, and how it fits into a broader AI-enabled quality strategy.
AI Test Script Generation operates on Test work items within Jira. For manual tests, the item must include at least one defined Test Step. For tests written in Gherkin format, the definition field must contain a valid scenario before automation can be generated. This ensures the AI has structured input to interpret the intended test behavior.
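For the Gherkin case, the definition field needs a complete, valid scenario before generation can run. A minimal illustration (the feature name, steps, and wording here are hypothetical examples, not a required template):

```gherkin
Feature: User login
  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user enters a valid username and password
    And the user clicks the "Sign in" button
    Then the dashboard page is displayed
```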
Manual tests are rarely the problem; in fact, they are often well-structured and thoughtfully written. The challenge arises after validation, when those tests are handed off for rewriting in an automation framework.
This rewrite phase introduces duplication: the logic has already been defined, yet it must be translated step by step into executable code. That translation requires framework knowledge, coding expertise, and careful attention to application-specific details. When automation engineers are responsible for this conversion, their time is spent recreating what already exists in documented form.
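The mechanical nature of this translation is easy to see in miniature. The sketch below is purely illustrative, not Xray's implementation: it maps structured manual steps (action phrases with arguments) onto Playwright-style Python calls. The step keywords and selectors are hypothetical.

```python
# Illustrative sketch only -- not Xray's actual implementation. It shows the
# kind of mechanical, step-by-step translation described above: each
# structured manual step becomes one line of Playwright-style code.
# The step prefixes and selectors below are hypothetical examples.

STEP_PATTERNS = [
    ("open ", lambda arg: f'page.goto("{arg}")'),
    ("type ", lambda arg: f'page.fill("#input", "{arg}")'),
    ("click ", lambda arg: f'page.click("text={arg}")'),
    ("expect ", lambda arg: f'expect(page.locator("body")).to_contain_text("{arg}")'),
]

def translate_step(step: str) -> str:
    """Translate one manual test step into a Playwright-style call."""
    for prefix, render in STEP_PATTERNS:
        if step.lower().startswith(prefix):
            return render(step[len(prefix):])
    # Anything the mapping cannot handle is flagged for human refinement.
    return f"# TODO: no mapping for manual step: {step!r}"

def translate_test(steps: list[str]) -> str:
    """Translate a whole manual test case into a script body."""
    return "\n".join(translate_step(s) for s in steps)

manual_test = [
    "open https://example.com/login",
    "type alice@example.com",
    "click Sign in",
    "expect Dashboard",
]
print(translate_test(manual_test))
```

Even this toy version makes the point: the business logic lives in the manual steps, and the "translation" work is rote, which is exactly the effort AI generation takes off the engineer's plate.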
Xray’s AI Test Script Generation reduces that duplication of effort. It does not eliminate the need for expertise, but it shifts the starting point. Instead of beginning with a blank file, teams begin with a generated script that already reflects the structured intent of the manual test case. This fundamentally changes how quickly automation can scale.
The entire process is embedded inside Xray and Jira, which means teams do not need to introduce new platforms or adjust their existing test management structure.
The workflow begins with an existing structured manual test case. Because the feature relies on clearly defined steps and expected results, it builds directly on artifacts that already represent validated logic.
From there, the AI interprets the defined steps and expected results, generates a framework-ready script for Playwright or Selenium, and returns the result to the team for review and refinement before execution.
The refinement stage is particularly important. Generated scripts are intentionally designed as structured starting points. Teams remain responsible for validation, optimization, and alignment with application-specific nuances before execution.
Generic AI tools can produce automation snippets, but they lack structured context. They operate outside Jira. They do not understand requirement linkage. They do not preserve test version history or maintain traceability within a QA workflow.
Framework-native generators often assume coding proficiency and require scripts to be created independently from requirement artifacts. They typically start from scratch, detached from manual validation.
Xray’s AI Test Script Generation operates differently because it is embedded inside structured test management. It reads validated manual test steps directly from Jira, generates scripts within that context, and creates version-linked automated artifacts. This ensures continuity between requirements, manual tests, and automation.
The difference is not just in script generation. It is in maintaining governance and traceability throughout the transition from validation to automation.
Automation growth is frequently constrained by available engineering bandwidth. When every automated test requires deep framework expertise from the outset, coverage expansion becomes limited.
By generating the initial script structure, Xray’s AI Test Script Generation enables testers who understand functional intent to participate more directly in the automation process. Automation engineers can then focus on refining logic, improving resilience, and integrating scripts into CI pipelines rather than rewriting foundational steps.
This reduces repetitive scripting effort and shortens the path from validation to automation. It also lowers dependency on handoffs between roles, improving collaboration within QA and engineering teams.
Importantly, the feature maintains a human-in-the-loop approach. Generated scripts are editable, version-controlled, and traceable. Final execution remains under team oversight. The objective is acceleration with structure, not automation without control.
The impact at the organizational level includes faster automation velocity, improved coverage scalability, and stronger alignment between manual validation and automated execution.
When using AI Test Script Generation, teams should also be aware of several practical constraints.
If files are uploaded during the generation process, their combined size cannot exceed 1 MB. The limit applies to the total size of all selected files. If the threshold is exceeded, the file selection must be adjusted before proceeding.
Supported file formats include TXT, MD, XML, YAML, PY, C, CPP, CC, C++, JAVA, JS, TS, JSX, TSX, SH, BASH, GO, RUST, SQL, HTML, RTF, CSV, JSON, DOC, DOCX, and PDF.
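These two constraints can be checked before submitting. The helper below is a hypothetical pre-flight sketch, not part of Xray's API; Xray performs its own validation. It enforces the documented rules: combined size of all selected files must not exceed 1 MB, and each file must use a supported extension.

```python
# Hypothetical pre-flight check for the documented upload constraints.
# Sketch only -- Xray validates uploads itself; this just mirrors the rules.

MAX_COMBINED_BYTES = 1 * 1024 * 1024  # 1 MB total across ALL selected files

SUPPORTED_EXTENSIONS = {
    "txt", "md", "xml", "yaml", "py", "c", "cpp", "cc", "c++", "java",
    "js", "ts", "jsx", "tsx", "sh", "bash", "go", "rust", "sql", "html",
    "rtf", "csv", "json", "doc", "docx", "pdf",
}

def check_upload(files: dict[str, int]) -> list[str]:
    """Validate {filename: size_in_bytes}; return a list of problems (empty = OK)."""
    problems = []
    for name in files:
        ext = name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext not in SUPPORTED_EXTENSIONS:
            problems.append(f"unsupported format: {name}")
    total = sum(files.values())
    if total > MAX_COMBINED_BYTES:
        problems.append(f"combined size {total} bytes exceeds 1 MB limit")
    return problems
```

Note that the limit applies to the total, so two individually small files can still trip it together.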
As with any AI-assisted output, generated automation scripts should be treated as an initial foundation rather than a final deliverable.
To get the best results, keep manual test steps and expected results clearly defined, review and refine each generated script against application-specific behavior, and validate execution before integrating scripts into CI pipelines.
Xray’s AI Test Script Generation builds on the AI foundation powered by Sembi IQ.
AI Test Case Generation helps teams accelerate the creation of structured test drafts from requirements across editions. AI Test Model Generation, available in Xray Enterprise, supports structured modeling of system behavior to strengthen coverage design.
Together, these capabilities create a connected workflow that moves from requirement interpretation to validation, behavioral modeling, and now directly into automation script generation.
Rather than existing as isolated AI features, they reinforce each stage of the testing lifecycle within Jira.
Xray's AI Test Script Generation is an AI-powered capability that transforms structured manual test cases into framework-ready automation scripts compatible with Playwright and Selenium, directly inside Jira.
Xray’s AI Test Script Generation is available for Xray Advanced and Xray Enterprise.
No. The feature accelerates initial script creation, but teams review, refine, and finalize scripts before execution. Expertise remains central to quality.
No. Scripts are generated from validated manual test cases that are already linked to requirements inside Jira, preserving context and traceability.
It complements AI Test Case Generation and AI Test Model Generation by extending the AI-enabled workflow from requirement drafting and coverage modeling into executable automation.
We’ve transitioned our AI capabilities to “Enabled by Default” to ensure all users have immediate access to the productivity gains Sembi IQ provides. The platform’s full power is available out of the box, rather than hidden behind menus. However, your workflow comes first, and you have absolute control to toggle it off in the settings if it doesn’t fit your needs.
If you are still on Xray Cloud 6.16.0, your Jira admin must manually update Xray to the latest version in order to take advantage of all recent and future releases, including AI Test Script Generation.