Open-Source Keyword Driven Automation Framework

At a previous internship, I developed a testing automation framework that streamlined browser, API, and database interactions.
The code is open-source and provides a comprehensive, end-to-end solution for UI testing and reporting. The original architecture used a multi-repo model, which I consolidated into a mono-repo to simplify versioning, feature enhancements, and documentation.
The full, open-source framework is available here: github.com/zzzrst/AutomationTestingProgram.
All thoughts expressed are my own, and I’ve redacted any personal or team information. My blog posts aim to provide personal reflections on different stages of my professional journey.
Background #
What is a Testing Framework? #
A testing framework is a structured approach and set of guidelines used to create and execute tests, enabling QA professionals to efficiently validate software applications. It combines practices, tools, and methodologies to standardize testing processes while improving efficiency, cost-effectiveness, and accuracy.
What is an Automated Testing Program? #
An automated testing program implements a testing framework by using software tools and scripts to execute tests automatically. This automation eliminates repetitive manual tasks, enhancing testing efficiency and ensuring software meets quality standards before release. The primary benefits include faster testing cycles and reduced human intervention.
The details of the automation program #
The framework follows a Keyword Driven approach with Selenium 4.0 running in the background. I created abstractions that allow test steps to be defined in Excel or Oracle. This design choice enables non-technical team members to use the framework for testing by simply sharing Excel files via email. The abstraction removes the need to understand the underlying scripting, allowing project managers and other stakeholders without technical knowledge to generate and understand tests.
The framework processes Oracle DB or Excel test cases by parsing them into Test Steps, Test Cases, and Test Suites using the builder design pattern. The execution flow starts with building Test Suites, adding Test Cases to each suite, and then adding Test Steps to each case. During test execution, we begin with the Test Steps and report results upward to the Test Case and finally to the Test Suite. The framework reads ALM Test Suite and Test Case definitions, queries the Oracle DB for test step ordering, and executes tests via C# before reporting results to ALM or other testing platforms.
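To make the build-then-execute flow concrete, here is a minimal sketch of the suite, case, and step hierarchy with a builder-style assembly. The class and method names are illustrative, not the framework's actual types.

```csharp
// Illustrative only: the types and builder below are hypothetical, but they
// mirror the suite -> case -> step hierarchy described above, where suites are
// built top-down and results are reported bottom-up.
using System.Collections.Generic;

public class TestStep  { public string Action; public string Target; public string Value; }
public class TestCase  { public string Name; public List<TestStep> Steps = new List<TestStep>(); }
public class TestSuite { public string Name; public List<TestCase> Cases = new List<TestCase>(); }

public class TestSuiteBuilder
{
    private readonly TestSuite suite = new TestSuite();

    public TestSuiteBuilder Named(string name) { suite.Name = name; return this; }

    public TestSuiteBuilder AddCase(TestCase testCase)
    {
        suite.Cases.Add(testCase);
        return this;
    }

    public TestSuite Build() => suite;
}

// Usage: build the suite first, then execute steps and bubble results upward.
// var suite = new TestSuiteBuilder()
//     .Named("LoginRegression")
//     .AddCase(new TestCase { Name = "Valid login" })
//     .Build();
```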
Additionally, the framework supports Traceability Matrix functionality to map requirements to test cases. This traceability is typically maintained in an Excel file with direct mapping to test cases and steps, often in a separate tab within the same spreadsheet.
When I first joined the team at the Ministry of Education, we were working with an Oracle Database that stored our test cases, alongside Micro Focus’ ALM project management platform. As in most companies, migration projects were always underway.
We were moving to Microsoft’s SharePoint Online from on-prem SharePoint and to Azure DevOps from Microsoft TFS. With Azure DevOps, work items would be logged into DevOps and manual tests would also no longer be done in ALM.
This meant we needed to investigate how to migrate our tests to Azure DevOps and publish results there rather than only to ALM. The quest began to build this new framework.
Now, I’ll detail some of the technical aspects and technical decisions made.
Why Azure DevOps #
Microsoft Azure DevOps represents the future of development operations as it enables continuous integration and delivery, accelerating product development and shortening time-to-production. The platform significantly reduces barriers between development and operations teams, allowing for faster iteration and better products. With TFS, achieving true Agile development was challenging due to manual approval gates and deployment processes.
Micro Focus’ ALM, being an on-premises solution, presents numerous challenges: it’s difficult to use, complex to configure, and cluttered with unnecessary functionality. Performance issues plague the system, which also lacks critical capabilities like build/deployment tracking and code-execution linkages. Without the ability to associate test cases with specific builds, we couldn’t guarantee tests were executing in appropriate environments. Setting up ALM requires extensive configuration and installation of outdated software.
Azure DevOps (comparable to Jira) provides clear connections between defects and code changes, along with numerous marketplace integrations. It offers task groups and agent pools for deployment, essentially functioning as a modernized, Git-integrated version of Team Foundation Server. Its cloud-based nature means it’s accessible from any web browser, drastically reducing the time needed to access defects and log issues—making it a truly modern solution for software development.
Why I took up the project #
My initial interest was in understanding ALM’s integration with Oracle and Selenium’s background operations. However, I quickly identified significant gaps in our methodologies, particularly in version control, update processes, and comprehensive system visualization. I had questions about framework adaptability, such as browser version updates and Oracle DB script execution features. The necessity of SQL database interaction created additional complexity, as database tables don’t provide clear execution insights. One month into my internship, I joined the migration project and discovered source code for a framework conceived five years earlier. While the blueprint existed, the framework required substantial updates to align with modern development approaches.
How I managed the project #
I thrive on ambiguous projects that demand thoughtful analysis, strategic planning, and careful execution. The primary challenge with ambiguity is establishing clear objectives. My manager and team began by comprehensively discussing the migration project requirements. We established three primary objectives:
- Enable test execution from Excel rather than Oracle, eliminating Oracle DB dependencies
- Enhance test reporting capabilities for Azure DevOps, implement email notifications, and develop robust results visualization
- Make feature configurations code-configurable with simple toggles for enabling/disabling functionality
Throughout the project, I maintained daily communication with the team regarding enhancements and incremental framework developments. I documented all advancements and features in Azure DevOps, tracking over 400 work items with a priority system (1-4). I also categorized emerging ideas as “Investigation” items, such as potential integrations with complementary tools.
Technical Problems #
Importing Oracle Database Data into a Viewable Format #
An Oracle Database table is like any other relational table: the data lives in rows, but a straight export does not give you one correctly ordered list of everything. If we simply exported the table from SQL Developer, we would get thousands of lines of unordered test steps. We wanted to extract the data and then immediately run the tests directly from the XML, TXT, CSV, or Excel file it was extracted into.
The problem with XML and TXT is that they are not column-oriented; we wanted to be able to view and update individual columns quickly.
This led us to believe that Excel, using .xlsx or .csv files, was the right choice. We also wanted to add validation triggers on columns, just as we had in Oracle.
We also found that some values in Oracle contained embedded commas, so exporting could cause issues with the tests. We could have overwritten those values, but some validations depended on checking those commas. The benefit of CSV would have been proper version control, since the values are stored in plaintext.
In the end, we started by moving all the test definitions to Excel: I built a program that pulled test case and test suite definitions from ALM, queried Oracle through client libraries for the test steps, and wrote the data into Excel workbooks using an Excel library. I called this the ALM Migrator.
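A rough sketch of the extraction step looks something like the following. I am assuming the Oracle.ManagedDataAccess and ClosedXML packages here purely for illustration; the actual migrator used our internal client libraries and schema, and the table and column names below are placeholders.

```csharp
// Sketch of an Oracle-to-Excel extraction; connection string, query, and
// file names are placeholders, not the real project's configuration.
using System;
using ClosedXML.Excel;
using Oracle.ManagedDataAccess.Client;

class AlmMigratorSketch
{
    static void Main()
    {
        using var conn = new OracleConnection("User Id=qa;Password=***;Data Source=TESTDB");
        conn.Open();

        // Pull the steps in execution order so the spreadsheet is immediately runnable.
        using var cmd = new OracleCommand(
            "SELECT test_case, step_num, action, object, value " +
            "FROM test_steps ORDER BY test_case, step_num", conn);
        using var reader = cmd.ExecuteReader();

        using var workbook = new XLWorkbook();
        var sheet = workbook.Worksheets.Add("TestSteps");
        int row = 1;
        while (reader.Read())
        {
            for (int col = 0; col < reader.FieldCount; col++)
                sheet.Cell(row, col + 1).Value = reader[col]?.ToString();
            row++;
        }
        workbook.SaveAs("ExtractedTestSteps.xlsx");
    }
}
```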
To make the ALM Migrator easy for users, I built it, placed it on a network drive, and then modified the VBA scripts in ALM to add a button that runs the tool from the network drive and reports results into the local user’s C drive.
Reading Excel Data #
The next problem I faced was reading Excel test data. The framework outline included a builder model, but it had no implementations for Excel. Since the framework had been published as segregated NuGet packages, I decided to pull down our binaries and rebuild everything locally, which is the idea behind a mono-repo.
I ran the debugger with Just My Code turned off, which let me see what was causing issues in the framework. Eventually I got the framework to read the Test Suite (named after the Excel file), the Test Cases within the workbook, and then the Test Steps inside each case. I finally had the framework reading everything correctly, but execution was still a big question mark.
Executing Data #
As I began trying to run our Testing Driver, I was perplexed. How does our framework interact with browsers, and what are ChromeDrivers? I had to learn the definitions of implicit vs. explicit waits, retry attempts, timeouts, Selenium drivers, compilation modes, and AODA (Accessibility for Ontarians with Disabilities Act) drivers.
It also took a long time to understand how our framework's actions on objects interacted with the Selenium driver and the ChromeDrivers we installed. Figuring out how the drivers worked was the biggest learning curve.
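For anyone hitting the same wall, the distinction between implicit and explicit waits is easiest to see in code. A minimal Selenium-for-C# sketch, with illustrative timeouts and locators:

```csharp
// Implicit vs. explicit waits in Selenium's C# bindings; values are examples only.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class WaitExample
{
    static void Main()
    {
        using IWebDriver driver = new ChromeDriver();

        // Implicit wait: every FindElement call retries for up to 10 seconds.
        driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(10);

        driver.Navigate().GoToUrl("https://example.org");

        // Explicit wait: poll a specific condition until it succeeds or times out.
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(15));
        IWebElement button = wait.Until(d => d.FindElement(By.Id("submit")));
        button.Click();
    }
}
```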
Slowly but surely, though, I was able to make the automation program build and run.
Functionality Development #
For feature development, I focused on making feature flags available to end users. This lets us ship features independently, so that if one breaks, the framework itself still works. Some features I added were the ability to report test results to Azure DevOps, Micro Focus ALM, and CSV, run AODA reports, highlight elements, enable VNC video, enable PDF reports, enable emailed reports, and report to Report Portal. If one of these features stops working one day, all the QA team needs to do is turn it off in app.config.
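A minimal sketch of how such a toggle can be read from app.config via ConfigurationManager; the key names are illustrative, not the framework's actual settings:

```csharp
// Feature toggles backed by app.config appSettings; keys are hypothetical.
using System.Configuration;

static class FeatureFlags
{
    public static bool IsEnabled(string featureName)
    {
        // A missing or malformed key defaults to "off" so a bad config entry
        // never prevents the rest of the framework from running.
        string raw = ConfigurationManager.AppSettings[featureName];
        return bool.TryParse(raw, out bool enabled) && enabled;
    }
}

// Usage:
// if (FeatureFlags.IsEnabled("ReportToAzureDevOps")) { /* publish results */ }
```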
Unit Tests #
I prioritized comprehensive unit testing and improved code coverage metrics. Higher test coverage increases confidence in framework reliability and functionality. Well-constructed unit tests facilitate faster release cycles and more frequent iterations, ultimately enhancing the framework’s quality and maintainability.
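As an example of the kind of small, focused test involved, here is an NUnit-style sketch; the test framework choice and the FeatureFlags helper from the sketch above are assumptions for illustration.

```csharp
// A tiny NUnit-style unit test against the hypothetical FeatureFlags helper.
using NUnit.Framework;

[TestFixture]
public class FeatureFlagTests
{
    [Test]
    public void MissingFlag_DefaultsToDisabled()
    {
        // With no matching appSettings key, the toggle should be off.
        Assert.That(FeatureFlags.IsEnabled("NoSuchFeature"), Is.False);
    }
}
```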
Authorization and Authentication #
For authorization and authentication, I implemented three primary approaches: credential storage in Excel files, hardcoded values (for development purposes only), and Azure Key Vault integration. These mechanisms provide flexibility for QA teams across various project environments. Working alongside a mentor, I gained deeper insights into authentication considerations that weren’t initially apparent, such as token management, permission scoping, and credential rotation policies.
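For the Key Vault path, a minimal sketch using the Azure.Identity and Azure.Security.KeyVault.Secrets packages looks roughly like this; the vault URI and secret name are placeholders:

```csharp
// Fetch a test credential from Azure Key Vault; URI and secret name are examples.
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class KeyVaultCredentialSource
{
    public static string GetPassword()
    {
        var client = new SecretClient(
            new Uri("https://my-qa-vault.vault.azure.net/"),
            new DefaultAzureCredential());

        // GetSecret returns the latest version of the named secret.
        KeyVaultSecret secret = client.GetSecret("qa-app-password");
        return secret.Value;
    }
}
```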
Azure DevOps #
My work with Azure DevOps involved extensive use of their Test Management API and SDK to report test results efficiently. The implementation included API requests to verify the existence of Test Plans, Cases, Steps, and Runs—creating them when necessary. The final solution combined these components into Test Runs with Azure Test Points to generate test instances. This process follows a structured workflow: executing test steps, reporting results to test cases, progressing through all test cases, and finally reporting comprehensive results to the test set. This approach of separate build and execution phases forms the architectural foundation of our test automation framework.
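To give a feel for the API surface, here is a rough HttpClient sketch of creating a test run through the REST Test Management API. The organization, project, plan ID, and point IDs are placeholders, and the framework itself used the client libraries rather than raw HTTP.

```csharp
// Create an Azure DevOps test run via the REST API; all identifiers are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class DevOpsTestRunSketch
{
    static async Task Main()
    {
        string pat = Environment.GetEnvironmentVariable("AZDO_PAT"); // personal access token
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));

        // Create a run from an existing test plan and its test points.
        var body = new StringContent(
            "{ \"name\": \"Nightly regression\", \"plan\": { \"id\": \"123\" }, \"pointIds\": [1001, 1002] }",
            Encoding.UTF8, "application/json");

        HttpResponseMessage response = await http.PostAsync(
            "https://dev.azure.com/my-org/my-project/_apis/test/runs?api-version=7.0",
            body);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```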
Results Reporting #
Initially, result reporting in Azure DevOps presented significant challenges, particularly with duplicate Test Case and Test Plan entries and unclear result presentation. I focused on improving result alignment and eliminating redundancies. When faced with API throttling due to high request volumes, I transitioned from direct API calls to client libraries that support request batching for operations like test case updates. I also implemented asynchronous functionality where possible to maximize framework performance.
For email reporting, I developed comprehensive execution result templates. I created an HTML template for results reporting that populates at execution completion. This yielded two HTML variants: a static version for email embedding and a dynamic version with JavaScript for interactive accordion-style result viewing.
Deployment of Code #
The deployment and execution environment configuration consumed a significant portion of my internship. My initial approach involved packaging the framework on a network drive along with creating self-extracting archives and MSI installers. I developed various execution methods including bash scripts, PowerShell scripts, scheduled tasks, and Excel VBA integrations. Our team lead provided access to remote management tools like AnyDesk, SSH, and RDP for cross-machine testing. I even developed a C# Windows application as a user interface for the executable.
This approach proved unwieldy and difficult to maintain. The breakthrough came when I leveraged Azure pipelines and agent pools to deploy code to on-premises machines and distribute test execution across agents. Working with my manager, we installed agents on over ten Windows Servers and established a pipeline for framework deployment. I explored NuGet package publishing, binary distribution, and Azure DevOps Artifacts for user distribution.
Significant effort went into pipeline development and parallel execution configuration in Azure DevOps. After extensive experimentation with Deployment Pools, Agent Pools, and Azure Pipelines, I arrived at an optimal solution: deploying compiled framework code to self-hosted agents on QA servers, allowing any project sharing the Agent Pool to execute tests through the framework. With proper deployment targeting, manual framework distribution became unnecessary.
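A simplified pipeline for that final approach might look like the YAML below; the pool name, tasks, and paths are illustrative, not our actual configuration.

```yaml
# Illustrative Azure Pipelines sketch: build the framework and stage the
# compiled binaries on a self-hosted QA agent.
trigger:
  branches:
    include:
      - main

pool:
  name: QA-SelfHosted   # self-hosted agent pool on the QA servers

steps:
  - task: DotNetCoreCLI@2
    displayName: Build framework
    inputs:
      command: build
      projects: '**/AutomationFramework.sln'
      arguments: '--configuration Release'

  - task: CopyFiles@2
    displayName: Stage compiled binaries on the agent
    inputs:
      SourceFolder: '$(Build.SourcesDirectory)/bin/Release'
      TargetFolder: 'C:\AutomationFramework'
```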
Documentation #
I started documenting the framework in a markdown README format and set up our wiki so documentation could be referenced and shared with all teams. I created mechanisms for users to report bugs in the framework and to contribute fixes, and wrote documentation on how to create Work Items and how to triage them. I also created knowledge transfer (KT) videos, ran KT sessions, and worked with QA teams on requirements gathering. To complement all of this, I created an Azure DevOps Wiki repo containing all important system information, as well as a Canvas course on testing and automation.
Data Files #
I converted Oracle Database tables to Excel and explored XML and JSON implementations for the framework. The limitation of non-column-based formats became apparent as they complicated test interaction. Excel-based test steps offered superior usability for adding, modifying, or removing test steps.
SQL Enhancements #
I developed SQL enhancements to restore applications to their initial states, creating scripts that reset statuses from Closed/Submitted to Open/Input. This approach eliminated the need for database refreshes and ensured tests could consistently start from identical initial conditions. These scripts were integrated into the automation framework, with database credentials securely stored in Azure KeyVault using proper authentication protocols.
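Conceptually, one of these reset scripts run through the framework looks like the sketch below; the table, column, and status values are placeholders for the real application schema, and the connection string would come from Azure Key Vault as described above.

```csharp
// Reset application records to their initial state before a test run;
// schema names and status values are hypothetical.
using Oracle.ManagedDataAccess.Client;

class ApplicationReset
{
    public static int ReopenSubmittedApplications(string connectionString)
    {
        using var conn = new OracleConnection(connectionString);
        conn.Open();

        using var cmd = new OracleCommand(
            "UPDATE applications SET status = 'OPEN' " +
            "WHERE status IN ('CLOSED', 'SUBMITTED')", conn);

        // Returns the number of rows reset, useful for logging in the framework.
        return cmd.ExecuteNonQuery();
    }
}
```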
Parameterization #
To increase test flexibility, I implemented parameterization capabilities that allowed tests to execute with minor variations. This was particularly valuable for year-dependent tests, as users could easily modify the year parameter without rewriting tests. I also introduced a unique value generator to ensure test data uniqueness across execution runs.
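A minimal sketch of how token substitution and a unique value generator can work; the {YEAR} and {UNIQUE} tokens are illustrative, not the framework's actual syntax.

```csharp
// Substitute parameters into test step values before execution.
using System;

static class TestParameters
{
    public static string Resolve(string rawValue, int year)
    {
        // A timestamp-based suffix keeps generated data (e.g. usernames)
        // distinct across execution runs.
        string unique = DateTime.UtcNow.ToString("yyyyMMddHHmmssfff");

        return rawValue
            .Replace("{YEAR}", year.ToString())
            .Replace("{UNIQUE}", unique);
    }
}

// Usage: TestParameters.Resolve("report_{YEAR}_{UNIQUE}.pdf", 2024)
//        -> e.g. "report_2024_20240301153000123.pdf"
```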
Browsers #
My browser compatibility work involved investigating driver issues and updating test binaries from Chromium to standard Chrome installations. I devoted substantial effort to browser isolation, execution separation, and environment segregation techniques to minimize flaky tests and improve automation reliability.
During my internship, I gained valuable insights into the differences between Chromium-based and Firefox-based browsers, and experimented with various browser versions to identify feature variations and compatibility concerns.
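One isolation technique is giving each run a throwaway Chrome profile. Here is a sketch with ChromeOptions, where the arguments and profile path are illustrative choices rather than our exact configuration.

```csharp
// Spin up Chrome with a disposable profile so state never leaks between runs.
using System;
using System.IO;
using OpenQA.Selenium.Chrome;

class IsolatedChromeFactory
{
    public static ChromeDriver Create()
    {
        // A fresh user-data directory keeps cookies, cache, and extensions
        // from carrying over between test runs, reducing flaky behaviour.
        string profileDir = Path.Combine(Path.GetTempPath(), "chrome-profile-" + Guid.NewGuid());

        var options = new ChromeOptions();
        options.AddArgument($"--user-data-dir={profileDir}");
        options.AddArgument("--disable-extensions");

        return new ChromeDriver(options);
    }
}
```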
Packaging #
My packaging strategy evolved through several iterations: from zip file distribution via network drives, to Azure DevOps artifacts, to Git repositories, and finally to Azure DevOps Agents. Each approach offered lessons in distribution efficiency and maintenance requirements.
Why BDD is Something That I Endorse More #
As we move into 2024, Behavior Driven Development (BDD) has become increasingly important. The Keyword Driven model presents challenges in requirement mapping and makes it difficult for QA teams to understand the underlying purpose of tests. Effective QA testing should focus primarily on user experiences rather than application behaviors in isolation.
In retrospect, implementing BDD with Playwright would have been more efficient than investing significant time in modifying a Keyword Driven Testing framework.
Playwright #
I collaborated with a colleague on a Playwright proof of concept to evaluate integration possibilities with our framework. While we successfully incorporated Playwright, I recognized that it offers substantial out-of-the-box functionality with minimal additional configuration. For future automation initiatives, I would recommend combining Playwright with BDD principles to create a more efficient testing framework.
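To show why Playwright felt so complete out of the box, here is a minimal Microsoft.Playwright (.NET) sketch; the URL and selector are placeholders.

```csharp
// A tiny Playwright smoke test: launch, navigate, click, with built-in auto-waiting.
using System.Threading.Tasks;
using Microsoft.Playwright;

class PlaywrightSmokeTest
{
    static async Task Main()
    {
        using var playwright = await Playwright.CreateAsync();
        var browser = await playwright.Chromium.LaunchAsync(
            new BrowserTypeLaunchOptions { Headless = true });

        var page = await browser.NewPageAsync();
        await page.GotoAsync("https://example.org");

        // Playwright waits for the element automatically; no explicit wait needed.
        await page.ClickAsync("text=Sign in");

        await browser.CloseAsync();
    }
}
```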
Summary #
This project provided an invaluable opportunity to develop working software from concept to completion. Throughout this experience, I gained comprehensive knowledge in creating reusable software components, building and deploying applications, gathering and incorporating requirements and feedback, and implementing new features. The project encompassed the complete software development lifecycle and equipped me with versatile skills applicable across development disciplines.