BNA Compare Case Study
While working as a Business Systems Analyst/QA Analyst at Innosphere SDG Ltd., I took the initiative to leverage my technical skills to automate various aspects of analysis and testing. This work spanned several years and multiple projects, all beginning with the BNA Compare project and its US federal regulations dataset.
The BNA Compare utility was created to integrate into existing regulatory library tools. Used to verify amendment updates, the tool could show the differences between two different versions of the same regulation or law, underlining added text and striking out removed text.
UI Test Automation - Visual Testing
The Problem to Solve
Initially, the problem was finding ways to avoid boredom. :)
I also wanted to find appropriate and useful ways to use test automation -- not just do automation for the sake of automating tests.
The Journey
The BNA Compare tool had a very basic UI, which made it a perfect candidate for exploring UI automation.
While I had more experience with other languages, I decided to try learning Ruby.
Using Ruby, I...
- Created Watir page objects to automate login and basic version comparisons
- Used Excel/CSV to drive regression (especially comparison) tests
- Set up Selenium Grid with virtual machines to test in different browsers/environments
- Took a screenshot at the end of every test to visually examine the results of the comparison test
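A minimal sketch of the CSV-driven approach described above. The column names and the `ComparePage` interface are my own illustrative assumptions, not the original project's code; in the real project the page object would wrap a `Watir::Browser` (e.g. `@browser.text_field(id: ...).set(...)` and `@browser.screenshot.save(...)`), but here the browser is any object responding to a few methods so the sketch stays self-contained:

```ruby
require "csv"

# A minimal page object. In the original project this would wrap Watir;
# here the browser is any object responding to goto/set/click/screenshot.
class ComparePage
  def initialize(browser)
    @browser = browser
  end

  # Drive one comparison of two versions of the same regulation.
  def compare(doc, version_a, version_b)
    @browser.goto("/compare")
    @browser.set(:document, doc)
    @browser.set(:version_a, version_a)
    @browser.set(:version_b, version_b)
    @browser.click(:compare_button)
  end

  # Save a screenshot named after the scenario for later visual review.
  def capture(name)
    @browser.screenshot("#{name}.png")
  end
end

# Drive the page object from CSV rows, one comparison scenario per row,
# returning the screenshot names for the review pass afterwards.
def run_comparisons(csv_text, page)
  CSV.parse(csv_text, headers: true).map do |row|
    page.compare(row["document"], row["version_a"], row["version_b"])
    shot = "#{row['document']}_#{row['version_a']}_vs_#{row['version_b']}"
    page.capture(shot)
    shot
  end
end
```

Naming each screenshot after its scenario row is what makes the later "quick glance" review practical at scale.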
Advantages
I could set up many tests across many browsers, kick them off, and walk away to do other work, then check back on the results later.
There was a season early in development when I needed to regress many times in a day. It was awesome to have this effective, time-saving option.
Disadvantages
Because of the nature of the implementation, the most important tests could not be reliably confirmed with an assertion at the end of the automated steps. Comparison testing still required a quick glance at a screenshot.
(The design of the test data that evolved over time made this increasingly efficient.)
Long-term benefits
Learning how to drive the UI from CSV and to take screenshots of results was a technique I returned to time and again for several years to come.
Code-Level Testing - Supporting the Developer
The Problem to Solve
Our original team was small -- just me and one developer -- and the demands on his time were huge.
He was a huge advocate for code-level testing and wanted to add more of it, but wasn't finding the time to increase coverage in the way he envisioned.
The Journey
Knowing I was technically inclined, developers often walked through their code with me. On this occasion, our developer suggested I try adding integration tests based on the one sample he had, covering the intermediary layers of translation.
Using C# and Ruby, I...
- Studied the existing C# and NUnit test code, identifying what needed to change to test different cases -- then created one new test manually
- Created a Ruby script for generating the text of a single C# test case for provided parameters
- Used Excel/CSV to list parameters for the C# tests to generate in bulk using the Ruby script
- Added the generated C# tests to the codebase
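The generation step above can be sketched roughly as follows. The method naming scheme, the class under test (`TranslationLayer`), and the CSV column names are invented for illustration; only the overall shape (a Ruby template fed by CSV rows, emitting C#/NUnit test text) reflects what the source describes:

```ruby
require "csv"

# Emit the text of one C#/NUnit test method for the given parameters.
# TranslationLayer and the naming convention are illustrative assumptions.
def csharp_test(name, input, expected)
  <<~CSHARP
    [Test]
    public void Translates_#{name}()
    {
        var result = TranslationLayer.Translate(@"#{input}");
        Assert.AreEqual(@"#{expected}", result);
    }
  CSHARP
end

# Generate test methods in bulk from CSV rows, ready to paste into the fixture.
def generate_tests(csv_text)
  CSV.parse(csv_text, headers: true)
     .map { |row| csharp_test(row["name"], row["input"], row["expected"]) }
     .join("\n")
end
```

The appeal of this approach is that adding a test case becomes a one-line CSV edit rather than hand-writing C#.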
Advantages
Even with only a loose grasp of C#, I could create valuable test cases and the corresponding test code far more quickly than the developer could have, even if he had found the time.
With the initial dataset in particular, this caught a number of regressions before a build even reached me for a manual look.
I was exposed to the main codebase.
Disadvantages
It only covered one aspect (although, it was a helpful one!) and had limited value after the code stabilized.
Long-term benefits
As far as I recall, this specific kind of testing did not carry forward into datasets after the original one. But its legacy of working through so many of the growing pains of the development approach resonated into the future.
* Note: There would have been other code-level tests in the codebase. I just wasn't involved with creating, running, or maintaining those.
Analysis of XML Dataset
The Problem to Solve
A large number of archived data versions had been loaded into the initial database.
There was no way to guess what data had examples of any particular element or what surprises or secrets awaited within.
The Journey
Instead of solely relying on schema documents and the recommendations of subject matter experts to provide insights, I also dug deeper into the data independently.
Using Ruby, I...
- Wrote various scripts to interrogate the XML in order to identify all elements and attributes in existing data
- Created an index of element information to help find real examples of different situations
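A small sketch of what such an interrogation script might look like, using Ruby's standard-library REXML. It indexes every element name with an occurrence count and the attributes seen on it, so real examples of a given element can be tracked down later. The tag names in the example are invented; the actual scripts and data are not reproduced here:

```ruby
require "rexml/document"

# Walk every element in an XML document and build an index of
# { element name => { count:, attributes: } }.
def index_elements(xml_text)
  index = Hash.new { |h, k| h[k] = { count: 0, attributes: [] } }
  doc = REXML::Document.new(xml_text)
  REXML::XPath.each(doc, "//*") do |el|
    entry = index[el.name]
    entry[:count] += 1
    entry[:attributes] |= el.attributes.keys  # union, so each attribute is listed once
  end
  index
end
```

Run over an entire archive of files, an index like this answers "which element names actually occur, and with which attributes?" far faster than reading schema documents.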
Advantages
I learned much more about the data than any schema alone could show.
Disadvantages
Even at the time (2010+), there may have been more efficient ways to interrogate the data than the ones I knew.
Long-term benefits
I was able to take those scripts and skills and apply them to all of the future XML-based datasets. I was also better positioned to ask sharper questions each time around.
Creation of XML Test Data
The Problem to Solve
Even with the analysis of the existing data, there were elements and changes between versions that did not exist in the data we had. (The nature of the application meant it was necessary to address elements individually.)
As a result, we would sometimes find defects in production as new data was added going forward. There would be unhandled situations and unexpected elements. We needed to find more robust ways to avoid this.
The Journey
Schema in hand, we needed a recipe for tests that would evaluate all possibilities of how an element could be viewed or changed.
Using Ruby, I...
- Studied the XML data files and determined the smallest possible structure for creating test data -- then created a test file manually
- Created a script to generate test files and scenarios for a provided element
- Used Excel/CSV to list elements and tests to generate in bulk
- Leveraged the UI test automation code to run a series of tests (with screenshots taken), after loading the generated data to a test server
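The generator idea above can be sketched as follows, assuming a tiny invented `<law>/<section>` skeleton in place of the real schema's minimal structure. For a given element, it emits before/after document pairs covering the "added", "changed", and "removed" comparison scenarios:

```ruby
# Illustrative minimal document skeleton; the real schema's smallest valid
# structure was determined by studying the actual data files.
MINIMAL_WRAPPER = "<law><section>%s</section></law>"

def wrap(fragment)
  format(MINIMAL_WRAPPER, fragment)
end

# Build { scenario => [version_a_xml, version_b_xml] } for one element,
# so the comparison tool can be exercised on adds, changes, and removals.
def scenarios_for(element, old_text = "old", new_text = "new")
  with = ->(text) { "<#{element}>#{text}</#{element}>" }
  {
    "added"   => [wrap(""),              wrap(with.(new_text))],
    "changed" => [wrap(with.(old_text)), wrap(with.(new_text))],
    "removed" => [wrap(with.(old_text)), wrap("")]
  }
end
```

Feeding a CSV list of element names through a generator like this is what made it practical to cover every element individually, as the application required.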
Advantages
This was a tremendous help. It became easier to test, faster to test, more thorough, and also trusted by the developers.
It became possible to hand over the review of the resultant screenshots to a junior team member, who could flag issues.
Disadvantages
Other than those moments of doubt and anxiety when attempting to implement -- "will this be worth it?" ("YES!!") -- there really weren't any.
Long-term benefits
I was able to take the scripts and skills and apply them to all of the future XML-based dataset testing.
New Datasets
The Problem to Solve
Following the initial building of BNA Compare and its first dataset, the tool was rapidly expanded to include other (differently structured) datasets.
The timeline to build was always shorter than desired and the data was always full of new challenges.
The Journey
Every new dataset became an adventure into unfamiliar territory but with reliable methods and knowledge to guide us.
Using Ruby and armed with previous experiences, I...
- Leveraged and improved on the previously discussed techniques of analysis and testing for several more BNA Compare datasets
- Leveraged data analysis and test data generation experiences for a non-Compare application with an XML-based dataset
Advantages
Expanding to include new datasets did become easier, and all of the lessons learned from previous experience carried forward.
These techniques saved so much time and became a trusted approach by the project team.
Disadvantages
It was easy for some to assume that new data would be unreasonably quick to turn around because of how refined this, and other aspects of development, had become. We still had to do the work for each individual dataset, and on occasion it could be hard to convince some folks of that.
Long-term benefits
As mentioned, not only did I use these approaches successfully with BNA Compare, I also used them with an unrelated web application which had an XML-based dataset. It took quite a bit of reworking, but the principles were the same -- as was the end result: Success!