Rails Integration Testing with Omniauth and CanCan

Implementing CanCan and OmniAuth with multi-authorization support and integration testing proved somewhat challenging for me.  For the most part, I think the challenges reflect my lack of sophistication with Rails development, but there were a reasonable number of questions out on the web from other people facing similar challenges, so I thought I'd document what I've learned and how I was able to get Capybara working with it all.
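
The heart of what finally worked for me was OmniAuth's test mode, which lets Capybara drive the sign-in flow without ever leaving the test process.  The sketch below shows the general shape of it; the provider, routes, and page content are illustrative assumptions rather than my app's actual code:

    # Minimal sketch: put OmniAuth into test mode and mock the auth hash,
    # then exercise the sign-in flow with Capybara like any other page.
    require 'spec_helper'

    feature 'Signing in through a mocked provider' do
      before do
        OmniAuth.config.test_mode = true
        OmniAuth.config.mock_auth[:twitter] = OmniAuth::AuthHash.new(
          provider: 'twitter',
          uid:      '12345',
          info:     { name: 'Test User' }
        )
      end

      after { OmniAuth.config.test_mode = false }

      scenario 'the mocked auth hash creates a signed-in session' do
        visit '/auth/twitter'   # in test mode OmniAuth short-circuits straight to the callback
        expect(page).to have_content('Signed in')   # assumes the app flashes a confirmation
      end
    end

With the auth hash mocked per provider, the same pattern extends to multiple providers by setting additional entries in OmniAuth.config.mock_auth, and the CanCan ability checks then run against whatever user the callback creates.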

Test Driven Development and COTS

COTS is traditionally taken to mean Commercial Off The Shelf.  However, I've also heard it referred to as Configurable Off The Shelf.  This latter term more accurately describes a large class of systems implemented in business environments today: the package can largely be deployed as is, but it typically has a significant number of configuration options that allow it to be tailored more closely to its target environment.

When preparing to deploy one such system earlier this year, a colleague and I thought it would be interesting to leverage Microsoft's Test Manager (MTM) to create and record test results covering the final testing of the product.  Both of us being new to automated testing, we didn't foresee the challenges we would face.  The largest was the lack of detailed reporting available at the end to use as documentation for compliance purposes.  In addition, the system is not very supportive of stopping a test mid-stream and continuing later; if you stop a test, you frequently need to declare it a failure, even if the issue you were facing isn't strictly speaking a failure.  Of course, this sort of testing is not really what Test Manager (or any other code testing tool) was meant for.  The end result was that we decided we wouldn't use MTM for similar activities in the future.

At the same time, I was working on another project that was all about code development, but where the significant existing code base had no tests written for it.  Typically, attempting to retrofit tests onto such code is likely to result in a pile of errors and very little, if any, value.  However, I was playing QA for an offsite developer, and I decided that it would be useful to have some predefined tests to run whenever the developer did a code check-in.  It turned out I was right: I didn't have to rethink what steps to run through every time, and I could refine the test list and hone it so that I was hitting all of the important parts.

As I was going through this process, it occurred to me that testing tools and Test Driven Development (TDD) techniques could actually be of benefit during a COTS implementation.  Since tests in a TDD environment serve as a form of requirement, you can write up your tests either as a copy of an existing requirements document, or have the test list itself serve as the requirements document.  The important point here is to make the test titles suitably descriptive (as they frequently are in a TDD environment).
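
As a rough sketch of what I mean (the feature names here are invented examples, not from any real project), a list of descriptive, initially empty RSpec examples reads as a requirements document while still being runnable as pending tests:

    # Each pending example doubles as a requirement statement.
    describe 'Sample registration' do
      it 'assigns a unique ID to every newly registered sample'
      it 'rejects registration when the sample type has not been configured'
      it 'records the receiving technician and timestamp automatically'
    end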

As configuration of the system progresses, you run all of the tests (skipping the ones that you know haven't been worked on yet) until every test passes.  That tells you that you are ready to enter the formal testing phase for compliance systems, or that you are ready to release.  It also facilitates Agile implementation of a COTS system: as you progress through the various Sprints, User Stories turn into Test Cases, and when those cases pass, you are done with the Sprint.
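
If the tooling is something like RSpec, the not-yet-configured areas can simply be marked as skipped so they stay on the list without counting as failures.  Again, just a sketch under that assumption, with invented example names:

    describe 'Instrument integration' do
      # Parked until the balance interface is configured in a later sprint.
      it 'imports results from the balance interface', skip: 'not configured yet' do
        # ...
      end

      xit 'flags out-of-spec results for supervisor review' do
        # ...
      end
    end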

I haven't actually attempted this yet, but it seems reasonable, so I definitely intend to try it.

Old Habits

I have written recently about my discovery of testing while developing code, and of test driven development.  At the time, I was amazed at how well it revealed unused code as well as bugs in the code that was being used.  Unfortunately, I have also discovered that testing is a discipline.  Like any other discipline, it is one that you have to work at establishing, and then work to maintain.

I came to a point while working on the LiMSpec project where I was trying to develop some unique code, and I simply stopped writing new tests and completely avoided running the existing ones while I was busily "hacking."  I had reverted to the old bad habits established over 20 years of coding in everything from Assembly Language on up the complexity tree: code, code, code, then maybe document, then maybe design.  Test only when everything is done.

Surprisingly… okay, not surprisingly at all, I had broken various things in the process.  The only way I knew was when I ran the tests to generate a coverage report.  So I've now gone back, fixed the broken parts, and extended the existing test suite.  There is still more work to be done.  I'd like to get close to 100% coverage, but I'm only at about 85% right now (I suspect the real figure is quite a bit higher, as RCov seems to miss places where code has been exercised by a test but isn't being captured), and I will aim to get there during the second release.  In the meantime, I'm going to try to force myself to move much more toward the Test Driven Development approach, where the tests are written first and then the code is written to pass them.
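
For reference, the coverage report comes from a fairly standard rcov rake task.  This is only a sketch assuming the rcov gem's bundled RcovTask; the file layout and options shown are the classic defaults rather than anything specific to this project:

    # Rakefile sketch: run the test suite under rcov and produce a coverage report.
    require 'rcov/rcovtask'

    Rcov::RcovTask.new do |t|
      t.libs << 'test'
      t.test_files = FileList['test/**/*_test.rb']
      t.rcov_opts << '--exclude /gems/'   # keep bundled gems out of the report
      t.verbose = true
    end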