Commandline Acceptance Testing

I'm doing some acceptance tests at work now, as I try to clarify our deployment process. I call them acceptance tests because they are fairly black-box, and are run in a different process.

To do this I added a new fixture, TestFileEnvironment, to paste.fixture. It expects to be used with py.test, though that just means it prints a lot (py.test captures prints and only displays them when a test fails) and raises plain assertion errors (rather than assertEqual-style exceptions).

You set it up like:

import os
from paste.fixture import TestFileEnvironment

testenv = TestFileEnvironment(
    os.path.join(os.path.dirname(__file__), 'scratch'))

The testenv object generally writes things in the scratch/ directory. To use it:

def test_paster_create():
    # Delete any files in the scratch directory:
    testenv.clear()
    result = testenv.run('paster', 'create', 'ProjectName')
    # Make sure a file or directory was created:
    assert 'ProjectName' in result.files_created
    # Get a file wrapper:
    setup = result.files_created['ProjectName/setup.py']
    # Test that the file contains particular text:
    setup.mustcontain('ProjectName')
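    # Delete a file and check that the deletion was tracked: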
    result = testenv.run('rm ProjectName/setup.py')
    assert 'ProjectName/setup.py' in result.files_deleted

If the exit code is non-zero, or anything is written to stderr, an exception is raised. The result also records which files were created, deleted, or updated while the command ran. There are a couple of other methods, but I haven't needed much more yet.
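Testing a command that is supposed to fail looks roughly like this. Treat it as a sketch: the expect_error keyword and the returncode and files_updated attributes are assumptions about the fixture's API, not something shown in the example above.

def test_failure_and_updates():
    testenv.clear()
    testenv.run('paster', 'create', 'ProjectName')
    # expect_error (keyword name assumed) suppresses the automatic exception
    # for a non-zero exit code, so we can assert on it ourselves:
    result = testenv.run('false', expect_error=True)
    assert result.returncode != 0
    # Appending to an existing file should show up in files_updated, the
    # counterpart to files_created/files_deleted (attribute name assumed):
    result = testenv.run('bash', '-c',
                         'echo "# touched" >> ProjectName/setup.py')
    assert 'ProjectName/setup.py' in result.files_updated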

Anyway, I've found it makes an often annoying process (automatically testing command-line scripts) much more pleasant. And subprocess is like a million times better than all the ways we had to execute commands in the past. OK, a million might be high -- 4x at least (a hint: just use the .communicate() method, the other methods are Too Hard).
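The communicate() pattern I mean is just something like this; it's a generic sketch of running a command with subprocess, not the fixture's actual code:

import subprocess

def run_command(*args):
    # Start the command with both output streams captured.
    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    # communicate() reads stdout and stderr to the end and waits for the
    # process to exit, without the deadlocks you can get reading the pipes
    # by hand.
    stdout, stderr = proc.communicate()
    return proc.returncode, stdout, stderr

# For example:
# code, out, err = run_command('paster', 'create', 'ProjectName')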

Created 07 Sep '05