Someone recently asked me this question:

I’m developing a pytest project to test an API. How can I pass environment data into my tests? I need to run tests against different environments like DEV, TEST, and PROD. Each environment has a different URL and a unique set of users.

This is a common problem for automated test suites, not just in Python or pytest. Any information a test needs about the environment under test is called configuration metadata. URLs and user accounts are common configuration metadata values. Tests need to know what site to hit and how to authenticate.

Using config files with an environment variable

There are many ways to handle inputs like this. I like to create JSON files to store the configuration metadata for each environment. So, something like this:

  • dev.json
  • test.json
  • prod.json

Each one might look like this:

{
  "base_url": "
  "username": "pandy",
  "password": "DandyAndySugarCandy"
}

The structure of each file must be the same so that tests can handle them interchangeably.
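
For instance, test.json might look something like this (the values here are made up purely for illustration):

{
  "base_url": "https://test.example.com",
  "username": "pandy",
  "password": "AnotherSecretValue"
}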

I like using JSON files because:

  • they’re plain text files with a standard format
  • they’re easy to diff
  • they store data hierarchically
  • Python’s standard json module turns them into dictionaries in 2 lines flat (as shown below)
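
For example, loading one of these files into a dictionary really is that short (assuming dev.json sits in the working directory):

import json

# Open the file and parse it into a dictionary in two lines.
with open('dev.json') as config_file:
  config_data = json.load(config_file)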

Then, I create an environment variable to set the desired config file:

export TARGET_ENV=dev.json
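
(On Windows, the equivalent would be set TARGET_ENV=dev.json in Command Prompt or $env:TARGET_ENV = "dev.json" in PowerShell.)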

In my pytest project, I write a fixture to get the config file path from this environment variable and then read that file as a dictionary:

import json
import os
import pytest

@pytest.fixture(scope='session')
def target_env():
  config_path = os.environ['TARGET_ENV']
  with open(config_path) as config_file:
    config_data = json.load(config_file)
  return config_data

I’ll put this fixture in a conftest.py file so all tests can share it. Since it uses session scope, pytest will execute it one time before all tests. Test functions can call it like this:

import requests

def test_api_get(target_env):
  url = target_env['base_url']
  creds = (target_env['username'], target_env['password'])
  response = requests.get(url, auth=creds)
  assert response.status_code == 200
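
Putting it all together, a run against the TEST environment would look something like this:

export TARGET_ENV=test.json
python -m pytest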

Selecting the config file with a command line argument

If you don’t want to use environment variables to select the config file, you can instead create a custom pytest command line argument. Bas Dijkstra wrote an excellent article showing how to do this. Basically, you can add the following function to conftest.py to add the custom argument:

def pytest_addoption(parser):
  parser.addoption(
    '--target-env',
    action='store',
    default='dev.json',
    help='Path to the target environment config file')

Then, update the target_env fixture:

import json
import pytest

@pytest.fixture(scope='session')
def target_env(request):
  config_path = request.config.getoption('--target-env')
  with open(config_path) as config_file:
    config_data = json.load(config_file)
  return config_data

When running your tests, you’d specify the config file path like this:

python -m pytest --target-env dev.json
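
Since the option sets dev.json as its default, running python -m pytest without the flag will target the DEV environment.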

Why bother with JSON files?

In theory, you could pass all inputs into your tests with pytest command line arguments or environment variables. You don’t need config files. However, I find that storing configuration metadata in files is much more convenient than setting a bunch of inputs every time I need to run my tests. In our example above, passing one value for the config file path is much easier than passing three different values for base URL, username, and password. Real-world test projects might need even more inputs. Plus, configurations don’t change frequently, so it’s okay to save them in a file for repeated use. Just make sure to keep your config files safe if they contain any secrets.

Validating inputs

Whenever reading inputs, it’s good practice to make sure their values are good. Otherwise, tests could crash! I like to add a few basic assertions as safety checks:

import json
import os
import pytest

@pytest.fixture(scope='session')
def target_env(request):
  config_path = request.config.getoption('--target-env')
  assert os.path.isfile(config_path)

  with open(config_path) as config_file:
    config_data = json.load(config_file)

  assert 'base_url' in config_data
  assert 'username' in config_data
  assert 'password' in config_data

  return config_data

Now, pytest will stop immediately if inputs are wrong.
