various (#16)

* add full integration test of cli / pytest_abra with all tests
* save path of runner_*.py in Runner subclass to improve test discovery -> allows for the same test name in two different runners
* reorganize output dir names
* use URL fixture everywhere
* rework coordinator interface
* add --session_id to cli args
* add log results table
* plenty of refactoring
* add assert messages
* add plenty of tests
* add /docs dir with plenty of documentation
* fix authentik setup
* add authentik cleanup, remove test user
* add random test user credential generation and integrate into test routine. Random creds are saved to STATES

Reviewed-on: local-it-infrastructure/e2e_tests#16
Co-authored-by: Daniel <d.brummerloh@gmail.com>
Co-committed-by: Daniel <d.brummerloh@gmail.com>
This commit is contained in:
parent 016b88a68d
commit 2dd765a974

36 changed files with 1145 additions and 432 deletions
.gitignore (vendored, 1 change)

@@ -7,3 +7,4 @@ TestResults/
*.zip
*.egg-info
credentials*
!credentials-example.json
README.md (154 changes)

@@ -1,90 +1,18 @@

# pytest-abra

Pytest-Abra is an installable python package designed to test instances created with [abra](https://docs.coopcloud.tech/abra/). After installation, you will have two things:
Pytest-Abra is an installable python package based on pytest, designed to test instances created with [abra](https://docs.coopcloud.tech/abra/). After installation, you will have two things:

- `abratest` CLI command. *Used to initialize the testing.*

- `pytest-abra` Pytest plugin. *Automatically loads custom fixtures in any pytest run (see `pytest_abra/custom_fixtures.py`)*

## CLI (`abratest`)

`abratest` can be called via terminal:

```bash
abratest [arguments]
```

To run successfully, several specific arguments are required. The easiest way to use abratest is with the helper script in `main.py`. Of course, you can implement a similar helper script in the language of your liking. The cli command `abratest` has 3 **required arguments**:

- `--env_paths`: list of the .env files used in the test
- `--recipes_dir`: directory of all available abra recipes
- `--output_dir`: target directory for all test results

### env_paths [string]

The variable env_paths consists of one or more paths pointing at .env files. The paths are separated with ";". These .env files are actually configuration files for `abra` recipes, but `pytest-abra` uses the same files for test configuration.

To run `abratest` with these `.env` configuration files

```
/path/to/config_1.env
/path/to/config_2.env
/path/to/config_3.env
```

we simply call

```
abratest --env_paths "/path/to/config_1.env;/path/to/config_2.env;/path/to/config_3.env"
```

Note that the argument should be quoted, because an unquoted ";" would be interpreted by the shell as a command separator.

Under the hood, each `.env` file in `--env_paths` will create one instance of a `Runner` subclass. Let's say we have `wordpress_configuration.env` containing `TYPE=wordpress`. This will create an instance of `RunnerWordpress`. This class has to be imported from `recipes_dir`.
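As a sketch of how the `--env_paths` value is consumed, the cli splits it on ";" into `Path` objects (the helper name below is made up for illustration; the cli does this inline):

```python
# Hypothetical helper (not part of pytest-abra's public API): split the
# ";"-separated --env_paths value into Path objects, as the cli does inline.
from pathlib import Path


def split_env_paths(env_paths: str) -> list[Path]:
    """Split the ";"-separated --env_paths argument into Path objects."""
    return [Path(p) for p in env_paths.split(";") if p]
```

This is also why the paths must reach the program as a single quoted argument: the split happens inside Python, not in the shell.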
### recipes_dir [string]

The required argument `--recipes_dir` has to point to the directory where all the abra recipes are stored. We can call `abratest` with

```
abratest --recipes_dir /path/to/abra/recipes
```

The expected dir structure inside `recipes_dir` is as follows:

```
DIR recipes_dir [contains abra recipes]
│
├── DIR authentik [authentik recipe]
│   ├── [files from authentik recipe]
│   └── DIR tests_authentik [pytest tests for authentik]
│       ├── FILE runner_authentik.py  # containing RunnerAuthentik class
│       └── [pytest_files]
│
└── DIR wordpress [wordpress recipe]
    ├── [files from wordpress recipe]
    └── DIR tests_wordpress [pytest tests for wordpress]
        ├── FILE runner_wordpress.py  # containing RunnerWordpress class
        └── [pytest_files]
```

The class `RunnerWordpress` will be automatically imported using the `importlib` library, which is equivalent to the code below. Note that `recipes_dir` will be added to `sys.path` automatically for the import to work and that every `Runner` class matching `recipes_dir.rglob("*/runner*.py")` will be imported.

```python
from wordpress.tests_wordpress.runner_wordpress import RunnerWordpress
```
### output_dir [string]

Path to the directory where all test outputs are stored (test report, tracebacks, playwright traces, etc.)

```
abratest --output_dir /path/to/output
```

- `pytest-abra` Pytest plugin. *Automatically loads custom fixtures in any pytest run (see `pytest_abra/custom_fixtures.py`)*

# Usage

To use pytest-abra, follow these steps:
Pytest-abra can easily be installed on any system but also offers a Docker image. To use pytest-abra, follow these steps:

## 1. GIT clone [with & without Docker]
## Usage [without Docker]

### Installation [without Docker]

To clone with submodules, use these git commands:

@@ -95,14 +23,6 @@

```bash
git submodule update --init    # add submodule after normal cloning
git submodule update --remote  # update submodules
```

## Run

You can run pytest-abra with and without Docker. Choose one and follow the steps accordingly:

## 2.1 Run without Docker

### Installation

Create a python environment and install all dependencies via

@@ -110,46 +30,40 @@

```bash
pip install -e .
playwright install
```

Run the script with
### Run [without Docker]

Run the helper script or directly use the cli command (see docs)

```bash
python main.py
python main.py  # run pytest-abra
abratest [options]
```

## 2.2 Run with Docker
## Usage [with Docker]

### Installation [with Docker]

To clone with submodules, use these git commands:

```bash
git clone --recurse-submodules <repository>
# optional:
git submodule update --init    # add submodule after normal cloning
git submodule update --remote  # update submodules
```

Build the image

```bash
docker compose build             # build the image
docker compose build --no-cache  # force rebuild without cache
```

### Run [with Docker]

Run the script

```bash
docker compose run --rm app python main.py  # run pytest-abra
docker compose run --rm -it app /bin/bash   # debug the container
```

Force rebuild with cache

```bash
docker-compose up --build
```

Force rebuild without cache

```bash
docker-compose build --no-cache
```

## Playwright Debug & Codegen

Use playwright debug mode or codegen to create testing code easily by recording browser actions: https://playwright.dev/python/docs/codegen

```bash
abratest --debug                                # launch your tests in debug mode
playwright codegen demo.playwright.dev/todomvc  # visit given url in codegen mode
```

## Development

```bash
pytest                                     # test pytest-abra
pytest -m "not slow"                       # test pytest-abra without slow tests
pytest --collect-only                      # debug test pytest-abra
docker compose run --rm app pytest         # run pytest-abra
docker compose run --rm -it app /bin/bash  # use the container interactively
```
credentials-example.json (new file, 9 lines)

@@ -0,0 +1,9 @@
{
    "ADMIN_USER": "admin",
    "ADMIN_PASS": "password",
    "IMAP_EMAIL": "test@domain.com",
    "IMAP_HOST": "mail.domain.com",
    "IMAP_PORT": "993",
    "IMAP_USER": "imap_user",
    "IMAP_PASS": "password"
}
docs/documentation.md (new file, 329 lines)

@@ -0,0 +1,329 @@

# pytest-abra

Pytest-Abra is an installable python package based on pytest, designed to test instances created with [abra](https://docs.coopcloud.tech/abra/). After installation, you will have two things:

- `abratest` CLI command. *Used to initialize the testing.*

- `pytest-abra` Pytest plugin. *Automatically loads custom fixtures in any pytest run (see `pytest_abra/custom_fixtures.py`)*

# Getting Started

Pytest-abra can easily be installed on any system but also offers a Docker image. To use pytest-abra, follow these steps:

## Usage [without Docker]

### Installation [without Docker]

To clone with submodules, use these git commands:

```bash
git clone --recurse-submodules <repository>
# optional:
git submodule update --init    # add submodule after normal cloning
git submodule update --remote  # update submodules
```

Create a python environment and install all dependencies via

```bash
pip install -e .
playwright install
```

### Run [without Docker]

Run the helper script or directly use the cli command (see docs)

```bash
python main.py  # run pytest-abra
abratest [options]
```

## Usage [with Docker]

### Installation [with Docker]

To clone with submodules, use these git commands:

```bash
git clone --recurse-submodules <repository>
# optional:
git submodule update --init    # add submodule after normal cloning
git submodule update --remote  # update submodules
```

Build the image

```bash
docker compose build             # build the image
docker compose build --no-cache  # force rebuild without cache
```

### Run [with Docker]

Run the script

```bash
docker compose run --rm app python main.py  # run pytest-abra
docker compose run --rm -it app /bin/bash   # use the container interactively
```

# Documentation

After installation, `abratest` can be called via terminal:

```bash
abratest [arguments]
```

To run successfully, several specific arguments are required. The easiest way to use `abratest` is with the helper script `main.py`. Of course, you can implement a similar helper script in the language of your liking.

## CLI Interface

The cli command `abratest` has 3 **required arguments**:

- `--env_paths ENV_PATHS`: list of the .env files used in the test
- `--recipes_dir RECIPES_DIR`: directory of all available abra recipes
- `--output_dir OUTPUT_DIR`: target directory for all test results

Furthermore, there are these optional arguments:

- `--resume`: `abratest` will take the directory in `output_dir` with the most recent creation date and resume the tests there.
- `--session_id SESSION_ID`: instead of generating a new session_id, the given session_id is used to run or resume the test. Overrides `--resume`.
- `--debug`: enables playwright debug mode, see docs [here](https://playwright.dev/python/docs/running-tests#debugging-tests)
- `--timeout`: overrides the default playwright timeouts (in ms), see docs [here](https://playwright.dev/python/docs/api/class-browsercontext#browser-context-set-default-timeout) and [here](https://playwright.dev/python/docs/test-assertions#global-timeout). In our current setup, some tests can fail at 10s but will pass with 20s.

### env_paths [required | string]

The .env files provided through the `--env_paths` argument are the most important input to abratest, as they serve as configuration for the tests. One or more paths pointing at .env files can be provided; multiple paths are separated with ";". These .env files are actually the same files that are used to configure the `abra` recipes for instance creation.

To run `abratest` with these `.env` configuration files

- `/path/config_1.env` [of TYPE authentik]
- `/path/config_2.env` [of TYPE wordpress]
- `/path/config_3.env` [of TYPE wordpress]

we simply call

```
abratest --env_paths "/path/config_1.env;/path/config_2.env;/path/config_3.env" [...other args]
```

Under the hood, each `.env` file in `--env_paths` will create one instance of a `Runner` subclass. Let's say we have `config_2.env` containing `TYPE=wordpress`. This will create an instance of `RunnerWordpress`. This class has to be imported from `recipes_dir`.
### recipes_dir [required | string]

The required argument `--recipes_dir` has to point to the directory where all the abra recipes are stored. We can call `abratest` with

```
abratest --recipes_dir /path/to/abra/recipes
```

The expected dir structure inside `recipes_dir` is as follows:

```
DIR recipes_dir [contains abra recipes]
│
├── DIR authentik [authentik recipe]
│   ├── [files from authentik recipe]
│   └── DIR tests_authentik [pytest tests for authentik]
│       ├── FILE runner_authentik.py  # containing RunnerAuthentik class
│       └── [pytest_files]
│
└── DIR wordpress [wordpress recipe]
    ├── [files from wordpress recipe]
    └── DIR tests_wordpress [pytest tests for wordpress]
        ├── FILE runner_wordpress.py  # containing RunnerWordpress class
        └── [pytest_files]
```

The class `RunnerWordpress` will be automatically imported using the `importlib` library, which is equivalent to the code below. Note that `recipes_dir` will be added to `sys.path` automatically for the import to work and that every `Runner` class matching `recipes_dir.rglob("*/runner*.py")` will be imported.

```python
from wordpress.tests_wordpress.runner_wordpress import RunnerWordpress
```

### output_dir [required | string]

Path to the directory where all test outputs are stored (test report, tracebacks, playwright traces, etc.)

```
abratest --output_dir /path/to/output
```

# Functionality

Abratest has 3 required inputs, but most importantly the test configuration is done through the .env files given with the `--env_paths` argument. So let's say we want to run abratest with these 3 .env files:

- `config1.env` [of TYPE authentik]

- `config2.env` [of TYPE wordpress]

- `config3.env` [of TYPE wordpress]

Now we run

```bash
abratest --env_paths "path/config1.env;path/config2.env;path/config3.env" [...other args]
```

```
abratest -> create Coordinator() instance
└── Coordinator() -> create Runner() subclass instances
    ├── RunnerAuthentik()  [based on config1.env, loaded
    │   │                   from abra/recipes/authentik]
    │   │                  # RunnerAuthentik with 3 test files:
    │   ├── RUN pytest path/setup_authentik.py
    │   ├── RUN pytest path/test_authentik_1.py
    │   └── RUN pytest path/test_authentik_2.py
    ├── RunnerWordpress()  [based on config2.env, loaded
    │   │                   from abra/recipes/wordpress]
    │   │                  # RunnerWordpress with 1 test file:
    │   └── RUN pytest path/test_wordpress.py
    └── RunnerWordpress()  [based on config3.env, loaded
        │                   from abra/recipes/wordpress]
        │                  # RunnerWordpress with 1 test file:
        └── RUN pytest path/test_wordpress.py
```

Coordinator will take care of the correct order of the tests. In general, tests are placed in one of 3 categories: `setups`, `tests` and `cleanups`. To associate a test with one of these categories, place the Test in the corresponding list of the Runner class, i.e. `Runner.setups = [test]` or `Runner.tests = [test]`. The execution order will be:

> [setups] ➔ [tests] ➔ [cleanups]

Furthermore, some `Runner` classes can depend on others. For example, `RunnerWordpress` depends on `RunnerAuthentik`. Therefore, `Coordinator` will make sure that `RunnerAuthentik` runs before `RunnerWordpress`. We will end up with this order:

| #   | Runner      | Type     |
| --- | ----------- | -------- |
| 1.  | Authentik   | setups   |
| 2.  | Wordpress-1 | setups   |
| 3.  | Wordpress-2 | setups   |
| 4.  | Authentik   | tests    |
| 5.  | Wordpress-1 | tests    |
| 6.  | Wordpress-2 | tests    |
| 7.  | Authentik   | cleanups |
| 8.  | Wordpress-1 | cleanups |
| 9.  | Wordpress-2 | cleanups |
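The phase-then-dependency ordering above can be sketched as follows. This is illustrative only; the real Coordinator is more involved, and the runner representation here (plain dicts with `name`, `env_type`, `dependencies`) is a simplified assumption:

```python
# Illustrative sketch of the ordering described above: dependency-sorted
# runners, then all setups, all tests, all cleanups. Simplified assumption,
# not the real Coordinator implementation.

def order_runners(runners: list[dict]) -> list[dict]:
    """Sort runners so that dependencies come first (naive topological pass)."""
    ordered: list[dict] = []
    remaining = list(runners)
    done: set[str] = set()
    while remaining:
        for r in remaining:
            if all(dep in done for dep in r["dependencies"]):
                ordered.append(r)
                done.add(r["env_type"])
                remaining.remove(r)
                break
        else:
            raise ValueError("circular or missing dependency")
    return ordered


def execution_plan(runners: list[dict]) -> list[tuple[str, str]]:
    """Return (runner name, phase) pairs in Coordinator order."""
    ordered = order_runners(runners)
    return [(r["name"], phase) for phase in ("setups", "tests", "cleanups") for r in ordered]


runners = [
    {"name": "Wordpress-1", "env_type": "wordpress", "dependencies": ["authentik"]},
    {"name": "Authentik", "env_type": "authentik", "dependencies": []},
]
plan = execution_plan(runners)
```

With the two runners above, `plan` starts with Authentik's setups, matching the table.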
# Create a test suite for a recipe

todo

To understand how a test suite is built, let's have a look at the files:

- `runner_authentik.py`: required; defines the Runner subclass (see below)
- `conftest.py`: not required; a special pytest file that is automatically discovered and loaded. A convenient place to define fixtures and functions used in more than one test routine
- `setup_authentik.py`: not required; can hold the setup routine for authentik; has to be registered in `runner_authentik.py`

# Create a custom Runner

To see how a new subclass of `Runner` is created, let's examine a simplified version of the `RunnerWordpress` class. It contains two setup scripts and two test scripts, one of which runs conditionally.

```python
from pytest_abra import Runner, Test

class RunnerWordpress(Runner):
    env_type = "wordpress"
    dependencies = ["authentik"]
    setups = [
        Test(test_file="setup_wordpress_1.py"),
        Test(test_file="setup_wordpress_2.py"),
    ]
    tests = [
        Test(test_file="test_wordpress.py"),
        Test(condition=condition_function, test_file="test_wordpress_conditional.py"),
    ]
    cleanups = []
```

The signature of condition functions can be seen below. The function takes one `NamedTuple` and returns a `bool`. You can learn about the contents of the input by looking up the class `ConditionArgs`. Generally speaking, it provides access to all of the .env files, especially the one related to the current Runner.

```python
def condition_function(args: ConditionArgs) -> bool:
    ...
```
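As a concrete illustration, a condition function might enable a test only when a certain key is set in the Runner's .env file. `ConditionArgs` is stood in by a minimal `NamedTuple` below; the real class lives in pytest_abra and its attribute names may differ (the `env` field is an assumption):

```python
# Illustrative condition function. ConditionArgs is stood in by a minimal
# NamedTuple; the real class lives in pytest_abra and the "env" field
# below is an assumption for illustration.
from typing import NamedTuple


class ConditionArgs(NamedTuple):  # stand-in for pytest_abra's ConditionArgs
    env: dict  # parsed .env of the current Runner (assumed shape)


def condition_smtp_configured(args: ConditionArgs) -> bool:
    """Run the conditional test only if the recipe's .env enables SMTP."""
    return args.env.get("SMTP_ENABLED", "0") == "1"
```

Passed as `Test(condition=condition_smtp_configured, ...)`, the test would simply be skipped when the key is absent.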
## Discovery of `Runners` and `Tests`

- Runners will be discovered if they are defined in a module named `runner_*.py` containing a class named `Runner*`.

- Tests will be discovered by filename as long as they are placed in the parent dir of `runner_*.py` or in any subdirectory.

```
DIR parent_dir
├── FILE runner_*.py
├── FILE test1.py
└── DIR subdir
    ├── DIR subsubdir
    │   └── test2.py
    └── test3.py
```
# Create custom Tests

The test files are written in the same way as any other pytest test file. The only difference is that pytest-abra provides custom fixtures that make it easy to get the configuration from the provided .env files and to deal with URLs, etc.

### Step 1) Add a new Test

Create a new testfile `new_test.py` in the same directory or a subdirectory of `runner_wordpress.py`.
Register `new_test.py` as a test in the `RunnerWordpress` class.
Set `prevent_skip=True`, so that you can run your new test over and over again for debugging without it being skipped.

```python
# runner_wordpress.py
from pytest_abra import Runner, Test

class RunnerWordpress(Runner):
    env_type = "wordpress"
    tests = [
        Test(test_file="working_test.py"),
        Test(test_file="new_test.py", prevent_skip=True),
    ]
```

```python
# new_test.py

def test_new():
    ...
```

### Step 2) Call abratest

Call abratest with `--debug` to enable playwright debug mode and either `--session_id` or `--resume`.

```bash
abratest [required-options] --debug --session_id debug_session
```

This could be done by modifying `main.py`. The first time you run abratest, all tests will be executed as usual. On subsequent runs, tests that have already passed will be skipped. Only your new test will be run again and again, as the `prevent_skip` option is enabled. So you can run all tests once and then skip everything besides the new test you want to debug.

# todo: add example

# Playwright Debug & Codegen

Use playwright debug mode or codegen to create testing code easily by recording browser actions: https://playwright.dev/python/docs/codegen

```bash
abratest --debug                                # launch your tests in debug mode
playwright codegen demo.playwright.dev/todomvc  # visit given url in codegen mode
```

## Development

```bash
pytest                              # test pytest-abra
pytest -m "not slow"                # test pytest-abra without slow tests
pytest --collect-only               # debug test pytest-abra
docker compose run --rm app pytest  # run pytest-abra
```
main.py (16 changes)

@@ -1,16 +1,12 @@
import json
import os
import subprocess
from pathlib import Path

from pytest_abra.utils import load_json_to_environ

# --------------------- load credentials to env variables -------------------- #

cred_file = Path("credentials.json")
with open(cred_file, "r") as f:
    CREDENTIALS = json.load(f)

for key, value in CREDENTIALS.items():
    os.environ[key] = value
load_json_to_environ(cred_file)

# --------------------------------- env files -------------------------------- #

@@ -18,7 +14,7 @@ for key, value in CREDENTIALS.items():
# triggers the execution of one test Runner and provides configuration to the
# tests inside the runner.

ENV_FILES_ROOT = Path("../envfiles").resolve()
ENV_FILES_ROOT = Path("./envfiles").resolve()
ENV_FILES = [
    ENV_FILES_ROOT / "login.test.dev.local-it.cloud.env",  # authentik
    ENV_FILES_ROOT / "blog.test.dev.local-it.cloud.env",  # wordpress

@@ -28,7 +24,7 @@ ENV_PATHS = ";".join([x.as_posix() for x in ENV_FILES])

# ----------------------------------- dirs ----------------------------------- #

RECIPES_DIR = Path("../recipes").resolve()
RECIPES_DIR = Path("./recipes").resolve()
OUTPUT_DIR = Path("./test-output").resolve()

# ------------------------------------ run ----------------------------------- #

@@ -44,5 +40,7 @@ subprocess.run(
        OUTPUT_DIR,
        "--resume",
        # "--debug",
        # "--session_id",
        # "abc",
    ]
)
(deleted file, 104 lines)

@@ -1,104 +0,0 @@
# RUN

Abratest has 3 required inputs, but most importantly the test configuration is done through the .env files given with the `--env_paths` argument. So let's say we want to run abratest with these 3 .env files:

- `config1.env` [of TYPE authentik]

- `config2.env` [of TYPE wordpress]

- `config3.env` [of TYPE wordpress]

Now we run

```bash
abratest --env_paths "path/config1.env;path/config2.env;path/config3.env" [...other args]
```

```
abratest -> create Coordinator() instance
└── Coordinator() -> create Runner() subclass instances
    ├── RunnerAuthentik()  [based on config1.env, loaded
    │   │                   from abra/recipes/authentik]
    │   │                  # RunnerAuthentik with 3 test files:
    │   ├── RUN pytest path/setup_authentik.py
    │   ├── RUN pytest path/test_authentik_1.py
    │   └── RUN pytest path/test_authentik_2.py
    ├── RunnerWordpress()  [based on config2.env, loaded
    │   │                   from abra/recipes/wordpress]
    │   │                  # RunnerWordpress with 1 test file:
    │   └── RUN pytest path/test_wordpress.py
    └── RunnerWordpress()  [based on config3.env, loaded
        │                   from abra/recipes/wordpress]
        │                  # RunnerWordpress with 1 test file:
        └── RUN pytest path/test_wordpress.py
```

Coordinator will take care of the correct order of the tests. In general, tests are placed in one of 3 categories: `setups`, `tests` and `cleanups`. To associate a test with one of these categories, place the Test in the corresponding list of the Runner class, i.e. `Runner.setups = [test]` or `Runner.tests = [test]`. The execution order will be:

> [setups] ➔ [tests] ➔ [cleanups]

Furthermore, some `Runner` classes can depend on others. For example, `RunnerWordpress` depends on `RunnerAuthentik`. Therefore, `Coordinator` will make sure that `RunnerAuthentik` runs before `RunnerWordpress`. We will end up with this order:

| #   | Runner      | Type     |
| --- | ----------- | -------- |
| 1.  | Authentik   | setups   |
| 2.  | Wordpress-1 | setups   |
| 3.  | Wordpress-2 | setups   |
| 4.  | Authentik   | tests    |
| 5.  | Wordpress-1 | tests    |
| 6.  | Wordpress-2 | tests    |
| 7.  | Authentik   | cleanups |
| 8.  | Wordpress-1 | cleanups |
| 9.  | Wordpress-2 | cleanups |

# Create a custom Runner

To see how a new subclass of `Runner` is created, let's examine a simplified version of the `RunnerWordpress` class. It contains two setup scripts and two test scripts, one of which runs conditionally.

```python
from pytest_abra import Runner, Test

class RunnerWordpress(Runner):
    env_type = "wordpress"
    dependencies = ["authentik"]
    setups = [
        Test(test_file="setup_wordpress_1.py"),
        Test(test_file="setup_wordpress_2.py"),
    ]
    tests = [
        Test(test_file="test_wordpress.py"),
        Test(condition=condition_function, test_file="test_wordpress_conditional.py"),
    ]
    cleanups = []
```

The signature of condition functions can be seen below. The function takes one `NamedTuple` and returns a `bool`. You can learn about the contents of the input by looking up the class `ConditionArgs`. Generally speaking, it provides access to all of the .env files, especially the one related to the current Runner.

```python
def condition_function(args: ConditionArgs) -> bool:
    ...
```

# Create custom Tests

The test files are written in the same way as any other pytest test file. The only difference is that pytest-abra provides custom fixtures that make it easy to get the configuration from the provided .env files and to deal with URLs, etc.

# todo: add example
@@ -21,9 +21,15 @@ dependencies = [
    "loguru == 0.7.2",
    "beautifulsoup4 == 4.12.2",
    "imbox == 0.9.8",
    "tabulate == 0.9.0",
    "hatchling == 1.18.0",
    "icecream",
    "tabulate",
    "icecream == 2.1.3",
]

[project.optional-dependencies]
dev = [
    "mypy",
    "ruff",
]

[project.entry-points.pytest11]
@@ -1,6 +1,6 @@
from pytest_abra.coordinator import Coordinator
from pytest_abra.dir_manager import DirManager
from pytest_abra.env_manager import EnvFile
from pytest_abra.env_manager import EnvFile, EnvManager
from pytest_abra.runner import ConditionArgs, Runner, Test
from pytest_abra.utils import BaseUrl

@@ -12,4 +12,5 @@ __all__ = [
    "DirManager",
    "BaseUrl",
    "EnvFile",
    "EnvManager",
]
@@ -2,21 +2,29 @@ import argparse
import os
from pathlib import Path

import pkg_resources  # type: ignore
from loguru import logger

from pytest_abra import Coordinator
from pytest_abra.dir_manager import DirManager
from pytest_abra.utils import get_datetime_string
from pytest_abra.utils import get_session_id


def get_version():
    return pkg_resources.get_distribution("pytest_abra").version


def run():
    parser = argparse.ArgumentParser()
    parser.add_argument("--version", "-V", action="version", version=get_version(), help="output the version number")
    parser.add_argument("--env_paths", type=str, help="List of loaded env files separated with ;", required=True)
    parser.add_argument("--recipes_dir", type=Path, help="List of loaded env files separated with ;", required=True)
    parser.add_argument("--output_dir", type=Path, help="List of loaded env files separated with ;", required=True)
    parser.add_argument("--recipes_dir", type=Path, help="Dir of abra recipes and respective runners", required=True)
    parser.add_argument("--output_dir", type=Path, help="Dir of test outputs", required=True)
    parser.add_argument("--timeout", type=int, help="Set Playwright timeout in ms", default=20_000)
    parser.add_argument("--debug", action="store_true", help="Enable Playwright debug mode")
    parser.add_argument("--resume", action="store_true", help="Re-run the most recent test, skipping passed tests")
    parser.add_argument("--session_id", help="Session dir name (inside output_dir). Overwrites --resume")

    args = parser.parse_args()
    env_paths = [Path(s) for s in args.env_paths.split(";")]

@@ -27,17 +35,13 @@ def run():

    # ----------------------------- define session_id ---------------------------- #

    session_id = "test-" + get_datetime_string()
    if args.resume:
        latest_session_id = DirManager.get_latest_session_id(args.output_dir)
        if latest_session_id:
            session_id = DirManager.get_latest_session_id(args.output_dir)
    session_id = get_session_id(args.output_dir, args.resume, args.session_id)

    # ------------------------------- setup logging ------------------------------ #

    # todo: move to Coordinator
    DIR = DirManager(output_dir=args.output_dir, session_id=session_id)
    log_file = DIR.RECORDS / "coordinator.log"
    log_file = DIR.RESULTS / "coordinator.log"
    logger.add(log_file)

    # ---------------------------- initialize and run ---------------------------- #

@@ -49,7 +53,7 @@ def run():
        recipes_dir=args.recipes_dir,
        timeout=args.timeout,
    )
    coordinator.setup_test()
    coordinator.run_test()
    coordinator.prepare_tests()
    coordinator.run_tests()
    coordinator.combine_html()
    coordinator.collect_traces()
@@ -1,15 +1,18 @@
 import importlib
+import json
 import re
 import sys
 from pathlib import Path

 from loguru import logger
+from tabulate import tabulate  # type: ignore

 from pytest_abra.dir_manager import DirManager
 from pytest_abra.env_manager import EnvFile, EnvManager
 from pytest_abra.html_helper import merge_html_reports
 from pytest_abra.runner import Runner
-from pytest_abra.utils import rmtree
+from pytest_abra.shared_types import TestResult
+from pytest_abra.utils import generate_random_string, load_json_to_environ, rmtree


 class Coordinator:
@@ -32,21 +35,24 @@ class Coordinator:
         self.ENV = EnvManager(env_paths=env_paths, RUNNER_DICT=self.RUNNER_DICT)
         self.TIMEOUT = timeout

-    def setup_test(self) -> None:
-        logger.info("calling setup_test()")
+    def prepare_tests(self) -> None:
+        logger.info("calling prepare_tests()")
         self.DIR.create_all_dirs()
-        self.ENV.copy_env_files(self.DIR)
+        self.ENV.copy_env_files(self.ENV.env_files, self.DIR)
+        self.load_test_credentials(self.DIR)

-    def run_test(self) -> None:
-        logger.info("calling run_test()")
+    def run_tests(self) -> None:
+        logger.info("calling run_tests()")
         self.runners: list[Runner] = self._load_runners(self.ENV.env_files)
+        status_list: list[TestResult] = []
         for runner in self.runners:
-            runner.run_setups()
+            status_list.extend(runner.run_setups())
         for runner in self.runners:
-            runner.run_tests()
+            status_list.extend(runner.run_tests())
         for runner in self.runners:
-            runner.run_cleanups()
-        logger.info("run_test() finished")
+            status_list.extend(runner.run_cleanups())
+        status_table = tabulate([[t.test_name, t.status] for t in status_list], headers=["name", "status"])
+        logger.info(f"run_tests() finished\n{status_table}")

     def _load_runners(self, env_files: list[EnvFile]) -> list[Runner]:
         """Creates an instance of the correct Runner class for each given env file"""
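The results table logged at the end of `run_tests()` is produced by `tabulate` from the collected `(status, test_name)` tuples. A dependency-free sketch that mimics the same two-column layout (the function name and sample rows here are invented; the commit itself uses the `tabulate` package):

```python
def render_status_table(results: list[tuple[str, str]]) -> str:
    """Render (status, test_name) tuples as a simple two-column text table."""
    # header row first, then one row per result, name column padded to a common width
    rows = [("name", "status")] + [(name, status) for status, name in results]
    width = max(len(name) for name, _ in rows)
    return "\n".join(f"{name.ljust(width)}  {status}" for name, status in rows)


table = render_status_table([("passed", "setup_authentik.py"), ("failed", "test_authentik.py")])
print(table)
```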
@@ -58,13 +64,13 @@ class Coordinator:

     def combine_html(self) -> None:
         """combines all generated pytest html reports into one"""
-        in_dir_path = str(self.DIR.RECORDS / "html")
-        out_file_path = str(self.DIR.RECORDS / "full-report.html")
+        in_dir_path = str(self.DIR.RESULTS / "html")
+        out_file_path = str(self.DIR.RESULTS / "full-report.html")
         title = "combined.html"
         merge_html_reports(in_dir_path, out_file_path, title)

     def collect_traces(self):
-        """moves all traces into SESSION/RECORDS dir
+        """moves all traces into SESSION/RESULTS dir

         if tests are rerun and generate another trace, the new trace will get a unique name such as
         tracename-0
@@ -80,14 +86,34 @@ class Coordinator:
                 index += 1
             return get_new_path(root_dir, base_name, index=index)

-        trace_root_dir = self.DIR.RECORDS / "traces"
+        trace_root_dir = self.DIR.RESULTS / "traces"
         for f in trace_root_dir.rglob("*/trace.zip"):
-            new_path = get_new_path(self.DIR.RECORDS, f.parent.name)
+            new_path = get_new_path(self.DIR.RESULTS, f.parent.name)
             f.parent.rename(new_path)
         rmtree(trace_root_dir)

     @staticmethod
-    def create_runner_dict(recipes_dir: Path) -> dict[str, type["Runner"]]:
+    def load_test_credentials(DIR: DirManager):
+        """Load test user credentials. If not available, create them randomly.
+
+        Test users are created during testing but should be deleted after the test. In case test
+        users are not deleted after tests by accident, the user credentials are not known to an
+        attacker."""
+
+        test_credentials_path = DIR.STATES / "credentials_test.json"
+        if not test_credentials_path.is_file():
+            test_credentials = {
+                "TEST_USER": "test-" + generate_random_string(6),
+                "TEST_PASS": generate_random_string(12, punctuation=True),
+            }
+
+            with open(test_credentials_path, "w") as json_file:
+                json.dump(test_credentials, json_file)
+
+        load_json_to_environ(test_credentials_path)
+
+    @staticmethod
+    def create_runner_dict(recipes_dir: Path) -> dict[str, type[Runner]]:
         """Creates a dictionary holding all the RunnerClasses that can be discovered in recipes_dir

         example:
@@ -101,18 +127,19 @@ class Coordinator:
         because recipes_dir is added to sys.path.
         """

-        RUNNER_DICT: dict[str, type["Runner"]] = dict()
+        RUNNER_DICT: dict[str, type[Runner]] = dict()
         runner_discovery_pattern = re.compile("Runner.+")

         # make it possible to import modules from recipes_dir
         sys.path.append(recipes_dir.as_posix())

-        for module_path in recipes_dir.rglob("*/runner*.py"):
+        for module_path in recipes_dir.rglob("*/runner_*.py"):
             rel_path = module_path.relative_to(recipes_dir).as_posix().replace("/", ".").replace(".py", "")
             module = importlib.import_module(rel_path)
             runner_class_names = [name for name in dir(module) if runner_discovery_pattern.match(name)]
             assert len(runner_class_names) == 1
             runner_class_name = runner_class_names[0]
             RunnerClass: type[Runner] = getattr(module, runner_class_name)
+            RunnerClass._tests_path = module_path.parent
             RUNNER_DICT[RunnerClass.env_type] = RunnerClass
         return RUNNER_DICT
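`create_runner_dict` discovers runner classes by matching the names returned by `dir(module)` against `re.compile("Runner.+")`. A minimal sketch of that discovery step (the module contents below are invented):

```python
import re

# names as dir(module) might return them for a hypothetical runner module
module_names = ["Path", "Runner", "RunnerAuthentik", "Test", "__name__"]

runner_discovery_pattern = re.compile("Runner.+")
runner_class_names = [name for name in module_names if runner_discovery_pattern.match(name)]

# the base class "Runner" itself does not match, because ".+" requires
# at least one character after the literal prefix
print(runner_class_names)  # → ['RunnerAuthentik']
```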
@@ -3,7 +3,8 @@

 import os
 import re
-from datetime import datetime, timedelta
+
+# from datetime import datetime, timedelta
 from pathlib import Path
 from typing import Generator, Protocol, TypedDict
@@ -11,7 +12,7 @@ import pytest
 from dotenv import dotenv_values
 from icecream import ic  # type: ignore
 from imbox import Imbox  # type: ignore
-from playwright.sync_api import APIRequestContext, BrowserContext, Playwright, expect
+from playwright.sync_api import BrowserContext, expect
 from pytest import Parser

 from pytest_abra import BaseUrl, DirManager, EnvFile
@@ -49,9 +50,9 @@ def DIR(request) -> DirManager:

     DIR.OUTPUT
     DIR.SESSION
-    DIR.RECORDS
     DIR.STATES
-    DIR.RESULTS"""
+    DIR.RESULTS
+    DIR.STATUS"""

     output_dir = request.config.getoption("--output_dir")
     assert output_dir, "pytest argument --output_dir not set"
@@ -93,13 +94,13 @@ def URL(env_config: dict[str, str]) -> BaseUrl:


 @pytest.fixture(scope="session")
-def imap_client() -> None:
+def imap_client() -> Generator[Imbox, None, None]:
     """imap email client using credentials from environment variables"""

-    assert os.environ["IMAP_HOST"]
-    assert os.environ["IMAP_PORT"]
-    assert os.environ["IMAP_USER"]
-    assert os.environ["IMAP_PASS"]
+    assert os.environ["IMAP_HOST"], "required environment variable is undefined"
+    assert os.environ["IMAP_PORT"], "required environment variable is undefined"
+    assert os.environ["IMAP_USER"], "required environment variable is undefined"
+    assert os.environ["IMAP_PASS"], "required environment variable is undefined"

     imbox = Imbox(
         hostname=os.environ["IMAP_HOST"],
@@ -138,9 +139,8 @@ def imap_recent_messages(imap_client: Imbox) -> list[Message]:
     for uid, message in messages:
         print(uid, message.subject, message.date)"""

-    N_MINUTES = 30
-
-    n_minutes_ago = datetime.now() - timedelta(minutes=N_MINUTES)
+    # N_MINUTES = 30
+    # n_minutes_ago = datetime.now() - timedelta(minutes=N_MINUTES)
     uids: list[bytes] = []
     messages: list[Message] = []
     # for uid, message in imap_client.messages(date__gt=n_minutes_ago):
@@ -150,14 +150,3 @@ def imap_recent_messages(imap_client: Imbox) -> list[Message]:
             messages.append(message)

     return messages
-
-
-@pytest.fixture(scope="session")
-def api_request_context(
-    playwright: Playwright,
-    DIR: DirManager,
-) -> Generator[APIRequestContext, None, None]:
-    state_file = DIR.STATES / "authentik_admin_state.json"
-    request_context = playwright.request.new_context(storage_state=state_file)
-    yield request_context
-    request_context.dispose()
@@ -11,11 +11,11 @@ class DirManager:
     The structure is as follows:
     tests dir/
         session_id-1/
-            records
+            results
             states
+            status
         session_id-2/
-            records
+            results
         ...
     """
@@ -32,11 +32,11 @@ class DirManager:
         dirs: list[Path] = [
             self.OUTPUT_DIR,
             self.SESSION,
-            self.RECORDS,
-            self.HTML,
             self.STATES,
             self.ENV_FILES,
+            self.RESULTS,
+            self.HTML,
+            self.STATUS,
         ]
         for d in dirs:
             d.mkdir(exist_ok=True)
@@ -49,14 +49,6 @@ class DirManager:
     def SESSION(self):
         return self.OUTPUT_DIR / self.session_id

-    @property
-    def RECORDS(self):
-        return self.SESSION / "records"
-
-    @property
-    def HTML(self):
-        return self.RECORDS / "html"
-
     @property
     def STATES(self):
         return self.SESSION / "states"
@@ -69,6 +61,14 @@ class DirManager:
     def RESULTS(self):
         return self.SESSION / "results"

+    @property
+    def HTML(self):
+        return self.RESULTS / "html"
+
+    @property
+    def STATUS(self):
+        return self.SESSION / "status"
+
     @property
     def RECIPES(self):
         return self.recipes_dir
@@ -80,7 +80,13 @@ class DirManager:

     @staticmethod
     def get_latest_session_id(output_dir: Path) -> Optional[str]:
-        """returns the name of the newest dir inside of output_dir"""
+        """returns the name of the newest dir inside of output_dir
+
+        if output_dir does not exist or is empty, None is returned"""
+
+        if not output_dir.is_dir():
+            return None

         all_dirs = [d for d in output_dir.iterdir() if d.is_dir()]
         if all_dirs:
             newest_dir: Path = max(all_dirs, key=lambda x: x.stat().st_ctime)
@@ -4,9 +4,10 @@ from typing import TYPE_CHECKING, NamedTuple

 from dotenv import dotenv_values

+from pytest_abra.utils import files_are_same
+
 if TYPE_CHECKING:
-    from pytest_abra.dir_manager import DirManager
-    from pytest_abra.runner import Runner
+    from pytest_abra import DirManager, Runner


 class EnvFile(NamedTuple):
@@ -45,6 +46,7 @@ class EnvManager:
     def _get_dependency_rules(env_files: list[EnvFile], RUNNER_DICT: dict[str, type["Runner"]]) -> list[DependencyRule]:
         dependency_rules: list[DependencyRule] = []
         for env_file in env_files:
+            assert env_file.env_type in RUNNER_DICT, f"no runner for env_type={env_file.env_type} found in RUNNER_DICT"
             child_runner_class = RUNNER_DICT[env_file.env_type]
             for dependency in child_runner_class.dependencies:
                 dependency_rule = DependencyRule(child=child_runner_class.env_type, dependency=dependency)
@@ -93,11 +95,25 @@ class EnvManager:
                 "Could not resolve test order. This is possibly due to a circular dependency (a on b, b on c, c on a)"
             )

-    def copy_env_files(self, DIR: "DirManager") -> None:
-        """Copies all env files to STATES/env_files. Files will be renamed to
-        <index>-<env_type>-<original_name>
-        00-authentik-login.test.dev.local-it.cloud.env"""
+    @staticmethod
+    def copy_env_files(env_files: list[EnvFile], DIR: "DirManager") -> None:
+        """Copies all env files to STATES/env_files.
+
+        Files will be renamed to <index>-<env_type>-<original_name>. Example:
+        00-authentik-login.test.dev.local-it.cloud.env
+
+        Does nothing when called twice with same env_files. Throws an AssertionError if either
+        contents or filenames of env_files have changed (probably test rerun with different input)"""

-        for index, env_file in enumerate(self.env_files):
+        dir_was_not_empty = len(list(DIR.ENV_FILES.iterdir())) > 0
+
+        for index, env_file in enumerate(env_files):
             file_name = "-".join([str(index).zfill(2), env_file.env_type, env_file.env_path.name])
+            if dir_was_not_empty:
+                # check that the copied env files have not changed
+                present_files = [f.name for f in DIR.ENV_FILES.iterdir()]
+                assert (
+                    file_name in present_files and files_are_same(env_file.env_path, DIR.ENV_FILES / file_name)
+                ), "It appears that you are resuming a test while the input env files have changed. Start a new test instead"
+
             shutil.copy(env_file.env_path, DIR.ENV_FILES / file_name)
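The copied env files are renamed to `<index>-<env_type>-<original_name>`, with the index zero-padded via `zfill(2)` so lexical sorting preserves test order. A quick sketch of that naming scheme (the env types and file names below are invented):

```python
# hypothetical (env_type, original_name) pairs in resolved test order
env_files = [
    ("authentik", "login.test.dev.local-it.cloud.env"),
    ("nextcloud", "files.example.org.env"),
]

copied_names = [
    "-".join([str(index).zfill(2), env_type, original_name])
    for index, (env_type, original_name) in enumerate(env_files)
]
print(copied_names[0])  # → 00-authentik-login.test.dev.local-it.cloud.env
```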
@@ -6,9 +6,10 @@ from typing import TYPE_CHECKING, Callable, NamedTuple
 import pytest
 from loguru import logger

+from pytest_abra.shared_types import STATUS, TestResult
+
 if TYPE_CHECKING:
-    from pytest_abra.coordinator import Coordinator
-    from pytest_abra.env_manager import EnvFile
+    from pytest_abra import Coordinator, DirManager, EnvFile


 class ConditionArgs(NamedTuple):
@@ -30,6 +31,7 @@ class Runner:
     tests: list[Test] = []
     cleanups: list[Test] = []
     dependencies: list[str] = []
+    _tests_path: Path = Path()

     def __init__(self, coordinator: "Coordinator", runner_index: int):
         self.coordinator = coordinator
@@ -41,62 +43,58 @@ class Runner:

         logger.info(f"creating instance of {self.__class__.__name__}")

-    def run_setups(self):
+    def run_setups(self) -> list[TestResult]:
         """runs the setup scripts if available"""
-        self._execute_test_list(self.setups)
+        return self._execute_tests_list(self.setups)

-    def run_tests(self):
+    def run_tests(self) -> list[TestResult]:
         """runs the test scripts if available"""
-        self._execute_test_list(self.tests)
+        return self._execute_tests_list(self.tests)

-    def run_cleanups(self):
+    def run_cleanups(self) -> list[TestResult]:
         """runs the cleanup scripts if available"""
-        self._execute_test_list(self.cleanups)
+        return self._execute_tests_list(self.cleanups)

-    def _execute_test_list(self, test_list: list[Test]):
-        """runs the main test script and if available and sub test scripts if their running condition is met"""
+    def _execute_tests_list(self, test_list: list[Test]) -> list[TestResult]:
+        """Runs all tests given in the list. If a condition is defined, it is also checked."""
         # check if required dependencies have passed
         if not self._dependencies_passed():
             logger.warning(f"skipping run_tests() of {self.env_type} (one or more dependencies have not passed)")
-            return
+            return [TestResult("skipped_dep", test.test_file) for test in test_list]

-        for test in test_list:
-            self._run_test_with_checks(test)
-
-    def _run_test_with_checks(self, test: Test):
-        # dependency passed: true / false
-        # already_passed: true / false
-        # prevent_skip: true / false
-        # condition_available: true / pass
-        # condition_met: true / false
+        return [self._run_test_with_checks(test) for test in test_list]
+
+    def _run_test_with_checks(self, test: Test) -> TestResult:
         identifier_string = self.combine_names(self.env_type, test.test_file)

-        results = list(self.DIR.RECIPES.rglob(test.test_file))
-        assert len(results) == 1, f"{test.test_file} should exist exactly 1 time, but found {len(results)} times"
-        full_test_path = results[0]
+        test_files = list(self._tests_path.rglob(test.test_file))
+        assert len(test_files) == 1, f"{test.test_file} should exist exactly once, but found {len(test_files)} times"
+        full_test_path = test_files[0]

         # check if test already passed
-        if self._is_test_passed(identifier_string, remove_existing=True):
+        if self._is_test_passed(self.DIR, identifier_string):
             if test.prevent_skip:
                 logger.info(f"continuing {identifier_string} (passed before but prevent_skip=True)")
             else:
                 logger.info(f"skipping {identifier_string} (test has passed)")
-                return
+                return TestResult("skipped_pas", test.test_file)

         if test.condition:
-            condition_result = self._run_condition(test.condition)
+            condition_result = self._call_condition_function(test.condition)
             if not condition_result:
                 # test condition is defined but not met
                 logger.info(f"skipping {identifier_string} (test condition is not met)")
-                return
+                self._create_status_file(self.DIR, status="skipped_con", identifier_string=identifier_string)
+                return TestResult("skipped_con", test.test_file)

         # test condition is undefined or met
         logger.info(f"running {identifier_string}")
-        result = self._call_pytest(full_test_path)
-        self._create_result_file(result=result, identifier_string=identifier_string)
+        exit_code = self._call_pytest(full_test_path)
+        status = self.exit_code_to_str(exit_code)
+        self._create_status_file(self.DIR, status=status, identifier_string=identifier_string)
+        return TestResult(status, test.test_file)

-    def _run_condition(self, condition_function: Callable[[ConditionArgs], bool]):
+    def _call_condition_function(self, condition_function: Callable[[ConditionArgs], bool]):
         """run the test condition function with multiple arguments"""
         # more arguments can be added later without changing the function signature
         conditon_args = ConditionArgs(
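`_run_test_with_checks` decides in a fixed order: skip if the test already passed (unless `prevent_skip` forces a rerun), then skip if a defined condition is not met, otherwise run. A pure-function sketch of that decision order (the function name and return values are illustrative, not the runner's actual API):

```python
from typing import Callable, Optional


def decide(already_passed: bool, prevent_skip: bool,
           condition: Optional[Callable[[], bool]]) -> str:
    """Mirror the skip/run decision order of _run_test_with_checks."""
    if already_passed and not prevent_skip:
        return "skipped_pas"          # passed before, nothing to do
    if condition is not None and not condition():
        return "skipped_con"          # condition defined but not met
    return "run"                      # condition undefined or met


print(decide(True, False, None))            # already passed → skipped_pas
print(decide(True, True, None))             # passed, but prevent_skip forces a rerun
print(decide(False, False, lambda: False))  # condition not met → skipped_con
```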
@@ -106,24 +104,40 @@ class Runner:
         )
         return condition_function(conditon_args)

-    def _is_test_passed(self, identifier_string: str, remove_existing: bool = False) -> bool:
-        """returns True if the selected test matching identifier_string already passed
-
-        This is determined by the presence of a specific output file in the RESULTS folder that
-        matches identifier_string
-
-        remove_existing: If True, result files matching identifier_string with a status
-        other than 'passed' will be deleted"""
-
-        already_passed = False
-        for result in self.DIR.RESULTS.glob("*"):
-            if identifier_string in result.name:
-                # process any result file (passed / failed / skipped) if it exists
-                if "passed" in result.name:
-                    already_passed = True
-                elif remove_existing:
-                    result.unlink()
-        return already_passed
+    @classmethod
+    def _create_status_file(
+        cls,
+        DIR: "DirManager",
+        status: STATUS,
+        identifier_string: str,
+    ):
+        """create a status file to indicate a passed/failed/skipped test"""
+
+        # remove matching files
+        for status_file in cls._get_status_files(DIR, identifier_string):
+            status_file.unlink()
+
+        full_name = cls.combine_names(status, identifier_string)
+        file_path = DIR.STATUS / full_name
+        with open(file_path, "w") as _:
+            pass  # create empty file
+
+    @staticmethod
+    def _get_status_files(DIR: "DirManager", identifier_string: str) -> list[Path]:
+        return [f for f in DIR.STATUS.glob("*") if identifier_string in f.name]
+
+    @classmethod
+    def _is_test_passed(cls, DIR: "DirManager", identifier_string: str) -> bool:
+        """returns True if the selected test matching identifier_string already passed"""
+
+        matching_files = cls._get_status_files(DIR, identifier_string)
+        if len(matching_files) == 1:
+            status_file = matching_files[0]
+            if "passed" in status_file.name:
+                return True
+        elif len(matching_files) > 1:
+            logger.warning("more than one matching status file found")
+        return False

     def _call_pytest(self, full_test_path: Path) -> int:
         """runs pytest programmatically with a specific file
@@ -155,7 +169,7 @@ class Runner:
         # --output only works with the given context and page fixture
         # folder needs to be unique! traces will not appear, if every pytest run has same output dir
         command_arguments.append("--output")
-        command_arguments.append(str(self.DIR.RECORDS / "traces" / full_test_path.stem))
+        command_arguments.append(str(self.DIR.RESULTS / "traces" / full_test_path.stem))

         # tracing
         command_arguments.append("--tracing")  # "on", "off", "retain-on-failure"
@@ -170,28 +184,16 @@ class Runner:
         # command_arguments.append("--headed")

         # html report. Will be combined into one file later.
-        command_arguments.append(f"--html={self.DIR.RECORDS / 'html' / full_test_path.with_suffix('.html').name}")
+        command_arguments.append(f"--html={self.DIR.RESULTS / 'html' / full_test_path.with_suffix('.html').name}")

         return pytest.main(command_arguments)

-    def _create_result_file(
-        self,
-        result: int,
-        identifier_string: str,
-    ):
-        """create result file to indicated passed/failed or skipped test"""
-
-        full_name = self.combine_names(self.result_int_to_str(result), identifier_string)
-        file_path = self.DIR.RESULTS / full_name
-        with open(file_path, "w") as _:
-            pass  # create empty file
-
     def _dependencies_passed(self):
         """returns true if all setups of each dependency have passed"""

         # todo: what about conditional setups?

-        passed_tests = [r.name for r in self.DIR.RESULTS.glob("*") if "passed" in r.name]
+        passed_tests = [r.name for r in self.DIR.STATUS.glob("*") if "passed" in r.name]
         results = []
         for dependency in self.dependencies:
             dependency_runner = self.coordinator.RUNNER_DICT[dependency]
@@ -201,11 +203,9 @@ class Runner:
         return all(results)

     @staticmethod
-    def result_int_to_str(result_int: int) -> str:
+    def exit_code_to_str(result_int: int) -> STATUS:
         """converts the pytest exit code (int) into a meaningful string"""
         match result_int:
             case -1:
                 return "skipped"
             case 0:
                 return "passed"
             case _:
pytest_abra/shared_types.py (new file, 16 lines)

@@ -0,0 +1,16 @@
+from typing import Literal, NamedTuple
+
+"""
+passed: test passed
+failed: test failed
+skipped_con: test skipped because condition was not met
+skipped_dep: test skipped because dependencies did not finish
+skipped_pas: test skipped because it passed before
+"""
+
+STATUS = Literal["passed", "failed", "skipped_con", "skipped_dep", "skipped_pas"]
+
+
+class TestResult(NamedTuple):
+    status: STATUS
+    test_name: str
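The new shared_types module gives every test outcome a typed `(status, test_name)` pair. A small usage sketch (the result list below is invented), e.g. for aggregating outcomes the way the coordinator's summary table does:

```python
from collections import Counter
from typing import Literal, NamedTuple

STATUS = Literal["passed", "failed", "skipped_con", "skipped_dep", "skipped_pas"]


class TestResult(NamedTuple):
    status: STATUS
    test_name: str


# hypothetical outcomes of one runner's setup/test/cleanup phases
results = [
    TestResult("passed", "setup_authentik.py"),
    TestResult("skipped_pas", "test_authentik_blueprint_api.py"),
    TestResult("passed", "cleanup_authentik.py"),
]

counts = Counter(r.status for r in results)
print(counts["passed"])  # → 2
```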
@@ -1,8 +1,17 @@
+import json
+import os
+import random
+import string
 from dataclasses import dataclass
 from datetime import datetime
 from pathlib import Path
+from typing import Optional
 from urllib.parse import urlunparse

+from loguru import logger
+
+from pytest_abra.dir_manager import DirManager
+

 @dataclass
 class BaseUrl:
@@ -24,7 +33,7 @@ def get_datetime_string() -> str:
     return current_datetime.strftime("%Y-%m-%d-%H-%M-%S")


-def rmtree(root_dir: Path):
+def rmtree(root_dir: Path) -> None:
     """removes a folder with content recursively"""
     if not root_dir.is_dir():
         return
@@ -35,3 +44,43 @@ def rmtree(root_dir: Path):
             child.unlink()

     root_dir.rmdir()
+
+
+def generate_random_string(length: int, punctuation=False) -> str:
+    """returns a random string of the given length"""
+    characters = string.ascii_letters + string.digits
+    if punctuation:
+        characters += string.punctuation
+    random_string = "".join(random.choice(characters) for _ in range(length))
+    return random_string
+
+
+def load_json_to_environ(cred_file: Path) -> None:
+    """Load the contents of a json file directly into os.environ. Variable names are inherited"""
+
+    if not cred_file.is_file():
+        logger.warning(f"{cred_file} could not be found, no credentials loaded")
+        return
+
+    with open(cred_file, "r") as f:
+        CREDENTIALS = json.load(f)
+
+    for key, value in CREDENTIALS.items():
+        os.environ[key] = value
+
+
+def get_session_id(args_output_dir: Path, args_resume: bool, args_session_id: Optional[str]) -> str:
+    """converts the cli arguments to the correct session_id"""
+    session_id = args_session_id
+    if not session_id:
+        session_id = "test-" + get_datetime_string()
+        if args_resume:
+            latest_session_id = DirManager.get_latest_session_id(args_output_dir)
+            if latest_session_id:
+                session_id = latest_session_id
+    return session_id
+
+
+def files_are_same(file1: Path, file2: Path) -> bool:
+    with open(file1, "r") as f1, open(file2, "r") as f2:
+        return f1.read() == f2.read()
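`generate_random_string` and `load_json_to_environ` together implement the random test-credential flow: credentials are written once to a JSON file under STATES and then exported as environment variables on every run. A self-contained sketch of that round trip (the temp path and key name are invented for the demo):

```python
import json
import os
import random
import string
import tempfile
from pathlib import Path


def generate_random_string(length: int, punctuation: bool = False) -> str:
    """Random string over letters and digits, optionally with punctuation."""
    characters = string.ascii_letters + string.digits
    if punctuation:
        characters += string.punctuation
    return "".join(random.choice(characters) for _ in range(length))


with tempfile.TemporaryDirectory() as tmp:
    cred_file = Path(tmp) / "credentials_test.json"
    creds = {"TEST_USER": "test-" + generate_random_string(6)}
    cred_file.write_text(json.dumps(creds))

    # load_json_to_environ does essentially this: every JSON key becomes an env var
    for key, value in json.loads(cred_file.read_text()).items():
        os.environ[key] = value

print(os.environ["TEST_USER"].startswith("test-"))  # → True
```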
recipes/authentik/tests_authentik/cleanup_authentik.py (new file, 40 lines)

@@ -0,0 +1,40 @@
+import json
+import os
+import re
+
+from playwright.sync_api import BrowserContext
+
+from pytest_abra import BaseUrl, DirManager
+
+ADMIN_USER = os.environ["ADMIN_USER"]
+ADMIN_PASS = os.environ["ADMIN_PASS"]
+TEST_USER = os.environ["TEST_USER"]
+TEST_PASS = os.environ["TEST_PASS"]
+
+
+def remove_user(admin_context: BrowserContext, URL: BaseUrl):
+    """removes TEST_USER account from authentik"""
+    page = admin_context.new_page()
+    page.goto(URL.get())
+    page.get_by_role("link", name="Admin Interface").click()
+    nav = page.locator("ak-sidebar-item", has_text=re.compile(r"Directory|Verzeichnis"))
+    nav.click()
+    nav.get_by_role("link", name=re.compile(r"Users|Benutzer")).click()
+
+    name_pattern = re.compile(TEST_USER)
+    page.get_by_role("row", name=name_pattern).get_by_label("").check()
+    page.get_by_role("button", name=re.compile(r"Löschen|Delete")).click()
+    page.get_by_role("dialog").get_by_role("button", name=re.compile(r"Löschen|Delete")).click()
+
+
+def cleanup_delete_user(
+    context: BrowserContext, env_config: dict[str, str], DIR: DirManager, URL: BaseUrl, check_if_user_exists
+):
+    # load admin cookies to context
+    state_file = DIR.STATES / "authentik_admin_state.json"
+    storage_state = json.loads(state_file.read_bytes())
+    context.add_cookies(storage_state["cookies"])
+
+    if check_if_user_exists(context, env_config, URL):
+        remove_user(context, URL)
+    assert not check_if_user_exists(context, env_config, URL)
recipes/authentik/tests_authentik/conftest.py (new file, 46 lines)

@@ -0,0 +1,46 @@
+import os
+import re
+from typing import Callable, Generator
+
+import pytest
+from playwright.sync_api import APIRequestContext, BrowserContext, Playwright, TimeoutError
+
+from pytest_abra import BaseUrl, DirManager
+
+
+@pytest.fixture(scope="session")
+def api_request_context(
+    playwright: Playwright,
+    DIR: DirManager,
+) -> Generator[APIRequestContext, None, None]:
+    state_file = DIR.STATES / "authentik_admin_state.json"
+    request_context = playwright.request.new_context(storage_state=state_file)
+    yield request_context
+    request_context.dispose()
+
+
+@pytest.fixture
+def check_if_user_exists() -> Callable[[BrowserContext, dict[str, str], BaseUrl], bool]:
+    """This is actually a normal function supplied by a fixture. We do this, because imports from
+    tests_authentik are difficult as it is not part of the python environment. We expect
+        from X import function
+    to fail here. However, pytest handles the loading of fixtures from conftest.py automatically,
+    hence we use that to load functions too."""
+
+    def inner_check_if_user_exists(admin_context: BrowserContext, env_config: dict[str, str], URL: BaseUrl) -> bool:
+        # go to admin page
+        page = admin_context.new_page()
+        page.goto(URL.get())
+        page.get_by_role("link", name="Admin Interface").click()
+        nav = page.locator("ak-sidebar-item", has_text=re.compile(r"Directory|Verzeichnis"))
+        nav.click()
+        nav.get_by_role("link", name=re.compile(r"Users|Benutzer")).click()
+
+        user = page.get_by_text(os.environ["TEST_USER"])
+        try:
+            user.wait_for(state="visible", timeout=5_000)
+            return True
+        except TimeoutError:
+            return False
+
+    return inner_check_if_user_exists
@@ -3,13 +3,13 @@ import json

 import pytest
 from playwright.sync_api import BrowserContext, Page

-from pytest_abra.dir_manager import DirManager
-from pytest_abra.utils import BaseUrl
+from pytest_abra import BaseUrl, DirManager


 @pytest.fixture
 def authentik_admin_context(context: BrowserContext, DIR: DirManager) -> BrowserContext:
     state_file = DIR.STATES / "authentik_admin_state.json"
+    assert state_file.is_file(), "authentik setup did not finish successfully"
     storage_state = json.loads(state_file.read_bytes())
     context.add_cookies(storage_state["cookies"])
     return context
@@ -27,6 +27,7 @@ def authentik_admin_page(authentik_admin_context: BrowserContext, DIR: DirManage
 @pytest.fixture
 def authentik_user_context(context: BrowserContext, DIR: DirManager) -> BrowserContext:
     state_file = DIR.STATES / "authentik_user_state.json"
+    assert state_file.is_file(), "authentik setup did not finish successfully"
     storage_state = json.loads(state_file.read_bytes())
     context.add_cookies(storage_state["cookies"])
     return context
@@ -5,3 +5,4 @@ class RunnerAuthentik(Runner):
     env_type = "authentik"
     setups = [Test(test_file="setup_authentik.py")]
     tests = [Test(test_file="test_authentik_blueprint_api.py")]
+    cleanups = [Test(test_file="cleanup_authentik.py")]
@@ -4,21 +4,18 @@ import re

 from playwright.sync_api import BrowserContext, expect

-from pytest_abra.dir_manager import DirManager
-from pytest_abra.utils import BaseUrl
+from pytest_abra import BaseUrl, DirManager

 ADMIN_USER = os.environ["ADMIN_USER"]
 ADMIN_PASS = os.environ["ADMIN_PASS"]
 TEST_USER = os.environ["TEST_USER"]
 TEST_PASS = os.environ["TEST_PASS"]


-TESTUSER = {"username": "testuser", "name": "Test User", "password": "test123", "email": "test@example.com"}
-
-
-def setup_admin_state(context: BrowserContext, env_config: dict[str, str], DIR: DirManager):
+def setup_admin_state(context: BrowserContext, env_config: dict[str, str], DIR: DirManager, URL: BaseUrl):
     # go to page
     page = context.new_page()
-    url = "https://" + env_config["DOMAIN"]
-    page.goto(url)
+    page.goto(URL.get())

     # check welcome message
     welcome_message = env_config.get("welcome_message")
@@ -35,20 +32,6 @@ def setup_admin_state(context: BrowserContext, env_config: dict[str, str], DIR:
     context.storage_state(path=DIR.STATES / "authentik_admin_state.json")


-def check_if_user_exists(admin_context: BrowserContext, env_config: dict[str, str], URL: BaseUrl):
-    # go to admin page
-    page = admin_context.new_page()
-    page.goto(URL.get())
-    page.get_by_role("link", name="Admin Interface").click()
-    nav = page.locator("ak-sidebar-item", has_text=re.compile(r"Directory|Verzeichnis"))
-    nav.click()
-    nav.get_by_role("link", name=re.compile(r"Users|Benutzer")).click()
-
-    user = page.get_by_text(TESTUSER["username"])
-    user.wait_for(state="visible")
-    return user.is_visible()


 def create_invite_link(admin_context: BrowserContext, env_config: dict[str, str], URL: BaseUrl):
     # go to admin page
     page = admin_context.new_page()
@@ -85,20 +68,23 @@ def create_user(user_context: BrowserContext, invitelink):
     page = user_context.new_page()
     page.goto(invitelink)
     page.get_by_placeholder("Benutzername").click()
-    page.get_by_placeholder("Benutzername").fill(TESTUSER["username"])
+    page.get_by_placeholder("Benutzername").fill(TEST_USER)
     page.locator('input[name="name"]').click()
-    page.locator('input[name="name"]').fill(TESTUSER["name"])
+    page.locator('input[name="name"]').fill("name")
     page.locator('input[name="email"]').click()
-    page.locator('input[name="email"]').fill(TESTUSER["email"])
+    email = os.environ["IMAP_EMAIL"] if "IMAP_EMAIL" in os.environ else "test@domain.com"
+    page.locator('input[name="email"]').fill(email)
     page.get_by_placeholder("Passwort", exact=True).click()
-    page.get_by_placeholder("Passwort", exact=True).fill(TESTUSER["password"])
+    page.get_by_placeholder("Passwort", exact=True).fill(TEST_PASS)
     page.get_by_placeholder("Passwort (wiederholen)").click()
-    page.get_by_placeholder("Passwort (wiederholen)").fill(TESTUSER["password"])
+    page.get_by_placeholder("Passwort (wiederholen)").fill(TEST_PASS)
     page.get_by_role("button", name="Weiter").click()
     expect(page.locator("ak-library")).to_be_visible()


-def setup_user_state(context: BrowserContext, env_config: dict[str, str], DIR: DirManager, URL: BaseUrl):
+def setup_user_state(
+    context: BrowserContext, env_config: dict[str, str], DIR: DirManager, URL: BaseUrl, check_if_user_exists
+):
     # load admin cookies to context
     state_file = DIR.STATES / "authentik_admin_state.json"
     storage_state = json.loads(state_file.read_bytes())
@@ -17,7 +17,7 @@ def test_authentik_blueprint_status(
     blueprints = api_request_context.get(URL.get("api/v3/managed/blueprints"))
     assert blueprints.ok
     blueprints_data = blueprints.json()
-    ic(blueprints_data)
+    # ic(blueprints_data)

     # fake failed blueprint
     # blueprints_data["results"][10]["status"] = "failed"
@@ -4,8 +4,7 @@ import os
 import pytest
 from playwright.sync_api import BrowserContext, Page

-from pytest_abra.dir_manager import DirManager
-from pytest_abra.utils import BaseUrl
+from pytest_abra import BaseUrl, DirManager

 pytest_plugins = "authentik.tests_authentik.fixtures_authentik"
@@ -2,8 +2,7 @@ import re
 from playwright.sync_api import Page, expect

-from pytest_abra.dir_manager import DirManager
-from pytest_abra.utils import BaseUrl
+from pytest_abra import BaseUrl, DirManager

 # url dashboard
 # https://files.test.dev.local-it.cloud/apps/dashboard/
@@ -1,10 +1,11 @@
 from pytest_abra import ConditionArgs, Runner, Test


-def condition_has_locale(args: ConditionArgs) -> bool:
+def env_config_has_locale(args: ConditionArgs) -> bool:
     env_config = args.env_config
-    if "de" in env_config.get("LOCALE", ""):
+    if "LOCALE" in env_config:
         return True
     else:
         return False
@@ -16,6 +17,6 @@ class RunnerWordpress(Runner):
         Test(test_file="setup_wordpress_trigger_email.py"),
     ]
     tests = [
-        Test(test_file="test_wordpress_receive_email.py", prevent_skip=True),
-        # Test(condition=condition_has_locale, test_file="test_wordpress_localization.py"),
+        # Test(test_file="test_wordpress_receive_email.py", prevent_skip=True),
+        Test(condition=env_config_has_locale, test_file="test_wordpress_localization.py"),
     ]
@@ -1,14 +1,13 @@
 import pytest
 from playwright.sync_api import BrowserContext, Page, expect

-from pytest_abra.dir_manager import DirManager
+from pytest_abra import BaseUrl, DirManager


-def test_visit_from_domain(authentik_admin_context: BrowserContext, env_config: dict[str, str]):
+def test_visit_from_domain(authentik_admin_context: BrowserContext, URL: BaseUrl):
     """visit wordpress directly with admin_session, expect not to be logged in"""
     page = authentik_admin_context.new_page()
-    url = "https://" + env_config["DOMAIN"]
-    page.goto(url)
+    page.goto(URL.get())
     with pytest.raises(AssertionError):
         # look for admin bar
         expect(page.locator("#wpadminbar")).to_be_visible(timeout=3_000)
@@ -2,14 +2,13 @@
 from playwright.sync_api import BrowserContext, expect

-from pytest_abra.dir_manager import DirManager
+from pytest_abra import BaseUrl


-def test_welcome_message(context: BrowserContext, env_config: dict[str, str], DIR: DirManager):
+def test_de_welcome_message(context: BrowserContext, env_config: dict[str, str], URL: BaseUrl):
     page = context.new_page()
-    url = "https://" + env_config["DOMAIN"]
-    page.goto(url)
+    page.goto(URL.get())

     expect(page.locator(".wp-block-heading")).to_be_visible()
-    if "locale" in env_config and "de" in env_config["locale"]:
+    if "de" in env_config.get("locale", ""):
         expect(page.get_by_role("heading")).to_have_text("Willkommen bei WordPress!")
@@ -1,8 +1,10 @@
+import pytest
 from icecream import ic

 from pytest_abra.custom_fixtures import Message


+@pytest.mark.skip
 def test_demo(imap_recent_messages: list[Message]):
     for message in imap_recent_messages:
         print(dir(message))
tests/test_cli.py (new file, +54)
@@ -0,0 +1,54 @@
import re
import time
from pathlib import Path

import pytest

from pytest_abra import DirManager
from pytest_abra.utils import get_session_id


def test_get_session_id_random(tmp_path: Path):
    args_output_dir = tmp_path
    args_resume = False
    args_session_id = None
    session_id = get_session_id(args_output_dir, args_resume, args_session_id)
    assert re.search(r"\d+-\d+-\d+", session_id)


def test_get_session_id_explicit1(tmp_path: Path):
    args_output_dir = tmp_path
    args_resume = False
    args_session_id = "abc"
    session_id = get_session_id(args_output_dir, args_resume, args_session_id)
    assert session_id == "abc"


def test_get_session_id_explicit2(tmp_path: Path):
    args_output_dir = tmp_path
    args_resume = True
    args_session_id = "abc"
    session_id = get_session_id(args_output_dir, args_resume, args_session_id)
    assert session_id == "abc"


@pytest.mark.slow
def test_get_session_id_integration(tmp_path: Path):
    assert len(list(tmp_path.iterdir())) == 0
    session_id_1 = get_session_id(args_output_dir=tmp_path, args_resume=False, args_session_id=None)

    DIR = DirManager(output_dir=tmp_path, session_id=session_id_1)
    DIR.create_all_dirs()
    assert len(list(tmp_path.iterdir())) == 1

    time.sleep(1.1)  # get_session_id won't be unique if called without time passed
    session_id_2 = get_session_id(args_output_dir=tmp_path, args_resume=False, args_session_id=None)
    DIR = DirManager(output_dir=tmp_path, session_id=session_id_2)
    DIR.create_all_dirs()
    assert len(list(tmp_path.iterdir())) == 2

    session_id_3 = get_session_id(args_output_dir=tmp_path, args_resume=True, args_session_id=None)
    assert session_id_2 == session_id_3

    session_id_4 = get_session_id(args_output_dir=tmp_path, args_resume=True, args_session_id="abc")
    assert session_id_4 == "abc"
tests/test_cli_full_integration.py (new file, +68)
@@ -0,0 +1,68 @@
import subprocess
from pathlib import Path

import pytest

from pytest_abra import DirManager
from pytest_abra.utils import load_json_to_environ


@pytest.fixture(scope="session")
def session_tmp_path_testout(tmp_path_factory: pytest.TempPathFactory) -> Path:
    return tmp_path_factory.mktemp("test_out")


@pytest.mark.slow
def test_abratest_cli_full_integration(session_tmp_path_testout: Path):
    """run abratest against the dev instance"""

    # --------------------- load credentials to env variables -------------------- #

    cred_file = Path("credentials.json")
    load_json_to_environ(cred_file)

    # --------------------------------- env files -------------------------------- #

    ENV_FILES_ROOT = Path("./envfiles").resolve()
    ENV_FILES = [
        ENV_FILES_ROOT / "login.test.dev.local-it.cloud.env",  # authentik
        ENV_FILES_ROOT / "blog.test.dev.local-it.cloud.env",  # wordpress
        ENV_FILES_ROOT / "files.test.dev.local-it.cloud.env",  # nextcloud
    ]
    ENV_PATHS = ";".join([x.as_posix() for x in ENV_FILES])

    # ----------------------------------- dirs ----------------------------------- #

    RECIPES_DIR = Path("./recipes").resolve()
    # OUTPUT_DIR = Path("./test-output").resolve()
    OUTPUT_DIR = session_tmp_path_testout.resolve()

    # ------------------------------------ run ----------------------------------- #

    result = subprocess.run(
        [
            "abratest",
            "--env_paths",
            ENV_PATHS,
            "--recipes_dir",
            RECIPES_DIR,
            "--output_dir",
            OUTPUT_DIR,
            "--session_id",
            "abc",
        ]
    )

    assert result.returncode == 0


@pytest.mark.slow
def test_results_abra(session_tmp_path_testout: Path):
    OUTPUT_DIR = session_tmp_path_testout.resolve()

    DIR = DirManager(output_dir=OUTPUT_DIR, session_id="abc")
    all_files = [f.name for f in DIR.STATUS.rglob("*")]
    passed_files = [f.name for f in DIR.STATUS.rglob("passed-*")]
    failed_files = set(all_files) - set(passed_files)
    assert len(all_files) > 0
    assert not failed_files, failed_files
tests/test_coordinator.py (new file, +43)
@@ -0,0 +1,43 @@
import os
import shutil
from pathlib import Path

import pytest

from pytest_abra.coordinator import Coordinator
from pytest_abra.dir_manager import DirManager


def test_load_test_credentials(tmp_path: Path):
    assert "TEST_USER" not in os.environ

    DIR = DirManager(output_dir=tmp_path, session_id="abc")
    DIR.create_all_dirs()

    Coordinator.load_test_credentials(DIR)
    assert (DIR.STATES / "credentials_test.json").is_file()

    assert "TEST_USER" in os.environ
    test_user_before = os.environ["TEST_USER"]

    # os.environ.clear()  # this breaks pytest!
    del os.environ["TEST_USER"]
    assert "TEST_USER" not in os.environ

    Coordinator.load_test_credentials(DIR)
    assert test_user_before == os.environ["TEST_USER"]


@pytest.fixture(scope="session")
def tmp_recipes(tmp_path_factory: pytest.TempPathFactory) -> Path:
    tmp_recipes_target = tmp_path_factory.mktemp("recipes")
    recipes_dir_source = Path("recipes")
    shutil.copytree(recipes_dir_source, tmp_recipes_target, dirs_exist_ok=True)
    return tmp_recipes_target


def test_runner_runner_dict_import(tmp_recipes: Path):
    """import from recipes dict should work, because create_runner_dict has sys.path.append"""

    RUNNER_DICT = Coordinator.create_runner_dict(tmp_recipes)
    assert len(RUNNER_DICT.keys()) > 0
tests/test_dir_manager.py (new file, +30)
@@ -0,0 +1,30 @@
import time
from pathlib import Path

import pytest

from pytest_abra.dir_manager import DirManager


def test_get_latest_session_id_from_non_existing_dir(tmp_path: Path):
    out = DirManager.get_latest_session_id(tmp_path / "not_exist")
    assert out is None


def test_get_latest_session_id_from_empty_dir(tmp_path: Path):
    out = DirManager.get_latest_session_id(tmp_path)
    assert out is None


def test_get_latest_session_id_single(tmp_path: Path):
    (tmp_path / "a").mkdir()
    out = DirManager.get_latest_session_id(tmp_path)
    assert out == "a"


@pytest.mark.slow
def test_get_latest_session_id(tmp_path: Path):
    (tmp_path / "a").mkdir()
    time.sleep(1.1)
    (tmp_path / "b").mkdir()
    out = DirManager.get_latest_session_id(tmp_path)
    assert out == "b"
tests/test_env_manager.py (new file, +137)
@@ -0,0 +1,137 @@
import shutil
from pathlib import Path

import pytest

from pytest_abra.dir_manager import DirManager
from pytest_abra.env_manager import EnvManager
from pytest_abra.utils import files_are_same

ENV_PATHS = [
    Path("envfiles/blog.test.dev.local-it.cloud.env"),  # wordpress
    Path("envfiles/login.test.dev.local-it.cloud.env"),  # authentik
    Path("envfiles/login.test.dev.local-it.cloud.env"),  # authentik
]


@pytest.fixture
def tmp_output(tmp_path_factory: pytest.TempPathFactory) -> Path:
    return tmp_path_factory.mktemp("output")


@pytest.fixture
def tmp_recipes(tmp_path_factory: pytest.TempPathFactory) -> Path:
    return tmp_path_factory.mktemp("recipes")


def test_copy_env_files(tmp_output: Path, tmp_recipes: Path):
    # create dirs in output
    DIR = DirManager(output_dir=tmp_output, session_id="abc", recipes_dir=tmp_recipes)
    DIR.create_all_dirs()

    # confirm dir is empty
    assert len(list(DIR.ENV_FILES.iterdir())) == 0

    # copy env files
    env_files = EnvManager._get_env_files(ENV_PATHS)
    EnvManager.copy_env_files(env_files, DIR)

    # check that each env file is present in DIR.ENV_FILES with correct contents
    assert len(list(DIR.ENV_FILES.iterdir())) == len(env_files)
    for index, env_path in enumerate(ENV_PATHS):
        matching_files = [f for f in DIR.ENV_FILES.iterdir() if index == int(f.name.split("-")[0])]
        assert len(matching_files) == 1
        assert files_are_same(env_path, matching_files[0])


def test_copy_env_files_twice(tmp_output: Path, tmp_recipes: Path):
    """Copy the same env files twice"""
    # create dirs in output
    DIR = DirManager(output_dir=tmp_output, session_id="abc", recipes_dir=tmp_recipes)
    DIR.create_all_dirs()

    # confirm dir is empty
    assert len(list(DIR.ENV_FILES.iterdir())) == 0

    # copy env files
    env_files = EnvManager._get_env_files(ENV_PATHS)
    EnvManager.copy_env_files(env_files, DIR)

    # check that each env file is present in DIR.ENV_FILES with correct contents
    assert len(list(DIR.ENV_FILES.iterdir())) == len(env_files)

    # copy env files again
    EnvManager.copy_env_files(env_files, DIR)

    for index, env_path in enumerate(ENV_PATHS):
        matching_files = [f for f in DIR.ENV_FILES.iterdir() if index == int(f.name.split("-")[0])]
        assert len(matching_files) == 1
        assert files_are_same(env_path, matching_files[0])


def test_copy_env_files_twice_with_content_change(tmp_output: Path, tmp_recipes: Path, tmp_path: Path):
    # copy env files to tmp_path
    assert len(list(tmp_path.iterdir())) == 0
    for f in ENV_PATHS:
        shutil.copy(f, tmp_path / f.name)
    ENV_PATHS_NEW = list(tmp_path.iterdir())
    assert len(ENV_PATHS_NEW) > 0

    # create dirs in output
    DIR = DirManager(output_dir=tmp_output, session_id="abc", recipes_dir=tmp_recipes)
    DIR.create_all_dirs()

    # confirm dir is empty
    assert len(list(DIR.ENV_FILES.iterdir())) == 0

    # copy env files from tmp_path to tmp_output
    env_files = EnvManager._get_env_files(ENV_PATHS_NEW)
    EnvManager.copy_env_files(env_files, DIR)

    # check that each env file is present in DIR.ENV_FILES with correct contents
    assert len(list(DIR.ENV_FILES.iterdir())) == len(env_files)

    # change content of one env_file in tmp_path
    file_path = next(tmp_path.iterdir())
    with open(file_path, "w") as file:
        file.write("This is the new content")

    # copy env files again
    with pytest.raises(AssertionError) as excinfo:
        EnvManager.copy_env_files(env_files, DIR)

    assert "input env files have changed" in str(excinfo.value)


def test_copy_env_files_twice_with_name_change(tmp_output: Path, tmp_recipes: Path, tmp_path: Path):
    # copy env files to tmp_path
    assert len(list(tmp_path.iterdir())) == 0
    for f in ENV_PATHS:
        shutil.copy(f, tmp_path / f.name)
    ENV_PATHS_NEW = list(tmp_path.iterdir())
    assert len(ENV_PATHS_NEW) > 0

    # create dirs in output
    DIR = DirManager(output_dir=tmp_output, session_id="abc", recipes_dir=tmp_recipes)
    DIR.create_all_dirs()

    # confirm dir is empty
    assert len(list(DIR.ENV_FILES.iterdir())) == 0

    # copy env files from tmp_path to tmp_output
    env_files = EnvManager._get_env_files(ENV_PATHS_NEW)
    EnvManager.copy_env_files(env_files, DIR)

    # check that each env file is present in DIR.ENV_FILES with correct contents
    assert len(list(DIR.ENV_FILES.iterdir())) == len(env_files)

    # change name of one env_file in tmp_path
    file_path = next(tmp_path.iterdir())
    file_path.rename(file_path.parent / (file_path.stem + "-other" + file_path.suffix))

    # copy env files from tmp_path to tmp_output again
    with pytest.raises(AssertionError) as excinfo:
        env_files = EnvManager._get_env_files(list(tmp_path.iterdir()))
        EnvManager.copy_env_files(env_files, DIR)

    assert "input env files have changed" in str(excinfo.value)
@@ -102,3 +102,17 @@ def test_env_manager() -> None:
     assert ENV.env_files[0].env_type == "authentik"
     assert ENV.env_files[1].env_type == "authentik"
     assert ENV.env_files[2].env_type == "wordpress"
+
+
+def test_RUNNER_DICT_missing_key() -> None:
+    """RUNNER_DICT missing wordpress key while .env file with TYPE=wordpress given"""
+    env_paths_list = [
+        Path("envfiles/blog.test.dev.local-it.cloud.env"),  # wordpress
+        Path("envfiles/login.test.dev.local-it.cloud.env"),  # authentik
+        Path("envfiles/login.test.dev.local-it.cloud.env"),  # authentik
+    ]
+    RUNNER_DICT_COPY = RUNNER_DICT.copy()
+    del RUNNER_DICT_COPY["wordpress"]
+    with pytest.raises(AssertionError) as excinfo:
+        EnvManager(env_paths_list, RUNNER_DICT_COPY)
+    assert "no runner for" in str(excinfo.value)
@@ -16,27 +16,31 @@ def session_tmp_path(tmp_path_factory: pytest.TempPathFactory) -> Path:
     return tmp_path_factory.mktemp("html_test")


-def test_merge_html(session_tmp_path: Path):
+@pytest.fixture(scope="session")
+def html_file(session_tmp_path: Path) -> Path:
     """combines all generated pytest html reports into one"""

     in_dir_path = Path(__file__).parent / "assets" / "html_merge"
     in_dir_path = in_dir_path.resolve()
     ic(in_dir_path)

-    out_file_path = session_tmp_path / "test.html"
-    out_assets_dir = session_tmp_path / "assets"
+    html_file = session_tmp_path / "test.html"

-    merge_html_reports(in_dir_path.as_posix(), out_file_path.as_posix(), "combined.html")
+    merge_html_reports(in_dir_path.as_posix(), html_file.as_posix(), "combined.html")
+    return html_file

-    assert out_file_path.is_file()
-    assert out_assets_dir.is_dir()
-    assert next(out_assets_dir.glob("*"))
+
+def test_merge_html(html_file: Path):
+    assert html_file.is_file()
+    assert html_file.parent.is_dir()
+    assert next(html_file.parent.glob("*"))


 @pytest.mark.slow
-def test_check_result_with_playwright(session_tmp_path, context: BrowserContext):
-    html_file = session_tmp_path / "test.html"
+def test_check_result_with_playwright(html_file: Path, context: BrowserContext):
     assert html_file.is_file()

     file_url = BaseUrl(netloc=html_file.as_posix(), scheme="file").get()

     page = context.new_page()
     page.goto(file_url)
tests/test_runner.py (new file, +29)
@@ -0,0 +1,29 @@
from pathlib import Path

from pytest_abra import DirManager, Runner


def test_runner_create_status_file(tmp_path: Path):
    """check if _create_status_file prevents duplicates"""

    DIR = DirManager(output_dir=tmp_path, session_id="temp")
    DIR.create_all_dirs()
    assert len(list(DIR.STATUS.iterdir())) == 0

    # create first status file
    Runner._create_status_file(DIR, "passed", "identifier-a")
    assert len(list(DIR.STATUS.iterdir())) == 1

    # create second status file
    Runner._create_status_file(DIR, "passed", "identifier-b")
    assert len(list(DIR.STATUS.iterdir())) == 2

    # check if _get_status_files finds only the correct status file
    result = Runner._get_status_files(DIR, "identifier-a")
    assert len(result) == 1

    # overwrite first status file
    Runner._create_status_file(DIR, "failed", "identifier-a")
    assert len(list(DIR.STATUS.iterdir())) == 2

    assert Runner._is_test_passed(DIR, "identifier-a") is False