Introduction
typst-test is a test runner for Typst projects. It helps you worry less about regressions and speeds up your development.
Bird's-Eye View
Out of the box, typst-test supports the following features:
- locate the project it is invoked in
- collect and manage test scripts and references
- compile and run tests
- compare test output to references
- provide extra scripting functionality
- run custom scripts for test automation
A Closer Look
This book contains a few sections aimed at answering the most common questions right out of the gate.
- Installation outlines various ways to install typst-test.
- Usage goes over some basic commands to get started with typst-test.
After the quick start, a few guides delve deeper into some advanced topics.
- Using Test Sets delves into the test set language and how it can be used to isolate tests and speed up your TDD workflow.
- Watching for Changes shows how to automatically run tests while developing your package.
- Setting Up CI shows how to set up typst-test to continuously test all changes to your package.
The later sections of the book are a technical reference to typst-test and its various features and concepts.
- Tests outlines which types of tests typst-test supports, how they can be customized and which features are offered within test scripts.
- Test Set Language defines the test set language and its built-in test sets.
Installation
To install typst-test on your PC, you must, for the time being, compile it from source.
Once typst-test reaches 0.1.0, this restriction will be lifted and each release will provide precompiled binaries for major operating systems (Windows, Linux and macOS).
Installation From Source
To install typst-test from source, you must have a Rust toolchain (Rust v1.80.0+) and cargo installed.
Run the following command to install the latest nightly version:
cargo install --locked --git https://github.com/tingerrr/typst-test
To install the latest semi-stable version, run:
cargo install --locked --git https://github.com/tingerrr/typst-test --tag ci-semi-stable
Required Libraries
OpenSSL
OpenSSL (v1.0.1 to v3.x.x) or LibreSSL (v2.5 to v3.7.x) is required to allow typst-test to download packages from the Typst Universe package registry.
When installing from source, the vendor-openssl feature can be used on operating systems other than Windows and macOS to vendor and statically link OpenSSL, avoiding the need for it on the operating system.
This is not yet possible, but will be once #32 is resolved; in the meantime, OpenSSL may be linked dynamically as a transitive dependency.
Usage
typst-test is a command line program; it can be run by simply invoking it in your favorite shell and passing the appropriate arguments.
If you open a shell in the folder project and typst-test is at project/bin/typst-test, then you can run it using ./bin/typst-test.
Placing it directly in your project is most likely not what you want to do.
You should install it to a directory which is contained in your PATH, allowing you to simply run it as typst-test.
How to add such folders to your PATH depends on your operating system, but if you installed typst-test using one of the recommended methods in Installation, then such a folder should already have been chosen for you.
For the remainder of this document, tt is used in place of typst-test whenever a command line example is shown.
When you see an example such as
tt run -e 'ephemeral() | compile-only()'
it is meant to be run as
typst-test run -e 'ephemeral() | compile-only()'
You can also define an alias of the same name to make typing it easier.
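For example, in a POSIX-style shell you could add something along these lines to your shell configuration; this is only a convenience sketch and assumes typst-test is already on your PATH:
# shorthand so that the examples in this book can be typed as-is
alias tt="typst-test"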
typst-test requires a certain project structure to work. If you want to start testing your project's code, you can create an example test and the required directory structure using the init command:
tt init
This will create the default example to give you a grasp of where tests are located and how they are structured.
typst-test will look for the project root by checking for directories containing a typst.toml manifest file.
This is because typst-test is primarily aimed at developers of packages.
If you want to use a different project root, or don't have a manifest file, you can provide the root directory using the --root option like so:
tt init --root ./path/to/root/
Keep in mind that you must pass this option to every command that operates on a project.
Alternatively, the TYPST_ROOT environment variable can be set to the project root.
Further examples assume the existence of a manifest, or the TYPST_ROOT variable being set.
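For example, in a POSIX-style shell you could set the variable for the current session like this (a minimal sketch; point it at your actual project root):
# use the current directory as the project root for subsequent typst-test commands
export TYPST_ROOT="$(pwd)"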
If you're just following along and don't have a package to test this with, you can use an empty project with the following manifest:
[package]
name = "foo"
description = "A fancy Typst package!"
authors = ["John Doe"]
license = "MIT"
entrypoint = "src/lib.typ"
version = "0.1.0"
Once the project is initialized, you can run the example test to see that everything works.
tt run example
You should see something along the lines of
Starting 1 tests (run ID: 4863ce3b-70ea-4aea-9151-b83e25f11c94)
pass [ 0s 38ms 413µs] example
──────────
Summary [ 0s 38ms 494µs] 1/1 tests run: all 1 passed
Let's edit the test to actually do something. The default example test can be found in <project>/tests/example/ and simply contains Hello World.
Write something else in there and see what happens:
-Hello World
+Typst is Great!
Once we run typst-test again, we'll see that the test no longer passes:
Starting 1 tests (run ID: 7cae75f3-3cc3-4770-8e3a-cb87dd6971cf)
fail [ 0s 44ms 631µs] example
Page 1 had 1292 deviations
hint: Diff images have been saved at '/home/tinger/test/tests/example/diff'
──────────
Summary [ 0s 44ms 762µs] 1/1 tests run: all 1 failed
typst-test has compared the reference output from the original Hello World document to the new document and determined that they don't match.
It also told you where you can inspect the difference: the <project>/tests/example directory contains a diff directory.
You can take a look to see what changed; you can also take a look at the out and ref directories, which contain the output of the current test and the expected reference output respectively.
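If you prefer the terminal, a quick way to see what a failing run produced is to list those directories; the command below is a sketch run from the project root and assumes the default example test (the file names inside are managed by typst-test):
# show the generated output, reference and diff images of the example test
ls tests/example/out tests/example/ref tests/example/diff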
Well, but this wasn't a mistake, this was a deliberate change.
So, let's update the references to reflect that and try again.
For this we use the appropriately named update command:
tt update example
You should see output similar to
Starting 1 tests (run ID: f11413cf-3f7f-4e02-8269-ad9023dbefab)
pass [ 0s 51ms 550µs] example
──────────
Summary [ 0s 51ms 652µs] 1/1 tests run: all 1 passed
and the test should once again pass.
Note that, at the moment, typst-test does not compress the reference images. This means that, if you use a version control system like Git or Mercurial, the reference images of persistent tests can quickly bloat your repository if you update them frequently.
Consider using a program like oxipng to compress them; typst-test can still read them without any problems.
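For example, assuming oxipng is installed, an invocation along these lines losslessly recompresses all PNGs below the test root in place (the flags are taken from oxipng's help and may differ between versions):
# recursively recompress reference images at the maximum optimization level
oxipng --opt max --recursive tests/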
Using Test Sets
Why Test Sets
Many operations such as running, comparing, removing or updating tests all need to somehow select which tests to operate on.
To avoid having lots of hard-to-remember options, which may or may not interact well, typst-test offers an expression-based set language which is used to select tests.
Instead of writing
tt run --regex 'foo-.*' --path '/bar/baz' --not --skip
typst-test can be invoked like
tt run --expression '(regex:foo-.* & !skip()) | exact:bar/baz'
This comes with quite a few advantages:
- it's easier to compose multiple identifier filters like regex and path
- options like --not are ambiguous as to whether they apply only to the next option or to all options such as --regex
- with options it's unclear how to compose complex relations like and vs or of other options
- test set expressions are visually close to the filter expressions they describe; their operators are deliberately chosen to feel like writing a predicate which is applied over all tests
Let's first dissect what this expression actually means:
(regex:foo-.* & !skip()) | exact:bar/baz
- We have a top-level binary expression of the form a | b. This is a union expression; it includes all tests found in either a or b.
- The right expression is exact:bar/baz. This is a pattern (indicated by the colon :). It matches all tests whose identifier exactly matches bar/baz, i.e. it includes this directory of tests but not its sub tests.
- The left expression is itself a binary expression again, this time an intersection. It consists of another pattern and a complement set.
  - The pattern is a regex pattern and behaves like one would expect: it matches the identifier/path against the given regular expression. It includes all tests whose module identifier matches the given regex.
  - The complement !skip() includes all tests which are not marked as skipped.
Tying it all together, we can describe what this expression matches in a sentence:
Select all tests which are not marked as skip and match the regex foo-.*; additionally, include bar/baz, but not its sub tests.
Trying to describe this relationship using options on the command line would be cumbersome, error prone and, depending on the options present, impossible. 1
Default Test Sets
Many operations take either a set of tests as positional arguments, which are matched exactly, or a test set expression.
If neither is given, the default test set is used, which is defined as !skip().
This may change in the future; commands may get their own, or even configurable, default test sets. See #40.
More concretely, the invocation
tt list test1 test2 ...
is equivalent to the following invocation
tt list --expression 'none() | exact:test1 | exact:test2 | ...'
An Iterative Example
Suppose you had a project with the following tests:
mod/sub/foo ephemeral ignored
mod/sub/bar ephemeral
mod/sub/baz persistent
mod/foo persistent
bar ephemeral
baz persistent ignored
and you wanted to run only the ephemeral tests in mod/sub.
You could construct an expression with the following steps:
- Firstly, filter out all ignored tests; typst-test does this by default, but once we use our own expression we must include this restriction ourselves: !skip() & ...
- Now include only those tests which are ephemeral; to do this restriction, we form the intersection of the current set and the ephemeral set: !skip() & ephemeral()
- Finally, restrict it to only those tests which are in mod/sub or its sub modules. You can do so by adding any of the following patterns:
  !skip() & ephemeral() & contains:sub
  !skip() & ephemeral() & path:mod/sub
  !skip() & ephemeral() & regex:^mod/sub
You can iteratively test your results with typst-test list -e '...' until you're satisfied.
Then you can run whatever operation you want with the same expression.
If it is a destructive operation, i.e. one that writes changes to non-temporary files, then you must prefix the expression with all: if your test set contains more than one test.
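For instance, updating every ephemeral test under mod/sub from the example above would need the all: prefix, since the expression matches more than one test. The sketch below assumes update accepts the same -e/--expression flag as run and list, and the exact parenthesization may vary:
tt update -e 'all:(!skip() & ephemeral() & path:mod/sub)'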
Patterns
Note that patterns come in two forms:
- raw patterns: these are provided for convenience; they have been used in the examples above and are simply the pattern kind followed by a colon and any non-whitespace characters.
- string patterns: a generalization which allows whitespace and usage in nested expressions.
This distinction is useful for scripting and some interactive use cases; note that a raw pattern would keep parsing any non-whitespace character.
When nesting patterns like (regex:foo-.*) & ... the parser would swallow the closing parenthesis, as it is a valid character in many patterns.
String patterns explicitly wrap the pattern to avoid this: (regex:"foo-.*") & ... is valid and will parse correctly.
Scripting
If you build up test set expressions programmatically, consider taking a look at the built-in test set functions.
Specifically, the all() and none() sets can be used as identity sets for certain operators, possibly simplifying the code that generates the test sets.
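As a small illustration, here is a POSIX shell sketch (script name and arguments are hypothetical) that builds a union of exact patterns from its arguments, starting from none() as the identity set for the union operator; it uses the tt alias from the Usage chapter:
# build 'none() | exact:a | exact:b | ...' from the script's arguments
expr='none()'
for t in "$@"; do
  expr="$expr | exact:$t"
done
tt list --expression "$expr"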
Some of the syntax used in test sets may interfere with your shell, especially whitespace and special tokens within patterns, like $ in regexes.
Use non-interpreting quotes around the test set expression (commonly single quotes '...') to avoid having them interpreted as shell-specific sequences.
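For example, a regex pattern ending in $ is best passed in single quotes so the shell does not try to expand it; the pattern itself is only illustrative:
# single quotes keep the shell from interpreting $ and the string pattern's double quotes
tt list --expression 'regex:"^mod-[0-9]+$"'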
To get a more complete look at test sets, take a look at the reference.
Watching for Changes
typst-test does not currently support a "watch" command, which is common in this sort of tooling.
This is due to some of the complexity in how it uses the core Typst libraries.
However, you may work around this by using watchexec.
To begin, install it following the installation instructions in its README.
Then, run a command like this in your package root directory, which is the same directory that contains your typst.toml and README.md files:
watchexec \
--watch . \
--clear \
--ignore 'tests/**/diff/**' \
--ignore 'tests/**/out/**' \
"typst-test r"
You may create an alias in your shell to make it more convenient:
alias ttw="watchexec --watch . --clear --ignore 'tests/**/diff/**' --ignore 'tests/**/out/**' 'typst-test r'"
This will automatically run typst-test r whenever any file changes other than those in your tests' {diff,out} directories.
Notes:
- The command we're using here, typst-test r, runs all tests anytime any file changes. If you want different behavior, you need to modify the command appropriately.
- Although unlikely, if your tests change any files in your package source tree, you may need to include them as additional --ignore <glob> patterns to the command.
Setting Up CI
Continuous integration can take a lot of manual work off your shoulders.
In this chapter we'll look at how to run typst-test in your GitHub CI to continuously test your code and catch bugs before they get merged into main.
We start off by creating a .github/workflows directory in our project and placing a single ci.yaml file in it.
The name is not important, but it should be something that helps you distinguish which workflow you're looking at.
If you simply want to get CI working without any elaborate explanation, skip ahead to the bottom and copy the full file.
There's a good chance that you can simply copy and paste the workflow as is and it'll work, but the guide should give you an idea of how to adjust it to your liking.
First, we configure when CI should be running:
name: CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
The on.push and on.pull_request fields both take a branches field with a single pattern matching our main branch; this means that this workflow runs on pull requests and pushes to main.
We could leave out the branches field and it would apply to all pushes or pull requests, but this is seldom useful.
If you have branch protection, you may not need the on.push trigger at all; if you're paying for CI, this may save you money.
Next, let's add the test job we want to run. We'll let it run on ubuntu-latest, a fairly common runner for regular CI.
More often than not, you won't need matrix or cross-platform tests for Typst projects, as Typst takes care of the OS differences for you.
Add this below the job triggers:
# ...
jobs:
tests:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
This adds a single step to our job (called tests), which checks out the repository, making it available for the following steps.
For now, we'll need cargo to download and install typst-test, so we install it and cache the installation with a package cache action.
Then we install typst-test straight from GitHub using the backport branch; this branch does not yet have features like test set expressions, but it is somewhat stable and receives fixes.
steps:
# ...
- name: Probe runner package cache
uses: awalsh128/cache-apt-pkgs-action@latest
with:
packages: cargo
version: 1.0
- name: Install typst-test from github
uses: baptiste0928/cargo-install@v3.0.0
with:
crate: typst-test
git: https://github.com/tingerrr/typst-test.git
branch: backport
Because the typst-test version on the backport branch does not yet come with its own Typst compiler, it needs a Typst installation in the runner. Add the following with your preferred Typst version:
steps:
# ...
- name: Setup typst
uses: yusancky/setup-typst@v2
with:
version: 'v0.11.1'
Then we're ready to run our tests; that's as simple as adding a step like so:
steps:
# ...
- name: Run test suite
run: typst-test run
CI may fail for various reasons, such as
- missing fonts,
- system-time-dependent test cases,
- or otherwise hard-to-debug differences between the CI runner and your local machine.
To make it easier to actually get a grasp of the problem, we should make the results of the test run available.
We can do this by using an upload action; however, if typst-test fails, the step will cancel all regular steps after itself, so we need to ensure it runs regardless of test failure or success by using if: always().
We upload all artifacts, since some tests may produce both references and output on the fly, and retain them for 5 days:
steps:
# ...
- name: Archive artifacts
uses: actions/upload-artifact@v4
if: always()
with:
name: artifacts
path: |
tests/**/diff/*.png
tests/**/out/*.png
tests/**/ref/*.png
retention-days: 5
And that's it. You can add this file to your repo, push it to a branch and open a PR; the PR will already start running the workflow for you and you can adjust and debug it as needed.
The full workflow file:
name: CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Probe runner package cache
        uses: awalsh128/cache-apt-pkgs-action@latest
        with:
          packages: cargo
          version: 1.0

      - name: Install typst-test from github
        uses: baptiste0928/cargo-install@v3.0.0
        with:
          crate: typst-test
          git: https://github.com/tingerrr/typst-test.git
          branch: backport

      - name: Setup typst
        uses: yusancky/setup-typst@v2
        with:
          version: 'v0.11.1'

      - name: Run test suite
        run: typst-test run

      - name: Archive artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: artifacts
          path: |
            tests/**/diff/*.png
            tests/**/out/*.png
            tests/**/ref/*.png
          retention-days: 5
Tests
There are currently three types of tests:
- Unit tests: tests which are run to check for regressions on code changes, mostly through comparison to reference documents.
- Template tests: special tests for template packages which take a scaffold document and attempt to compile it and optionally compare it.
- Doc tests: example code in documentation comments which is compiled but not compared.
typst-test can currently only operate on unit tests, found as individual files in the test root.
In the future, template tests and doc tests will be added, see #34 and #49.
Tests get access to a special test library and can use annotations for configuration.
Unit Tests
Unit tests are found in the test root as individual scripts and are the most versatile type of test. There are three kinds of unit tests:
- compile only: tests which are compiled, but not compared
- compared:
  - persistent: tests which are compared to persistent reference documents
  - ephemeral: tests which are compared to the output of another script which is compiled on the fly
Each of those can be selected using one of the built-in test sets.
Unit tests are the only tests which have access to an extended Typst standard library. This extended standard library currently provides panic helpers for catching and comparing panics.
A test is a directory somewhere within the test root (commonly <project>/tests), which contains the following entries:
- test.typ: the test's entry point
- ref.typ (optional): the reference entry point for ephemeral tests
- ref/ (optional, temporary): the reference documents for persistent or ephemeral tests
- out/ (temporary): the test documents
- diff/ (temporary): the diff documents
The path from the test root to the test script forms the test's identifier. Its test kind is determined by the presence of the ref script and ref directory (see the sketch below):
- If it contains a ref directory but no ref.typ script, it is considered a persistent test.
- If it contains a ref.typ script, it is considered an ephemeral test.
- If it contains neither, it is considered compile only.
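For illustration, assuming a test root at <project>/tests and hypothetical test names, one test of each kind could look like this on disk:
tests/compile-only-example/test.typ
tests/persistent-example/test.typ
tests/persistent-example/ref/
tests/ephemeral-example/test.typ
tests/ephemeral-example/ref.typ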
At the moment, tests may contain other tests, e.g. the following is valid
tests/
foo
foo/test.typ
foo/bar
foo/bar/test.typ
and contains the tests foo and foo/bar.
Unit tests are compiled with the project root as the Typst root, such that they can easily access package internals.
They can also access test library items such as catch for catching and binding panics when testing error reporting:
/// [annotation]
///
/// Description
// access to internals
#import "/src/internal.typ": foo
#let panics = () => {
foo("bar")
}
// ensures there's a panic
#assert-panic(panics)
// unwraps the panic if there is one
#assert.eq(
catch(panics).first(),
"panicked with: Invalid arg, expected `int`, got `str`",
)
Documentation Tests
TODO: See #34.
Template Tests
TODO: See #49.
Annotations
Tests may contain annotations at the start of the file. These annotations are placed on the leading doc comment of the file itself.
/// [skip]
///
/// Synopsis:
/// ...
#import "/src/internal.typ": foo
...
Annotations may only be placed at the start of the doc comment on individual lines without anything between them (no empty lines or other content).
The following annotations exist:
Annotation | Description
---|---
skip | Takes no arguments; marks the test as part of the skip test set; can only be used once.
Test Library
The test library is an augmented standard library: it contains all definitions of the standard library, plus some additional modules and functions which help with testing packages and debugging regressions.
It defines the following modules:
- test: a module with various testing helpers such as catch and additional asserts.
The following items are re-exported in the global scope as well:
- assert-panic: originally test.assert-panic
- catch: originally test.catch
test
Contains the main testing utilities.
assert-panic
Ensures that a function panics.
Fails with an error if the function does not panic. Does not produce any output in the document.
Example
#assert-panic(() => {}, message: "I panic!")
#assert-panic(() => panic(), message: "I don't!")
Parameters
assert-panic(
function,
message: str | auto,
)
function: function (required, positional)
The function to test.
message: str | auto
The error message when the assertion fails.
catch
Unwraps and returns the panics generated by a function, if there were any.
Does not produce any output in the document.
Example
#assert.eq(catch(() => {}), none)
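// `panics` is assumed to be a panicking function, e.g. the one defined in the unit test example above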
#assert.eq(
catch(panics).first(),
"panicked with: Invalid arg, expected `int`, got `str`",
)
Parameters
catch(
function,
)
function: function (required, positional)
The function to test.
Test Set Language
The test set language is an expression-based language; top-level expressions can be built up from smaller expressions consisting of binary and unary operators and built-in functions. They form sets which are used to specify which tests should be selected for various operations.
See [Using Test Sets][guide].
Grammar
The exact EBNF 1 can be read from the source code in [grammar.pest][grammar].
Evaluation
Test set expressions are compiled to an AST which is checked against each test to determine whether the test is contained in the set.
A test set such as !skip() is checked against each test by reading the test's annotations and filtering out all tests which have a skip annotation.
While the order of some operations like union and intersection doesn't matter semantically, the left operand is checked first for those where short circuiting can be applied.
The expression !skip() & regex:'complicated regex' is more efficient than regex:'complicated regex' & !skip(), since it will avoid the regex check for skipped tests entirely.
This may change in the future if optimizations are added for test set expressions.
Operators
Test set expressions can be composed using binary and unary operators.
Type | Prec. | Name | Symbols | Explanation
---|---|---|---|---
infix | 1 | union | \|, or | Includes all tests which are in either the left OR right test set expression.
infix | 1 | difference | ~, diff | Includes all tests which are in the left but NOT in the right test set expression.
infix | 2 | intersection | &, and | Includes all tests which are in both the left AND right test set expression.
infix | 3 | symmetric difference | ^, xor | Includes all tests which are in either the left OR right test set expression, but NOT in both.
prefix | 4 | complement | !, not | Includes all tests which are NOT in the test set expression.
Be aware of precedence when combining different operators; higher precedence means operators bind more strongly, e.g. not a and b is (not a) and b, not not (a and b), because not has a higher precedence than and.
Binary operators are left associative, e.g. a ~ b ~ c is (a ~ b) ~ c, not a ~ (b ~ c).
When in doubt, use parentheses to force precedence.
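As a small example of how the precedences above resolve, the following two expressions are equivalent, since ! binds more strongly than &, which in turn binds more strongly than | (the sets are taken from the built-in test sets):
!skip() & ephemeral() | persistent()
(!skip() & ephemeral()) | persistent()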
Sections
- Built-in Test Sets lists built-in test sets and functions.
1: Extended Backus-Naur form

[guide]: ../../guides/test-sets.md
[grammar]: https://github.com/tinger/typst-test/crates/typst-test/
Built-in Test Sets
Types
There are a few available types:
Type | Explanation
---|---
function | Functions which evaluate to another type upon compilation.
test set | Represents a set of tests.
number | Positive whole numbers.
string | Used for patterns containing special characters.
pattern | Special syntax for test sets which operate on test identifiers.
A test set expression must always evaluate to a test set, otherwise it is ill-formed; all operators operate on test sets only.
The following may be valid: set(1) & set("aaa", 2), but set() & 1 is not.
There is no arithmetic, and at the time of writing, literals like numbers and strings are only included for future test set functionality.
Functions
The following functions are available; they can be written in place of any expression.
Name | Explanation
---|---
none() | Includes no tests.
all() | Includes all tests.
skip() | Includes tests with a skip annotation.
compile-only() | Includes tests without references.
ephemeral() | Includes tests with ephemeral references.
persistent() | Includes tests with persistent references.
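For example, to list every test that has references of either kind, two of these sets can be combined with a union (a simple sketch using the list command from the guide):
tt list --expression 'ephemeral() | persistent()'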
Patterns
Patterns are special types which are checked against identifiers and automatically turned into test sets.
A pattern starts with a pattern type, followed by a colon : and either a raw pattern or a string literal.
Raw patterns don't have any delimiters and parse anything that's not whitespace.
String patterns are pattern prefixes directly followed by string literals; they can be used to avoid parsing other tokens as part of a pattern, such as when nesting pattern literals in expression groups or in function arguments.
The following pattern types exist:
Type | Example | Explanation
---|---|---
e / exact | exact:mod/name | Matches by comparing the identifier exactly to the given term.
c / contains | c:plot | Matches by checking if the given term is contained in the identifier.
r / regex | regex:mod-[234]/.* | Matches using the given regex.
g / glob | g:foo/**/bar | Matches using the given glob pattern.
p / path | p:foo | Matches tests at the given path, i.e. the test itself and its sub tests.
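For example, the following two invocations select the same tests, once with a raw pattern and once with the equivalent string pattern:
tt list --expression 'contains:plot'
tt list --expression 'contains:"plot"'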