Introduction

typst-test is a test runner for Typst projects. It helps you worry less about regressions and speeds up your development.

Bird's-Eye View

Out of the box typst-test supports the following features:

  • locate the project it is invoked in
  • collect and manage test scripts and references
  • compile and run tests
  • compare test output to references
  • provide extra scripting functionality
  • run custom scripts for test automation

A Closer Look

This book contains a few sections aimed at answering the most common questions right out of the gate.

  • Installation outlines various ways to install typst-test.
  • Usage goes over some basic commands to get started with typst-test.

After the quick start, a few guides delve deeper into some advanced topics.

  • Writing Tests inspects adding, removing, updating and editing tests more closely.
  • Using Test Sets delves into the test set language and how it can be used to isolate tests and speed up your TDD workflow.
  • Automation explains the ins and outs of hooks and how they can be used for testing typst preprocessors or formatters.
  • Setting Up CI shows how to set up typst-test to continuously test all changes to your package.

The later sections of the book are a technical reference to typst-test and its various features or concepts.

  • Tests outlines which types of tests typst-test supports, how they can be customized and which features are offered within the test scripts.
  • Test Set Language defines the test set language and its built-in test sets.
  • Configuration Schema lists all existing config options, their expected types and default values.
  • Command Line Tool goes over typst-test's various subcommands, arguments and options.

Installation

To install typst-test on your PC, you must, for the time being, compile it from source. Once typst-test reaches 0.1.0, this restriction will be lifted and each release will provide precompiled binaries for major operating systems (Windows, Linux and macOS).

Installation From Source

To install typst-test from source, you must have a Rust toolchain (Rust v1.79.0+) and cargo installed.

Run the following command to install the latest nightly version

cargo install --locked --git https://github.com/tingerrr/typst-test

To install the latest semi-stable version, run

cargo install --locked --git https://github.com/tingerrr/typst-test --tag ci-semi-stable

Required Libraries

OpenSSL

OpenSSL (v1.0.1 to v3.x.x) or LibreSSL (v2.5 to v3.7.x) is required to allow typst-test to download packages from the Typst Universe package registry.

When installing from source, the vendor-openssl feature can be used on operating systems other than Windows and macOS to vendor and statically link OpenSSL, avoiding the need for a system installation.

This is not yet possible, but will be once #32 is resolved; in the meantime, OpenSSL may be dynamically linked as a transitive dependency.

Usage

typst-test is a command line program; it can be run by invoking it in your favorite shell and passing the appropriate arguments.

If you open a shell in the folder project and typst-test is at project/bin/typst-test, then you can run it using ./bin/typst-test. Placing it directly in your project is most likely not what you want to do; instead, install it to a directory contained in your PATH, allowing you to run it simply as typst-test. How to add such folders to your PATH depends on your operating system, but if you installed typst-test using one of the recommended methods in Installation, such a folder should have been chosen for you.

For the remainder of this document tt is used in favor of typst-test whenever a command line example is shown. When you see an example such as

tt run -e 'name(~id)'

it is meant to be run as

typst-test run -e 'name(~id)'

You can also define an alias of the same name to make typing it easier.

typst-test requires a certain project structure to work. If you want to start testing your project's code, you can create an example test and the required directory structure using the init command.

tt init

This will create the default example to give you a grasp of where tests are located and how they are structured. typst-test will look for the project root by checking for directories containing a typst.toml manifest file, because it is primarily aimed at developers of packages. If you want to use a different project root, or don't have a Typst manifest, you can provide the root directory using the --root option like so:

tt init --root ./path/to/root/

Keep in mind that you must pass this option to every command that operates on a project. Alternatively the TYPST_ROOT environment variable can be set to the project root.

Further examples assume the existence of a manifest, or the TYPST_ROOT variable being set. If you're just following along and don't have a package to test this with, you can use an empty project with the following manifest:

[package]
name = "foo"
description = "A fancy Typst package!"
authors = ["John Doe"]
license = "MIT"

entrypoint = "src/lib.typ"
version = "0.1.0"

Once the project is initialized, you can run the example test to see that everything works.

tt run example

You should see something along the lines of

Running tests
        ok example

Summary
  1 / 1 passed.

Let's edit the test to actually do something. The default example test can be found in <project>/tests/example/ and simply contains Hello World. Write something else in there and see what happens:

-Hello World
+Typst is Great!

Once we run typst-test again we'll see that the test no longer passes:

Running tests
    failed example
           Page 1 had 1292 deviations
           hint: Diff images have been saved at '<project>/tests/example/diff'

Summary
  0 / 1 passed.

typst-test has compared the reference output from the original Hello World document to the new document and determined that they don't match. It also told you where you can inspect the difference: <project>/tests/example contains a diff directory. Take a look to see what changed; you can also inspect the out and ref directories, which contain the output of the current test and the expected reference output respectively.

Well, but this wasn't a mistake, this was a deliberate change. So, let's update the references to reflect that and try again. For this we use the appropriately named update command:

tt update example

You should see output similar to

Updating tests
   updated example

Summary
  1 / 1 updated.

and the test should once again pass.

Using Test Sets

Why Test Sets

Many operations such as running, comparing, removing or updating tests all need to somehow select which tests to operate on. To avoid having lots of hard-to-remember options, which may or may not interact well, typst-test offers an expression based set language which is used to select tests. Instead of writing

tt run --regex --mod 'foo-.*' --name 'bar/baz' --no-ignored

typst-test can be invoked like

tt run --expression '(mod(/foo-.*/) & !ignored) | id(=bar/baz)'

This comes with quite a few advantages:

  • it's easier to compose multiple identifier filters like mod and name
  • options like --regex are ambiguous: it's unclear whether they apply only to the next option or to all options
  • with options it's unclear how to express complex relations such as and vs or between filters
  • test set expressions are visually close to the filter expressions they describe, their operators are deliberately chosen to feel like writing a predicate which is applied over all tests

Let's first dissect what this expression actually means: (mod(/foo-.*/) & !ignored) | id(=bar/baz)

  1. We have a top-level binary expression of the form a | b, a union expression which includes all tests found in either a or b.
  2. The right expression is id(=bar/baz), which includes all tests whose full identifier matches the given pattern =bar/baz. That's an exact matcher (indicated by =) for the test identifier bar/baz. This means that, whatever is in the left operand of the union, we also include the test bar/baz.
  3. The left expression is itself a binary expression, this time an intersection. It consists of another matcher test set and a complement.
    1. The matcher is applied only to modules this time, indicated by mod, and uses a regex matcher (delimited by /). It includes all tests whose module identifier matches the given regex.
    2. The complement !ignored includes all tests which are not marked as ignored.

Tying it all together, we can describe what this expression matches in a sentence:

Select all tests which are not marked as ignored and are inside a module starting with foo-, and also include the test bar/baz.

Trying to describe this relationship using options on the command line would be cumbersome, error prone and, depending on the options present, impossible.[1]
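The breakdown above can be sketched in Python by treating each test set as a predicate over a test's module, name and ignored flag. This is only an illustrative model with a made-up test suite, not typst-test's actual implementation:

```python
import re

# Hypothetical test suite: module, name and ignored flag per test.
tests = [
    {"mod": "foo-core", "name": "a",   "ignored": False},
    {"mod": "foo-core", "name": "b",   "ignored": True},
    {"mod": "util",     "name": "c",   "ignored": False},
    {"mod": "bar",      "name": "baz", "ignored": False},
]

def full_id(test):
    # A test's full identifier is its module path plus its name.
    return f"{test['mod']}/{test['name']}"

# (mod(/foo-.*/) & !ignored) | id(=bar/baz)
selected = [
    full_id(t)
    for t in tests
    if (re.search(r"foo-.*", t["mod"]) and not t["ignored"])
    or full_id(t) == "bar/baz"
]

print(selected)  # ['foo-core/a', 'bar/baz']
```

foo-core/a is included by the intersection, bar/baz by the union; the ignored test foo-core/b is excluded even though its module matches the regex.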

Default Test Sets

Many operations take either a set of tests as positional arguments, which are matched exactly, or a test set expression. If neither are given the default test set is used, which is itself a shorthand for !ignored.

This may change in the future; commands may get their own, or even configurable, default test sets. See #40.

More concretely, the invocation

tt list test1 test2 ...

is equivalent to the following invocation

tt list --expression 'none | (id(=test1) | id(=test2) | ...)'

An Iterative Example

Suppose you had a project with the following tests:

mod/sub/foo ephemeral  ignored
mod/sub/bar ephemeral
mod/sub/baz persistent
mod/foo     persistent
bar         ephemeral
baz         persistent ignored

and you wanted to run only ephemeral tests in mod/sub. You could construct an expression with the following steps:

  1. First, filter out all ignored tests. typst-test does this by default, but once we use our own expression we must include this restriction ourselves. Both of the following would work:
    • default & ...
    • !ignored & ...
    Let's go with default to keep it simple.
  2. Now include only those tests which are ephemeral; to do this, add the ephemeral test set.
    • default & ephemeral
  3. Finally, restrict it to only tests which are in mod/sub or its submodules. You can do so by adding any of the following identifier matchers:
    • default & ephemeral & mod(~sub)
    • default & ephemeral & mod(=mod/sub)
    • default & ephemeral & id(/^mod\/sub/)

You can iteratively test your results with typst-test list -e '...' until you're satisfied. Then you can run whatever operation you want with the same expression. If it is a destructive operation, i.e. one that writes changes to non-temporary files, then you must also pass --all if your test set contains more than one test.

Scripting

If you build up test set expressions programmatically, consider taking a look at the built-in test set constants. Specifically the all and none test sets can be used as identity sets for certain operators, possibly simplifying the code generating the test sets.
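As an example of using none as an identity set, here is a sketch of programmatically folding a list of sub-expressions into a union; none is the identity for |, so even an empty list yields a valid expression. The function name and the expression strings are hypothetical, not part of typst-test:

```python
def union_all(exprs):
    # none | x == x, so none is a safe starting value for a union fold.
    result = "none"
    for expr in exprs:
        result = f"({result} | {expr})"
    return result

print(union_all([]))                    # none
print(union_all(["id(=a)", "id(=b)"]))  # ((none | id(=a)) | id(=b))
```

The same pattern works with all as the identity for an intersection fold.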

Some of the syntax used in test sets may interfere with your shell, especially the use of whitespace. Use non-interpreting quotes around the test set expression (commonly single quotes '...') to keep your shell from interpreting parts of it as shell-specific sequences.

[1]: To get a more complete look at test sets, take a look at the reference.

Setting Up CI

Continuous integration can take a lot of manual work off your shoulders. In this chapter we'll look at how to run typst-test in your GitHub CI to continuously test your code and catch bugs before they get merged into main.

We start off by creating a .github/workflows directory in our project and placing a single ci.yaml file in it. The name is not important, but should be something that helps you distinguish which workflow you're looking at.

If you simply want to get CI working without any elaborate explanation, skip ahead to the bottom and copy the full file.

There's a good chance that you can simply copy and paste the workflow as is and it'll work, but the guide should give you an idea on how to adjust it to your liking.

First, we configure when CI should be running:

name: CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

The on.push and on.pull_request fields both take a branches field with a single pattern matching our main branch; this means the workflow runs on pushes to main and on pull requests targeting main. We could leave out the branches field and it would apply to all pushes and pull requests, but this is seldom useful. If you have branch protection, you may not need the on.push trigger at all; if you're paying for CI, this may save you money.

Next, let's add the test job we want to run. We'll let it run on ubuntu-latest, a fairly common runner for regular CI. More often than not you won't need matrix or cross-platform tests for Typst projects, as Typst takes care of the OS differences for you. Add this below the job triggers:

# ...

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

This adds a single step to our job (called tests), which checks out the repository, making it available for the following steps.

For now, we'll need cargo to download and install typst-test, so we install it and cache the installation with a package cache action. Then we install typst-test straight from GitHub using the ci-semi-stable tag; this tag does not yet have features like test set expressions, but it works just the same otherwise.

steps:
  # ...
  - name: Probe runner package cache
    uses: awalsh128/cache-apt-pkgs-action@latest
    with:
      packages: cargo
      version: 1.0

  - name: Install typst-test from github
    uses: baptiste0928/cargo-install@v3.0.0
    with:
      crate: typst-test
      git: https://github.com/tingerrr/typst-test.git
      tag: ci-semi-stable

Because the typst-test version at ci-semi-stable does not yet come with its own Typst compiler, it needs a Typst installation in the runner. Add the following with your preferred Typst version:

steps:
  # ...
  - name: Setup typst
    uses: yusancky/setup-typst@v2
    with:
      version: 'v0.11.1'

Then we're ready to run our tests; that's as simple as adding a step like so:

steps:
  # ...
  - name: Run test suite
    run: typst-test run

CI may fail for various reasons, such as

  • missing fonts
  • system time dependent test cases
  • or otherwise hard-to-debug differences between the CI runner and your local machine.

To make it easier to actually get a grasp of the problem, we should make the results of the test run available. We can do this using an upload action. However, if typst-test fails, that step will cancel all regular steps after itself, so we need to ensure the upload runs regardless of test failure or success by using if: always(). We upload all artifacts, since some tests may produce both references and output on the fly, and retain them for 5 days:

steps:
  # ...
  - name: Archive artifacts
    uses: actions/upload-artifact@v4
    if: always()
    with:
      name: artifacts
      path: |
        tests/**/diff/*.png
        tests/**/out/*.png
        tests/**/ref/*.png
      retention-days: 5

And that's it. You can add this file to your repo, push it to a branch and open a PR; the PR will already start running the workflow for you, and you can adjust and debug it as needed.

The full workflow file:

name: CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Probe runner package cache
        uses: awalsh128/cache-apt-pkgs-action@latest
        with:
          packages: cargo
          version: 1.0

      - name: Install typst-test from github
        uses: baptiste0928/cargo-install@v3.0.0
        with:
          crate: typst-test
          git: https://github.com/tingerrr/typst-test.git
          tag: ci-semi-stable

      - name: Setup typst
        uses: yusancky/setup-typst@v2
        with:
          version: 'v0.11.1'

      - name: Run test suite
        run: typst-test run

      - name: Archive artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: artifacts
          path: |
            tests/**/diff/*.png
            tests/**/out/*.png
            tests/**/ref/*.png
          retention-days: 5

Tests

There are currently three types of tests:

  • Unit tests, tests which are run to test regressions on code changes mostly through comparison to reference documents.
  • Template tests, special tests for template packages which take a scaffold document and attempt to compile it and optionally compare it.
  • Doc tests, example code in documentation comments which are compiled but not compared.

typst-test can currently only operate on unit tests found as individual files in the test root.

In the future, template tests and doc tests will be added, see #34 and #49.

Tests get access to a special test library and can be configured using annotations.

Unit Tests

Unit tests are found in the test root as individual scripts and are the most versatile type of test. There are three kinds of unit tests:

  • compile only, tests which are compiled, but not compared
  • compared
    • persistent, tests which are compared to persistent reference documents
    • ephemeral, tests which are compared to the output of another script which is compiled on the fly

Each of those can be selected using one of the built-in test sets.

Unit tests are the only tests which have access to an extended Typst standard library. This extended standard library currently provides panic helpers for catching and comparing panics.

A test is a directory somewhere within the test root (commonly <project>/tests) which contains the following entries:

  • test.typ: the entry point
  • ref.typ (optional): the reference entry point for ephemeral tests
  • ref/ (optional, temporary): the reference documents for persistent or ephemeral tests
  • out/ (temporary): the test documents
  • diff/ (temporary): the diff documents

The path from the test root to the test script marks the test's identifier. Its test kind is determined by the existence of the ref script and ref directory:

  • If it contains a ref directory but no ref.typ script, it is considered a persistent test.
  • If it contains a ref.typ script, it is considered an ephemeral test.
  • If it contains neither, it is considered compile only.

Tests may contain other tests at the moment, e.g. the following is valid:

tests/
  foo
  foo/test.typ
  foo/bar
  foo/bar/test.typ

and contains the tests foo and foo/bar.

Unit tests are compiled with the project root as the Typst root, such that they can easily access package internals. They can also access test library items such as catch for catching and binding panics when testing error reporting:

/// [annotation]
///
/// Description

// access to internals
#import "/src/internal.typ": foo

#let panics = () => {
  foo("bar")
}

// ensures there's a panic
#assert-panic(panics)

// unwraps the panic if there is one
#assert.eq(
  catch(panics).first(),
  "panicked with: Invalid arg, expected `int`, got `str`",
)

Documentation Tests

TODO: See #34.

Template Tests

TODO: See #49.

Annotations

Tests may contain annotations at the start of the file. These annotations are placed on the leading doc comment of the file itself.

/// [ignore]
/// [custom: foo]
///
/// Synopsis:
/// ...

#import "/src/internal.typ": foo
...

Annotations may only be placed at the start of the doc comment on individual lines without anything between them (no empty lines or other content).

The following annotations exist:

Annotation  Description
ignore      Takes no arguments, marks the test as part of the ignored test set, can only be used once.
custom      Takes a single identifier as argument, marks the test as part of a custom test set of the given identifier, can be used multiple times.

A test with an annotation like [custom: foo] can be selected with a test set like custom(foo).

Test Library

The test library is an augmented standard library: it contains all definitions in the standard library, plus some additional modules and functions which help with testing packages and debugging regressions.

It defines the following modules:

  • test: a module with various testing helpers such as catch and additional asserts.

The following items are re-exported in the global scope as well:

  • assert-panic: originally test.assert-panic
  • catch: originally test.catch

test

Contains the main testing utilities.

assert-panic

Ensures that a function panics.

Fails with an error if the function does not panic. Does not produce any output in the document.

Example

#assert-panic(() => {},      message: "I panic!")
#assert-panic(() => panic(), message: "I don't!")

Parameters

assert-panic(
  function,
  message: str | auto,
)
function: function
  • required
  • positional

The function to test.

message: str | auto

The error message when the assertion fails.

catch

Unwraps and returns the panics generated by a function, if there were any.

Does not produce any output in the document.

Example

#assert.eq(catch(() => {}), none)
// `panics` as defined in the Unit Tests example
#assert.eq(
  catch(panics).first(),
  "panicked with: Invalid arg, expected `int`, got `str`",
)

Parameters

catch(
  function,
)
function: function
  • required
  • positional

The function to test.

Test Set Language

The test set language is an expression-based language; a top-level expression can be built up from smaller expressions consisting of binary and unary operators, built-in functions and constants.

Evaluation

Test set expressions restrict the set of all tests and are compiled to an AST which is checked against each test. A test set such as !ignored is checked against each test that is found by reading its annotations and filtering out all tests which do have an ignore annotation. While the order of some operations like union and intersection doesn't matter semantically, the left operand is checked first where short circuiting can be applied. The expression !ignored & id(/complicated regex/) is more efficient than id(/complicated regex/) & !ignored, since it avoids the regex check entirely for ignored tests. This may change in the future if optimizations are added for test set expressions.
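The effect of operand order can be illustrated with Python's short-circuiting `and`, counting how often the expensive check runs. This is a model with made-up tests and a stand-in regex, not typst-test's evaluator:

```python
import re

regex_checks = 0

def not_ignored(test):
    return not test["ignored"]

def id_matches(test):
    # Stand-in for the expensive id(/complicated regex/) check.
    global regex_checks
    regex_checks += 1
    return re.search(r"compl.cated", test["id"]) is not None

tests = [
    {"id": "complicated-1", "ignored": True},
    {"id": "complicated-2", "ignored": False},
    {"id": "simple",        "ignored": False},
]

# !ignored & id(...): the cheap check runs first and short-circuits,
# so the regex is never evaluated for the ignored test.
selected = [t["id"] for t in tests if not_ignored(t) and id_matches(t)]

print(selected, regex_checks)  # ['complicated-2'] 2
```

With the operands reversed, the regex would run for all three tests.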

Operators

Test set expressions can be composed using binary and unary operators.

Type    Prec.  Name                  Symbols      Explanation
infix   1      union                 ∪, |, +, or  Includes all tests which are in either the left OR the right test set expression.
infix   1      difference            \, - [1]     Includes all tests which are in the left but NOT in the right test set expression.
infix   2      intersection          ∩, &, and    Includes all tests which are in both the left AND the right test set expression.
infix   3      symmetric difference  Δ, ^, xor    Includes all tests which are in either the left OR the right test set expression, but NOT in both.
prefix  4      complement            ¬, !, not    Includes all tests which are NOT in the test set expression.

Be aware of precedence when combining different operators; higher precedence means an operator binds more strongly. For example, not a and b is (not a) and b rather than not (a and b), because not has a higher precedence than and. Binary operators are left associative, e.g. a - b - c is (a - b) - c, not a - (b - c). When in doubt, use parentheses to force precedence.
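Python's set operators happen to use the same symbols for intersection (&), union (|), difference (-) and symmetric difference (^), so plain Python sets can illustrate the associativity rule. This is an analogy only, not the test set evaluator:

```python
a, b, c = {1, 2, 3}, {2}, {3}

# Difference is left associative: a - b - c parses as (a - b) - c.
assert a - b - c == (a - b) - c == {1}

# Grouping the other way gives a different result.
assert a - (b - c) == {1, 3}
```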

[1]: There is currently no literal difference operator such as and or not.

Grammar

The grammar is given in Extended Backus-Naur-Form. The test set expression entrypoint rule is the main node. The nodes SOI and EOI stand for start of input and end of input respectively.

main ::=
  SOI
  WHITESPACE*
  expr
  WHITESPACE*
  EOI
  ;

expr ::=
  unary_operator*
  term
  (
    WHITESPACE*
    binary_operator
    WHITESPACE*
    unary_operator*
    term
  )*
  ;

atom ::= func | val ;
term ::= atom | "(" expr ")" ;

unary_operator ::= complement ;

complement ::= "¬" | "!" | "not" ;

binary_operator ::=
  intersect
  | union
  | difference
  | symmetric_difference
  ;

intersect ::= "∩" | "&" | "and" ;
union ::= "∪" | "|" | "or" | "+" ;
difference ::= "\\" | "-" ;
symmetric_difference ::= "Δ" | "^" | "xor" ;

val ::= id ;
func ::= id args ;

args ::= "(" arg ")" ;
arg ::= matcher ;

matcher ::=
  exact_matcher
  | contains_matcher
  | regex_matcher
  | plain_matcher
  ;

exact_matcher ::= "=" name ;
contains_matcher ::= "~" name ;
regex_matcher ::= "/" regex "/" ;
plain_matcher ::= name ;


id ::=
  ASCII_ALPHA
  (ASCII_ALPHANUMERIC | "-" | "_")*
  ;

name ::=
  ASCII_ALPHA
  (ASCII_ALPHANUMERIC | "-" | "_" | "/")*
  ;

regex ::=
  (
    ( "\\" "/")
    | (!"/" ANY)
  )+
  ;

WHITESPACE ::= " " | "\t" | "\r" | "\n" ;

Built-in Test Sets

Constants

The following constants are available; they can be written in place of any expression.

Name          Explanation
none          Includes no tests.
all           Includes all tests.
ignored       Includes tests with an ignore annotation.
compile-only  Includes tests without references.
ephemeral     Includes tests with ephemeral references.
persistent    Includes tests with persistent references.
default       A shorthand for !ignored, used as a default if no test set is passed.

Functions

The following functions operate on identifiers using matchers.

Name    Example        Explanation
id      id(=mod/name)  Includes tests whose full identifier matches the pattern.
mod     mod(/regex/)   Includes tests whose module matches the pattern.
name    name(~foo)     Includes tests whose name matches the pattern.
custom  custom(foo)    Includes tests which have a custom annotation with the given identifier.

Matchers

Matchers are patterns which are checked against identifiers.

Name          Example          Explanation
= (exact)     =mod/name        Matches by comparing the identifier exactly to the given term.
~ (contains)  ~plot            Matches by checking if the given term is contained in the identifier.
/ (regex)     /mod-[234]\/.*/  Matches using the given regex; literal slashes and backslashes must be escaped with a backslash.
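The three matcher kinds can be sketched as a single dispatch function. The semantics are inferred from the table above, not taken from typst-test's implementation:

```python
import re

def matches(matcher: str, identifier: str) -> bool:
    if matcher.startswith("="):
        return identifier == matcher[1:]          # exact
    if matcher.startswith("~"):
        return matcher[1:] in identifier          # contains
    if matcher.startswith("/") and matcher.endswith("/"):
        return re.search(matcher[1:-1], identifier) is not None  # regex
    raise ValueError(f"unknown matcher: {matcher!r}")

print(matches("=mod/name", "mod/name"))          # True
print(matches("~plot", "util/plot-axes"))        # True
print(matches(r"/mod-[234]\/.*/", "mod-2/foo"))  # True
```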