Introduction
This project has been renamed to tytanic.
This repo is archived to keep CI running for projects which still depend on `backport` or `ci-semi-stable`.
Installation
To install `typst-test` on your PC, you must, for the time being, compile it from source.
Once `typst-test` reaches 0.1.0, this restriction will be lifted and each release will provide precompiled binaries for the major operating systems (Windows, Linux, and macOS).
Installation From Source
To install `typst-test` from source, you must have a Rust toolchain (Rust v1.80.0+) and cargo installed.
Run the following command to install the latest nightly version:
cargo install --locked --git https://github.com/tingerrr/typst-test
This version has the newest features but may have unfixed bugs or rough edges.
To install the latest backport version run:
cargo install --locked --git https://github.com/tingerrr/typst-test --branch backport
This version is more stable, but doesn't contain most of the features listed in this book; it is mostly provided for backporting critical fixes until 0.1.0 is released.
Required Libraries
OpenSSL
OpenSSL (v1.0.1 to v3.x.x) or LibreSSL (v2.5 to v3.7.x) is required to allow `typst-test` to download packages from the Typst Universe package registry.
When installing from source, the `vendor-openssl` feature can be used on operating systems other than Windows and macOS to vendor and statically link OpenSSL, avoiding the need for it on the operating system.
Usage
`typst-test` is a command line program; it can be run by simply invoking it in your favorite shell and passing the appropriate arguments. The binary is called `tt`.
If you open a shell in the folder `project` and `typst-test` is at `project/bin/tt`, then you can run it using `./bin/tt`.
Placing it directly in your project is most likely not what you want to do.
You should install it to a directory which is contained in your `$PATH`, allowing you to simply run it using `tt` directly.
How to add such folders to your `PATH` depends on your operating system, but if you installed `typst-test` using one of the recommended methods in Installation, then such a folder should have been chosen for you automatically.
`typst-test` will look for the project root by checking for directories containing a `typst.toml` manifest file.
This is because `typst-test` is primarily aimed at developers of packages.
If you want to use a different project root, or don't have a manifest file, you can provide the root directory using the `--root` option like so:
tt list --root ./path/to/root/
Keep in mind that you must pass this option to every command that operates on a project.
Alternatively, the `TYPST_ROOT` environment variable can be set to the project root.
Further examples assume the existence of a manifest, or the `TYPST_ROOT` variable being set.
If you're just following along and don't have a package to test this with, you can use an empty project with the following manifest:
[package]
name = "foo"
description = "A fancy Typst package!"
version = "0.1.0"
authors = ["John Doe"]
license = "MIT"
entrypoint = "src/lib.typ"
Once you have a project root to work with, you can run various commands like `tt add` or `tt run`.
Check out the tests guide to find out how you can test your code.
Writing tests
To start writing tests, you only need to write regular Typst scripts; no special syntax or annotations are required.
Let's start with the most common type of test, regression tests.
We'll assume you have a normal package directory structure:
<project>
├─ src
│ └─ lib.typ
└─ typst.toml
Regression tests
Regression tests are found in the `tests` directory of your project root (the directory containing your `typst.toml` manifest).
Let's write our first test.
You can run `tt add my-test` to add a new regression test; this creates a new directory called `my-test` inside `tests` and adds a test script and reference document.
The test's entrypoint script (like a `main.typ` file) is located at `tests/my-test/test.typ`.
Assuming you passed no extra options to `tt add`, this test is going to be a persistent regression test; this means that its output will be compared to a reference document which is stored in `tests/my-test/ref/` as individual pages.
You could also pass `--ephemeral`, which creates a script that regenerates the reference document on every test run, or `--compile-only`, which means the test doesn't create any output and is only compiled.
Your project will now look like this:
<project>
├─ src
│ └─ lib.typ
├─ tests
│ └─ my-test
│ ├─ ref
│ │ └─ 1.png
│ └─ test.typ
└─ typst.toml
If you now run
tt run my-test
you should see something along the lines of
Starting 1 tests (run ID: 4863ce3b-70ea-4aea-9151-b83e25f11c94)
pass [ 0s 38ms 413µs] my-test
──────────
Summary [ 0s 38ms 494µs] 1/1 tests run: all 1 passed
This means that the test ran successfully.
Let's edit the test to actually do something; right now it simply contains `Hello World`.
Write something else in there and see what happens:
-Hello World
+Typst is Great!
Once we run `typst-test` again, we'll see that the test no longer passes:
Starting 1 tests (run ID: 7cae75f3-3cc3-4770-8e3a-cb87dd6971cf)
fail [ 0s 44ms 631µs] my-test
Page 1 had 1292 deviations
hint: Diff images have been saved at '<project>/tests/my-test/diff'
──────────
Summary [ 0s 44ms 762µs] 1/1 tests run: all 1 failed
`typst-test` has compared the reference output from the original `Hello World` document to the new document and determined that they don't match.
It also told you where you can inspect the difference: `<project>/tests/my-test` contains a `diff` directory.
You can take a look to see what changed; you can also take a look at the `out` and `ref` directories, which contain the output of the current test and the expected reference output respectively.
But this wasn't a mistake, it was a deliberate change.
So, let's update the references to reflect that and try again.
For this we use the appropriately named `update` command:
tt update my-test
You should see output similar to
Starting 1 tests (run ID: f11413cf-3f7f-4e02-8269-ad9023dbefab)
pass [ 0s 51ms 550µs] my-test
──────────
Summary [ 0s 51ms 652µs] 1/1 tests run: all 1 passed
and the test should once again pass.
This test is still somewhat contrived; let's actually test something interesting, like the API of your fancy package.
Let's say you have this function inside your `src/lib.typ` file:
/// Frobnicates a value.
///
/// -> content
#let frobnicate(
  /// The argument to frobnicate, cannot be `none`.
  ///
  /// -> any
  arg
) = {
  assert.ne(type(arg), type(none), message: "Cannot frobnicate `none`!")
  [Frobnicating #arg]
}
Because `typst-test` comes with a custom standard library, you can catch panics and extract their messages to ensure your code also works in the failure path.
Let's add another test to check that this function behaves correctly; instead of producing any output, it will just check how the function behaves with various inputs:
tt add --compile-only frobnicate
Your project should now look like this:
<project>
├─ src
│ └─ lib.typ
├─ tests
│ ├─ my-test
│ │ ├─ ref
│ │ │ └─ 1.png
│ │ └─ test.typ
│ └─ frobnicate
│ └─ test.typ
└─ typst.toml
Note that the `frobnicate` test does not contain any directories for references.
Because this test is within the project root, it can access the project's internal files, even if they aren't reachable from the package entrypoint.
Let's import our function and test it:
#import "/src/lib.typ": frobnicate

// Passing `auto` should work:
#frobnicate(auto)

// We can even compare the output:
#assert.eq(frobnicate("Strings work!"), [Frobnicating #"Strings work!"])
#assert.eq(frobnicate[Content works!], [Frobnicating Content works!])

// If we pass `none`, then this must panic, otherwise we did something wrong.
#assert-panic(() => frobnicate(none))

// We can also unwrap the panics and inspect their error message.
// Note that we get the panic message back if a panic occurred, or `none` if
// there was no panic.
#assert.eq(
  catch(() => frobnicate(none)),
  "panicked with: Cannot frobnicate `none`!",
)
The exact interface of this library may change in the future.
See #73.
Template tests
In the future you'll be able to automatically test your templates too, but these are currently unsupported.
See #49.
Documentation tests
In the future you'll be able to automatically test your documentation examples too, but these are currently unsupported.
See #34.
This should equip you with all the knowledge needed to reliably test your projects, but if you're still curious about the details, check out the reference for tests.
Using Test Sets
Why Test Sets
Many operations such as running, comparing, removing or updating tests all need to somehow select which tests to operate on.
`typst-test` offers a functional set-based language which is used to select tests; it visually resembles writing a predicate which is applied to each test.
Test set expressions are passed using the `--expression` or `-e` flag; they support the following features:
- binary and unary operators like `&`/`and` or `!`/`not`
- built-in primitive test sets such as `ephemeral()`, `compile-only()` or `skip()`
- identity test sets for easier scripting like `all()` and `none()`
- identifier patterns such as `regex:foo-\d{2}`
This allows you to concisely filter your test suite without having to remember a number of hard-to-compose CLI options.
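Conceptually, every test set expression denotes a set of tests, and evaluating it amounts to applying a predicate to each test. The following Python sketch models this idea; the test identifiers and metadata are made up purely for illustration:

```python
# A hypothetical miniature test suite; each test has a kind and a skip flag.
tests = {
    "features/foo1": {"kind": "persistent", "skip": True},
    "features/bar": {"kind": "ephemeral", "skip": False},
    "features/baz": {"kind": "compile-only", "skip": False},
}

def select(predicate):
    """Return the identifiers of all tests matching the predicate."""
    return {name for name, t in tests.items() if predicate(t)}

ephemeral = select(lambda t: t["kind"] == "ephemeral")
persistent = select(lambda t: t["kind"] == "persistent")
compile_only = select(lambda t: t["kind"] == "compile-only")
all_tests = select(lambda t: True)

# `ephemeral() | persistent()` selects the same tests as `not compile-only()`:
assert ephemeral | persistent == all_tests - compile_only
```

In this model, each built-in test set is a predicate, and the operators combine the resulting sets.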
An Iterative Example
Suppose you had a project with the following tests:
tests
├─ features
│ ├─ foo1 persistent skipped
│ ├─ foo2 persistent
│ ├─ bar ephemeral
│ └─ baz compile-only
├─ regressions
│ ├─ issue-42 ephemeral skipped
│ ├─ issue-33 persistent
│ └─ qux compile-only
└─ frobnicate compile-only
You can use `tt list` to ensure your test set expression is correct before running or updating tests.
This is not just faster; it also saves you the headache of accidentally deleting a test.
If you just run `tt list` without any expression, it'll use `all()` and you should see:
features/foo2
features/bar
features/baz
regressions/issue-33
regressions/qux
frobnicate
You may notice that we're missing two tests, those marked as skipped above:
features/foo1
regressions/issue-42
If you want to refer to these skipped tests, then you need to pass the `--no-implicit-skip` flag; otherwise, the expression is wrapped in `(...) ~ skip()` by default.
If you pass tests by name explicitly, like `tt list features/foo1 regressions/issue-42`, then this flag is implied.
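The implicit wrapping can be pictured as a plain set subtraction. A small Python sketch with made-up identifiers:

```python
# Unless --no-implicit-skip is passed, an expression E is evaluated as
# `(E) ~ skip()`, i.e. skipped tests are subtracted from E's result.
matched_by_expression = {"features/foo1", "features/foo2", "features/bar"}
skipped = {"features/foo1", "regressions/issue-42"}

effective = matched_by_expression - skipped
assert effective == {"features/foo2", "features/bar"}
```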
Let's say you want to run all tests which are either ephemeral or persistent, i.e. those which aren't compile-only. You can use either `ephemeral() | persistent()` or `not compile-only()`.
Because there are only these three kinds at the moment, the two expressions are equivalent.
If you run
tt list -e 'not compile-only()'
you should see
features/foo1
features/foo2
features/bar
regressions/issue-42
regressions/issue-33
Then you can simply run `tt run` with the same expression and it will run only those tests.
If you want to include or exclude various directories or tests by identifier, you can use patterns.
Let's say you want to only run feature tests; you can use a pattern like `c:features`, or more precisely, `r:^features`.
If you run
tt list -e 'r:^features'
you should see
features/foo1
features/foo2
features/bar
features/baz
Any combination using the various operators also works.
If you wanted to run only those tests which are both in `features` and not compile-only, then you would combine them with an intersection, i.e. the `and`/`&` operator.
If you run
tt list -e 'not compile-only() and r:^features'
you should see
features/foo1
features/foo2
features/bar
If you wanted to include all tests which are in either set, you'd use the union instead:
If you run
tt list -e 'not compile-only() or r:^features'
you should see
features/foo1
features/foo2
features/bar
features/baz
regressions/issue-42
regressions/issue-33
If you update or remove tests and the test set evaluates to more than one test, then you must either specify the `all:` prefix in the test set expression, or confirm the operation in a terminal prompt.
Patterns
Note that patterns come in two forms:
- raw patterns: These are provided for convenience; they have been used in the examples above and are simply the pattern kind followed by a colon and any non-whitespace characters.
- string patterns: A generalization which allows for whitespace and usage in nested expressions.
This distinction is useful for scripting and some interactive use cases.
For example, a raw pattern keeps parsing any non-whitespace character, so when nesting patterns like `(... | regex:foo-.*) & ...`, the parser would swallow the closing parenthesis and not close the group.
String patterns have delimiters with which this can be avoided: `(... | regex:"foo-.*") & ...` will parse correctly and close the group before the `&`.
Scripting
If you build up test set expressions programmatically, consider taking a look at the built-in test set functions.
Specifically, the `all()` and `none()` test set constructors can be used as identity sets for certain operators, possibly simplifying the code generating the test sets.
Some of the syntax used in test sets may interfere with your shell, especially the use of whitespace and special tokens within patterns, like `$` in regexes.
Use non-interpreting quotes around the test set expression (commonly single quotes `'...'`) to avoid them being interpreted as shell-specific sequences.
This should give you a rough overview of how test sets work; check out the reference to learn which operators, patterns, and test sets exist.
Watching for Changes
`typst-test` does not currently support a `watch` subcommand the same way `typst` does.
However, you can work around this by using `watchexec` or an equivalent tool which re-runs `typst-test` whenever a file in your project changes.
Let's look at a concrete example with `watchexec`.
Navigate to your project root directory, i.e. the one which contains your `typst.toml` manifest, and run:
watchexec \
--watch . \
--clear \
--ignore 'tests/**/diff/**' \
--ignore 'tests/**/out/**' \
--ignore 'tests/**/ref/**' \
"tt run"
Of course, a shell alias or task runner definition makes this more convenient.
While this is running, any change to a file in your project which is not excluded by the patterns provided using the `--ignore` flag will trigger a re-run of `tt run`.
If you have other files you may edit which don't influence the outcome of your test suite, then you should ignore them too.
Keep in mind that `tt run` will run all tests on every change, so this may not be appropriate for you if you have a large test suite.
Setting Up CI
Continuous integration can take a lot of manual work off of your shoulders.
In this chapter, we'll look at how to run `typst-test` in your GitHub CI to continuously test your code and catch bugs before they get merged into your project.
If you simply want to get CI working without any elaborate explanation, skip ahead to the bottom and copy the full file.
There's a good chance that you can simply copy and paste the workflow as-is and it'll work, but the guide should give you an idea of how to adjust it to your liking.
We start off by creating a `.github/workflows` directory in our project and placing a single `ci.yaml` file in this directory.
The name is not important, but it should be something that helps you distinguish which workflow you're looking at.
First, we configure when CI should be running:
name: CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
The `on.push` and `on.pull_request` fields both take a `branches` field with a single pattern matching our main branch; this means that this workflow runs on pull requests and pushes to main.
We could leave out the `branches` field and it would apply to all pushes or pull requests, but this is seldom useful.
If you have branch protection, you may not need the `on.push` trigger at all; if you're paying for CI, this may save you money.
Next, let's add the test job we want to run. We'll let it run on `ubuntu-latest`, a fairly common runner for CI jobs.
More often than not, you won't need matrix or cross-platform tests for Typst projects, as Typst takes care of the OS differences for you.
Add this below the job triggers:
# ...

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
This adds a single step to our job (called `tests`), which checks out the repository, making it available for the following steps.
For now, we'll need `cargo` to download and install `typst-test`, so we install it and cache the installation with a package cache action.
After this, we install `typst-test` straight from GitHub using the `backport` branch; this branch does not yet have features like test set expressions, but it is somewhat stable and receives critical fixes.
steps:
  # ...
  - name: Probe runner package cache
    uses: awalsh128/cache-apt-pkgs-action@latest
    with:
      packages: cargo
      version: 1.0
  - name: Install typst-test from github
    uses: baptiste0928/cargo-install@v3.0.0
    with:
      crate: typst-test
      git: https://github.com/tingerrr/typst-test.git
      branch: backport
Because the `typst-test` version on `backport` does not yet come with its own Typst compiler, it needs a Typst installation in the CI runner too. Add the following with your preferred Typst version:
steps:
  # ...
  - name: Setup typst
    uses: yusancky/setup-typst@v2
    with:
      version: 'v0.11.1'
Then we're ready to run our tests; that's as simple as adding a step like so:
steps:
  # ...
  - name: Run test suite
    run: typst-test run
CI may fail for various reasons, such as
- missing fonts
- system time dependent test cases
- or otherwise hard-to-debug differences between the CI runner and your local machine.
To make it easier for you to actually get a grasp of the problem, you should make the results of the test run available.
You can do this by using an upload action; however, if `typst-test` fails, the step will cancel all regular steps after itself, so you need to ensure it runs regardless of test failure or success by using `if: always()`.
The action then uploads all artifacts (since some tests may produce both references and output on the fly) and retains them for 5 days:
steps:
  # ...
  - name: Archive artifacts
    uses: actions/upload-artifact@v4
    if: always()
    with:
      name: artifacts
      path: |
        tests/**/diff/*.png
        tests/**/out/*.png
        tests/**/ref/*.png
      retention-days: 5
And that's it! You can add this file to your repo, push it to a branch, and open a PR; the PR will already run the workflow for you, and you can adjust and debug it as needed.
The full workflow file:
name: CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Probe runner package cache
        uses: awalsh128/cache-apt-pkgs-action@latest
        with:
          packages: cargo
          version: 1.0

      - name: Install typst-test from github
        uses: baptiste0928/cargo-install@v3.0.0
        with:
          crate: typst-test
          git: https://github.com/tingerrr/typst-test.git
          branch: backport

      - name: Setup typst
        uses: yusancky/setup-typst@v2
        with:
          version: 'v0.11.1'

      - name: Run test suite
        run: typst-test run

      - name: Archive artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: artifacts
          path: |
            tests/**/diff/*.png
            tests/**/out/*.png
            tests/**/ref/*.png
          retention-days: 5
Tests
There are three types of tests:
- Regression tests, which are similar to unit or integration tests in other languages and are mostly used to test the API of a package and catch visual regressions through comparison with reference documents. Regression tests are standalone files in a `tests` directory inside the project root and have additional features available inside Typst through a custom standard library.
- Template tests, which are similar to regression tests, but don't get any special features and are only available as persistent tests.
- Doc tests, example code in documentation comments which are compiled but not compared.
`typst-test` can currently only collect and operate on regression tests.
In the future, template tests and doc tests will be added, see #34 and #49 respectively.
Any test may use annotations for configuration.
Read the guide if you want to see some examples of how to write and run various tests.
Sections
- Regression tests explains the structure of regression tests.
- Regression test library lists the declarations of the custom standard library.
- Annotations lists the syntax for annotations and which are available.
Regression tests
Regression tests are tests found in their own directory inside `tests`, identified by a `test.typ` script.
Regression tests are the only tests which have access to an extended Typst standard library. This test library contains modules and functions to thoroughly test both the success and failure paths of your project.
Test kinds
There are three kinds of regression tests:
- `compile-only`: Tests which are compiled, but not compared to any reference; these don't produce any output.
- `persistent`: Tests which are compared to persistent reference documents. The references for these tests are stored in a `ref` directory alongside the test script as individual pages using PNGs. These tests can be updated with the `tt update` command.
- `ephemeral`: Tests which are compared to the output of another script. The references for these tests are compiled on the fly using a `ref.typ` script.
Each of these kinds is available as a test set function.
Identifiers
The directory path within the test root `tests` in your project is the identifier of a test and uses forward slashes as path separators on all platforms. The individual components of a test path must satisfy the following rules:
- must start with an ASCII alphabetic character (`a`-`z` or `A`-`Z`)
- may contain any additional sequence of ASCII alphabetic characters, numeric characters (`0`-`9`), underscores `_`, or hyphens `-`
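These rules can be captured in a short regular expression. The following Python sketch (not `typst-test`'s actual implementation) checks a full identifier, with components joined by forward slashes:

```python
import re

# One path component: an ASCII letter followed by letters, digits,
# underscores or hyphens; a full identifier joins components with '/'.
COMPONENT = r"[A-Za-z][A-Za-z0-9_-]*"
IDENTIFIER = re.compile(rf"{COMPONENT}(/{COMPONENT})*")

def is_valid_identifier(ident):
    return IDENTIFIER.fullmatch(ident) is not None
```

For example, `features/foo-1` is valid, while `1-bad` and `features/` are not.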
Test structure
Given a directory within `tests`, it is considered a valid test if it contains at least a `test.typ` file.
The structure of this directory looks as follows:
- `test.typ`: The main test script; this is always compiled as the entrypoint.
- `ref.typ` (optional): This makes a test ephemeral and is used to compile the reference document for each invocation.
- `ref` (optional, temporary): This makes a test either persistent or ephemeral and is used to store the reference documents. If the test is ephemeral, this directory is temporary.
- `out` (temporary): Contains the test output document.
- `diff` (temporary): Contains the difference of the output and reference documents.
The kind of a test is determined as follows:
- If it contains a `ref` directory but no `ref.typ` script, it is considered a persistent test.
- If it contains a `ref.typ` script, it is considered an ephemeral test.
- If it contains neither, it is considered compile-only.
Temporary directories are ignored by the VCS if one is detected; this is currently done by simply adding an ignore file within the directory which ignores all entries inside it.
A test cannot contain other tests; if a test script is found, `typst-test` will not search for any sub-tests.
Regression tests are compiled with the project root as their Typst root, such that they can easily access package internals with absolute paths.
Comparison
Ephemeral and persistent tests are currently compared using a simple deviation threshold which determines whether two images should be considered the same or different.
If the images have different dimensions, they are considered different.
Given two images of equal dimensions, each pair of pixels is compared; if any of the 3 channels (red, green, blue) differs by at least `min-delta`, the pixel counts as a deviation.
If there are more than `max-deviation` such deviating pixels, the images are considered different.
These values can be tweaked on the command line using the `--max-deviation` and `--min-delta` options respectively:
- `--max-deviation` takes a non-negative integer, i.e. any value from `0` onwards.
- `--min-delta` takes a byte, i.e. any value from `0` to `255`.

Both values default to `0`, such that any difference will trigger a failure by default.
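The comparison can be sketched in a few lines of Python. This is an illustrative reimplementation of the described algorithm, not `typst-test`'s actual code; images are modeled as rows of `(r, g, b)` tuples:

```python
def images_match(a, b, max_deviation=0, min_delta=0):
    """Compare two images given as rows of (r, g, b) pixel tuples.

    Images of different dimensions never match. A pixel counts as a
    deviation if any channel differs (by at least `min_delta` when the
    threshold is raised above the default of 0). The images match if
    at most `max_deviation` pixels deviate.
    """
    if len(a) != len(b) or any(len(ra) != len(rb) for ra, rb in zip(a, b)):
        return False
    deviations = 0
    for row_a, row_b in zip(a, b):
        for pix_a, pix_b in zip(row_a, row_b):
            deltas = [abs(ca - cb) for ca, cb in zip(pix_a, pix_b)]
            if any(d > 0 and d >= min_delta for d in deltas):
                deviations += 1
    return deviations <= max_deviation
```

With the defaults, a single pixel that is off by one channel value already fails the comparison; raising `--min-delta` or `--max-deviation` relaxes it.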
Annotations
Tests may contain annotations which are used for configuring the test runner for each test. These annotations are placed on a leading doc comment at the start of the test script, i.e. they must be before any content or imports. The doc comment may contain any content after the annotations, any empty lines are ignored.
For ephemeral regression tests only the main test file will be checked for annotations, the reference file will be ignored.
The syntax for annotations may change if Typst adds first-class annotation or documentation comment syntax.
/// [skip]
///
/// Synopsis:
/// ...
#import "/src/internal.typ": foo
...
The following annotations are available:
Annotation | Description |
---|---|
skip | Marks the test as part of the skip() test set. |
Test Library
The test library is an augmented standard library; it contains all definitions of the standard library plus some additional modules and functions which help test packages more thoroughly and debug regressions.
It defines the following modules:
- `test`: a module with various testing helpers such as `catch` and additional asserts.

The following items are re-exported in the global scope as well:
- `assert-panic`: originally `test.assert-panic`
- `catch`: originally `test.catch`
test
Contains the main testing utilities.
assert-panic
Ensures that a function panics.
Panics if the function does not panic; returns `none` otherwise.
Example
// panics with the given message
#assert-panic(() => {}, message: "Function did not panic!")
// catches the panic and keeps compilation running
#assert-panic(() => panic())
Parameters
assert-panic(
  function,
  message: str | auto,
)
- `function` (`function`, required, positional): The function to test.
- `message` (`str | auto`): The error message when the assertion fails.
catch
Returns the panic message generated by a function if there was any; returns `none` otherwise.
Example
#assert.eq(catch(() => {}), none)
#assert.eq(
  catch(panics),
  "panicked with: Invalid arg, expected `int`, got `str`",
)
Parameters
catch(
  function,
)
- `function` (`function`, required, positional): The function to test.
Test Set Language
The test set language is an expression-based language; top-level expressions are built up from smaller expressions consisting of binary and unary operators and built-in functions. These form sets which are used to specify which tests should be selected for various operations.
Read the guide, if you want to see examples of how to use test sets.
Sections
- Grammar outlines operators and syntax.
- Evaluation explains the evaluation of test set expressions.
- Built-in Test Sets lists built-in test sets and functions.
Grammar
The exact grammar can be read from the source code at [grammar.pest]. Because it is a functional language, it consists only of expressions, no statements.
It supports
- groups for precedence (`(...)`),
- binary and unary operators (`and`, `not`, `!`, etc.),
- functions (`func(a, b, c)`),
- patterns (`r:^foo`),
- and basic data types like strings (`"..."`, `'...'`) and numbers (`1`, `1_000`).
Operators
The following operators are available:
Type | Prec. | Name | Symbols | Explanation |
---|---|---|---|---|
infix | 1 | union | | , or | Includes all tests which are in either the left OR right test set expression. |
infix | 1 | difference | ~ , diff | Includes all tests which are in the left but NOT in the right test set expression. |
infix | 2 | intersection | & , and | Includes all tests which are in both the left AND right test set expression. |
infix | 3 | symmetric difference | ^ , xor | Includes all tests which are in either the left OR right test set expression, but NOT in both. |
prefix | 4 | complement | ! , not | Includes all tests which are NOT in the test set expression. |
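Since test sets are plain sets, these operators behave exactly like Python's set operators. A sketch with made-up test identifiers:

```python
# The five operators map directly onto Python's set operators.
features = {"features/foo", "features/bar", "features/baz"}
ephemeral = {"features/bar", "regressions/issue-42"}
everything = features | ephemeral  # the whole test suite in this sketch

union = features | ephemeral                  # `|` / `or`
difference = features - ephemeral             # `~` / `diff`
intersection = features & ephemeral           # `&` / `and`
symmetric_diff = features ^ ephemeral         # `^` / `xor`
complement = everything - ephemeral           # `!` / `not`, relative to all tests
```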
Be aware of precedence when combining different operators; higher precedence means operators bind more strongly. For example, `not a and b` is `(not a) and b`, not `not (a and b)`, because `not` has a higher precedence than `and`.
Binary operators are left-associative, e.g. `a ~ b ~ c` is `(a ~ b) ~ c`, not `a ~ (b ~ c)`.
When in doubt, use parentheses to force the precedence of expressions.
Evaluation
Test set expressions restrict the set of all tests which are contained in the expression and are compiled to an AST which is checked against all tests sequentially.
A test set such as `!skip()` would be checked against each test by reading its annotations and filtering out all tests which do have a skip annotation.
While the order of some operations like union and intersection doesn't matter semantically, the left operand is checked first for those where short-circuiting can be applied.
As a consequence, the expression `!skip() & regex:'complicated regex'` is more efficient than `regex:'complicated regex' & !skip()`, since it avoids the regex check for skipped tests entirely, though this should not matter in practice.
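The short-circuiting behaviour can be illustrated in Python; the helpers and identifiers below are hypothetical, not part of `typst-test`:

```python
import re

checks = []  # records which predicates actually ran

def skip(test):
    checks.append("skip")
    return test["skip"]

def expensive_regex(test):
    checks.append("regex")
    return re.search(r"foo-\d+", test["id"]) is not None

def intersect(left, right, test):
    # `&` evaluates the left operand first; the right one only runs
    # if the left matched (Python's `and` short-circuits the same way).
    return left(test) and right(test)

# `!skip() & regex:...` on a skipped test: the regex never runs.
skipped_test = {"id": "features/bar", "skip": True}
intersect(lambda t: not skip(t), expensive_regex, skipped_test)
assert checks == ["skip"]
```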
Built-in Test Sets
Types
There are a few available types:
Type | Explanation |
---|---|
function | Functions which evaluate to another type upon compilation. |
test set | Represents a set of tests. |
number | Positive whole numbers. |
string | Used for patterns containing special characters. |
pattern | Special syntax for test sets which operate on test identifiers. |
A test set expression must always evaluate to a test set, otherwise it is ill-formed; all operators operate on test sets only.
For example, `set(1) & set("aaa", 2)` may be valid, but `set() & 1` is not.
There is no arithmetic, and at the time of writing, literals like numbers and strings are only included for future test set functionality.
Functions
The following functions are available; they can be written in place of any expression.
Name | Explanation |
---|---|
none() | Includes no tests. |
all() | Includes all tests. |
skip() | Includes tests with a skip annotation. |
compile-only() | Includes tests without references. |
ephemeral() | Includes tests with ephemeral references. |
persistent() | Includes tests with persistent references. |
Patterns
Patterns are special types which are checked against identifiers and automatically turned into test sets.
A pattern starts with a pattern type, followed by a colon `:` and either a raw pattern or a string literal.
Raw patterns don't have any delimiters and parse anything that's not whitespace.
String patterns are pattern prefixes directly followed by string literals; they can be used to avoid parsing other tokens as part of a pattern, like when nesting pattern literals in expression groups or in function arguments.
The following pattern types exist:
Type | Example | Explanation |
---|---|---|
e /exact | exact:mod/name | Matches by comparing the identifier exactly to the given term. |
c /contains | c:plot | Matches by checking if the given term is contained in the identifier. |
r /regex | regex:mod-[234]/.* | Matches using the given regex. |
g /glob | g:foo/**/bar | Matches using the given glob pattern. |
p /path | p:foo | Matches using the given glob pattern. |
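The matching behaviour of the first four pattern types can be approximated with Python's standard library (an illustrative sketch, not the actual implementation):

```python
import re
from fnmatch import fnmatchcase

ident = "features/foo-12"  # a hypothetical test identifier

exact = ident == "features/foo-12"                   # e: / exact:
contains = "foo" in ident                            # c: / contains:
regex = re.search(r"foo-\d{2}", ident) is not None   # r: / regex:
glob = fnmatchcase(ident, "features/*")              # g: / glob:

assert exact and contains and regex and glob
```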