Creating parsers

Developing Tree-sitter parsers can have a difficult learning curve, but once you get the hang of it, it can be fun and even zen-like. This document should help you to build an effective mental model for parser development.

Understanding the problem

Writing a grammar requires creativity. There are an infinite number of CFGs (context-free grammars) that can be used to describe any given language. In order to produce a good Tree-sitter parser, you need to create a grammar with two important properties:

  1. An intuitive structure - Tree-sitter’s output is a concrete syntax tree; each node in the tree corresponds directly to a terminal or non-terminal symbol in the grammar. So in order to produce an easy-to-analyze tree, there should be a direct correspondence between the symbols in your grammar and the recognizable constructs in the language. This might seem obvious, but it is very different from the way that context-free grammars are often written in contexts like language specifications or Yacc/Bison parsers.

  2. A close adherence to LR(1) - Tree-sitter is based on the GLR parsing algorithm. This means that while it can handle any context-free grammar, it works most efficiently with a class of context-free grammars called LR(1) Grammars. In this respect, Tree-sitter’s grammars are similar to (but less restrictive than) Yacc and Bison grammars, but different from ANTLR grammars, Parsing Expression Grammars, or the ambiguous grammars commonly used in language specifications.

It’s unlikely that you’ll be able to satisfy these two properties just by translating an existing context-free grammar directly into Tree-sitter’s grammar format. There are a few kinds of adjustments that are often required. The following sections will explain these adjustments in more depth.

Installing the tools

The best way to create a Tree-sitter parser is with the Tree-sitter CLI, which is distributed as a Node.js module. To install it, first install Node.js and its package manager npm on your system. Then use npm to create a new node module and add tree-sitter-cli and nan as dependencies:

mkdir tree-sitter-${YOUR_LANGUAGE_NAME}
cd tree-sitter-${YOUR_LANGUAGE_NAME}

# This will prompt you for input
npm init

npm install --save nan
npm install --save-dev tree-sitter-cli

This will install the CLI and its dependencies into the node_modules folder in your directory. An executable program called tree-sitter will be created at the path ./node_modules/.bin/tree-sitter. You may want to follow the Node.js convention of adding ./node_modules/.bin to your PATH so that you can easily run this program when working in this directory.
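On Unix-like systems, you can add the directory for the current shell session like this:

```shell
# Make ./node_modules/.bin/tree-sitter runnable as plain `tree-sitter`
# for the current shell session (add this to your shell profile to
# make it permanent for this directory's workflow)
export PATH="$PWD/node_modules/.bin:$PATH"
```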

Once you have the CLI installed, create a file called grammar.js with the following skeleton:

module.exports = grammar({
  name: 'the_language_name',

  rules: {
    // The production rules of the context-free grammar
    source_file: $ => 'hello'
  }
});

Then run the following commands:

tree-sitter generate
npm install

This will generate the C code required to parse this trivial language, as well as all of the files needed to compile and load this native parser as a Node.js module. You can test this parser by creating a source file with the contents hello and parsing it:

tree-sitter parse ./the-file

This should print:

(source_file [0, 0] - [0, 5])

When you make changes to the grammar, you can update the parser simply by re-running tree-sitter generate. The best way to recompile the C code is to run the command node-gyp build. You may have to install the node-gyp tool separately by running npm install -g node-gyp.

Starting to define the grammar

It’s usually a good idea to find a formal specification for the language you’re trying to parse. This specification will most likely contain a context-free grammar. As you read through the rules of this CFG, you will probably discover a complex and cyclic graph of relationships. It might be unclear how you should navigate this graph as you define your grammar.

Although languages have very different constructs, their constructs can often be categorized into similar groups like Declarations, Definitions, Statements, Expressions, Types, and Patterns. In writing your grammar, a good first step is to create just enough structure to include all of these basic groups of symbols. For an imaginary C-like language, this might look something like this:

{
  // ...

  rules: {
    source_file: $ => repeat($._definition),

    _definition: $ => choice(
      $.function_definition
      // TODO: other kinds of definitions
    ),

    function_definition: $ => seq(
      'func',
      $.identifier,
      $.parameter_list,
      $._type,
      $.block
    ),

    parameter_list: $ => seq(
      '(',
       // TODO: parameters
      ')'
    ),

    _type: $ => choice(
      'bool'
      // TODO: other kinds of types
    ),

    block: $ => seq(
      '{',
      repeat($._statement),
      '}'
    ),

    _statement: $ => choice(
      $.return_statement
      // TODO: other kinds of statements
    ),

    return_statement: $ => seq(
      'return',
      $._expression,
      ';'
    ),

    _expression: $ => choice(
      $.identifier,
      $.number
      // TODO: other kinds of expressions
    ),

    identifier: $ => /[a-z]+/,

    number: $ => /\d+/
  }
}

Some of the details of this grammar will be explained in more depth later on, but if you focus on the TODO comments, you can see that the overall strategy is breadth-first. Notably, this initial skeleton does not need to directly match an exact subset of the context-free grammar in the language specification. It just needs to touch on the major groupings of rules in as simple and obvious a way as possible.

With this structure in place, you can now freely decide what part of the grammar to flesh out next. For example, you might decide to start with types. One-by-one, you could define the rules for writing basic types and composing them into more complex types:

{
  // ...

  _type: $ => choice(
    $.primitive_type,
    $.array_type,
    $.pointer_type
  ),

  primitive_type: $ => choice(
    'bool',
    'int'
  ),

  array_type: $ => seq(
    '[',
    ']',
    $._type
  ),

  pointer_type: $ => seq(
    '*',
    $._type
  )
}
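With these rules in place, composite types nest in the expected way. For example, given the function syntax from the skeleton above, a file containing:

```
func f() []*bool {}
```

should produce a tree of this shape (position ranges omitted), in which []*bool appears as an array of pointers to bool:

```
(source_file
  (function_definition
    (identifier)
    (parameter_list)
    (array_type
      (pointer_type
        (primitive_type)))
    (block)))
```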

After developing the type sublanguage a bit further, you might decide to switch to working on statements or expressions instead. It’s often useful to check your progress by trying to parse some real code using tree-sitter parse.

Writing unit tests

For each rule that you add to the grammar, you should first create a test that describes how the syntax trees should look when parsing that rule. These tests are written using specially-formatted text files in a corpus directory in your parser’s root folder. Here is an example of how these tests should look:

==================
Return statements
==================

func x() int {
  return 1;
}

---

(source_file
  (function_definition
    (identifier)
    (parameter_list)
    (primitive_type)
    (block
      (return_statement (number)))))

The name of the test is written between two lines containing only = characters. Then the source code is written, followed by a line containing three or more - characters. Then, the expected syntax tree is written as an S-expression. The exact placement of whitespace in the S-expression doesn’t matter, but ideally the syntax tree should be legible. Note that the S-expression does not show anonymous nodes like func, ( and ;, which correspond to strings in the grammar. It only shows the named nodes, as described in the previous page.

These tests are important. They serve as the parser’s API documentation, and they can be run every time you change the grammar to verify that everything still parses correctly. You can run these tests using this command:

tree-sitter test

To run a particular test, you can use the -f flag:

tree-sitter test -f 'Return statements'

The recommendation is to be comprehensive in adding tests. If it’s a visible node, add it to a test file in your corpus directory. It’s typically a good idea to test all of the permutations of each language construct. This increases test coverage, and it also acquaints readers with how to examine expected outputs and understand the “edges” of the language.

Using the grammar DSL

The grammar DSL provides a number of built-in functions that you can use to define Tree-sitter grammars. Use-cases for some of these functions will be explained in more detail in later sections.
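For example, a single rule can combine several of the core combinators, such as seq, repeat, choice, and optional. The import_statement rule below is illustrative only and not part of any particular grammar:

```js
{
  rules: {
    // seq, repeat, and optional compose smaller rules into larger
    // ones: here, an import statement is the keyword 'import',
    // one or more comma-separated names, and an optional semicolon.
    import_statement: $ => seq(
      'import',
      $.identifier,
      repeat(seq(',', $.identifier)),
      optional(';')
    ),

    identifier: $ => /[a-z]+/
  }
}
```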

In addition to the name and rules fields, grammars have a few other optional public fields that influence the behavior of the parser.

Adjusting existing grammars

Imagine that you were just starting work on the Tree-sitter JavaScript parser. You might try to directly mirror the structure of the ECMAScript Language Spec. To illustrate the problem with this approach, consider the following line of code:

return x + y;

According to the specification, this line is a ReturnStatement, the fragment x + y is an AdditiveExpression, and x and y are both IdentifierReferences. The relationship between these constructs is captured by a complex series of production rules:

ReturnStatement          ->  'return' Expression
Expression               ->  AssignmentExpression
AssignmentExpression     ->  ConditionalExpression
ConditionalExpression    ->  LogicalORExpression
LogicalORExpression      ->  LogicalANDExpression
LogicalANDExpression     ->  BitwiseORExpression
BitwiseORExpression      ->  BitwiseXORExpression
BitwiseXORExpression     ->  BitwiseANDExpression
BitwiseANDExpression     ->  EqualityExpression
EqualityExpression       ->  RelationalExpression
RelationalExpression     ->  ShiftExpression
ShiftExpression          ->  AdditiveExpression
AdditiveExpression       ->  MultiplicativeExpression
MultiplicativeExpression ->  ExponentiationExpression
ExponentiationExpression ->  UnaryExpression
UnaryExpression          ->  UpdateExpression
UpdateExpression         ->  LeftHandSideExpression
LeftHandSideExpression   ->  NewExpression
NewExpression            ->  MemberExpression
MemberExpression         ->  PrimaryExpression
PrimaryExpression        ->  IdentifierReference

The language spec encodes the 20 precedence levels of JavaScript expressions using 20 different non-terminal symbols. If we were to create a concrete syntax tree representing this statement according to the language spec, it would have twenty levels of nesting and it would contain nodes with names like BitwiseXORExpression, which are unrelated to the actual code.

Using precedence

To produce a readable syntax tree, we’d like to model JavaScript expressions using a much flatter structure like this:

{
  // ...

  _expression: $ => choice(
    $.identifier,
    $.unary_expression,
    $.binary_expression,
    // ...
  ),

  unary_expression: $ => choice(
    seq('-', $._expression),
    seq('!', $._expression),
    // ...
  ),

  binary_expression: $ => choice(
    seq($._expression, '*', $._expression),
    seq($._expression, '+', $._expression),
    // ...
  ),
}

Of course, this flat structure is highly ambiguous. If we try to generate a parser, Tree-sitter gives us an error message:

Error: Unresolved conflict for symbol sequence:

  '-'  _expression  •  '*'  …

Possible interpretations:

  1:  '-'  (binary_expression  _expression  •  '*'  _expression)
  2:  (unary_expression  '-'  _expression)  •  '*'  …

Possible resolutions:

  1:  Specify a higher precedence in `binary_expression` than in the other rules.
  2:  Specify a higher precedence in `unary_expression` than in the other rules.
  3:  Specify a left or right associativity in `unary_expression`
  4:  Add a conflict for these rules: `binary_expression` `unary_expression`

For an expression like -a * b, it’s not clear whether the - operator applies to the a * b or just to the a. This is where the prec function comes into play. By wrapping a rule with prec, we can indicate that certain sequences of symbols should bind to each other more tightly than others. For example, the '-', $._expression sequence in unary_expression should bind more tightly than the $._expression, '+', $._expression sequence in binary_expression:

{
  // ...

  unary_expression: $ => prec(2, choice(
    seq('-', $._expression),
    seq('!', $._expression),
    // ...
  ))
}

Using associativity

Applying a higher precedence in unary_expression fixes that conflict, but there is still another conflict:

Error: Unresolved conflict for symbol sequence:

  _expression  '*'  _expression  •  '*'  …

Possible interpretations:

  1:  _expression  '*'  (binary_expression  _expression  •  '*'  _expression)
  2:  (binary_expression  _expression  '*'  _expression)  •  '*'  …

Possible resolutions:

  1:  Specify a left or right associativity in `binary_expression`
  2:  Add a conflict for these rules: `binary_expression`

For an expression like a * b * c, it’s not clear whether we mean a * (b * c) or (a * b) * c. This is where prec.left and prec.right come into play. We want to select the second interpretation, so we use prec.left.

{
  // ...

  binary_expression: $ => choice(
    prec.left(2, seq($._expression, '*', $._expression)),
    prec.left(1, seq($._expression, '+', $._expression)),
    // ...
  ),
}
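The combined effect of these precedence and associativity annotations can be sketched with a small precedence-climbing parser in plain JavaScript. This is not how Tree-sitter works internally (Tree-sitter generates a table-driven GLR parser), just an executable model of how such annotations decide grouping; the numeric levels here are illustrative:

```javascript
// A toy precedence-climbing parser, independent of Tree-sitter.
// Unary '-' binds tightest, then '*', then '+', and the binary
// operators associate to the left.
const BINARY = { '*': 2, '+': 1 };
const UNARY = { '-': 3 };

function parse(tokens) {
  let pos = 0;
  const peek = () => tokens[pos];

  function parseExpression(minPrec) {
    let left;
    if (peek() in UNARY) {
      const op = tokens[pos++];
      // The operand of a unary operator is parsed at a higher minimum
      // precedence, so '-a * b' groups as '(-a) * b'.
      left = [op, parseExpression(UNARY[op] + 1)];
    } else {
      left = tokens[pos++]; // an identifier
    }
    while (peek() in BINARY && BINARY[peek()] >= minPrec) {
      const op = tokens[pos++];
      // Left associativity: the right operand is parsed at a strictly
      // higher minimum precedence, so 'a * b * c' groups as '(a * b) * c'.
      const right = parseExpression(BINARY[op] + 1);
      left = [op, left, right];
    }
    return left;
  }

  return parseExpression(0);
}

console.log(JSON.stringify(parse(['a', '*', 'b', '*', 'c'])));
// → ["*",["*","a","b"],"c"]
console.log(JSON.stringify(parse(['-', 'a', '*', 'b'])));
// → ["*",["-","a"],"b"]
```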

Hiding rules

You may have noticed in the above examples that some of the grammar rule names, like _expression and _type, began with an underscore. Starting a rule’s name with an underscore causes the rule to be hidden in the syntax tree. This is useful for rules like _expression in the grammars above, which always just wrap a single child node. If these nodes were not hidden, they would add substantial depth and noise to the syntax tree without making it any easier to understand.

LR conflicts

Lexical Analysis

Tree-sitter’s parsing process is divided into two phases: parsing (which is described above) and lexing - the process of grouping individual characters into the language’s fundamental tokens. There are a few important things to know about how Tree-sitter’s lexing works.

Conflicting Tokens

Grammars often contain multiple tokens that can match the same characters. For example, a grammar might contain the tokens "if" and /[a-z]+/, which both match the string if. Tree-sitter differentiates between these conflicting tokens in a few ways:

  1. Context-aware Lexing - Tree-sitter performs lexing on-demand, during the parsing process. At any given position in a source document, the lexer only tries to recognize tokens that are valid at that position in the document.

  2. Lexical Precedence - When the precedence functions described above are used within the token function, the given precedence values serve as instructions to the lexer. If there are two valid tokens that match the characters at a given position in the document, Tree-sitter will select the one with the higher precedence.

  3. Match Length - If multiple valid tokens with the same precedence match the characters at a given position in a document, Tree-sitter will select the token that matches the longest sequence of characters.

  4. Match Specificity - If there are two valid tokens with the same precedence and which both match the same number of characters, Tree-sitter will prefer a token that is specified in the grammar as a String over a token specified as a RegExp.
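Rules 2 through 4 can be sketched as a selection function over candidate token matches. This is a simplified model for illustration, not Tree-sitter's actual implementation:

```javascript
// Choose between conflicting candidate tokens at a single position.
// Each candidate records the matched text, its lexical precedence, and
// whether it came from a string literal in the grammar.
function selectToken(candidates) {
  return candidates.reduce((best, token) => {
    // 2. Lexical precedence: higher precedence wins.
    if (token.precedence !== best.precedence)
      return token.precedence > best.precedence ? token : best;
    // 3. Match length: the longer match wins.
    if (token.text.length !== best.text.length)
      return token.text.length > best.text.length ? token : best;
    // 4. Match specificity: a string token beats a regex token.
    return token.isString && !best.isString ? token : best;
  });
}

// Where both "if" and /[a-z]+/ are valid and the next word is 'if',
// the string token wins by match specificity:
console.log(selectToken([
  { name: 'identifier', text: 'if', precedence: 0, isString: false },
  { name: 'if', text: 'if', precedence: 0, isString: true },
]).name); // → if

// But if the next word is 'ifx', the regex matches more characters,
// so the identifier wins by match length:
console.log(selectToken([
  { name: 'identifier', text: 'ifx', precedence: 0, isString: false },
  { name: 'if', text: 'if', precedence: 0, isString: true },
]).name); // → identifier
```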

Keywords

Many languages have a set of keyword tokens (e.g. if, for, return), as well as a more general token (e.g. identifier) that matches any word, including many of the keyword strings. For example, JavaScript has a keyword instanceof, which is used as a binary operator, like this:

if (a instanceof Something) b();

The following, however, is not valid JavaScript:

if (a instanceofSomething) b();

A keyword like instanceof cannot be followed immediately by another letter, because then it would be tokenized as an identifier, even though an identifier is not valid at that position. Because Tree-sitter uses context-aware lexing, as described above, it would not normally impose this restriction. By default, Tree-sitter would recognize instanceofSomething as two separate tokens: the instanceof keyword followed by an identifier.

Keyword Extraction

Fortunately, Tree-sitter has a feature that allows you to fix this, so that you can match the behavior of other standard parsers: the word token. If you specify a word token in your grammar, Tree-sitter will find the set of keyword tokens that match strings also matched by the word token. Then, during lexing, instead of matching each of these keywords individually, Tree-sitter will match the keywords via a two-step process where it first matches the word token.

For example, suppose we added identifier as the word token in our JavaScript grammar:

grammar({
  word: $ => $.identifier,

  rules: {
    _expression: $ => choice(
      $.identifier,
      $.unary_expression,
      $.binary_expression
      // ...
    ),

    binary_expression: $ => choice(
      prec.left(1, seq($._expression, 'instanceof', $._expression)),
      // ...
    ),

    unary_expression: $ => choice(
      prec.left(2, seq('typeof', $._expression))
      // ...
    ),

    identifier: $ => /[a-z_]+/
  }
})

Tree-sitter would identify typeof and instanceof as keywords. Then, when parsing the invalid code above, rather than scanning for the instanceof token individually, it would scan for an identifier first, and find instanceofSomething. It would then correctly recognize the code as invalid.
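The two-step process can be sketched in plain JavaScript (an illustrative model, not Tree-sitter's implementation; the example word is all-lowercase to stay within the /[a-z_]+/ pattern above):

```javascript
// Two-step keyword matching: first match the word token (/[a-z_]+/,
// as in the grammar above), and only then check whether the matched
// word is a keyword.
const KEYWORDS = new Set(['typeof', 'instanceof']);

function lexWord(input, position) {
  const match = /^[a-z_]+/.exec(input.slice(position));
  if (!match) return null;
  const text = match[0];
  // Because the whole word is matched before the keyword check, a
  // longer word like 'instanceofsomething' can never be split into
  // 'instanceof' followed by another token.
  return { type: KEYWORDS.has(text) ? text : 'identifier', text };
}

console.log(lexWord('instanceof x', 0).type); // → instanceof
console.log(lexWord('instanceofsomething', 0).type); // → identifier
```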

Aside from improving error detection, keyword extraction also has performance benefits. It allows Tree-sitter to generate a smaller, simpler lexing function, which means that the parser will compile much more quickly.