Package org.h2.bnf

Class Bnf

java.lang.Object
org.h2.bnf.Bnf

public class Bnf extends Object
This class can read a file that is similar to BNF (Backus-Naur form). It is designed specifically to support SQL grammar.
  • Field Details

    • ruleMap

      private final HashMap<String,RuleHead> ruleMap
      The rule map. The key is lowercase, and all spaces are replaced with underscores.
    • syntax

      private String syntax
    • currentToken

      private String currentToken
    • tokens

      private String[] tokens
    • firstChar

      private char firstChar
    • index

      private int index
    • lastRepeat

      private Rule lastRepeat
    • statements

      private ArrayList<RuleHead> statements
    • currentTopic

      private String currentTopic
  • Constructor Details

    • Bnf

      public Bnf()
  • Method Details

    • getInstance

      public static Bnf getInstance(Reader csv) throws SQLException, IOException
      Create an instance using the grammar specified in the CSV file.
      Parameters:
      csv - the reader for the grammar CSV file; if not specified, the built-in help.csv is used
      Returns:
      a new instance
      Throws:
      SQLException - on failure
      IOException - on failure
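      A minimal usage sketch (illustrative only): passing null loads the built-in help.csv, and the checked exceptions must be handled by the caller.

        // Load the built-in help.csv grammar; a Reader over a custom
        // CSV grammar file could be passed instead of null.
        Bnf bnf = Bnf.getInstance(null);
        // Cross-link the statements before using the instance for
        // autocomplete (see linkStatements below).
        bnf.linkStatements();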
    • addAlias

      public void addAlias(String name, String replacement)
      Add an alias for a rule.
      Parameters:
      name - for example "procedure"
      replacement - for example "@func@"
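      A one-line sketch using the example values from the parameter descriptions (assumes a Bnf instance named bnf):

        // Treat the "procedure" rule as an alias for the "@func@" rule.
        bnf.addAlias("procedure", "@func@");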
    • addFixedRule

      private void addFixedRule(String name, int fixedType)
    • addRule

      private RuleHead addRule(String topic, String section, Rule rule)
    • parse

      private void parse(Reader reader) throws SQLException, IOException
      Throws:
      SQLException
      IOException
    • visit

      public void visit(BnfVisitor visitor, String s)
      Parse the syntax and let the rule call the visitor.
      Parameters:
      visitor - the visitor
      s - the syntax to parse
    • startWithSpace

      public static boolean startWithSpace(String s)
      Check whether the statement starts with whitespace.
      Parameters:
      s - the statement
      Returns:
      true if the statement is not empty and starts with whitespace
    • getRuleMapKey

      public static String getRuleMapKey(String token)
      Convert a camelCase rule name such as ruleLink to its rule map key, rule_link.
      Parameters:
      token - the token
      Returns:
      the rule map key
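      A small sketch of the conversion (the expected result follows the description above):

        // camelCase rule names map to lowercase keys with underscores.
        String key = Bnf.getRuleMapKey("ruleLink"); // "rule_link"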
    • getRuleHead

      public RuleHead getRuleHead(String title)
      Get the rule head for the given title.
      Parameters:
      title - the title
      Returns:
      the rule head, or null if no rule matches the title
    • parseRule

      private Rule parseRule()
    • parseOr

      private Rule parseOr()
    • parseList

      private Rule parseList()
    • parseExtension

      private RuleExtension parseExtension(boolean compatibility)
    • parseToken

      private Rule parseToken()
    • read

      private void read()
    • toString

      public String toString()
      Overrides:
      toString in class Object
    • tokenize

      private String[] tokenize()
    • getNextTokenList

      public HashMap<String,String> getNextTokenList(String query)
      Get the list of tokens that can follow. This is the main autocomplete method. The returned map for the query 'S' may look like this:
       key: 1#SELECT, value: ELECT
       key: 1#SET, value: ET
       
      Parameters:
      query - the start of the statement
      Returns:
      the map of possible token types / tokens
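      A hedged sketch of the autocomplete flow, assuming a Bnf instance named bnf that was created via getInstance and cross-linked via linkStatements, and that java.util.HashMap and java.util.Map are imported:

        // Ask which tokens may follow the partial statement "S".
        HashMap<String, String> next = bnf.getNextTokenList("S");
        for (Map.Entry<String, String> e : next.entrySet()) {
            // Expected entries include "1#SELECT" -> "ELECT" and "1#SET" -> "ET".
            System.out.println(e.getKey() + " -> " + e.getValue());
        }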
    • linkStatements

      public void linkStatements()
      Cross-link all statements with each other. This method is called after updating the topics.
    • updateTopic

      public void updateTopic(String topic, DbContextRule rule)
      Update a topic with a context specific rule. This is used for autocomplete support.
      Parameters:
      topic - the topic
      rule - the database context rule
    • getStatements

      public ArrayList<RuleHead> getStatements()
      Get the list of possible statements.
      Returns:
      the list of statements
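      An illustrative sketch of listing the statements; the RuleHead accessor getTopic() is an assumption, not documented on this page:

        // Print the topic of each statement rule.
        for (RuleHead head : bnf.getStatements()) {
            System.out.println(head.getTopic()); // getTopic() assumed
        }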
    • getTokenizer

      public static StringTokenizer getTokenizer(String s)
      Get the tokenizer for the given syntax.
      Parameters:
      s - the syntax
      Returns:
      the tokenizer
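      A short sketch of iterating the tokens for a syntax string (the syntax shown is an arbitrary example; java.util.StringTokenizer is assumed to be imported):

        // Split a syntax fragment the same way Bnf tokenizes its rules.
        StringTokenizer tokenizer = Bnf.getTokenizer("SELECT [ DISTINCT ] expression");
        while (tokenizer.hasMoreTokens()) {
            System.out.println(tokenizer.nextToken());
        }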