Package org.terrier.indexing.tokenisation

Provides classes related to the tokenisation of documents. Tokenisers are responsible for breaking chunks of text into words to be indexed. Different tokenisers may be used for different languages. In particular, two tokenisers are provided by Terrier:

  • EnglishTokeniser - splits text into words at any character not in [A-Za-z0-9].
  • UTFTokeniser - splits text into words at any character that does not satisfy one of the following:
    1. Character.isLetterOrDigit() returns true
    2. Character.getType() returns Character.NON_SPACING_MARK
    3. Character.getType() returns Character.COMBINING_SPACING_MARK
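The acceptance rules above can be checked directly with the standard java.lang.Character API. Below is a minimal sketch; the predicate name isUTFTokenChar and the class are illustrative, not part of Terrier:

```java
public class UTFTokenCharCheck {
    // Mirrors the UTFTokeniser acceptance rules described above: a character
    // is kept if it is a letter or digit, a non-spacing mark, or a combining
    // spacing mark; any other character splits the token.
    static boolean isUTFTokenChar(char c) {
        return Character.isLetterOrDigit(c)
            || Character.getType(c) == Character.NON_SPACING_MARK
            || Character.getType(c) == Character.COMBINING_SPACING_MARK;
    }

    public static void main(String[] args) {
        System.out.println(isUTFTokenChar('a'));      // true: ASCII letter
        System.out.println(isUTFTokenChar('\u00e9')); // true: accented letter é
        System.out.println(isUTFTokenChar('\u0301')); // true: combining acute accent (non-spacing mark)
        System.out.println(isUTFTokenChar('.'));      // false: punctuation splits tokens
    }
}
```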
In addition, both default tokenisers apply rules such as:
  • Punctuation is removed.
  • All terms are lowercased if the property lowercase is set (defaults to true).
  • Terms longer than max.term.length characters are dropped.
  • Any term containing more than 4 digits is discarded.
  • Any term containing more than 3 consecutive identical characters is discarded.
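The length, digit, and repeated-character rules above can be sketched as plain Java predicates. This is a standalone illustration under stated assumptions; the method name keepTerm and the hard-coded length limit stand in for Terrier's actual implementation and the max.term.length property:

```java
public class TermFilterSketch {
    static final int MAX_TERM_LENGTH = 20; // stands in for the max.term.length property

    // Returns true if the term survives the rules listed above.
    static boolean keepTerm(String term) {
        if (term.length() > MAX_TERM_LENGTH)
            return false; // longer than max.term.length: dropped
        int digits = 0;
        int run = 1; // length of the current run of identical characters
        for (int i = 0; i < term.length(); i++) {
            if (Character.isDigit(term.charAt(i)))
                digits++;
            if (i > 0 && term.charAt(i) == term.charAt(i - 1)) {
                run++;
                if (run > 3)
                    return false; // more than 3 consecutive identical characters
            } else {
                run = 1;
            }
        }
        return digits <= 4; // more than 4 digits: discarded
    }

    public static void main(String[] args) {
        System.out.println(keepTerm("hello"));      // true
        System.out.println(keepTerm("a1b2c3d4e5")); // false: 5 digits
        System.out.println(keepTerm("loool"));      // true: only 3 consecutive 'o's
        System.out.println(keepTerm("looool"));     // false: 4 consecutive 'o's
    }
}
```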

Example Code

import java.io.StringReader;

//get the default tokeniser, as set by the property tokeniser
Tokeniser tokeniser = Tokeniser.getTokeniser();
String sentence = "This is a sentence.";
TokenStream toks = tokeniser.tokenise(new StringReader(sentence));
while(toks.hasNext())
{
  String token = toks.next();
  //process each token, e.g. print it
  System.out.println(token);
}