public final class ClassicAnalyzer extends StopwordAnalyzerBase

Filters ClassicTokenizer with ClassicFilter, LowerCaseFilter and StopFilter, using a list of
English stop words.

ClassicAnalyzer was named StandardAnalyzer in Lucene versions prior to 3.1. As of 3.1,
StandardAnalyzer implements Unicode text segmentation, as specified by UAX#29.

Nested classes inherited from class Analyzer: Analyzer.ReuseStrategy, Analyzer.TokenStreamComponents

Field Summary

| Modifier and Type | Field and Description |
|---|---|
| static int | DEFAULT_MAX_TOKEN_LENGTH: Default maximum allowed token length |
| static CharArraySet | STOP_WORDS_SET: An unmodifiable set containing some common English words that are usually not useful for searching. |
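The two constants above can be read directly off the class; a minimal sketch, assuming the Lucene 9.x layout where ClassicAnalyzer lives in org.apache.lucene.analysis.classic and CharArraySet in org.apache.lucene.analysis (older releases use org.apache.lucene.analysis.standard and org.apache.lucene.analysis.util respectively):

```java
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.classic.ClassicAnalyzer;

public class FieldInspection {
    public static void main(String[] args) {
        // The default cap on token length, applied until setMaxTokenLength overrides it.
        System.out.println(ClassicAnalyzer.DEFAULT_MAX_TOKEN_LENGTH);

        // STOP_WORDS_SET is unmodifiable; common English words such as "the"
        // are members, so StopFilter will drop them at analysis time.
        CharArraySet stopWords = ClassicAnalyzer.STOP_WORDS_SET;
        System.out.println(stopWords.contains("the"));
    }
}
```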
Fields inherited from class StopwordAnalyzerBase: stopwords

Fields inherited from class Analyzer: GLOBAL_REUSE_STRATEGY, PER_FIELD_REUSE_STRATEGY

Constructor Summary

| Constructor and Description |
|---|
| ClassicAnalyzer(): Builds an analyzer with the default stop words (STOP_WORDS_SET). |
| ClassicAnalyzer(CharArraySet stopWords): Builds an analyzer with the given stop words. |
| ClassicAnalyzer(java.io.Reader stopwords): Builds an analyzer with the stop words from the given reader. |
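The three constructors can be exercised as below. This is a sketch, not part of the API reference: it assumes the Lucene 9.x package org.apache.lucene.analysis.classic (older releases place the class in org.apache.lucene.analysis.standard), a lucene-analysis-common dependency, and a hypothetical helper analyze that drains a TokenStream into a list:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.classic.ClassicAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ClassicAnalyzerDemo {

    /** Hypothetical helper: collects the terms an analyzer emits for the given text. */
    static List<String> analyze(Analyzer analyzer, String text) throws IOException {
        List<String> terms = new ArrayList<>();
        try (TokenStream ts = analyzer.tokenStream("body", text)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                terms.add(term.toString());
            }
            ts.end();
        }
        return terms;
    }

    public static void main(String[] args) throws IOException {
        // Default stop words (STOP_WORDS_SET): common English words are dropped,
        // and LowerCaseFilter lowercases the rest.
        try (ClassicAnalyzer defaults = new ClassicAnalyzer()) {
            System.out.println(analyze(defaults, "The Quick Brown Fox"));
        }

        // Custom stop words via CharArraySet (second argument: ignore case).
        CharArraySet stops = new CharArraySet(List.of("fox"), true);
        try (ClassicAnalyzer custom = new ClassicAnalyzer(stops)) {
            System.out.println(analyze(custom, "The Quick Brown Fox"));
        }

        // Stop words read from a Reader, one word per line
        // (loaded as described for WordlistLoader.getWordSet(Reader)).
        try (ClassicAnalyzer fromReader =
                 new ClassicAnalyzer(new StringReader("quick\nbrown\n"))) {
            System.out.println(analyze(fromReader, "The Quick Brown Fox"));
        }
    }
}
```

Analyzer implements Closeable, so each instance is held in a try-with-resources block; the field name "body" is arbitrary, since ClassicAnalyzer builds the same component chain for every field.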
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| protected Analyzer.TokenStreamComponents | createComponents(java.lang.String fieldName): Creates a new Analyzer.TokenStreamComponents instance for this analyzer. |
| int | getMaxTokenLength() |
| protected TokenStream | normalize(java.lang.String fieldName, TokenStream in): Wrap the given TokenStream in order to apply normalization filters. |
| void | setMaxTokenLength(int length): Set maximum allowed token length. |
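The setMaxTokenLength/getMaxTokenLength pair can be exercised as follows. A hedged sketch, assuming the Lucene 9.x package org.apache.lucene.analysis.classic (older releases use org.apache.lucene.analysis.standard) and that tokens exceeding the limit are simply not emitted:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.classic.ClassicAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class MaxTokenLengthDemo {
    public static void main(String[] args) throws IOException {
        try (ClassicAnalyzer analyzer = new ClassicAnalyzer()) {
            // Starts at DEFAULT_MAX_TOKEN_LENGTH.
            System.out.println(analyzer.getMaxTokenLength());

            // Lower the limit: tokens longer than 5 characters are dropped.
            analyzer.setMaxTokenLength(5);

            List<String> terms = new ArrayList<>();
            try (TokenStream ts =
                     analyzer.tokenStream("body", "short extraordinarily words")) {
                CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
                ts.reset();
                while (ts.incrementToken()) {
                    terms.add(term.toString());
                }
                ts.end();
            }
            // "extraordinarily" exceeds the 5-character limit and is absent.
            System.out.println(terms);
        }
    }
}
```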
Methods inherited from class StopwordAnalyzerBase: getStopwordSet, loadStopwordSet, loadStopwordSet, loadStopwordSet

Methods inherited from class Analyzer: attributeFactory, close, getOffsetGap, getPositionIncrementGap, getReuseStrategy, getVersion, initReader, initReaderForNormalization, normalize, setVersion, tokenStream, tokenStream

Field Detail

public static final int DEFAULT_MAX_TOKEN_LENGTH

Default maximum allowed token length.

public static final CharArraySet STOP_WORDS_SET

An unmodifiable set containing some common English words that are usually not useful for searching.

Constructor Detail

public ClassicAnalyzer(CharArraySet stopWords)

Builds an analyzer with the given stop words.

Parameters:
stopWords - stop words

public ClassicAnalyzer()

Builds an analyzer with the default stop words (STOP_WORDS_SET).

public ClassicAnalyzer(java.io.Reader stopwords)
                throws java.io.IOException

Builds an analyzer with the stop words from the given reader.

Parameters:
stopwords - Reader to read stop words from
Throws:
java.io.IOException
See Also:
WordlistLoader.getWordSet(Reader)

Method Detail

public void setMaxTokenLength(int length)

Set maximum allowed token length.

public int getMaxTokenLength()

See Also:
setMaxTokenLength(int)

protected Analyzer.TokenStreamComponents createComponents(java.lang.String fieldName)

Description copied from class: Analyzer
Creates a new Analyzer.TokenStreamComponents instance for this analyzer.

Specified by:
createComponents in class Analyzer
Parameters:
fieldName - the name of the fields content passed to the Analyzer.TokenStreamComponents sink as a reader
Returns:
the Analyzer.TokenStreamComponents for this analyzer

protected TokenStream normalize(java.lang.String fieldName,
                                TokenStream in)

Description copied from class: Analyzer
Wrap the given TokenStream in order to apply normalization filters. The default implementation returns the TokenStream as-is. This is used by Analyzer.normalize(String, String).

Copyright © 2000–2025 The Apache Software Foundation. All rights reserved.