Package org.antlr.v4.runtime.atn
Class ATNSimulator

java.lang.Object
    org.antlr.v4.runtime.atn.ATNSimulator

Direct Known Subclasses:
    LexerATNSimulator, ParserATNSimulator

public abstract class ATNSimulator
extends Object

Field Summary

    ATN atn

    static DFAState ERROR
        Must distinguish between a missing edge and an edge we know leads nowhere.

    protected PredictionContextCache sharedContextCache
        The context cache maps all PredictionContext objects that are equals() to a single cached copy.

Constructor Summary

    ATNSimulator(ATN atn, PredictionContextCache sharedContextCache)
Method Summary
All Methods Instance Methods Abstract Methods Concrete Methods Modifier and Type Method Description void
clearDFA()
Clear the DFA cache used by the current instance.PredictionContext
getCachedContext(PredictionContext context)
PredictionContextCache
getSharedContextCache()
abstract void
reset()
-
-
-
Field Detail

ERROR

public static final DFAState ERROR

Must distinguish between a missing edge and an edge we know leads nowhere.
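
The distinction matters because a null edge means the transition has never been computed, while an ERROR edge records a transition already known to fail. Below is a minimal sketch of that three-way check; computeTargetState() is a hypothetical placeholder (the real simulators build the target state by running the ATN), and the edges[t+1] offset reflects the runtime's convention of reserving index 0 for EOF (t == -1).

    import org.antlr.v4.runtime.atn.ATNSimulator;
    import org.antlr.v4.runtime.dfa.DFAState;

    class EdgeSketch {
        // Sketch only, not the runtime's actual code: follow the DFA edge
        // for input symbol t, distinguishing "never computed" from
        // "known dead end".
        static DFAState step(DFAState s, int t) {
            DFAState target = (s.edges != null && t + 1 >= 0 && t + 1 < s.edges.length)
                    ? s.edges[t + 1]
                    : null;
            if (target == null) {
                // Missing edge: this transition has never been tried; compute it.
                return computeTargetState(s, t);
            }
            if (target == ATNSimulator.ERROR) {
                // Edge we know leads nowhere: fail fast without recomputing.
                throw new IllegalStateException("no viable alternative");
            }
            return target; // previously computed, valid target state
        }

        // Hypothetical placeholder: the real simulators run the ATN here.
        static DFAState computeTargetState(DFAState s, int t) {
            throw new UnsupportedOperationException("sketch only");
        }
    }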

atn

public final ATN atn

sharedContextCache

protected final PredictionContextCache sharedContextCache

The context cache maps all PredictionContext objects that are equals() to a single cached copy. This cache is shared across all contexts in all ATNConfigs in all DFA states. We rebuild each ATNConfigSet to use only cached nodes/graphs in addDFAState(). We don't want to fill this cache during closure(), since many contexts pop up there but are never used again; filling it there also greatly slows down closure().

This cache makes a huge difference in memory and a little bit in speed. For the Java grammar on java.*, it dropped the memory requirements at the end from 25M to 16M. We don't store any of the full-context graphs in the DFA because they are limited to local context only, but apparently there's a lot of repetition there as well. We optimize the config contexts before storing the config set in the DFA states by literally rebuilding them with cached subgraphs only.

I tried a cache for use during closure operations that was whacked after each adaptivePredict(). It cost a little more time, I think, and didn't reduce the overall footprint, so it wasn't worth the complexity.
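
One shared PredictionContextCache per grammar is the usual arrangement: the generated recognizers install a single static cache into every interpreter they create. Below is a minimal sketch of wiring a fresh ParserATNSimulator to a shared cache by hand, assuming parser, atn, and decisionToDFA come from an ANTLR-generated parser.

    import org.antlr.v4.runtime.Parser;
    import org.antlr.v4.runtime.atn.ATN;
    import org.antlr.v4.runtime.atn.ParserATNSimulator;
    import org.antlr.v4.runtime.atn.PredictionContextCache;
    import org.antlr.v4.runtime.dfa.DFA;

    class CacheSketch {
        // One cache shared by every interpreter, so equal PredictionContext
        // graphs collapse to a single cached copy across all DFA states.
        static final PredictionContextCache SHARED = new PredictionContextCache();

        // "parser", "atn" and "decisionToDFA" are assumed to come from a
        // generated parser; the generated code normally does this itself.
        static void install(Parser parser, ATN atn, DFA[] decisionToDFA) {
            parser.setInterpreter(new ParserATNSimulator(parser, atn, decisionToDFA, SHARED));
        }
    }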

Constructor Detail

ATNSimulator

public ATNSimulator(ATN atn, PredictionContextCache sharedContextCache)

Method Detail

reset

public abstract void reset()

clearDFA

public void clearDFA()

Clear the DFA cache used by the current instance. Since the DFA cache may be shared by multiple ATN simulators, this method may affect the performance (but not the accuracy) of other parsers that are being used concurrently.

Throws:
    UnsupportedOperationException - if the current instance does not support clearing the DFA.

Since:
    4.3
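
A typical use is reclaiming memory when one recognizer instance is reused across many inputs and the warmed-up DFA has grown large. A minimal sketch, assuming any generated parser and lexer:

    import org.antlr.v4.runtime.Lexer;
    import org.antlr.v4.runtime.Parser;

    class ClearSketch {
        // Drop the cached DFA between inputs when memory matters more than
        // warm-up speed; subsequent parses rebuild the DFA from scratch.
        static void reclaim(Parser parser, Lexer lexer) {
            parser.getInterpreter().clearDFA();
            lexer.getInterpreter().clearDFA();
        }
    }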

getSharedContextCache

public PredictionContextCache getSharedContextCache()

getCachedContext

public PredictionContext getCachedContext(PredictionContext context)

Returns a version of context rebuilt from cached nodes/graphs in the shared context cache, as described under sharedContextCache above.
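
The practical effect is canonicalization: two independently built but equals() context graphs come back as the same object. A minimal sketch under that assumption, where sim is any ATNSimulator sharing a PredictionContextCache and a and b are equal PredictionContext graphs:

    import org.antlr.v4.runtime.atn.ATNSimulator;
    import org.antlr.v4.runtime.atn.PredictionContext;

    class CanonicalizeSketch {
        // "a" and "b" are assumed to be equal PredictionContext graphs
        // built independently of each other.
        static void demo(ATNSimulator sim, PredictionContext a, PredictionContext b) {
            PredictionContext ca = sim.getCachedContext(a);
            PredictionContext cb = sim.getCachedContext(b);
            assert ca == cb : "equal contexts collapse to one cached copy";
        }
    }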