mindformers.core
The core module, including Runtime Context, Loss, Optimizer, Learning Rate, Callback, and Evaluation Metric APIs.
Runtime Context
| API | Description |
|-----|-------------|
| mindformers.core.build_context | Build the context from a config. |
| mindformers.core.get_context | Get a context attribute value according to the input key. |
| mindformers.core.init_context | Initialize the context. |
| mindformers.core.set_context | Set the context for the running environment. |
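Below is a minimal usage sketch for these helpers. The YAML path is a placeholder, and the `run_mode` key passed to `set_context`/`get_context` is an assumption based on common MindSpore Transformers configs:

```python
from mindformers import MindFormerConfig
from mindformers.core import build_context, get_context, set_context

# Load a task config and build the global runtime context from it.
# The path below is a placeholder; point it at a real mindformers YAML file.
config = MindFormerConfig("configs/gpt2/run_gpt2.yaml")
build_context(config)

# Set a single context attribute by key, then read it back.
# run_mode is an assumed key; substitute any attribute your config supports.
set_context(run_mode="train")
print(get_context("run_mode"))
```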
Loss
| API | Description |
|-----|-------------|
| mindformers.core.CrossEntropyLoss | Calculate the cross entropy loss. |
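A short sketch of computing the loss, assuming the `(logits, labels, input_mask)` call signature used in the library's examples:

```python
import numpy as np
from mindspore import Tensor
from mindspore import dtype as mstype
from mindformers.core import CrossEntropyLoss

loss_fn = CrossEntropyLoss()

# One token position over a 10-class vocabulary; the mask marks valid tokens.
logits = Tensor(np.random.randn(1, 10), mstype.float32)
labels = Tensor(np.array([3]), mstype.int32)
input_mask = Tensor(np.ones(1), mstype.float32)

print(loss_fn(logits, labels, input_mask))
```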
Optimizer
| API | Description |
|-----|-------------|
| mindformers.core.AdamW | Implementation of the AdamW optimizer. |
| mindformers.core.Came | Updates gradients by the Confidence-guided Adaptive Memory Efficient Optimization (CAME) algorithm. |
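A sketch of constructing the optimizer over a network's parameters; the hyperparameter names follow standard MindSpore optimizer conventions and should be checked against the released signature:

```python
from mindspore import nn
from mindformers.core import AdamW

# Any MindSpore cell works here; a single dense layer keeps the sketch small.
net = nn.Dense(16, 4)

# Typical AdamW hyperparameters; tune these for a real model.
optimizer = AdamW(params=net.trainable_params(),
                  learning_rate=1e-3,
                  weight_decay=0.01)
```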
Learning Rate
| API | Description |
|-----|-------------|
| mindformers.core.ConstantWarmUpLR | Constant warm-up learning rate. |
| mindformers.core.CosineAnnealingLR | Cosine annealing learning rate, proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. |
| mindformers.core.CosineAnnealingWarmRestarts | Set the learning rate of each parameter group using a cosine annealing schedule that periodically restarts from the initial learning rate. |
| mindformers.core.CosineWithRestartsAndWarmUpLR | Cosine with restarts and warm-up learning rate. |
| mindformers.core.CosineWithWarmUpLR | Cosine with warm-up learning rate. |
| mindformers.core.LearningRateWiseLayer | Layer-wise learning rate. |
| mindformers.core.LinearWithWarmUpLR | Linear with warm-up learning rate. |
| mindformers.core.PolynomialWithWarmUpLR | Polynomial with warm-up learning rate. |
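These schedules are callable on the global training step. A sketch using CosineWithWarmUpLR; the keyword names (`learning_rate`, `warmup_steps`, `total_steps`) follow the library's examples and are assumptions:

```python
from mindspore import Tensor
from mindspore import dtype as mstype
from mindformers.core import CosineWithWarmUpLR

# Warm up linearly for 100 steps, then decay on a cosine curve to step 1000.
lr_schedule = CosineWithWarmUpLR(learning_rate=1e-3,
                                 warmup_steps=100,
                                 total_steps=1000)

# Query the learning rate at a few points along the schedule.
for step in (0, 50, 100, 500, 1000):
    print(step, lr_schedule(Tensor(step, mstype.int32)))
```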
Callback
| API | Description |
|-----|-------------|
| mindformers.core.CheckpointMonitor | Checkpoint monitor for saving checkpoints, including the loss scale, during training. |
| mindformers.core.EvalCallback | Evaluation callback used during training. |
| mindformers.core.MFLossMonitor | Monitor loss and other parameters during training. |
| mindformers.core.ProfileMonitor | Profiling analysis during training. |
| mindformers.core.SummaryMonitor | Collects common training information, such as loss, learning rate, and the computational graph. |
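A sketch of plugging the callbacks into a training run; the constructor arguments shown (`per_print_times`, `prefix`, `save_checkpoint_steps`) are assumptions based on common usage and MindSpore's ModelCheckpoint conventions:

```python
import numpy as np
import mindspore.dataset as ds
from mindspore import Model, nn
from mindformers.core import CheckpointMonitor, MFLossMonitor

# Tiny synthetic regression task so the sketch runs end to end.
def generator():
    for _ in range(32):
        yield (np.random.randn(16).astype(np.float32),
               np.random.randn(4).astype(np.float32))

dataset = ds.GeneratorDataset(generator, column_names=["data", "label"]).batch(8)

net = nn.Dense(16, 4)
model = Model(net, loss_fn=nn.MSELoss(), optimizer=nn.Adam(net.trainable_params()))

callbacks = [
    MFLossMonitor(per_print_times=1),            # log the loss every step
    CheckpointMonitor(prefix="demo",             # checkpoint file prefix
                      save_checkpoint_steps=4),  # save every 4 steps
]
model.train(1, dataset, callbacks=callbacks)
```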
Evaluation Metric
| API | Description |
|-----|-------------|
| mindformers.core.EntityScore | Evaluates the precision, recall, and F1 score of predicted entities against the ground truth. |
| mindformers.core.EmF1Metric | Calculates the EM and F1 scores for each example to evaluate the model's performance on prediction tasks. |
| mindformers.core.PerplexityMetric | Perplexity, the exponentiated average negative log-probability assigned by the model to each word in the test set. |
| mindformers.core.PromptAccMetric | Computes the prompt accuracy of each entity. |
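The metrics follow MindSpore's clear/update/eval cycle. A sketch with EmF1Metric; the update format (single-element lists of predicted and gold strings) is an assumption based on the library's examples:

```python
from mindformers.core import EmF1Metric

metric = EmF1Metric()
metric.clear()

# Score each prediction against its gold answer.
predictions = ["the cat sat on the mat", "42"]
references = ["the cat sat on the mat", "forty two"]
for pred, ref in zip(predictions, references):
    metric.update([pred], [ref])

# eval() returns the aggregated EM and F1 scores.
print(metric.eval())
```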