mini-mini-me · completed · novels · 149.9K params · 0s elapsed · Updated 54d ago
1L / 32D / 2H · cpu_ref · bpe · adamw · Created Feb 19, 2026 4:28 AM
Step: 5 / 5 (100.0%)
Loss: 7.6223
Best Loss: 7.5838 (-0.3% from start)
Val Loss: 7.6040 (best: 7.6040)
Learning Rate: 3.51e-5
Throughput: 450 tok/s (avg)
Speed: 155 ms/iter (avg)
Grad Norm: 1.070 (avg: 1.080)
Tokens processed: 320
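The reported losses sit right around ln(2000) ≈ 7.60, i.e. roughly a uniform distribution over the 2,000-token vocabulary, which is what you would expect after only 5 optimizer steps. The sketch below recomputes the derived quantities from the cards above: perplexity = exp(loss) and tokens processed = iters × batch size × context length. The specific numbers come from this run; everything else is plain arithmetic.
Derived Metrics (TypeScript sketch)
// Derived quantities for this run, recomputed from the values shown above.
const valLoss = 7.6040;   // reported validation loss
const vocabSize = 2000;   // model vocabulary size
const iters = 5;
const batchSize = 2;
const blockSize = 32;     // context length

const perplexity = Math.exp(valLoss);            // ≈ 2006
const uniformLoss = Math.log(vocabSize);         // ln(2000) ≈ 7.601, loss of a uniform predictor
const tokens = iters * batchSize * blockSize;    // 5 * 2 * 32 = 320, matches the Tokens card

console.log({ perplexity, uniformLoss, tokens });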
Loss Curve
Architecture
Layers: 1
Embedding: 32
Heads: 2
Vocab: 2,000
Context: 32
Dropout: 0
Parameters: 149.9K
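For reference, a rough GPT-2-style parameter count can be derived from the architecture above. The sketch below assumes biases on the linear layers and an untied LM head; the tool's actual layer layout is not shown in this report, so treat the result as a ballpark rather than an exact reproduction of the reported 149.9K.
Parameter Estimate (TypeScript sketch)
// Ballpark parameter count for a 1-layer, 32-dim, 2-head GPT-style model
// with a 2,000-token vocab and 32-token context. Assumes biases on all
// linear layers and an untied LM head; the real tool may count differently.
function estimateParams(vocab: number, ctx: number, nLayer: number, nEmbd: number): number {
  const tokEmb = vocab * nEmbd;                     // token embedding
  const posEmb = ctx * nEmbd;                       // learned positional embedding
  const lnParams = 2 * nEmbd;                       // LayerNorm weight + bias
  const attn = nEmbd * 3 * nEmbd + 3 * nEmbd        // qkv projection
             + nEmbd * nEmbd + nEmbd;               // output projection
  const mlp = nEmbd * 4 * nEmbd + 4 * nEmbd         // 4x expansion
            + 4 * nEmbd * nEmbd + nEmbd;            // contraction
  const block = 2 * lnParams + attn + mlp;          // one transformer block
  const lmHead = nEmbd * vocab;                     // untied output head, no bias
  return tokEmb + posEmb + nLayer * block + lnParams + lmHead;
}

console.log(estimateParams(2000, 32, 1, 32)); // ≈ 142K under these assumptions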
Training Config
Total iters: 5
Batch size: 2
Max LR: 0.0003
Optimizer: adamw
Backend: cpu_ref
Tokenizer: bpe
Seed: 42
Weight decay: 0.01
Grad clip: 1
Eval interval: 5
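The final learning rate card reads 3.51e-5 against a max LR of 3e-4, consistent with the LR having been decayed over the run. The schedule itself is not reported, so the sketch below assumes a common cosine decay from the max LR down to 10% of it, which ends in the same range; the schedule shape is an assumption, not the tool's documented behaviour.
Learning Rate Schedule (TypeScript sketch)
// Assumed cosine decay from maxLr to minLr over the run; the actual schedule
// used by the trainer is not shown in this report.
function cosineLr(step: number, totalIters: number, maxLr: number, minLr: number): number {
  const progress = Math.min(step / totalIters, 1);
  const coeff = 0.5 * (1 + Math.cos(Math.PI * progress)); // 1 -> 0 over the run
  return minLr + coeff * (maxLr - minLr);
}

const maxLr = 3e-4;          // "Max LR" above
const minLr = maxLr / 10;    // assumed floor of 10% of max
for (let step = 1; step <= 5; step++) {
  console.log(step, cosineLr(step, 5, maxLr, minLr).toExponential(2));
}
// Ends at 3.00e-5, close to the reported final LR of 3.51e-5.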
Additional charts: Throughput (tok/s), Step Time (ms/iter), Perplexity, Learning Rate, Grad Norm, Smoothed Loss (EMA).
Empty charts: GPU & VRAM (no GPU data), Train/Val Gap (no validation data), Loss Velocity (insufficient data), Gradient Clipping (no clipping data), GPU Operations (no GPU ops data), Step Time Breakdown, Timing Phase Lines, Backward / Forward Ratio (no timing data).
Model Config (JSON)
{
"vocabSize": 2000,
"blockSize": 32,
"nLayer": 1,
"nEmbd": 32,
"nHead": 2,
"dropout": 0
}
Training Config (JSON)
{
"iters": 5,
"batchSize": 2,
"lr": 0.0003,
"beta1": 0.9,
"beta2": 0.999,
"eps": 1e-8,
"weightDecay": 0.01,
"gradClip": 1,
"evalInterval": 5,
"evalIters": 1,
"seed": 42,
"backend": "cpu_ref",
"tokenizer": "bpe",
"optimizer": "adamw",
"logLevel": "info",
"trace": false
}
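For anyone consuming these dumps programmatically, the two JSON blocks map onto straightforward types. The interfaces below are inferred from the fields shown above, not taken from the tool's source, so field optionality and any extra keys are assumptions.
Config Types (TypeScript sketch)
// Types inferred from the JSON dumps above; the trainer's own definitions may differ.
interface ModelConfig {
  vocabSize: number;
  blockSize: number;
  nLayer: number;
  nEmbd: number;
  nHead: number;
  dropout: number;
}

interface TrainingConfig {
  iters: number;
  batchSize: number;
  lr: number;
  beta1: number;
  beta2: number;
  eps: number;
  weightDecay: number;
  gradClip: number;
  evalInterval: number;
  evalIters: number;
  seed: number;
  backend: string;
  tokenizer: string;
  optimizer: string;
  logLevel: string;
  trace: boolean;
}

// Example: parse the model config dump and read a field with type checking.
const modelCfg: ModelConfig = JSON.parse(
  '{"vocabSize":2000,"blockSize":32,"nLayer":1,"nEmbd":32,"nHead":2,"dropout":0}'
);
console.log(modelCfg.blockSize); // 32, matches the "Context" entry above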