novels-5hr · completed · dataset: novels · 2.50M params · 5h 30m elapsed · Updated 36d ago
6L / 128D / 4H · cpu_ref · bpe · adamw · Created Feb 19, 2026 12:24 AM
Step 900 / 900 (100.0%)

Loss: 5.3346
Best Loss: 5.2128 (-30.0% from start)
Val Loss: 5.5975 (best: 5.5691)
Learning Rate: 1.13e-9
Throughput: 52 tok/s (avg)
Step time: 20,227 ms/iter (avg)
Grad Norm: 1.522 (avg: 1.775)
Tokens processed: 921.6K
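
The headline numbers hang together. A minimal sanity-check sketch (TypeScript; every constant is read off the cards above, and the one-full-batch-per-iteration assumption is mine):

// Sanity-check the dashboard's derived metrics from the raw config.
// Assumes each iteration consumes one full batch of
// batchSize * blockSize tokens, which matches the Tokens card.
const iters = 900;
const batchSize = 8;
const blockSize = 128;

// Tokens processed: 900 * 8 * 128 = 921,600, i.e. the 921.6K card.
const tokens = iters * batchSize * blockSize;

// Throughput implied by the average step time (20,227 ms/iter) is
// ~50.6 tok/s, close to the 52 tok/s card; the two averages are
// computed independently, so a small disagreement is expected.
const msPerIter = 20_227;
const tokPerSec = (batchSize * blockSize) / (msPerIter / 1000);

// Perplexity is exp(loss): the final val loss of 5.5975 corresponds
// to a perplexity of roughly 270.
const valPpl = Math.exp(5.5975);

console.log({ tokens, tokPerSec: tokPerSec.toFixed(1), valPpl: valPpl.toFixed(1) });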
Loss Curve (chart)
Architecture
Layers: 6
Embedding: 128
Heads: 4
Vocab: 2,000
Context: 128
Dropout: 0
Parameters: 2.50M
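
A back-of-the-envelope parameter count follows from these numbers, though the exact total depends on implementation details the dashboard does not expose (tied vs. untied output head, bias terms, MLP width). A rough GPT-2-style sketch under those stated assumptions:

// Rough GPT-2-style estimate; an assumption-laden sketch, not the
// tool's own counting method. Biases and LayerNorm weights omitted.
const nLayer = 6, nEmbd = 128, vocabSize = 2000, blockSize = 128;

const tokEmb = vocabSize * nEmbd;   // token embedding table
const posEmb = blockSize * nEmbd;   // learned positional embeddings
const attn = 4 * nEmbd * nEmbd;     // QKV + output projection
const mlp = 8 * nEmbd * nEmbd;      // 4x expansion, two matmuls
const total = tokEmb + posEmb + nLayer * (attn + mlp);

// Prints ~1.45M with a tied output head; the 2.50M shown above
// implies the implementation counts weights this sketch omits
// (an untied head, biases, or a wider MLP, for example).
console.log(total);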
Training Config
Total iters: 900
Batch size: 8
Max LR: 0.0003
Optimizer: adamw
Backend: cpu_ref
Tokenizer: bpe
Seed: 42
Weight decay: 0.01
Grad clip: 1
Eval interval: 100
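
A max LR of 3e-4 ending at 1.13e-9 is the signature of a cosine decay to near zero at the final step. A sketch of such a schedule; warmup and minimum-LR settings are assumptions here, since the schedule type is not shown in the config:

// Hypothetical cosine decay consistent with the logged values:
// 3e-4 at step 0, effectively 0 by step 900. No warmup assumed.
function cosineLr(step: number, maxLr = 3e-4, totalIters = 900, minLr = 0): number {
  const progress = Math.min(step / totalIters, 1);
  return minLr + 0.5 * (maxLr - minLr) * (1 + Math.cos(Math.PI * progress));
}

console.log(cosineLr(0));   // 3e-4
console.log(cosineLr(450)); // 1.5e-4 at the halfway point
console.log(cosineLr(899)); // ~9e-10, the same order as the logged 1.13e-9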
Charts: Throughput (tok/s) · Step Time (ms/iter) · Perplexity · Train/Val Gap · Learning Rate · Grad Norm · Smoothed Loss (EMA) · Loss Velocity
No data recorded: GPU & VRAM · Gradient Clipping · GPU Operations · Step Time Breakdown · Timing Phase Lines · Backward / Forward Ratio
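
Although the Gradient Clipping panel recorded no data, the configured clip threshold of 1 together with the average grad norm of 1.775 implies clipping engaged on most steps. A generic sketch of global-norm clipping, the standard technique a gradClip setting usually denotes (the trainer's actual implementation is not shown):

// Global gradient-norm clipping: if the norm across all gradients
// exceeds maxNorm, scale every gradient by maxNorm / norm.
// With pre-clip norms averaging 1.775 against a threshold of 1,
// most steps would be scaled down by roughly 1 / 1.775.
function clipGradNorm(grads: Float32Array[], maxNorm: number): number {
  let sq = 0;
  for (const g of grads) for (const v of g) sq += v * v;
  const norm = Math.sqrt(sq);
  if (norm > maxNorm) {
    const scale = maxNorm / norm;
    for (const g of grads) for (let i = 0; i < g.length; i++) g[i] *= scale;
  }
  return norm; // pre-clip norm, typically what a Grad Norm chart logs
}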
Checkpoints (9)
Step | Filename | Size | Created
Model Config (JSON)
{
"vocabSize": 2000,
"blockSize": 128,
"nLayer": 6,
"nEmbd": 128,
"nHead": 4,
"dropout": 0
}

Training Config (JSON)
{
"iters": 900,
"batchSize": 8,
"lr": 0.0003,
"beta1": 0.9,
"beta2": 0.999,
"eps": 1e-8,
"weightDecay": 0.01,
"gradClip": 1,
"evalInterval": 100,
"evalIters": 10,
"seed": 42,
"backend": "cpu_ref",
"tokenizer": "bpe",
"optimizer": "adamw",
"logLevel": "info",
"trace": false
}
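
The two JSON dumps map naturally onto typed config objects. A hypothetical TypeScript shape for them, with field names taken verbatim from the blocks above (the tool's real type definitions are not shown):

// Hypothetical typings mirroring the JSON dumps; comments restate
// the values above, with interpretations marked as presumptions.
interface ModelConfig {
  vocabSize: number;  // 2000
  blockSize: number;  // context length, 128
  nLayer: number;     // 6
  nEmbd: number;      // 128
  nHead: number;      // 4
  dropout: number;    // 0
}

interface TrainConfig {
  iters: number;        // 900
  batchSize: number;    // 8
  lr: number;           // max LR, 3e-4
  beta1: number;        // 0.9
  beta2: number;        // 0.999
  eps: number;          // 1e-8
  weightDecay: number;  // 0.01
  gradClip: number;     // global-norm threshold, presumably
  evalInterval: number; // evaluate every 100 iters
  evalIters: number;    // 10, presumably val batches per eval
  seed: number;         // 42
  backend: string;      // "cpu_ref"
  tokenizer: string;    // "bpe"
  optimizer: string;    // "adamw"
  logLevel: string;     // "info"
  trace: boolean;       // false
}

// A dump then parses directly into the typed shape:
const modelCfg: ModelConfig = JSON.parse(
  '{"vocabSize":2000,"blockSize":128,"nLayer":6,"nEmbd":128,"nHead":4,"dropout":0}'
);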