20260224130615_3umn · completed · chat · 17.44M params · 12m 20s elapsed · Updated 1d ago
8L / 384D / 8H · helios · bpe-4k · adamw · Created Feb 24, 2026 1:06 PM
Step 0 / 15,000 (0.0%)
Loss: 7.3665
Best Loss: 7.3665 (-11.9% from start)
Val Loss: -
Learning Rate: 2.21e-5
Throughput: 872 tok/s (avg)
Speed: 11,746 ms/iter (avg)
Grad Norm: 0.596 (avg: 3.147)
Tokens processed: 645.1K
Forward: 9,736 ms (83% of step)
Backward: 1,831 ms (16% of step)
GPU Sync: 119 ms (1% of step)
GPU Ops: 1,214 per step
MFU (model FLOPS util): 0.3%
Bwd/Fwd ratio: 0.2x
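These numbers are mutually consistent: at batch size 20 and context 512, each step processes 20 × 512 = 10,240 tokens, and 10,240 tokens over an 11.746 s step is roughly 872 tok/s. The Python sketch below redoes that arithmetic and adds a rough MFU estimate using the common 6·N FLOPs-per-token approximation; the peak-FLOPs figure is an assumed placeholder (the device peak is not reported by the run), chosen only to show how a figure like 0.3% falls out.

# Back-of-the-envelope check of the dashboard numbers (Python sketch).
batch_size  = 20         # from the training config
block_size  = 512        # context length
ms_per_iter = 11_746     # reported avg step time
n_params    = 17.44e6    # reported parameter count

tokens_per_step = batch_size * block_size              # 10,240 tokens per step
tok_per_s = tokens_per_step / (ms_per_iter / 1000.0)   # ~872 tok/s, matches the card

# MFU = achieved FLOP/s divided by peak FLOP/s, using ~6*N FLOPs per token
# (forward + backward) for a decoder-only transformer.
PEAK_FLOPS = 30e12       # assumption: ~30 TFLOP/s device peak, not reported by the run
achieved = 6 * n_params * tok_per_s
print(f"{tok_per_s:.0f} tok/s, MFU ~ {100 * achieved / PEAK_FLOPS:.1f}%")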
[Loss Curve: train/val loss with validation, warmup, overfit, and instability markers]
Architecture
Layers: 8
Embedding: 384
Heads: 8
Vocab: 4,000
Context: 512
Dropout: 0.1
Parameters: 17.44M
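The 17.44M parameter count is consistent with a GPT-style decoder under a few assumptions (learned positional embeddings, an untied output head, bias-free linear layers); the estimate below is a sketch under those assumptions, not the tool's own accounting.

# Rough parameter-count estimate for an 8L / 384D / 8H model with vocab 4,000
# and context 512. Assumes a GPT-style decoder with learned positional
# embeddings, an untied LM head, bias-free linear layers, and LayerNorm
# weight+bias; the dashboard's own accounting may differ slightly.
vocab, ctx, d, layers = 4_000, 512, 384, 8

embeddings = vocab * d + ctx * d          # token + positional embeddings
lm_head    = vocab * d                    # untied output projection
blocks     = layers * 12 * d * d          # attention (4*d^2) + MLP (8*d^2) per layer
layernorms = (2 * layers + 1) * 2 * d     # two LayerNorms per layer + final LayerNorm

total = embeddings + lm_head + blocks + layernorms
print(f"{total / 1e6:.2f}M parameters")   # ~17.44M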
Training Config
Total iters: 15,000
Batch size: 20
Max LR: 5e-5
Optimizer: adamw
Backend: helios
Tokenizer: bpe-4k
Seed: 42
Weight decay: 0.1
Grad clip: 1.0
Eval interval: 1,000
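The run warms up to the max LR of 5e-5 over 1,000 iterations and decays toward a floor of 5e-6 (see the JSON config below). The sketch assumes linear warmup followed by cosine decay, a common default; the actual decay shape used by the helios backend is not stated. Under linear warmup, the displayed learning rate of 2.21e-5 would correspond to roughly iteration 440.

import math

# Learning-rate schedule sketch built from the config values: linear warmup to
# 5e-5 over 1,000 iters, then (assumed) cosine decay down to 5e-6 by iter 15,000.
MAX_LR, MIN_LR = 5e-5, 5e-6
WARMUP_ITERS, TOTAL_ITERS = 1_000, 15_000

def lr_at(step: int) -> float:
    if step < WARMUP_ITERS:                       # linear warmup from 0
        return MAX_LR * step / WARMUP_ITERS
    # cosine decay from MAX_LR to MIN_LR over the remaining iterations (assumption)
    progress = (step - WARMUP_ITERS) / (TOTAL_ITERS - WARMUP_ITERS)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

print(lr_at(442))   # ~2.21e-5, matching the dashboard's Learning Rate card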
[Charts: GPU & VRAM · Learning Rate · Grad Norm · Step Time Breakdown]
Checkpoints (0)
No checkpoints saved
Model config:
{
  "vocabSize": 4000,
  "blockSize": 512,
  "nLayer": 8,
  "nEmbd": 384,
  "nHead": 8,
  "dropout": 0.1
}

Training config:
{
  "iters": 15000,
  "batchSize": 20,
  "lr": 0.00005,
  "lrMin": 0.000005,
  "warmupIters": 1000,
  "beta1": 0.9,
  "beta2": 0.95,
  "eps": 0.000001,
  "weightDecay": 0.1,
  "gradClip": 1,
  "evalInterval": 1000,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe-4k",
  "optimizer": "adamw",
  "logLevel": "info",
  "trace": false,
  "gradAccumSteps": 1,
  "sampleInterval": 500,
  "spikeThreshold": 0
}
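For reference, a generic PyTorch sketch of how these optimizer settings would be wired together: AdamW with betas (0.9, 0.95), eps 1e-6, weight decay 0.1, and global-norm gradient clipping at 1.0. The run itself executes on the helios backend, so this only illustrates the hyperparameters; the toy model and random batches are stand-ins, and the warmup/decay schedule from the previous sketch is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for the real 8L/384D model, just so the optimizer wiring runs.
vocab_size, block_size, batch_size = 4_000, 512, 20
model = nn.Sequential(nn.Embedding(vocab_size, 384), nn.Linear(384, vocab_size))

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-5,                # "lr" (held constant here; the real run warms up / decays)
    betas=(0.9, 0.95),      # "beta1", "beta2"
    eps=1e-6,               # "eps"
    weight_decay=0.1,       # "weightDecay"
)

for step in range(3):       # the real run uses "iters": 15000
    # Random tokens stand in for a real data loader; shapes follow the config.
    x = torch.randint(0, vocab_size, (batch_size, block_size))
    y = torch.randint(0, vocab_size, (batch_size, block_size))
    logits = model(x)                                         # (B, T, vocab)
    loss = F.cross_entropy(logits.view(-1, vocab_size), y.view(-1))
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)   # "gradClip": 1
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)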