historic_chat_v2_20260309194951_rpoe · completed · concordance-chat · 34.16M params · 4h 22m elapsed · Updated 35d ago
8L / 512D / 8H · helios · bpe-8k-chat · adamw · Created Mar 9, 2026 7:49 PM
Step 22,500 / 22,500 (100.0%)
Loss: 1.8364
Best Loss: 1.7063 (-9.2% from start)
Val Loss: 1.9009 (best: 1.8835)
Learning Rate: 5.00e-6
Throughput: 5,687 tok/s (avg)
Speed: 2,881 ms/iter (avg)
Grad Norm: 0.153 (avg: 0.152)
Tokens processed: 89.90M
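The Perplexity panel listed below presumably derives from these losses. Assuming the reported loss is mean per-token cross-entropy in nats (the usual convention, though not stated in this export), perplexity is simply exp(loss), putting the run at roughly 6.3 train / 6.7 val perplexity at step 22,500. Illustrative check (Python):

import math

# Final losses from the summary above.
train_loss = 1.8364
val_loss = 1.9009

# Assumption: loss is mean per-token cross-entropy in nats, so perplexity = exp(loss).
print(f"train perplexity ~ {math.exp(train_loss):.2f}")  # ~6.27
print(f"val perplexity   ~ {math.exp(val_loss):.2f}")    # ~6.69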
Loss Curve (chart)
Architecture
Layers: 8
Embedding: 512
Heads: 8
Vocab: 8,000
Context: 512
Dropout: 0
Parameters: 34.16M
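The Model Config JSON below lists "ffnActivation": "swiglu", but the FFN hidden width is not recorded in this export. The PyTorch snippet below is only an illustrative sketch of a SwiGLU feed-forward block at this run's nEmbd = 512; it is not the helios backend's actual implementation, and the hidden width (1408, i.e. roughly 8/3 · 512 rounded up to a multiple of 64) is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFFN(nn.Module):
    """SwiGLU feed-forward block: down(silu(gate(x)) * up(x)).

    Illustrative only; hidden_dim is assumed, not taken from the run config.
    """
    def __init__(self, n_embd: int = 512, hidden_dim: int = 1408):
        super().__init__()
        self.gate = nn.Linear(n_embd, hidden_dim, bias=False)
        self.up = nn.Linear(n_embd, hidden_dim, bias=False)
        self.down = nn.Linear(hidden_dim, n_embd, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))

# Shape check at this run's batch size (16) and context length (512).
x = torch.randn(16, 512, 512)
print(SwiGLUFFN()(x).shape)  # torch.Size([16, 512, 512])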
Training Config
Total iters: 22,500
Batch size: 16
Max LR: 0.00005
Optimizer: adamw
Backend: helios
Tokenizer: bpe-8k-chat
Seed: 42
Weight decay: 0.01
Grad clip: 1
Eval interval: 200
Additional chart panels (titles only in this export): Throughput (tok/s) · Step Time (ms/iter) · GPU & VRAM (no GPU data) · Perplexity · Train/Val Gap · Learning Rate · Grad Norm · Smoothed Loss (EMA) · Loss Velocity · Gradient Clipping · GPU Operations · Step Time Breakdown (no timing data) · Timing Phase Lines (no timing data) · Backward / Forward Ratio (no timing data) · Transformer Layer Analysis · Gradient Norm Heatmap · Per-Layer Gradient Evolution
Checkpoints (0)
No checkpoints saved
Model Config (JSON)
{
"vocabSize": 8000,
"blockSize": 512,
"nLayer": 8,
"nEmbd": 512,
"nHead": 8,
"dropout": 0,
"ffnActivation": "swiglu"
}
Training Config (JSON)
{
"iters": 22500,
"batchSize": 16,
"lr": 0.00005,
"lrMin": 0.000005,
"warmupIters": 50,
"beta1": 0.9,
"beta2": 0.999,
"eps": 1e-8,
"weightDecay": 0.01,
"gradClip": 1,
"evalInterval": 200,
"evalIters": 10,
"seed": 42,
"backend": "helios",
"tokenizer": "bpe-8k-chat",
"optimizer": "adamw",
"logLevel": "info",
"logEvery": 25,
"trace": false,
"gradAccumSteps": 2,
"sampleInterval": 200,
"spikeThreshold": 5000,
"embGradScale": 1,
"syncEvery": 0,
"gcEvery": 0,
"packed": true,
"symbio": false,
"symbioConfig": null
}
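The final learning rate shown above (5.00e-6) equals lrMin, which is consistent with a linear warmup over warmupIters followed by cosine decay from lr down to lrMin across the 22,500 iterations. The exact schedule used by the helios backend is not recorded in this export, so the Python sketch below is an assumption for illustration. Note also that with gradAccumSteps = 2, each optimizer step consumes roughly 2 × 16 × 512 ≈ 16.4k tokens (assuming fully packed 512-token sequences).

import math

def lr_at(step: int,
          max_lr: float = 5e-5,     # "lr" in the training config
          min_lr: float = 5e-6,     # "lrMin"
          warmup_iters: int = 50,   # "warmupIters"
          total_iters: int = 22_500) -> float:
    """Assumed schedule: linear warmup to max_lr, then cosine decay to min_lr."""
    if step < warmup_iters:
        return max_lr * (step + 1) / warmup_iters
    progress = (step - warmup_iters) / max(1, total_iters - warmup_iters)
    return min_lr + 0.5 * (max_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

print(lr_at(0))       # 1e-6, start of warmup
print(lr_at(50))      # 5e-5, peak after warmup
print(lr_at(22_500))  # 5e-6, matches the final LR shown in the dashboard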