super_chat_20260310074201_vxnn · stale · unknown
97.65M params · 10h 5m elapsed · Updated 34d ago
12L / 768D / 12H · helios · bpe-8k-chat · adamw · Created Mar 10, 2026 7:42 AM
Step 4,525 / 6,000 (75.4%)
Loss: 5.0352
Best Loss: 4.7247 (-44.9% from start)
Val Loss: 5.0464 (best: 5.0464)
Learning Rate: 7.52e-5
Throughput: 2,040 tok/s (avg)
Speed: 8,031 ms/iter (avg)
Grad Norm: 0.219 (avg: 0.250)
Tokens processed: 74.14M
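The cards above are mutually consistent, which is worth checking on a stale run. A short arithmetic sanity check (pure Python; the only value not shown on this card row is gradAccumSteps=4, taken from the Training Config JSON further down):

import math

step = 4_525
batch_size = 8
grad_accum = 4      # gradAccumSteps, from the Training Config JSON below
context = 512       # block size

tokens_per_iter = batch_size * grad_accum * context   # 16,384
print(tokens_per_iter * step)                         # 74,137,600 ≈ 74.14M tokens, as shown

ms_per_iter = 8_031
print(tokens_per_iter / (ms_per_iter / 1000))         # ≈ 2,040 tok/s, matching Throughput

val_loss = 5.0464
print(math.exp(val_loss))                             # ≈ 155.5, the value the Perplexity panel would plot

Note that "iter" here is one optimizer step including all four gradient-accumulation micro-batches; that is the only reading under which 8,031 ms/iter and 2,040 tok/s agree.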
Loss Curve: [chart]
Architecture
Layers: 12
Embedding: 768
Heads: 12
Vocab: 8,000
Context: 512
Dropout: 0
Parameters: 97.65M
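The 97.65M total is reproducible from the numbers above. A back-of-envelope count, assuming a GPT-style decoder with learned positional embeddings, bias-free linear layers, and an untied output head (those structural assumptions are mine; the dashboard only reports the total):

vocab, ctx, n_layer, d = 8_000, 512, 12, 768

tok_emb  = vocab * d               # 6,144,000
pos_emb  = ctx * d                 #   393,216
per_block = 12 * d * d             # qkv 3d^2 + attn proj d^2 + MLP up 4d^2 + MLP down 4d^2
blocks   = n_layer * per_block     # 84,934,656
lm_head  = vocab * d               # 6,144,000 (untied from the input embedding)
norms    = n_layer * 2 * 2 * d + 2 * d   # 38,400 (weight+bias per LayerNorm, plus final LN)

total = tok_emb + pos_emb + blocks + lm_head + norms
print(f"{total / 1e6:.2f}M")       # 97.65M, matching the card above

The exact match suggests the head really is untied; with weight tying the count would land near 91.5M instead.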
Training Config
Total iters: 6,000
Batch size: 8
Max LR: 0.0003
Optimizer: adamw
Backend: helios
Tokenizer: bpe-8k-chat
Seed: 42
Weight decay: 0.1
Grad clip: 1
Eval interval: 500
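Grad clip 1 is a global-norm threshold. A minimal sketch of how such clipping is typically applied (the torch call is standard; the toy model is a stand-in for illustration, not this run's code). With the run's reported gradient norms (0.219 current, 0.250 average) well below 1.0, clipping would rarely fire here:

import torch

model = torch.nn.Linear(768, 768)
loss = model(torch.randn(8, 768)).pow(2).mean()
loss.backward()

# Rescales all gradients in place if their global L2 norm exceeds max_norm,
# and returns the pre-clip norm (what the Grad Norm card tracks).
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
print(f"pre-clip grad norm: {total_norm.item():.3f}")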
Charts: Throughput (tok/s) · Step Time (ms/iter) · GPU & VRAM · Perplexity · Train/Val Gap · Learning Rate · Grad Norm · Smoothed Loss (EMA) · Loss Velocity · Gradient Clipping · GPU Operations
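The "Smoothed Loss (EMA)" panel presumably applies an exponential moving average to the raw loss series. A minimal sketch of that smoothing (the 0.9 decay is my assumption; the dashboard does not expose its smoothing factor):

def ema(values, decay=0.9):
    out, avg = [], None
    for v in values:
        avg = v if avg is None else decay * avg + (1 - decay) * v
        out.append(avg)
    return out

print(ema([9.0, 7.5, 6.4, 5.8, 5.3]))   # trails the raw losses, damping spikes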
Step Time Breakdown · Timing Phase Lines · Backward/Forward Ratio: no timing data
Transformer Layer Analysis: Gradient Norm Heatmap · Per-Layer Gradient Evolution
Checkpoints (0): no checkpoints saved
Model Config (JSON)
{
"vocabSize": 8000,
"blockSize": 512,
"nLayer": 12,
"nEmbd": 768,
"nHead": 12,
"dropout": 0,
"ffnActivation": "gelu"
}
Training Config (JSON)
{
"iters": 6000,
"batchSize": 8,
"lr": 0.0003,
"lrMin": 0.00003,
"warmupIters": 500,
"beta1": 0.9,
"beta2": 0.95,
"eps": 1e-8,
"weightDecay": 0.1,
"gradClip": 1,
"evalInterval": 500,
"evalIters": 10,
"seed": 42,
"backend": "helios",
"tokenizer": "bpe-8k-chat",
"optimizer": "adamw",
"logLevel": "info",
"logEvery": 1,
"trace": false,
"gradAccumSteps": 4,
"sampleInterval": 500,
"spikeThreshold": 5000,
"embGradScale": 1,
"syncEvery": 0,
"gcEvery": 0,
"packed": true,
"symbio": false,
"symbioConfig": null
}
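The reported learning rate (7.52e-5 at step 4,525) follows from lr, lrMin, and warmupIters above. The linear-warmup plus cosine-decay shape is an assumption, since the config does not name its schedule, but it reproduces the displayed value:

import math

def lr_at(step, lr=3e-4, lr_min=3e-5, warmup=500, total=6_000):
    if step < warmup:
        return lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    coeff = 0.5 * (1.0 + math.cos(math.pi * progress))   # 1 -> 0 over the decay phase
    return lr_min + coeff * (lr - lr_min)

print(f"{lr_at(4_525):.2e}")   # ≈ 7.50e-5, consistent with the 7.52e-5 card
# (the small gap is plausibly just which sub-step the card sampled)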