soda_chat_20260306094744_7bci · stale · nanochat · 55.77M params · 29m 55s elapsed · Updated 38d ago
16L / 512D / 8H · helios · bpe-chat-4k · adamw · Created Mar 6, 2026 9:47 AM
Step 2,000 / 20,000 (10.0%)

Loss: 6.9301
Best loss: 6.8342 (-17.4% from start)
Val loss: 7.0279 (best: 7.0060)
Learning rate: 2.96e-4
Throughput: 2,287 tok/s (avg)
Step time: 922 ms/iter (avg)
Grad norm: 145.184 (avg: 93.069)
Tokens processed: 4.10M (cross-checked below)
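The headline counters are mutually consistent. A quick cross-check (a sketch assuming each step consumes one packed batch of batchSize × blockSize tokens with no gradient accumulation, per the config below):

```ts
// Cross-check the Tokens and Throughput cards from the run header.
const step = 2_000;
const batchSize = 4;
const blockSize = 512;
const elapsedSec = 29 * 60 + 55;              // "29m 55s elapsed"

const tokens = step * batchSize * blockSize;  // 4,096,000 -> the "4.10M" card
const avgTokPerSec = tokens / elapsedSec;     // ~2,282, vs. the 2,287 avg card

console.log(tokens, Math.round(avgTokPerSec));
```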
[Chart: Loss Curve]
Architecture
Layers: 16
Embedding: 512
Heads: 8
Vocab: 4,000
Context: 512
Dropout: 0
Parameters: 55.77M (estimate below)
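The 55.77M figure is consistent with this table. A back-of-envelope estimate (a sketch only: the untied LM head, learned position embeddings, bias-free linears, and the SwiGLU hidden width of 1,408 are assumptions the dashboard does not confirm):

```ts
// Rough parameter count for a 16L / 512D / 8H transformer, vocab 4,000.
const vocab = 4_000, ctx = 512, d = 512, layers = 16, ffnHidden = 1_408;

const embed = vocab * d + ctx * d;  // token + position embeddings (assumed learned)
const attn  = 4 * d * d;            // Wq, Wk, Wv, Wo (assumed bias-free)
const ffn   = 3 * d * ffnHidden;    // SwiGLU: gate, up, down projections
const norms = 2 * d;                // two norm layers per block
const head  = vocab * d;            // LM head (assumed untied)

const total = embed + layers * (attn + ffn + norms) + head + d; // + final norm
console.log((total / 1e6).toFixed(2) + "M"); // "55.76M" vs. 55.77M reported
```

The ~0.03% residual against the reported 55.77M would come from whichever assumption is off (weight tying, biases, or the exact FFN width).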
Training Config
Total iters: 20,000
Batch size: 4
Max LR: 0.0003
Optimizer: adamw
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
Weight decay: 0.1
Grad clip: 1 (see the sketch below)
Eval interval: 250
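With a clip threshold of 1 against raw gradient norms of 93-145, effectively every update in this run is being rescaled. A minimal sketch of standard global-norm clipping (an illustration, not this trainer's code):

```ts
// Global-norm clipping: if ||g||2 > maxNorm, scale all grads by maxNorm/||g||2.
function clipGradNorm(grads: Float32Array[], maxNorm: number): number {
  let sq = 0;
  for (const g of grads) for (const v of g) sq += v * v;
  const norm = Math.sqrt(sq);
  const scale = maxNorm / (norm + 1e-6);
  if (scale < 1) {
    for (const g of grads) {
      for (let i = 0; i < g.length; i++) g[i] *= scale;
    }
  }
  return norm; // pre-clip norm: this is what the Grad Norm card reports
}
```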
Additional charts: Throughput (tok/s) · Step Time (ms/iter) · GPU & VRAM · Perplexity · Train/Val Gap · Learning Rate · Grad Norm · Smoothed Loss (EMA) · Loss Velocity · Gradient Clipping · GPU Operations
Step Time Breakdown: no timing data
Timing Phase Lines: no timing data
Backward / Forward Ratio: no timing data
Transformer Layer Analysis: Gradient Norm Heatmap · Per-Layer Gradient Evolution
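The Perplexity and Smoothed Loss (EMA) panels are simple transforms of the raw loss series. Sketches of both (the smoothing factor is an assumption; the dashboard does not state one):

```ts
// Perplexity is exp(loss): the 7.0279 val loss corresponds to ppl ~ 1,128.
const valPerplexity = Math.exp(7.0279);

// Exponential moving average of a loss curve.
function emaSmooth(losses: number[], alpha = 0.98): number[] {
  const out: number[] = [];
  let ema = losses[0];
  for (const l of losses) {
    ema = alpha * ema + (1 - alpha) * l; // larger alpha = smoother curve
    out.push(ema);
  }
  return out;
}
```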
Checkpoints (0): no checkpoints saved
Model Config (JSON)
{
"vocabSize": 4000,
"blockSize": 512,
"nLayer": 16,
"nEmbd": 512,
"nHead": 8,
"dropout": 0,
"ffnActivation": "swiglu"
}

Training Config (JSON)
{
"iters": 20000,
"batchSize": 4,
"lr": 0.0003,
"lrMin": 0.00003,
"warmupIters": 500,
"beta1": 0.9,
"beta2": 0.95,
"eps": 1e-8,
"weightDecay": 0.1,
"gradClip": 1,
"evalInterval": 250,
"evalIters": 10,
"seed": 42,
"backend": "helios",
"tokenizer": "bpe-chat-4k",
"optimizer": "adamw",
"logLevel": "info",
"logEvery": 1,
"trace": false,
"gradAccumSteps": 1,
"sampleInterval": 500,
"spikeThreshold": 50,
"syncEvery": 1,
"gcEvery": 1,
"packed": true,
"symbio": false,
"symbioConfig": null
}
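The 2.96e-4 shown at step 2,000 is consistent with linear warmup over warmupIters followed by cosine decay from lr down to lrMin. A sketch of that schedule (the shape is inferred from the numbers, not confirmed by the trainer):

```ts
// Linear warmup + cosine decay, using the config above
// (lr = 3e-4, lrMin = 3e-5, warmupIters = 500, iters = 20,000).
function lrAt(step: number, lr = 3e-4, lrMin = 3e-5, warmup = 500, iters = 20_000): number {
  if (step < warmup) return lr * (step + 1) / warmup;
  const t = (step - warmup) / (iters - warmup); // decay progress in [0, 1]
  return lrMin + 0.5 * (1 + Math.cos(Math.PI * t)) * (lr - lrMin);
}

console.log(lrAt(2_000).toExponential(2)); // "2.96e-4", matching the LR card
```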