concordance-v4 (stale) · concordance · 34.16M params · 25m 38s elapsed · Updated 36d ago
8L / 512D / 8H · helios · bpe-8k · adamw · Created Mar 9, 2026 1:14 AM
Step 675 / 100,000 (0.7%)

Metric             Value      Notes
Loss               7.3125
Best loss          7.1175     -19.6% from start
Val loss           7.5364     best: 7.5364
Learning rate      3.00e-3
Throughput         7,183      tok/s (avg)
Speed              2,281      ms/iter (avg)
Grad norm          0.221      avg: 0.206
Tokens processed   11.04M
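Sanity Check (Python)
A quick arithmetic check on the tiles above: a minimal sketch assuming batch_size × context tokens per iteration (consistent with gradAccumSteps: 1 in the config below). The small gap on the token count is expected if the dashboard's counter excludes a few steps.

import math

batch_size = 32
context = 512
step_time_ms = 2281                    # avg ms/iter from the dashboard
loss = 7.3125
vocab_size = 8000

tokens_per_step = batch_size * context                 # 16,384 tokens/iter
throughput = tokens_per_step / (step_time_ms / 1000)
print(f"throughput ~ {throughput:,.0f} tok/s")         # 7,183 -- matches the reported average
print(f"tokens ~ {675 * tokens_per_step / 1e6:.2f}M")  # 11.06M vs the reported 11.04M

# Perplexity is exp(loss); a uniform guess over an 8k vocab scores ln(8000) ~ 8.99,
# so at loss 7.31 the run is still early in training.
print(f"perplexity ~ {math.exp(loss):,.0f}")           # ~1,499
print(f"uniform baseline loss = {math.log(vocab_size):.2f}")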
[Chart: Loss Curve]
Architecture
Layers: 8
Embedding: 512
Heads: 8
Vocab: 8,000
Context: 512
Dropout: 0
Parameters: 34.16M
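Parameter Count Estimate (Python)
The 34.16M figure is reproducible from the architecture table under one plausible set of assumptions: an untied output head, learned positional embeddings, bias-free attention/FFN projections, LayerNorms with bias, and a SwiGLU hidden width of (8/3)·d rounded up to a multiple of 64. These are guesses about the helios implementation, not facts from this page.

def estimate_params(vocab=8000, d=512, n_layer=8, ctx=512):
    # Assumed SwiGLU hidden width: (8/3)*d rounded up to a multiple of 64 -> 1408
    h = ((8 * d) // 3 + 63) // 64 * 64
    emb = vocab * d + ctx * d      # token + learned positional embeddings (assumed)
    attn = 4 * d * d               # q, k, v, and output projections, no biases
    ffn = 3 * d * h                # SwiGLU: gate, up, and down matrices
    ln = 2 * 2 * d                 # two LayerNorms per block (weight + bias)
    head = vocab * d               # untied output head (assumed)
    return emb + n_layer * (attn + ffn + ln) + 2 * d + head   # + final LayerNorm

print(f"{estimate_params() / 1e6:.2f}M")   # 34.16M under these assumptions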
Training Config
Total iters: 100,000
Batch size: 32
Max LR: 0.003
Optimizer: adamw
Backend: helios
Tokenizer: bpe-8k
Seed: 42
Weight decay: 0.1
Grad clip: 1
Eval interval: 250
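Grad Clip Note (Python)
With an average grad norm of 0.206 against a clip threshold of 1, clipping is effectively inactive so far. A sketch of the standard global-norm rule, assuming that is what "Grad clip: 1" applies:

def clip_scale(grad_norm, clip=1.0):
    # Global-norm clipping scales gradients only when their norm exceeds the threshold.
    return min(1.0, clip / grad_norm)

print(clip_scale(0.221))   # 1.0 -> no clipping at the current grad norm
print(clip_scale(2.5))     # 0.4 -> a spike this size would be scaled down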
[Charts: Throughput (tok/s) · Step Time (ms/iter) · GPU & VRAM (no GPU data) · Perplexity · Train/Val Gap · Learning Rate · Grad Norm · Smoothed Loss (EMA) · Loss Velocity · Gradient Clipping · GPU Operations · Step Time Breakdown (no timing data) · Timing Phase Lines (no timing data) · Backward/Forward Ratio (no timing data)]
Checkpoints (0): none saved
Model Config (JSON)
{
"vocabSize": 8000,
"blockSize": 512,
"nLayer": 8,
"nEmbd": 512,
"nHead": 8,
"dropout": 0,
"ffnActivation": "swiglu"
}

Training Config (JSON)
{
"iters": 100000,
"batchSize": 32,
"lr": 0.003,
"lrMin": 0.0003,
"warmupIters": 200,
"beta1": 0.9,
"beta2": 0.95,
"eps": 1e-8,
"weightDecay": 0.1,
"gradClip": 1,
"evalInterval": 250,
"evalIters": 10,
"seed": 42,
"backend": "helios",
"tokenizer": "bpe-8k",
"optimizer": "adamw",
"logLevel": "info",
"logEvery": 25,
"trace": false,
"gradAccumSteps": 1,
"sampleInterval": 5000,
"spikeThreshold": 0,
"embGradScale": 1,
"syncEvery": 0,
"gcEvery": 0,
"packed": true,
"symbio": false,
"symbioConfig": null
}
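LR Schedule Check (Python)
The displayed 3.00e-3 at step 675 is consistent with the linear-warmup-plus-cosine-decay schedule that lr, lrMin, and warmupIters suggest. This is an assumption about the helios scheduler, not something the page states.

import math

def lr_at(step, lr=3e-3, lr_min=3e-4, warmup=200, iters=100_000):
    # Assumed schedule: linear warmup to lr, then cosine decay to lr_min.
    if step < warmup:
        return lr * (step + 1) / warmup
    t = (step - warmup) / (iters - warmup)   # decay progress in [0, 1]
    return lr_min + 0.5 * (lr - lr_min) * (1 + math.cos(math.pi * t))

print(f"{lr_at(675):.2e}")   # 3.00e-03: barely past the warmup peak at step 675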