Alpha
historic_chat_v2_20260309180350_im9xstale · concordance-chat · 34.16M params · 47m 58s elapsed · Updated 35d ago
8L / 512D / 8H · helios · bpe-8k-chat · adamw · Created Mar 9, 2026 6:03 PM
Step 17,000 / 22,500 (75.6%)
Loss: 1.9188
Best Loss: 1.8983 (-19.8% from start)
Val Loss: 2.0918 (best: 2.0918)
Learning Rate: 1.13e-5
Throughput: 5,688 tok/s (avg)
Speed: 2,880 ms/iter (avg)
Grad Norm: 0.209 (avg: 0.212)
Tokens processed: 16.38M
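The derived cards can be cross-checked from the raw values. A minimal TypeScript sketch follows; the variable names are mine, and the tokens-per-step formula (batch size × context length × gradient-accumulation steps) is an assumption about the packed batch layout rather than something the dashboard states.

// Cross-check sketch for the metric cards above. The tokens-per-step formula
// (batchSize * blockSize * gradAccumSteps) is an assumption about the packed
// batch layout, not a value read from the dashboard.
const trainLoss = 1.9188;
const valLoss = 2.0918;
const msPerIter = 2880;        // Speed card (avg)
const batchSize = 16;
const blockSize = 512;
const gradAccumSteps = 2;

// Perplexity is exp(cross-entropy loss in nats).
const trainPpl = Math.exp(trainLoss); // ≈ 6.8
const valPpl = Math.exp(valLoss);     // ≈ 8.1

// Throughput implied by the average step time.
const tokensPerIter = batchSize * blockSize * gradAccumSteps; // 16,384
const tokPerSec = tokensPerIter / (msPerIter / 1000);         // ≈ 5,689, close to the 5,688 tok/s card

console.log({ trainPpl, valPpl, tokensPerIter, tokPerSec });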
Loss Curve
Architecture
Layers: 8
Embedding: 512
Heads: 8
Vocab: 8,000
Context: 512
Dropout: 0
Parameters: 34.16M
Training Config
Total iters: 22,500
Batch size: 16
Max LR: 0.00005
Optimizer: adamw
Backend: helios
Tokenizer: bpe-8k-chat
Seed: 42
Weight decay: 0.01
Grad clip: 1
Eval interval: 200
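With grad clip set to 1 and observed gradient norms around 0.21, clipping should rarely trigger at this stage of the run. Below is a minimal sketch of global-norm clipping, the technique this setting configures; plain number arrays stand in for the real tensors, so it is illustrative rather than the trainer's actual helios implementation.

// Global-norm gradient clipping sketch (what "Grad clip: 1" configures and the
// "Gradient Clipping" panel tracks). Plain arrays stand in for real tensors.
function clipGradNorm(grads: number[][], maxNorm: number): number {
  // Global L2 norm over every parameter gradient.
  let sumSq = 0;
  for (const g of grads) for (const v of g) sumSq += v * v;
  const totalNorm = Math.sqrt(sumSq);

  // Rescale in place only when the norm exceeds the threshold.
  if (totalNorm > maxNorm) {
    const scale = maxNorm / (totalNorm + 1e-6);
    for (const g of grads) for (let i = 0; i < g.length; i++) g[i] *= scale;
  }
  return totalNorm; // the pre-clip norm, i.e. what the Grad Norm card reports
}

// With norms around 0.21 and maxNorm = 1, the rescale branch rarely fires.
console.log(clipGradNorm([[0.1, -0.05], [0.15, 0.02]], 1).toFixed(3));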
Throughput (tok/s)
Step Time (ms/iter)
GPU & VRAM (no GPU data)
Perplexity
Train/Val Gap
Learning Rate
Grad Norm
Smoothed Loss (EMA)
Loss Velocity
Gradient Clipping
GPU Operations
Step Time Breakdown (no timing data)
Timing Phase Lines (no timing data)
Backward / Forward Ratio (no timing data)

Transformer Layer Analysis

Gradient Norm Heatmap
Per-Layer Gradient Evolution
Checkpoints (0)
No checkpoints saved
Model Config (JSON)
{
  "vocabSize": 8000,
  "blockSize": 512,
  "nLayer": 8,
  "nEmbd": 512,
  "nHead": 8,
  "dropout": 0,
  "ffnActivation": "swiglu"
}
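The 34.16M figure in the run header can be decomposed from this config. The sketch below relies on details the dashboard does not state and which are therefore assumptions: learned positional embeddings, an untied output head, bias-free linear layers, and a SwiGLU hidden width of 1408 (roughly 8/3 of nEmbd, rounded to a multiple of 64). Under those assumptions the total lands at about 34.16M.

// Parameter-count sketch for the config above. The SwiGLU hidden width (1408),
// untied lm_head, learned positional embeddings, and bias-free linears are
// assumptions, not values read from the config.
const cfg = { vocabSize: 8000, blockSize: 512, nLayer: 8, nEmbd: 512 };
const ffnHidden = 1408; // assumed ≈ (8/3) * nEmbd, rounded to a multiple of 64

const tokEmb = cfg.vocabSize * cfg.nEmbd;        // 4,096,000
const posEmb = cfg.blockSize * cfg.nEmbd;        // 262,144
const attnPerLayer = 4 * cfg.nEmbd * cfg.nEmbd;  // qkv projections + output projection
const ffnPerLayer = 3 * cfg.nEmbd * ffnHidden;   // SwiGLU: w1, w3 (gate), w2
const normsPerLayer = 2 * 2 * cfg.nEmbd;         // two norms per layer (weight + bias)
const perLayer = attnPerLayer + ffnPerLayer + normsPerLayer;
const lmHead = cfg.vocabSize * cfg.nEmbd;        // untied output projection
const finalNorm = 2 * cfg.nEmbd;

const total = tokEmb + posEmb + cfg.nLayer * perLayer + finalNorm + lmHead;
console.log((total / 1e6).toFixed(2) + "M params"); // ≈ 34.16M under these assumptions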
Training Config (JSON)
{
  "iters": 22500,
  "batchSize": 16,
  "lr": 0.00005,
  "lrMin": 0.000005,
  "warmupIters": 50,
  "beta1": 0.9,
  "beta2": 0.999,
  "eps": 1e-8,
  "weightDecay": 0.01,
  "gradClip": 1,
  "evalInterval": 200,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe-8k-chat",
  "optimizer": "adamw",
  "logLevel": "info",
  "logEvery": 25,
  "trace": false,
  "gradAccumSteps": 2,
  "sampleInterval": 200,
  "spikeThreshold": 5000,
  "embGradScale": 1,
  "syncEvery": 0,
  "gcEvery": 0,
  "packed": true,
  "symbio": false,
  "symbioConfig": null
}
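The 1.13e-5 learning rate reported at step 17,000 is consistent with a linear warmup over warmupIters followed by cosine decay from lr down to lrMin. The decay shape is an assumption about the trainer rather than something the config spells out, but the sketch below reproduces the reported value.

// Learning-rate schedule sketch: linear warmup, then cosine decay to lrMin.
// The cosine shape is assumed; it reproduces the 1.13e-5 shown at step 17,000.
const lr = 0.00005, lrMin = 0.000005, warmupIters = 50, iters = 22500;

function lrAt(step: number): number {
  if (step < warmupIters) return (lr * (step + 1)) / warmupIters;  // linear warmup
  const t = (step - warmupIters) / (iters - warmupIters);          // progress in [0, 1]
  return lrMin + 0.5 * (lr - lrMin) * (1 + Math.cos(Math.PI * t)); // cosine decay
}

console.log(lrAt(17000).toExponential(2)); // ≈ 1.13e-5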