Alpha
minipile_1g_20260310192446_hpkq · stale · unknown · 97.65M params · 4h 37m elapsed · Updated 34d ago
12L / 768D / 12H · helios · bpe-8k-chat · adamw · Created Mar 10, 2026 7:38 PM
Step 2,075 / 10,000 (20.8%)

Loss:             4.5053
Best loss:        4.1745 (-50.7% from start)
Val loss:         4.7986 (best: 4.7986)
Learning rate:    2.82e-4
Throughput:       2,043 tok/s (avg)
Step time:        8,018 ms/iter (avg)
Grad norm:        0.271 (avg: 0.251)
Tokens processed: 33.98M
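
These figures are mutually consistent: each optimizer step consumes batchSize × gradAccumSteps × blockSize tokens (values from the training config below). A quick arithmetic check in plain Python, assuming only those config values:

import math

# Tokens consumed per optimizer step, from the training config below.
batch_size, grad_accum, block_size = 8, 4, 512
tokens_per_step = batch_size * grad_accum * block_size    # 16,384

step, step_time_s = 2_075, 8.018                          # 8,018 ms/iter
print(tokens_per_step / step_time_s)     # ~2,043 tok/s, as reported
print(step * tokens_per_step / 1e6)      # ~34.0M tokens, in line with 33.98M

# Perplexity is exp(cross-entropy loss): the current train loss of 4.5053
# corresponds to a perplexity of about 90.5 (the "Perplexity" chart's metric).
print(math.exp(4.5053))                  # ~90.5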
[Loss Curve chart]
Architecture

Layers:     12
Embedding:  768
Heads:      12
Vocab:      8,000
Context:    512
Dropout:    0
Parameters: 97.65M
Training Config

Total iters:   10,000
Batch size:    8
Max LR:        0.0003
Optimizer:     adamw
Backend:       helios
Tokenizer:     bpe-8k-chat
Seed:          42
Weight decay:  0.1
Grad clip:     1
Eval interval: 500
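
The displayed learning rate (2.82e-4 at step 2,075) matches the common warmup-then-cosine schedule implied by the JSON config below (warmupIters 500, lr 3e-4, lrMin 3e-5, iters 10,000). A minimal sketch assuming that schedule; the actual helios implementation may differ:

import math

def lr_at(step, max_lr=3e-4, min_lr=3e-5, warmup=500, total=10_000):
    """Linear warmup to max_lr, then cosine decay to min_lr (assumed schedule)."""
    if step < warmup:
        return max_lr * (step + 1) / warmup
    progress = (step - warmup) / (total - warmup)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_at(2_075))  # ~2.82e-4, matching the dashboard readout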
Additional charts: Throughput (tok/s), Step Time (ms/iter), GPU & VRAM, Perplexity, Train/Val Gap, Learning Rate, Grad Norm, Smoothed Loss (EMA), Loss Velocity, Gradient Clipping, GPU Operations.

Step Time Breakdown, Timing Phase Lines, Backward / Forward Ratio: no timing data.

Transformer Layer Analysis

[Gradient Norm Heatmap and Per-Layer Gradient Evolution charts]
Checkpoints (0): no checkpoints saved
Model Config (JSON)
{
  "vocabSize": 8000,
  "blockSize": 512,
  "nLayer": 12,
  "nEmbd": 768,
  "nHead": 12,
  "dropout": 0,
  "ffnActivation": "gelu"
}
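
The 97.65M parameter figure is consistent with a GPT-2-style decoder built from this config: learned positional embeddings, an untied LM head, LayerNorm with weight and bias, and bias-free attention/MLP projections. Those architectural details are assumptions; only the config values come from the run. A back-of-the-envelope check:

# Parameter count for the config above, assuming a GPT-2-style decoder
# (learned positional embeddings, untied LM head, bias-free linear layers,
# LayerNorm with weight + bias). Architectural details are assumptions.
vocab, block, n_layer, n_embd = 8_000, 512, 12, 768

tok_emb  = vocab * n_embd                 # 6,144,000 token embeddings
pos_emb  = block * n_embd                 #   393,216 position embeddings
attn     = 4 * n_embd * n_embd            # q, k, v, out projections
mlp      = 2 * n_embd * (4 * n_embd)      # up + down projections (GELU FFN)
lnorm    = 2 * (2 * n_embd)               # two LayerNorms per block (weight + bias)
blocks   = n_layer * (attn + mlp + lnorm)
final_ln = 2 * n_embd
lm_head  = vocab * n_embd                 # untied output projection

total = tok_emb + pos_emb + blocks + final_ln + lm_head
print(f"{total / 1e6:.2f}M")              # 97.65M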
Training Config (JSON)
{
  "iters": 10000,
  "batchSize": 8,
  "lr": 0.0003,
  "lrMin": 0.00003,
  "warmupIters": 500,
  "beta1": 0.9,
  "beta2": 0.95,
  "eps": 1e-8,
  "weightDecay": 0.1,
  "gradClip": 1,
  "evalInterval": 500,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe-8k-chat",
  "optimizer": "adamw",
  "logLevel": "info",
  "logEvery": 1,
  "trace": false,
  "gradAccumSteps": 4,
  "sampleInterval": 500,
  "spikeThreshold": 5000,
  "embGradScale": 1,
  "syncEvery": 0,
  "gcEvery": 0,
  "packed": true,
  "symbio": false,
  "symbioConfig": null
}
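
For reference, the optimizer settings above map onto a standard AdamW step with gradient accumulation and norm clipping. A minimal PyTorch sketch of an equivalent update; helios is this project's own backend, so PyTorch here is purely illustrative:

import torch

# Hyperparameters taken from the training config above.
model = torch.nn.Linear(768, 768)  # stand-in for the real 12-layer model
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3e-4, betas=(0.9, 0.95), eps=1e-8, weight_decay=0.1,
)

def train_step(batches, loss_fn):
    """One optimizer step with 4-way gradient accumulation and clipping."""
    optimizer.zero_grad()
    for x, y in batches:                  # gradAccumSteps = 4 micro-batches
        loss = loss_fn(model(x), y) / len(batches)
        loss.backward()
    # gradClip = 1; the run's average grad norm (0.251) sits well below this
    # threshold, so clipping should rarely trigger here.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()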