Alpha
Run 20260222182153_84ek · completed · concordance · 152.20M params · 56s elapsed · Updated 1d ago
12L / 768D / 12H · helios · bpe-64k · adamw · Created Feb 22, 2026 6:21 PM
Step 1 / 50,000 (0.0%)
Loss: 10.6897
Best Loss: 10.6897 (0.0% from start)
Val Loss: -
Learning Rate: 1.20e-7
Throughput: 72 tok/s (avg)
Speed: 56,716 ms/iter (avg)
Grad Norm: 3.858 (avg: 3.858)
Tokens processed: 4.1K
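The averaged cards are mutually consistent: with batch size 16 and a 256-token context, each step processes 16 × 256 = 4,096 tokens (the "4.1K" shown after step 1), and at 56,716 ms/iter that works out to roughly 72 tok/s. A quick sketch of the arithmetic, with all values copied from the cards and config above:

const batchSize = 16;      // from Training Config
const blockSize = 256;     // context length, from Architecture
const msPerIter = 56_716;  // "Speed" card, avg ms per iteration

const tokensPerStep = batchSize * blockSize;           // 4,096 -> the "4.1K" tokens card
const tokPerSec = tokensPerStep / (msPerIter / 1000);  // ~72.2 -> the "72 tok/s" card

console.log(tokensPerStep, tokPerSec.toFixed(1));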
Loss Curve: waiting for more data
Architecture
Layers: 12
Embedding: 768
Heads: 12
Vocab: 43,642
Context: 256
Dropout: 0
Parameters: 152.20M
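The 152.20M figure matches one common GPT-2-style parameter layout: untied token embedding and LM head, bias-free linear layers, and LayerNorms with weight and bias. Those internals are an assumption (the dump does not expose them), but under that layout the arithmetic reproduces the reported count exactly:

// Assumed layout: untied LM head, bias-free linears, LayerNorm with weight + bias.
const V = 43_642;  // vocab size
const T = 256;     // context length (position table)
const L = 12;      // layers
const D = 768;     // embedding dim

const embeddings = V * D + T * D;            // token + position tables
const perBlock =
  2 * (2 * D)                                // two LayerNorms (weight + bias)
  + D * (3 * D) + D * D                      // attention: QKV + output projection
  + D * (4 * D) + (4 * D) * D;               // MLP: up + down projection
const total = embeddings + L * perBlock + 2 * D + V * D;  // + final LN + untied head

console.log((total / 1e6).toFixed(2) + "M params");  // "152.20M params"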
Training Config
Total iters: 50,000
Batch size: 16
Max LR: 0.00006
Optimizer: adamw
Backend: helios
Tokenizer: bpe-64k
Seed: 42
Weight decay: 0.1
Grad clip: 1
Eval interval: 100
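The Learning Rate card reads 1.20e-7 at step 1 against a max LR of 0.00006, a ratio of exactly 1/500. That is what a linear warmup over 500 iterations would produce at step 1. The warmup length is a guess, since the dump does not report the schedule; a minimal sketch under that assumption:

const maxLr = 0.00006;
const warmupIters = 500;  // hypothetical: inferred from 0.00006 / 1.2e-7 = 500

// Linear warmup; whatever decay follows the warmup is unknown and omitted here.
const lrAt = (step: number): number =>
  step < warmupIters ? (maxLr * step) / warmupIters : maxLr;

console.log(lrAt(1));  // ~1.2e-7, matching the Learning Rate card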
GPU & VRAM: no GPU data
Learning Rate: no data
Grad Norm: no data
Step Time Breakdown: no timing data
Checkpoints (0): no checkpoints saved
Model config (raw JSON):
{
  "vocabSize": 43642,
  "blockSize": 256,
  "nLayer": 12,
  "nEmbd": 768,
  "nHead": 12,
  "dropout": 0
}
Training config (raw JSON):
{
  "iters": 50000,
  "batchSize": 16,
  "lr": 0.00006,
  "beta1": 0.9,
  "beta2": 0.95,
  "eps": 1e-8,
  "weightDecay": 0.1,
  "gradClip": 1,
  "evalInterval": 100,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe-64k",
  "optimizer": "adamw",
  "logLevel": "info",
  "trace": false
}
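For reference, a minimal sketch of the optimizer math this config describes: global-norm gradient clipping at gradClip = 1, then an AdamW update with lr 0.00006, betas (0.9, 0.95), eps 1e-8, and decoupled weight decay 0.1. This is a generic AdamW implementation under those hyperparameters, not the actual helios backend code:

// Global-norm gradient clipping at gradClip = 1, applied before the update.
function clipGradNorm(g: Float32Array, maxNorm = 1): void {
  let sq = 0;
  for (const x of g) sq += x * x;
  const norm = Math.sqrt(sq);
  if (norm > maxNorm) {
    const scale = maxNorm / norm;
    for (let i = 0; i < g.length; i++) g[i] *= scale;
  }
}

// One AdamW update (decoupled weight decay, as in Loshchilov & Hutter) with the
// hyperparameters from the config above. m and v are per-parameter moment
// buffers; t is the 1-based step count used for bias correction.
function adamwStep(
  w: Float32Array, g: Float32Array, m: Float32Array, v: Float32Array, t: number,
  lr = 0.00006, beta1 = 0.9, beta2 = 0.95, eps = 1e-8, weightDecay = 0.1,
): void {
  for (let i = 0; i < w.length; i++) {
    m[i] = beta1 * m[i] + (1 - beta1) * g[i];         // first-moment EMA
    v[i] = beta2 * v[i] + (1 - beta2) * g[i] * g[i];  // second-moment EMA
    const mHat = m[i] / (1 - beta1 ** t);             // bias correction
    const vHat = v[i] / (1 - beta2 ** t);
    w[i] -= lr * (mHat / (Math.sqrt(vHat) + eps) + weightDecay * w[i]);
  }
}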