Alpha
Run ID: 20260220162607_nmd6completedconcordance · 52.34M params · 2m 29s elapsed · Updated 3d ago
6L / 256D / 8H · helios · bpe · adamw · Created Feb 20, 2026 4:26 PM
Step 4 / 1,000 (0.4%)
Loss: 11.3495
Best loss: 11.3495 (-1.3% from start)
Val loss: - (not yet evaluated; first eval at step 100)
Learning rate: 1.20e-5
Throughput: 3 tok/s (avg)
Speed: 74,873 ms/iter (avg)
Grad norm: 15.195 (avg: 14.955)
Tokens processed: 512
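These figures are mutually consistent: with batch size 1 and a 256-token context, each iteration processes 256 tokens, so 512 tokens means two completed iterations; at 74,873 ms/iter that is about 149.7 s of compute (matching the 2m 29s elapsed) and roughly 3.4 tok/s. A minimal sketch of the arithmetic in Python, using only numbers shown on this page:

# Sanity-check the dashboard throughput figures against the run config.
batch_size = 1
block_size = 256            # context length
ms_per_iter = 74_873        # reported average step time
tokens_processed = 512      # reported

tokens_per_iter = batch_size * block_size           # 256
iters_done = tokens_processed // tokens_per_iter    # 2
elapsed_s = iters_done * ms_per_iter / 1000         # ~149.7 s ~= 2m 29s
tok_per_s = tokens_processed / elapsed_s            # ~3.4 tok/s
print(f"{iters_done} iters, {elapsed_s:.1f}s, {tok_per_s:.2f} tok/s")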
Loss Curve: train/val loss with validation, warmup, overfit, and instability markers (views: validation/selection, run diagnostics, gradient + loss, search-aware).
Architecture
Layers: 6
Embedding: 256
Heads: 8
Vocab: 92,860
Context: 256
Dropout: 0
Parameters: 52.34M
Training Config
Total iters: 1,000
Batch size: 1
Max LR: 0.0003
Optimizer: adamw
Backend: helios
Tokenizer: bpe
Seed: 42
Weight decay: 0.01
Grad clip: 1
Eval interval: 100
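With grad clip at 1 and a reported grad norm near 15, global-norm clipping is active on every step this early in training, scaling each update down by roughly a factor of 15. A minimal sketch of the standard global-norm rule (whether helios clips exactly this way is an assumption; the export doesn't say):

import math

def clip_grad_norm(grads: list[list[float]], max_norm: float = 1.0):
    """Scale all gradients so their combined L2 norm is at most max_norm."""
    total_norm = math.sqrt(sum(g * g for vec in grads for g in vec))
    scale = min(1.0, max_norm / (total_norm + 1e-6))
    clipped = [[g * scale for g in vec] for vec in grads]
    return clipped, total_norm

# With total_norm ~ 15.195 and max_norm = 1, scale ~ 1/15: the clip, not
# the raw gradient magnitude, is setting the effective step size right now.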
GPU & VRAM: no GPU data
Charts: Learning Rate · Grad Norm · Step Time Breakdown (no timing data)
Checkpoints (0): no checkpoints saved
Model config (JSON):
{
  "vocabSize": 92860,
  "blockSize": 256,
  "nLayer": 6,
  "nEmbd": 256,
  "nHead": 8,
  "dropout": 0
}
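The displayed 52.34M parameter count is reproducible from this config for a standard GPT-style decoder, assuming learned position embeddings, untied input/output embeddings, bias-free linear layers, and LayerNorm with weight and bias (none of which the export states). A sketch:

# Reproduce the 52.34M parameter count from the model config above.
vocab, d, n_layer, block = 92_860, 256, 6, 256

tok_emb = vocab * d                 # input token embeddings
pos_emb = block * d                 # learned position embeddings
per_block = (
    3 * d * d                       # Q, K, V projections
    + d * d                         # attention output projection
    + 2 * (d * 4 * d)               # MLP up + down projections (4x width)
    + 2 * (2 * d)                   # two LayerNorms (weight + bias)
)
final_ln = 2 * d
lm_head = vocab * d                 # untied output head

total = tok_emb + pos_emb + n_layer * per_block + final_ln + lm_head
print(f"{total:,} params = {total / 1e6:.2f}M")   # 52,335,104 = 52.34M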
Training config (JSON):
{
  "iters": 1000,
  "batchSize": 1,
  "lr": 0.0003,
  "beta1": 0.9,
  "beta2": 0.999,
  "eps": 1e-8,
  "weightDecay": 0.01,
  "gradClip": 1,
  "evalInterval": 100,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe",
  "optimizer": "adamw",
  "logLevel": "info",
  "trace": false
}
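The dashboard's learning rate (1.20e-5 at step 4) is exactly 4% of the configured 3e-4 maximum, which is what a linear warmup reaching max LR at step 100 would produce. The warmup length is not in the exported config, so 100 steps (and the cosine decay and floor below) are assumptions; a sketch of such a schedule:

import math

MAX_LR = 3e-4           # "lr" from the training config
TOTAL = 1_000           # "iters" from the training config
WARMUP = 100            # assumed; not in the export
MIN_LR = MAX_LR / 10    # assumed floor for the decay phase

def lr_at(step: int) -> float:
    if step < WARMUP:
        return MAX_LR * step / WARMUP            # linear ramp to MAX_LR
    # cosine decay from MAX_LR down to MIN_LR over the remaining steps
    progress = (step - WARMUP) / (TOTAL - WARMUP)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

print(f"{lr_at(4):.2e}")   # 1.20e-05, matching the dashboard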