chords-run · completed · chords · 273.7K params · 38m 3s elapsed
3L / 64D / 4H · cpu_ref · word · adamw · Created Feb 19, 2026 12:24 AM
Step 2,000 / 2,000 (100.0%)
Loss: 2.9471
Best Loss: 2.1365 (-42.8% from start)
Val Loss: 3.0450 (best: 2.8323)
Learning Rate: 6.83e-10
Throughput: 454 tok/s (avg)
Speed: 1,127 ms/iter (avg)
Grad Norm: 2249.026 (avg: 4537.844)
Tokens processed: 1.02M
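These headline metrics are mutually consistent. Assuming tokens per iteration = batch size × context length (8 × 64 = 512), the throughput, token count, and wall time can all be re-derived; the variable names below are illustrative, not from the run's tooling:

// Sanity-check the metric cards against each other (all inputs read off this page).
const batchSize = 8;
const contextLen = 64;
const totalIters = 2000;
const msPerIter = 1127; // "Speed" card, average

const tokensPerIter = batchSize * contextLen;            // 512
const totalTokens = tokensPerIter * totalIters;          // 1,024,000 ≈ the 1.02M reported
const tokPerSec = tokensPerIter / (msPerIter / 1000);    // ≈ 454, matching "Throughput"
const computeMinutes = (totalIters * msPerIter) / 60000; // ≈ 37.6 min

console.log({ tokensPerIter, totalTokens, tokPerSec: Math.round(tokPerSec), computeMinutes });

The roughly 25 s gap between 37.6 min of summed step time and the 38m 3s elapsed is plausibly eval and checkpointing overhead, though the page doesn't break that down.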
Loss Curve: train/val loss with validation, warmup, overfit, and instability markers. Views: Validation / selection · Run diagnostics · Gradient + Loss · Search-aware view.
Architecture
Layers: 3
Embedding: 64
Heads: 4
Vocab: 170
Context: 64
Dropout: 0
Parameters: 273.7K
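For orientation, the per-head geometry follows directly from these numbers: each of the 4 heads works in 64 / 4 = 16 dimensions over a 64-token window. A minimal sketch of the derived shapes, assuming a standard GPT-style block with a fused QKV projection (an assumption; the page doesn't show the block internals):

// Shapes implied by the 3L / 64D / 4H / 64-context / 170-vocab configuration.
const nEmbd = 64;
const nHead = 4;
const blockSize = 64;
const vocabSize = 170;

const headDim = nEmbd / nHead;                    // 16 dims per head
const tokenEmbedding = [vocabSize, nEmbd];        // 170 x 64 table
const qkvPerToken = 3 * nEmbd;                    // 192-wide fused Q/K/V output (assumed fused)
const attnScores = [nHead, blockSize, blockSize]; // one 64 x 64 score matrix per head

console.log({ headDim, tokenEmbedding, qkvPerToken, attnScores });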
Training Config
Total iters: 2,000
Batch size: 8
Max LR: 0.001
Optimizer: adamw
Backend: cpu_ref
Tokenizer: word
Seed: 42
Weight decay: 0.01
Grad clip: 1
Eval interval: 200
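Note the interplay with the metrics above: with raw gradient norms averaging 4,537.8 against a clip threshold of 1, global-norm clipping is rescaling the gradient on essentially every step (the dashboard presumably reports the pre-clip norm, since the values far exceed the setting). A minimal sketch of the standard technique, not code from this run:

// Global-norm gradient clipping: scale all grads so their joint L2 norm <= maxNorm.
function clipGradNorm(grads: number[][], maxNorm: number): number {
  let sumSq = 0;
  for (const g of grads) for (const v of g) sumSq += v * v;
  const norm = Math.sqrt(sumSq);
  if (norm > maxNorm) {
    const scale = maxNorm / norm;
    for (const g of grads) for (let i = 0; i < g.length; i++) g[i] *= scale;
  }
  return norm; // pre-clip norm, i.e. what a "Grad Norm" card would log
}

// With a norm of 2249.026 and maxNorm = 1, every gradient is scaled by ~1/2249.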
GPU & VRAM: no GPU data (the run used the cpu_ref backend).
Charts: Learning Rate · Grad Norm · Step Time Breakdown (no timing data recorded).
Checkpoints (10)

Step    Filename               Size     Created
2,000   checkpoint-2000.json   11.5 MB  6d ago
1,800   checkpoint-1800.json   11.5 MB  6d ago
1,600   checkpoint-1600.json   11.5 MB  6d ago
1,400   checkpoint-1400.json   11.5 MB  6d ago
1,200   checkpoint-1200.json   11.5 MB  6d ago
1,000   checkpoint-1000.json   11.5 MB  6d ago
800     checkpoint-800.json    11.5 MB  6d ago
600     checkpoint-600.json    11.6 MB  6d ago
400     checkpoint-400.json    11.7 MB  6d ago
200     checkpoint-200.json    11.6 MB  6d ago
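Each checkpoint is a plain JSON file of roughly 11.5 MB, written every 200 steps (matching the eval interval). The page doesn't show their internal schema, so the sketch below only parses one and lists its top-level keys; the filename comes from the table above, everything else is hypothetical:

import { readFileSync } from "node:fs";

// Inspect a checkpoint without assuming anything about its schema.
const raw = readFileSync("checkpoint-2000.json", "utf8");
const ckpt = JSON.parse(raw) as Record<string, unknown>;

console.log("top-level keys:", Object.keys(ckpt));
console.log("size:", (raw.length / (1024 * 1024)).toFixed(1), "MB");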
Model config (raw JSON):
{
  "vocabSize": 170,
  "blockSize": 64,
  "nLayer": 3,
  "nEmbd": 64,
  "nHead": 4,
  "dropout": 0
}
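The same config, typed as a TypeScript interface (field names verbatim from the JSON above):

interface ModelConfig {
  vocabSize: number; // 170 word-level tokens
  blockSize: number; // 64-token context window
  nLayer: number;    // 3 transformer blocks
  nEmbd: number;     // 64-dim embeddings
  nHead: number;     // 4 heads, 16 dims each
  dropout: number;   // 0 for this run
}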
Training config (raw JSON):
{
  "iters": 2000,
  "batchSize": 8,
  "lr": 0.001,
  "beta1": 0.9,
  "beta2": 0.999,
  "eps": 1e-8,
  "weightDecay": 0.01,
  "gradClip": 1,
  "evalInterval": 200,
  "evalIters": 10,
  "seed": 42,
  "backend": "cpu_ref",
  "tokenizer": "word",
  "optimizer": "adamw",
  "logLevel": "info",
  "trace": false
}
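One detail the raw config doesn't state is the LR schedule. The final learning rate of 6.83e-10 against a max of 0.001 is exactly what cosine decay to zero yields one step before the end: 0.001 · 0.5 · (1 + cos(π · 1899/1900)) ≈ 6.84e-10, consistent with a ~100-step linear warmup (the loss chart mentions warmup markers) followed by cosine decay over the remaining 1,900 iterations. The warmup length is inferred, not logged, so treat this as a sketch:

// Hypothetical schedule that reproduces the logged final LR: linear warmup,
// then cosine decay to zero. warmupIters = 100 is an inference, not a setting.
const maxLr = 0.001;
const iters = 2000;
const warmupIters = 100; // ASSUMPTION: not shown in the config

function lrAt(step: number): number {
  if (step < warmupIters) return (maxLr * (step + 1)) / warmupIters;
  const progress = (step - warmupIters) / (iters - warmupIters);
  return 0.5 * maxLr * (1 + Math.cos(Math.PI * progress));
}

console.log(lrAt(0));    // ramping up from maxLr / warmupIters
console.log(lrAt(1000)); // mid-decay
console.log(lrAt(1999)); // ≈ 6.83e-10, matching the dashboard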