Alpha
Run 20260223060953_gye0 · completed · chat · 6.84M params · 3m 13s elapsed · Updated 1d ago
6L / 256D / 8H · helios · bpe-4k · adamw · Created Feb 23, 2026 6:10 AM
Step 105 / 10,000 (1.1%)
Loss: 7.6060
Best Loss: 7.6060 (-8.9% from start)
Val Loss: - (no eval yet; first eval at step 500)
Learning Rate: 3.15e-5
Throughput: 9,474 tok/s (avg)
Speed: 13,836 ms/iter (avg)
Grad Norm: 41.284 (avg: 29.049)
Tokens processed: 1.84M
Forward: 1,517 ms (11% of step)
Backward: 12,260 ms (89% of step)
GPU Sync: 6 ms (0% of step)
GPU Ops: 3,577 per step
MFU: 0.3% (model FLOPS utilization)
Bwd/Fwd ratio: 8.1x
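These cards are internally consistent: with a 128-sequence micro-batch, a 256-token context, and 4 gradient-accumulation steps (per the training config below), each optimizer step processes 131,072 tokens; at 13,836 ms/iter that is roughly 9,470 tok/s, in line with the throughput card. A minimal sketch of that arithmetic, using the common 6-FLOPs-per-parameter-per-token estimate for training; the peak-FLOPS figure is an illustrative assumption, not something the dashboard reports:

```python
# Cross-check of the dashboard's throughput and MFU numbers.
# Inputs come from the run config; PEAK_FLOPS is an ASSUMED
# device peak and is NOT reported by the dashboard.

batch_size = 128          # micro-batch size
block_size = 256          # context length
grad_accum_steps = 4      # from the training config JSON
step_ms = 13_836          # avg ms per optimizer step
n_params = 6.84e6         # reported parameter count

tokens_per_step = batch_size * block_size * grad_accum_steps   # 131,072
tok_per_s = tokens_per_step / (step_ms / 1000)                 # ~9,473 tok/s

# Rule of thumb: training costs ~6 FLOPs per parameter per token.
achieved_flops = 6 * n_params * tok_per_s                      # ~3.9e11 FLOP/s

PEAK_FLOPS = 130e12   # ASSUMPTION: illustrative fp16/bf16 device peak
mfu = achieved_flops / PEAK_FLOPS

print(f"{tok_per_s:,.0f} tok/s, MFU ≈ {mfu:.1%}")   # matches the 0.3% card
```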
[Loss Curve chart: train/val loss with validation, warmup, overfit, and instability markers]
Architecture
Layers: 6
Embedding: 256
Heads: 8
Vocab: 4,000
Context: 256
Dropout: 0
Parameters: 6.84M
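The 6.84M parameter count is consistent with a standard GPT-style decoder at these dimensions. A sketch of one plausible breakdown, assuming an untied output head, bias-free linear layers, learned position embeddings, and LayerNorms with weight and bias (details the dashboard does not state):

```python
# Parameter-count sketch for a GPT-style decoder at the run's dimensions.
# ASSUMPTIONS (not stated by the dashboard): untied lm_head, no biases on
# linear layers, LayerNorm with weight + bias, learned position embeddings.

n_layer, n_embd, vocab, block = 6, 256, 4000, 256

tok_emb = vocab * n_embd                 # 1,024,000
pos_emb = block * n_embd                 #    65,536
per_block = (
    n_embd * 3 * n_embd                  # fused qkv projection
    + n_embd * n_embd                    # attention output projection
    + 2 * n_embd * 4 * n_embd            # MLP up + down (4x expansion)
    + 2 * 2 * n_embd                     # two LayerNorms (weight + bias)
)                                        # 787,456 per block
final_ln = 2 * n_embd                    #       512
lm_head = vocab * n_embd                 # 1,024,000 (untied)

total = tok_emb + pos_emb + n_layer * per_block + final_ln + lm_head
print(f"{total / 1e6:.2f}M parameters")  # 6.84M
```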
Training Config
Total iters: 10,000
Batch size: 128
Max LR: 0.00015
Optimizer: adamw
Backend: helios
Tokenizer: bpe-4k
Seed: 42
Weight decay: 0.01
Grad clip: 1.0
Eval interval: 500
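The learning rate shown above (3.15e-5 at step 105) is exactly what a 500-step linear warmup to the 1.5e-4 max would produce: 1.5e-4 × 105/500 = 3.15e-5. The warmup length is inferred from those numbers rather than reported; a sketch of that schedule, paired with a cosine decay afterwards (a common default, also an assumption here):

```python
import math

MAX_LR = 1.5e-4
TOTAL_ITERS = 10_000
WARMUP_ITERS = 500   # INFERRED from lr=3.15e-5 at step 105; not in the config

def lr_at(step: int) -> float:
    """Linear warmup to MAX_LR, then cosine decay to zero."""
    if step < WARMUP_ITERS:
        return MAX_LR * step / WARMUP_ITERS
    progress = (step - WARMUP_ITERS) / (TOTAL_ITERS - WARMUP_ITERS)
    return 0.5 * MAX_LR * (1 + math.cos(math.pi * progress))

print(lr_at(105))   # 3.15e-05, matching the dashboard
```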
[Charts: GPU & VRAM · Learning Rate · Grad Norm · Step Time Breakdown]
Checkpoints (0)
No checkpoints saved
Model config:
{
  "vocabSize": 4000,
  "blockSize": 256,
  "nLayer": 6,
  "nEmbd": 256,
  "nHead": 8,
  "dropout": 0
}
Training config:
{
  "iters": 10000,
  "batchSize": 128,
  "lr": 0.00015,
  "beta1": 0.9,
  "beta2": 0.999,
  "eps": 0.000001,
  "weightDecay": 0.01,
  "gradClip": 1,
  "evalInterval": 500,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe-4k",
  "optimizer": "adamw",
  "logLevel": "debug",
  "trace": true,
  "gradAccumSteps": 4
}
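The run's backend is "helios", but the hyperparameters in the training config translate directly to a standard AdamW setup. A PyTorch sketch of that mapping; the model, get_batch, and loss_fn below are hypothetical stand-ins, not part of the run:

```python
# Sketch: mapping the training-config JSON onto a standard AdamW setup.
# The actual backend is "helios"; this PyTorch version only illustrates
# how the same hyperparameters would be wired up.

import torch

cfg = {"lr": 0.00015, "beta1": 0.9, "beta2": 0.999, "eps": 1e-6,
       "weightDecay": 0.01, "gradClip": 1.0, "gradAccumSteps": 4}

model = torch.nn.Linear(256, 256)  # stand-in for the real 6.84M-param model

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=cfg["lr"],
    betas=(cfg["beta1"], cfg["beta2"]),
    eps=cfg["eps"],              # 1e-6 per the config, vs. PyTorch's 1e-8 default
    weight_decay=cfg["weightDecay"],
)

def train_step(get_batch, loss_fn):
    """One optimizer step with 4-way gradient accumulation and norm clipping."""
    optimizer.zero_grad(set_to_none=True)
    for _ in range(cfg["gradAccumSteps"]):
        x, y = get_batch()
        # Scale each micro-batch loss so the accumulated gradient is an average.
        loss = loss_fn(model(x), y) / cfg["gradAccumSteps"]
        loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), cfg["gradClip"])
    optimizer.step()
```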