Alpha
concordance_v2_20260227074907_59am · stale · concordance · 306.73M params · 29m 14s elapsed · Updated 45d ago
21L / 1024D / 16H · helios · bpe-64k · adamw · Created Feb 27, 2026 7:49 AM
Step 141 / 20,000 (0.7%)
Loss: 9.8923
Best Loss: 9.8923 (0.0% from start)
Val Loss: -
Learning Rate: 6.81e-5
Throughput: 166 tok/s (avg)
Speed: 12,353 ms/iter (avg)
Grad Norm: 0.076 (avg: 0.058)
Tokens processed: 288.8K
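The throughput cards are internally consistent. A minimal sketch in Python, assuming the usual gradient-accumulation accounting where each optimizer step consumes batchSize × gradAccumSteps × blockSize tokens:

# Sanity-check the dashboard's throughput cards against the config.
batch_size = 1          # batchSize
grad_accum_steps = 4    # gradAccumSteps
block_size = 512        # blockSize
step = 141
ms_per_iter = 12_353    # "Speed" card (avg)

tokens_per_step = batch_size * grad_accum_steps * block_size  # 2048
tokens_processed = step * tokens_per_step                     # 288,768 ~ 288.8K
tok_per_s = tokens_per_step / (ms_per_iter / 1000)            # ~165.8 ~ 166

At 2,048 tokens per 12.35 s step, the 166 tok/s and 288.8K-token cards both line up.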
Step Breakdown
Forward: 1309 ms (11% of step)
Backward: 10746 ms (87% of step)
GPU Sync: 118 ms (1% of step)
GPU Ops: 8,964 per step
MFU: 0.3% (model FLOPS utilization)
Bwd/Fwd ratio: 8.2x
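A back-of-envelope check of the 0.3% MFU card, using the common ~6N FLOPs-per-token approximation for forward plus backward; the peak-FLOPS figure below is a hypothetical placeholder, not something the dashboard reports:

# Rough MFU estimate: achieved FLOP/s divided by device peak.
n_params = 306.73e6
tok_per_s = 166
PEAK_FLOPS = 1.0e14      # hypothetical ~100 TFLOP/s peak; substitute your device's

achieved = 6 * n_params * tok_per_s    # ~3.06e11 FLOP/s
mfu = achieved / PEAK_FLOPS            # ~0.003 -> 0.3%, matching the card

bwd_fwd = 10746 / 1309                 # ~8.21 -> the 8.2x ratio above

Compute alone would put backward at roughly 2x forward; a time ratio of 8.2x suggests the backward pass on this backend is dominated by overhead rather than raw FLOPs.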
Loss Curve
Architecture
Layers: 21
Embedding: 1024
Heads: 16
Vocab: 19,777
Context: 512
Dropout: 0
Parameters: 306.73M
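The 306.73M figure can be roughly reconstructed from this table. A sketch under stated assumptions: the SwiGLU FFN (per the model config below) uses a hidden width of about 8/3·d across three projections, the token embedding and LM head are untied, and positional embeddings are learned; LayerNorms and biases are ignored, so the estimate lands slightly low:

# Approximate parameter count from the architecture table.
n_layer, d, vocab, ctx = 21, 1024, 19_777, 512

attn = 4 * d * d               # Q, K, V, and output projections
ffn = 3 * (8 * d // 3) * d     # SwiGLU gate/up/down projections (assumed width)
per_layer = attn + ffn

total = n_layer * per_layer + 2 * vocab * d + ctx * d
print(f"{total / 1e6:.2f}M")   # ~305.2M; norms and biases close the gap to 306.73M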
Training Config
Total iters: 20,000
Batch size: 1
Max LR: 0.0003
Optimizer: adamw
Backend: helios
Tokenizer: bpe-64k
Seed: 42
Weight decay: 0.1
Grad clip: 1
Eval interval: 500
Charts: Throughput (tok/s) · Step Time (ms/iter) · GPU & VRAM · Perplexity · Train/Val Gap (no validation data) · Learning Rate · Grad Norm · Smoothed Loss (EMA) · Loss Velocity · Gradient Clipping · GPU Operations · Step Time Breakdown (Forward / Backward / Grad Norm / Optimizer / GPU Sync / Data) · Timing Phase Lines · Backward / Forward Ratio · Transformer Layer Analysis: Gradient Norm Heatmap, Per-Layer Gradient Evolution
Checkpoints (0)
No checkpoints saved
Model Config (JSON)
{
  "vocabSize": 19777,
  "blockSize": 512,
  "nLayer": 21,
  "nEmbd": 1024,
  "nHead": 16,
  "dropout": 0,
  "ffnActivation": "swiglu"
}
Training Config (JSON)
{
  "iters": 20000,
  "batchSize": 1,
  "lr": 0.0003,
  "lrMin": 0.00003,
  "warmupIters": 1000,
  "beta1": 0.9,
  "beta2": 0.95,
  "eps": 1e-8,
  "weightDecay": 0.1,
  "gradClip": 1,
  "evalInterval": 500,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe-64k",
  "optimizer": "adamw",
  "logLevel": "info",
  "trace": false,
  "gradAccumSteps": 4,
  "sampleInterval": 200,
  "spikeThreshold": 10,
  "syncEvery": 1,
  "gcEvery": 1,
  "packed": false,
  "symbio": false,
  "symbioConfig": null
}
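The Learning Rate card (6.81e-5 at step 141) is consistent with a linear warmup from lrMin to lr over warmupIters: 3e-5 + (3e-4 - 3e-5) × 141/1000 = 6.807e-5. A sketch of that schedule; the cosine decay after warmup is an assumption (a common pairing with these fields), not something the dashboard states:

import math

def lr_at(step, lr=3e-4, lr_min=3e-5, warmup=1000, iters=20_000):
    # Linear warmup from lr_min up to lr, matching the displayed value.
    if step < warmup:
        return lr_min + (lr - lr_min) * step / warmup
    # Assumed cosine decay from lr back down to lr_min afterwards.
    t = (step - warmup) / (iters - warmup)
    return lr_min + 0.5 * (lr - lr_min) * (1 + math.cos(math.pi * t))

print(lr_at(141))  # 6.807e-05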