fine_corpus_large_v5 (stale) · fine_corpus · 20.51M params · 19m 4s elapsed · Updated 36d ago
8L / 384D / 6H · helios · bpe-8k · adamw · Created Mar 8, 2026 10:23 AM
Step 4,592 / 78,000 (5.9%)
Loss: 6.3975
Best Loss: 5.3703 (-29.4% from start)
Val Loss: 7.6776 (best: 6.8596)
Learning Rate: 5.98e-5
Throughput: 8,164 tok/s (avg)
Speed: 252 ms/iter (avg)
Grad Norm: 0.000 (avg: 0.000)
Tokens processed: 9.40M
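These cards are mutually consistent: with batch size 4 and context 512 (see Training Config), each iteration consumes 2,048 tokens. The arithmetic below is a sanity sketch, not tool output, and it assumes the Perplexity chart uses the standard definition exp(loss):

Metrics Sanity Check (Python)
import math

batch_size, context = 4, 512        # from Training Config below
step, step_ms = 4_592, 252          # current step, avg iteration time

tokens_per_iter = batch_size * context       # 2,048 tokens per iteration
print(tokens_per_iter / (step_ms / 1000))    # ~8,127 tok/s (card: 8,164 avg)
print(step * tokens_per_iter / 1e6)          # ~9.40M tokens processed

# Perplexity = exp(loss); both are still high this early in the run.
print(math.exp(6.3975))                      # ~600 (train)
print(math.exp(7.6776))                      # ~2,160 (val)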
Loss Curve
Architecture
Layers: 8
Embedding: 384
Heads: 6
Vocab: 8,000
Context: 512
Dropout: 0
Parameters: 20.51M
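The 20.51M parameter count is consistent with a GPT-style stack using learned positional embeddings, untied input/output embeddings, and bias-free linear layers; that layout is an assumption (the dashboard does not expose it), but the recount lands exactly on the card's number:

Parameter Recount (Python)
V, T, L, D = 8_000, 512, 8, 384   # vocab, context, layers, embedding dim

tok_emb = V * D                   # token embeddings: 3,072,000
pos_emb = T * D                   # positional embeddings: 196,608
per_layer = (
    4 * D * D                     # attention Q/K/V/output projections
    + 2 * D * (4 * D)             # MLP up/down projections (4x expansion)
    + 2 * 2 * D                   # two LayerNorms (weight + bias each)
)
total = tok_emb + pos_emb + L * per_layer + 2 * D + V * D  # + final LN, untied head
print(f"{total / 1e6:.2f}M")      # 20.51M -- matches the card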
Training Config
Total iters: 78,000
Batch size: 4
Max LR: 0.00006
Optimizer: adamw
Backend: helios
Tokenizer: bpe-8k
Seed: 42
Weight decay: 0.1
Grad clip: 0
Eval interval: 500
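The 5.98e-5 on the Learning Rate card is exactly what linear warmup followed by cosine decay would give at step 4,592 with the values in the Training Config JSON below (warmupIters 2000, lr 6e-5, lrMin 6e-6); the schedule shape is inferred from those fields, not stated by the tool:

LR Schedule Check (Python)
import math

def lr_at(step, lr=6e-5, lr_min=6e-6, warmup=2_000, total=78_000):
    """Linear warmup to lr, then cosine decay to lr_min."""
    if step < warmup:
        return lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    return lr_min + 0.5 * (lr - lr_min) * (1 + math.cos(math.pi * progress))

print(f"{lr_at(4_592):.2e}")  # 5.98e-05 -- matches the card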
Charts: Throughput (tok/s) · Step Time (ms/iter) · GPU & VRAM · Perplexity · Train/Val Gap · Learning Rate · Grad Norm · Smoothed Loss (EMA) · Loss Velocity · Gradient Clipping · GPU Operations
Step Time Breakdown: no timing data
Timing Phase Lines: no timing data
Backward / Forward Ratio: no timing data

Transformer Layer Analysis

Charts: Gradient Norm Heatmap · Per-Layer Gradient Evolution
Checkpoints (0)
No checkpoints saved
Sample Generations (1)
# | Checkpoint | Prompt (preview) | Generated
1 | - | The | 36d ago
Prompt: The
Output: The efP"", " = alB"ore"""(
Model Config (JSON)
{
  "vocabSize": 8000,
  "blockSize": 512,
  "nLayer": 8,
  "nEmbd": 384,
  "nHead": 6,
  "dropout": 0,
  "ffnActivation": "gelu"
}
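The JSON keys map one-to-one onto a small config object; the dataclass below is illustrative (not part of the tool), but it makes the head-dimension constraint explicit:

Config Parsing Sketch (Python)
import json
from dataclasses import dataclass

raw = '''{"vocabSize": 8000, "blockSize": 512, "nLayer": 8,
          "nEmbd": 384, "nHead": 6, "dropout": 0, "ffnActivation": "gelu"}'''

@dataclass
class ModelConfig:
    vocabSize: int
    blockSize: int
    nLayer: int
    nEmbd: int
    nHead: int
    dropout: float
    ffnActivation: str

cfg = ModelConfig(**json.loads(raw))
print(cfg.nEmbd // cfg.nHead)   # 64-dim heads (384 divides evenly by 6)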
Training Config (JSON)
{
  "iters": 78000,
  "batchSize": 4,
  "lr": 0.00006,
  "lrMin": 0.000006,
  "warmupIters": 2000,
  "beta1": 0.9,
  "beta2": 0.95,
  "eps": 1e-8,
  "weightDecay": 0.1,
  "gradClip": 0,
  "evalInterval": 500,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe-8k",
  "optimizer": "adamw",
  "logLevel": "info",
  "logEvery": 25,
  "trace": false,
  "gradAccumSteps": 1,
  "sampleInterval": 2000,
  "spikeThreshold": 0,
  "syncEvery": 1,
  "gcEvery": 1,
  "packed": true,
  "symbio": false,
  "symbioConfig": null
}
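helios appears to be this tool's own execution backend; for anyone reproducing the run in PyTorch, the optimizer fields translate directly to torch.optim.AdamW. A minimal sketch, assuming no parameter-group split for weight decay (the nn.Linear is a stand-in for the real model):

Optimizer Equivalent (Python / PyTorch)
import torch
import torch.nn as nn

model = nn.Linear(384, 8_000)    # placeholder module, not the real network
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=6e-5,                     # "lr": 0.00006
    betas=(0.9, 0.95),           # "beta1" / "beta2"
    eps=1e-8,                    # "eps"
    weight_decay=0.1,            # "weightDecay"
)
# "gradClip": 0 disables clipping, so there would be no
# torch.nn.utils.clip_grad_norm_ call between backward() and step().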