Alpha
soda_chat_20260306054023_w9lu · completed · nanochat · 88.33M params · 46s elapsed · Updated 36d ago
16L / 512D / 8H · helios · bpe-chat-4k · adamw · Created Mar 6, 2026 5:40 AM
Step 100 / 100 (100.0%)
Loss: 7.2817
Best Loss: 7.0405 (-6.1% from start)
Val Loss: 7.1916 (best: 7.1916)
Learning Rate: 8.40e-5
Throughput: 2,253 tok/s (avg)
Speed: 920 ms/iter (avg)
Grad Norm: 2.943 (avg: 2.580)
Tokens processed: 102.4K
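For reference, cross-entropy loss in nats converts to perplexity as ppl = exp(loss); a quick Python check of the figures above (values taken only from this dump):

import math

# Cross-entropy (nats) -> perplexity: ppl = exp(loss).
for name, loss in [("loss", 7.2817), ("best loss", 7.0405), ("val loss", 7.1916)]:
    print(f"{name}: {loss:.4f} -> perplexity ~{math.exp(loss):,.0f}")
# val loss 7.1916 -> perplexity ~1,328; with a 4,000-token vocab a
# uniform predictor sits at ln(4000) ~ 8.29 nats, so 100 steps have
# only just started to beat chance.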
Loss Curve (chart)
Architecture
Layers: 16
Embedding: 512
Heads: 8
Vocab: 4,000
Context: 512
Dropout: 0
Parameters: 88.33M
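As a rough cross-check of the 88.33M figure, here is a parameter-count sketch for a GPT-style decoder with this shape. The FFN hidden width and the untied output head are assumptions (the dump states neither); with three SwiGLU matrices, a hidden width near 2730 is what lands close to the reported total, while the more common (8/3)·512 ≈ 1365 would give roughly 55M.

# Parameter-count sketch for the architecture above. ffn_hidden and
# tied are assumptions, not values from the dump.
def count_params(n_layer=16, n_embd=512, vocab=4000, block=512,
                 ffn_hidden=2730, tied=False):
    attn = 4 * n_embd * n_embd             # q, k, v, output projections
    ffn = 3 * n_embd * ffn_hidden          # SwiGLU: gate, up, down matrices
    emb = vocab * n_embd + block * n_embd  # token + position embeddings
    head = 0 if tied else vocab * n_embd   # untied output head (assumed)
    return emb + n_layer * (attn + ffn) + head

print(f"~{count_params() / 1e6:.2f}M of the reported 88.33M "
      "(norms and biases excluded)")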
Training Config
Total iters: 100
Batch size: 4
Max LR: 0.0003
Optimizer: adamw
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
Weight decay: 0.1
Grad clip: 1
Eval interval: 50
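One thing worth noting in these settings: with grad clip at 1 and the gradient norm averaging 2.580, clipping fired on most steps, scaling updates down by roughly 2-3x. A minimal PyTorch-style illustration of that step (the run's actual backend is helios, so this is a sketch of the technique, not the tool's code):

import torch

# One optimizer step with global-norm clipping. clip_grad_norm_ returns
# the *pre-clip* norm, which is what a "Grad Norm" metric like the one
# above (2.943, avg 2.580, both over the clip of 1.0) typically reports.
def train_step(model, optimizer, loss, grad_clip=1.0):
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    norm = torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
    optimizer.step()
    return norm.item()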
Additional chart panels: Throughput (tok/s), Step Time (ms/iter), GPU & VRAM, Perplexity, Train/Val Gap (no validation data), Learning Rate, Grad Norm, Smoothed Loss (EMA), Loss Velocity, Gradient Clipping, GPU Operations. The Step Time Breakdown, Timing Phase Lines, and Backward/Forward Ratio panels report no timing data for this run.
Checkpoints (1)
Step  Filename             Size      Created
100   checkpoint-100.json  303.3 MB  38d ago
Model Config (JSON)
{
  "vocabSize": 4000,
  "blockSize": 512,
  "nLayer": 16,
  "nEmbd": 512,
  "nHead": 8,
  "dropout": 0,
  "ffnActivation": "swiglu"
}
Training Config (JSON)
{
  "iters": 100,
  "batchSize": 4,
  "lr": 0.0003,
  "lrMin": 0.00003,
  "warmupIters": 500,
  "beta1": 0.9,
  "beta2": 0.95,
  "eps": 1e-8,
  "weightDecay": 0.1,
  "gradClip": 1,
  "evalInterval": 50,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe-chat-4k",
  "optimizer": "adamw",
  "logLevel": "info",
  "logEvery": 1,
  "trace": false,
  "gradAccumSteps": 1,
  "sampleInterval": 99999,
  "spikeThreshold": 50,
  "syncEvery": 1,
  "gcEvery": 1,
  "packed": true,
  "symbio": false,
  "symbioConfig": null
}
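The reported learning rate of 8.40e-5 at step 100 follows from linear warmup from lrMin to lr over warmupIters: 3e-5 + (3e-4 - 3e-5) * (100/500) = 8.4e-5. A sketch of that schedule; the post-warmup cosine decay to lrMin is an assumption, and this 100-iter run ends well inside warmup anyway:

import math

def lr_at(step, lr=3e-4, lr_min=3e-5, warmup=500, total=100):
    if step < warmup:
        # Linear warmup from lr_min up to lr; at step 100 this gives
        # 3e-5 + (3e-4 - 3e-5) * 100/500 = 8.40e-5, matching the dashboard.
        return lr_min + (lr - lr_min) * step / warmup
    # Assumed: cosine decay from lr down to lr_min after warmup.
    # Unreachable in this run, since total (100) < warmup (500).
    t = (step - warmup) / max(1, total - warmup)
    return lr_min + 0.5 * (lr - lr_min) * (1 + math.cos(math.pi * t))

print(f"{lr_at(100):.2e}")  # 8.40e-05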