Alpha
soda_chat_20260306055025_0coq · completed · nanochat · 88.33M params · 1m 33s elapsed · Updated 36d ago
16L / 512D / 8H · helios · bpe-chat-4k · adamw · Created Mar 6, 2026 5:50 AM
Step 200 / 200 (100.0%)

Loss: 7.2546
Best Loss: 6.2218 (3.4% from start)
Val Loss: 7.3244 (best: 7.3036)
Learning Rate: 1.38e-5
Throughput: 2,201 tok/s (avg)
Speed: 958 ms/iter (avg)
Grad Norm: 118.572 (avg: 145.285)
Tokens processed: 204.8K
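A couple of the cards above can be cross-checked directly: perplexity is exp(cross-entropy loss), and the token counter should be close to throughput times elapsed wall-clock time. A minimal sketch in TypeScript, using only values copied from the cards (nothing here is logged by the run itself):

// Derived sanity checks from the metric cards above.
const trainLoss = 7.2546;
const valLoss = 7.3244;
const tokPerSec = 2201;   // avg throughput
const elapsedSec = 93;    // 1m 33s

console.log(Math.exp(trainLoss).toFixed(0)); // ~1415 train perplexity
console.log(Math.exp(valLoss).toFixed(0));   // ~1517 val perplexity
console.log(tokPerSec * elapsedSec);         // 204,693, consistent with the 204.8K counter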
Loss Curve (chart)
Architecture
Layers: 16
Embedding: 512
Heads: 8
Vocab: 4,000
Context: 512
Dropout: 0
Parameters: 88.33M
Training Config
Total iters: 200
Batch size: 4
Max LR: 0.0003 (see the schedule sketch below)
Optimizer: adamw
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
Weight decay: 0.1
Grad clip: 1
Eval interval: 50
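The full JSON below adds warmupIters: 500 and lrMin: 0.00003. A common schedule with these fields is linear warmup to the max LR followed by cosine decay to lrMin; here is a sketch under that assumption. Note that warmupIters exceeds the 200 total iters, so such a run would end mid-warmup at about 1.2e-4, while the dashboard reports 1.38e-5 at step 200, so the trainer's actual schedule evidently differs:

// Hypothetical warmup-plus-cosine schedule built from the config fields;
// not taken from the helios trainer, whose schedule isn't shown here.
function lrAt(step: number, iters = 200, lr = 3e-4, lrMin = 3e-5, warmupIters = 500): number {
  if (step < warmupIters) return (lr * (step + 1)) / warmupIters; // linear warmup
  const t = Math.min((step - warmupIters) / Math.max(1, iters - warmupIters), 1);
  return lrMin + 0.5 * (lr - lrMin) * (1 + Math.cos(Math.PI * t)); // cosine decay
}

console.log(lrAt(199)); // 1.2e-4 under these assumptions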
Additional charts:
- Throughput (tok/s)
- Step Time (ms/iter)
- GPU & VRAM (no GPU data)
- Perplexity
- Train/Val Gap
- Learning Rate
- Grad Norm
- Smoothed Loss (EMA)
- Loss Velocity
- Gradient Clipping
- GPU Operations
- Step Time Breakdown (no timing data)
- Timing Phase Lines (no timing data)
- Backward / Forward Ratio (no timing data)
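The Smoothed Loss (EMA) panel plots an exponential moving average of the raw loss curve. A minimal sketch of that smoothing; the factor alpha is an assumption, not a value taken from this dashboard:

// Exponential moving average over the per-step loss values.
// alpha = 0.1 is hypothetical; the dashboard's actual factor isn't shown.
function ema(values: number[], alpha = 0.1): number[] {
  const out: number[] = [];
  let m = values[0];
  for (const v of values) {
    m = alpha * v + (1 - alpha) * m; // each new sample pulls the average toward it
    out.push(m);
  }
  return out;
}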
Checkpoints (1)

Step | Filename            | Size     | Created
200  | checkpoint-200.json | 303.3 MB | 38d ago
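The single checkpoint is serialized as JSON. A minimal sketch for inspecting it; only the filename comes from the table above, and the top-level layout is an assumption:

// Hypothetical inspection of the JSON checkpoint; its field names are not documented here.
import { readFileSync } from "node:fs";

const ckpt = JSON.parse(readFileSync("checkpoint-200.json", "utf8"));
console.log(Object.keys(ckpt)); // e.g. model config, optimizer state, weight arrays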
Model Config (JSON)
{
  "vocabSize": 4000,
  "blockSize": 512,
  "nLayer": 16,
  "nEmbd": 512,
  "nHead": 8,
  "dropout": 0,
  "ffnActivation": "swiglu"
}
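For readers wiring this config into code, the JSON above maps naturally to a typed record. The rough parameter count below assumes untied input/output embeddings and a SwiGLU FFN with three 512 × 2048 projections; under those assumptions it lands near 71M, not the reported 88.33M, so the trainer's actual FFN width or embedding bookkeeping must differ:

// Typed mirror of the Model Config JSON above, plus a rough parameter estimate.
// The FFN width and untied embeddings are assumptions, not this trainer's layout.
interface ModelConfig {
  vocabSize: number;
  blockSize: number;
  nLayer: number;
  nEmbd: number;
  nHead: number;
  dropout: number;
  ffnActivation: string;
}

function roughParams(c: ModelConfig): number {
  const attn = 4 * c.nEmbd * c.nEmbd;      // q, k, v, and output projections
  const ffn = 3 * c.nEmbd * 4 * c.nEmbd;   // SwiGLU gate/up/down at assumed 4x width
  const embeddings = 2 * c.vocabSize * c.nEmbd; // input + output, assumed untied
  return c.nLayer * (attn + ffn) + embeddings;
}

const cfg: ModelConfig = {
  vocabSize: 4000, blockSize: 512, nLayer: 16, nEmbd: 512,
  nHead: 8, dropout: 0, ffnActivation: "swiglu",
};
console.log((roughParams(cfg) / 1e6).toFixed(1) + "M"); // ~71.2M under these assumptions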
Training Config (JSON)
{
  "iters": 200,
  "batchSize": 4,
  "lr": 0.0003,
  "lrMin": 0.00003,
  "warmupIters": 500,
  "beta1": 0.9,
  "beta2": 0.95,
  "eps": 1e-8,
  "weightDecay": 0.1,
  "gradClip": 1,
  "evalInterval": 50,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe-chat-4k",
  "optimizer": "adamw",
  "logLevel": "info",
  "logEvery": 1,
  "trace": false,
  "gradAccumSteps": 1,
  "sampleInterval": 99999,
  "spikeThreshold": 50,
  "syncEvery": 1,
  "gcEvery": 1,
  "packed": true,
  "symbio": false,
  "symbioConfig": null
}
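One config detail worth flagging against the metrics: gradClip is 1 while the reported grad norm averages 145.285, so under standard clip-by-global-norm every late-run step would be scaled down by roughly two orders of magnitude. A generic sketch of that operation (not helios internals):

// Standard global-norm gradient clipping; a generic sketch, not this trainer's code.
function clipGradNorm(grads: Float32Array[], maxNorm = 1): number {
  let sq = 0;
  for (const g of grads) for (const x of g) sq += x * x;
  const norm = Math.sqrt(sq);
  if (norm > maxNorm) {
    const scale = maxNorm / norm; // roughly 1/145 at this run's average norm
    for (const g of grads) for (let i = 0; i < g.length; i++) g[i] *= scale;
  }
  return norm; // pre-clip norm; dashboards typically chart this value
}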