Alpha
super_chat_20260305075936_434f (stale) · super_chat · 1.33M params · 1m 37s elapsed · Updated 39d ago
4L / 128D / 4H · helios · bpe · adamw · Created Mar 5, 2026 8:04 AM
Step: 843 / 12,000 (7.0%)
Loss: 6.2145
Best Loss: 4.7091 (-18.5% from start)
Val Loss: 4.7467 (best: 4.7467)
Learning Rate: 1.46e-4
Throughput: 32,021 tok/s (avg)
Speed: 213 ms/iter (avg)
Grad Norm: 116340.360 (avg: 26754.509)
Tokens processed: 5.18M
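The tokens-processed figure lines up with the training config further down: with a batch size of 12, a 256-token context, and 2 gradient-accumulation steps, each optimizer step consumes 6,144 tokens, and 843 steps gives about 5.18M. The sketch below is that arithmetic only; how the dashboard actually counts tokens and averages throughput is an assumption.

// Sketch: sanity-check the "Tokens processed" and throughput readouts.
// Assumes tokens are counted as batchSize * blockSize * gradAccumSteps per
// optimizer step -- consistent with the displayed numbers, but an assumption.
const batchSize = 12;
const blockSize = 256;
const gradAccumSteps = 2;
const step = 843;

const tokensPerStep = batchSize * blockSize * gradAccumSteps; // 6,144
const tokensProcessed = tokensPerStep * step;                 // 5,179,392 ≈ 5.18M

// At ~213 ms/iter the implied throughput is ~28.8k tok/s, the same order as
// the displayed 32,021 tok/s average (the averaging windows differ).
const impliedThroughput = tokensPerStep / 0.213;
console.log({ tokensProcessed, impliedThroughput });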
Loss Curve (chart)
Architecture
Layers: 4
Embedding: 128
Heads: 4
Vocab: 2,000
Context: 256
Dropout: 0
Parameters: 1.33M
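The 1.33M figure can be reproduced from these numbers under common GPT-style assumptions (learned positional embeddings, a 4x GELU FFN, an untied output head, no bias terms). The breakdown below is a sketch under those assumptions, not the trainer's own accounting.

// Sketch: approximate parameter count for a 4-layer, 128-dim, 2,000-vocab,
// 256-context decoder. Assumes learned positional embeddings, a 4x FFN,
// an untied LM head, and no biases -- assumptions, not confirmed internals.
const vocabSize = 2000, blockSize = 256, nLayer = 4, nEmbd = 128;

const tokEmb = vocabSize * nEmbd;          // 256,000
const posEmb = blockSize * nEmbd;          // 32,768
const perBlock =
  2 * nEmbd +                              // two LayerNorm weights
  nEmbd * 3 * nEmbd + nEmbd * nEmbd +      // attention qkv + output projection
  nEmbd * 4 * nEmbd + 4 * nEmbd * nEmbd;   // FFN up + down projections
const lmHead = nEmbd * vocabSize;          // 256,000 (untied)

const total = tokEmb + posEmb + nLayer * perBlock + nEmbd + lmHead;
console.log(total); // 1,332,352 ≈ 1.33M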
Training Config
Total iters: 12,000
Batch size: 12
Max LR: 0.0002
Optimizer: adamw
Backend: helios
Tokenizer: bpe
Seed: 42
Weight decay: 0.01
Grad clip: 1
Eval interval: 200
Additional charts: Throughput (tok/s), Step Time (ms/iter), GPU & VRAM, Perplexity, Train/Val Gap, Learning Rate, Grad Norm, Smoothed Loss (EMA), Loss Velocity, Gradient Clipping, GPU Operations. The Step Time Breakdown, Timing Phase Lines, and Backward / Forward Ratio panels have no timing data.
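For reference, the Perplexity panel is simply the exponential of the cross-entropy loss (assuming natural-log loss, as is standard); at the current best validation loss of 4.7467 that works out to roughly 115.

// Perplexity from loss, assuming the loss is natural-log cross-entropy.
const valLoss = 4.7467;
const perplexity = Math.exp(valLoss); // ≈ 115
console.log(perplexity);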
Checkpoints (0)
No checkpoints saved
Sample Generations (3)

Sample 1 · checkpoint: - · generated 39d ago
Prompt:
<|user|> Hello, how are you? <|assistant|>
Output:
<|user|> Hello, how are you? <|assistant|> ethe ysAo , kar o ean, cder I wh'ra I I Ar to uo ucye e r omd e e, to <|user|> tAe <|user|> om

Sample 2 · checkpoint: - · generated 39d ago
Prompt:
<|user|> What do you like to do for fun? <|assistant|>
Output:
<|user|> What do you like to do for fun? <|assistant|> gare sAo , ka o ean, cddr I r 'rarI I Ar to u, vcye e r omd e eo to <|user|> tAe <|user|> om

Sample 3 · checkpoint: - · generated 39d ago
Prompt:
<|user|> Tell me about yourself. <|assistant|>
Output:
<|user|> Tell me about yourself. <|assistant|> ethe ysAo , kar the eanoucddwhI r 'ra I I Ar to to vcve e r ome ygo to <|user|> tAy <|user|> om
Model Config (JSON)
{
  "vocabSize": 2000,
  "blockSize": 256,
  "nLayer": 4,
  "nEmbd": 128,
  "nHead": 4,
  "dropout": 0,
  "ffnActivation": "gelu"
}
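For anyone consuming this run's config programmatically, the JSON above maps onto a simple typed shape; the interface name below is a hypothetical convenience, but the field names mirror the keys above exactly.

// Hypothetical TypeScript shape for the model-config JSON above.
interface ModelConfig {
  vocabSize: number;      // BPE vocabulary size (2,000 here)
  blockSize: number;      // context length in tokens (256)
  nLayer: number;         // transformer blocks (4)
  nEmbd: number;          // embedding / hidden width (128)
  nHead: number;          // attention heads (4)
  dropout: number;        // dropout probability (0 = disabled)
  ffnActivation: string;  // "gelu" here
}

const modelConfig: ModelConfig = {
  vocabSize: 2000, blockSize: 256, nLayer: 4, nEmbd: 128,
  nHead: 4, dropout: 0, ffnActivation: "gelu",
};
console.log(modelConfig);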
Training Config (JSON)
{
  "iters": 12000,
  "batchSize": 12,
  "lr": 0.0002,
  "lrMin": 0.00002,
  "warmupIters": 1200,
  "beta1": 0.9,
  "beta2": 0.95,
  "eps": 1e-8,
  "weightDecay": 0.01,
  "gradClip": 1,
  "evalInterval": 200,
  "evalIters": 10,
  "seed": 42,
  "backend": "helios",
  "tokenizer": "bpe",
  "optimizer": "adamw",
  "logLevel": "info",
  "logEvery": 25,
  "trace": false,
  "gradAccumSteps": 2,
  "sampleInterval": 200,
  "spikeThreshold": 0,
  "syncEvery": 1,
  "gcEvery": 1,
  "packed": true,
  "symbio": false,
  "symbioConfig": null
}
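The learning-rate fields above (lr, lrMin, warmupIters) fit the common linear-warmup-then-cosine-decay schedule; whether helios uses exactly this shape is an assumption, but under it the learning rate around step 843 comes out near 1.4e-4, roughly matching the 1.46e-4 shown in the dashboard.

// Sketch of a linear-warmup + cosine-decay schedule using the config above.
// That this is helios's exact schedule is an assumption; the shape is standard.
function learningRate(step: number, lr = 0.0002, lrMin = 0.00002,
                      warmupIters = 1200, iters = 12000): number {
  if (step < warmupIters) return lr * (step + 1) / warmupIters;           // linear warmup
  const progress = (step - warmupIters) / (iters - warmupIters);          // 0..1 after warmup
  return lrMin + 0.5 * (lr - lrMin) * (1 + Math.cos(Math.PI * progress)); // cosine decay to lrMin
}

console.log(learningRate(843)); // ≈ 1.41e-4, close to the displayed 1.46e-4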