super_chat_20260306215918_ptedstale · super_chat · 3.36M params · 3m 41s elapsed · Updated 38d ago
4L / 192D / 6H · helios · bpe-chat-4k · adamw · Created Mar 6, 2026 10:00 PM
Step 1,510 / 50,000 (3.0%)

Loss: 6.8727
Best Loss: 6.7991 (-17.7% from start)
Val Loss: 8.0457 (best: 7.4152)
Learning Rate: 9.99e-4
Throughput: 42,471 tok/s (avg)
Speed: 151 ms/iter (avg)
Grad Norm: 8.791 (avg: 551.054)
Tokens processed: 9.25M
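Perplexity Check (TypeScript)
The loss figures above are mean cross-entropy in nats, so perplexity is just exp(loss). A minimal sketch (the helper below is illustrative, not dashboard code); note that a freshly initialized model over a 4,000-token vocab starts near ln(4000) ≈ 8.29, roughly consistent with the "-17.7% from start" figure.

// Perplexity from mean cross-entropy loss in nats: ppl = exp(loss).
// Illustrative helper; not part of the dashboard code.
function perplexity(loss: number): number {
  return Math.exp(loss);
}
console.log(perplexity(6.8727)); // ≈ 965.6 (train)
console.log(perplexity(8.0457)); // ≈ 3120.3 (val)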
Loss Curve
Architecture
Layers: 4
Embedding: 192
Heads: 6
Vocab: 4,000
Context: 256
Dropout: 0
Parameters: 3.36M
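Parameter Count Check (TypeScript)
The 3.36M figure is consistent with a GPT-style decoder with an untied output head. A rough count under those assumptions (the layer layout — biased linears, LayerNorm, 4x MLP, untied lm_head — is assumed, not read from the run):

// Rough GPT-style parameter count for 4L / 192D / 6H, vocab 4000, context 256.
// Assumes biased linears, LayerNorm, a 4x MLP, and an UNTIED lm_head (assumptions).
const V = 4000, T = 256, L = 4, D = 192;
const embeddings = V * D + T * D;               // token + position tables
const lnBlock = 2 * (2 * D);                    // two LayerNorms per block
const attn = D * 3 * D + 3 * D + D * D + D;     // fused qkv + output projection
const mlp = D * 4 * D + 4 * D + 4 * D * D + D;  // expand + contract
const perBlock = lnBlock + attn + mlp;
const total = embeddings + L * perBlock
            + 2 * D                             // final LayerNorm
            + V * D;                            // untied lm_head, no bias
console.log(total);                             // 3,364,992 ≈ 3.36M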
Training Config
Total iters: 50,000
Batch size: 12
Max LR: 0.001
Optimizer: adamw
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
Weight decay: 0.1
Grad clip: 1
Eval interval: 500
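LR Schedule Sketch (TypeScript)
The config (lr 1e-3, lrMin 1e-4, warmupIters 500) matches the common linear-warmup-then-cosine-decay shape; assuming that schedule, the 9.99e-4 reading at step 1,510 checks out. The shape itself is an assumption — only the endpoints come from the config.

// Linear warmup to lr, then cosine decay to lrMin over the remaining iters.
// The warmup+cosine shape is an assumption; lr/lrMin/warmup/total are from the config.
function lrAt(step: number, lr = 1e-3, lrMin = 1e-4, warmup = 500, total = 50_000): number {
  if (step < warmup) return (lr * (step + 1)) / warmup;     // linear warmup
  const progress = (step - warmup) / (total - warmup);      // 0..1 after warmup
  const cosine = 0.5 * (1 + Math.cos(Math.PI * progress));  // decays 1 -> 0
  return lrMin + cosine * (lr - lrMin);
}
console.log(lrAt(1_510).toExponential(2)); // 9.99e-4, matching the dashboard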
Charts: Throughput (tok/s) · Step Time (ms/iter) · GPU & VRAM · Perplexity · Train/Val Gap · Learning Rate · Grad Norm · Smoothed Loss (EMA) · Loss Velocity · Gradient Clipping · GPU Operations
Step Time Breakdown: no timing data
Timing Phase Lines: no timing data
Backward / Forward Ratio: no timing data
Transformer Layer Analysis: Gradient Norm Heatmap · Per-Layer Gradient Evolution
Checkpoints (0)
No checkpoints saved
Sample Generations (1)
#1 · Checkpoint: - · Generated 38d ago
Prompt
<|user|> Hello, how are you? <|assistant|>
Output
<|user|> Hello, how are you? <|assistant|>and fnliu? fthe ti. ethe eer to
Model Config (JSON)
{
"vocabSize": 4000,
"blockSize": 256,
"nLayer": 4,
"nEmbd": 192,
"nHead": 6,
"dropout": 0,
"ffnActivation": "gelu"
}

Training Config (JSON)
{
"iters": 50000,
"batchSize": 12,
"lr": 0.001,
"lrMin": 0.0001,
"warmupIters": 500,
"beta1": 0.9,
"beta2": 0.95,
"eps": 1e-8,
"weightDecay": 0.1,
"gradClip": 1,
"evalInterval": 500,
"evalIters": 10,
"seed": 42,
"backend": "helios",
"tokenizer": "bpe-chat-4k",
"optimizer": "adamw",
"logLevel": "info",
"logEvery": 25,
"trace": false,
"gradAccumSteps": 2,
"sampleInterval": 500,
"spikeThreshold": 0,
"syncEvery": 0,
"gcEvery": 0,
"packed": true,
"symbio": false,
"symbioConfig": null
}
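Token Accounting Check (TypeScript)
With gradAccumSteps 2, each optimizer step consumes batchSize × gradAccumSteps × blockSize tokens. A quick check against the dashboard counters; the small gaps to the reported 9.25M tokens and 42,471 tok/s suggest the backend's exact accounting (eval steps, averaging window) differs slightly, which is an assumption here.

// Tokens per optimizer step = micro-batch x grad-accum steps x context length.
// How helios counts eval/logging tokens and averages throughput is an assumption.
const batchSize = 12, gradAccumSteps = 2, blockSize = 256;
const tokensPerStep = batchSize * gradAccumSteps * blockSize;   // 6,144
const step = 1_510;
console.log((step * tokensPerStep / 1e6).toFixed(2) + "M");     // 9.28M vs 9.25M reported
const msPerIter = 151;
console.log(Math.round(tokensPerStep / (msPerIter / 1000)));    // ≈ 40,689 tok/s vs 42,471 avg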