super_chat_20260307010642_qe06 · completed · super_chat · 9.99M params · 13m 45s elapsed · Updated 36d ago
6L / 256D / 8H · helios · bpe-chat-4k · adamw · Created Mar 7, 2026 1:07 AM
Step 15,000 / 15,000 (100.0%)

Loss: 5.2090
Best Loss: 4.9867 (-37.6% from start)
Val Loss: 7.1044 (best: 6.5487)
Learning Rate: 3.39e-6
Throughput: 26,398 tok/s (avg)
Speed: 80 ms/iter (avg)
Grad Norm: 13.501 (avg: 21.895)
Tokens processed: 20.48M
Loss Curve
Architecture
Layers: 6
Embedding: 256
Heads: 8
Vocab: 4,000
Context: 256
Dropout: 0.1
Parameters: 9.99M
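
For reference, a minimal sketch of how a parameter count can be estimated from this table, assuming a standard GPT-2-style decoder (learned positional embeddings, 4x FFN hidden size, untied LM head, no biases). The export does not record these details, so this is an illustrative formula, not a reproduction of the dashboard's figure.

# Rough GPT-2-style parameter estimate from the Architecture table.
# Assumptions (not confirmed by this export): learned positional
# embeddings, FFN hidden = 4 * n_embd, untied LM head, no biases.
def estimate_params(vocab=4000, block=256, n_layer=6, n_embd=256):
    tok_emb = vocab * n_embd            # token embedding
    pos_emb = block * n_embd            # learned positional embedding
    attn = 4 * n_embd * n_embd          # q, k, v, and output projections
    ffn = 2 * n_embd * (4 * n_embd)     # FFN up- and down-projections
    ln = 2 * 2 * n_embd                 # two LayerNorms (weight + bias)
    per_layer = attn + ffn + ln
    final_ln = 2 * n_embd
    lm_head = vocab * n_embd            # untied output head
    return tok_emb + pos_emb + n_layer * per_layer + final_ln + lm_head

print(f"{estimate_params() / 1e6:.2f}M")  # 6.84M under these assumptions

Under these assumptions the estimate comes to roughly 6.84M, noticeably below the reported 9.99M, so the actual model presumably differs somewhere (for example a wider FFN) in a way this export does not record.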
Training Config
Total iters: 15,000
Batch size: 4
Max LR: 0.0001
Optimizer: adamw
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
Weight decay: 0.1
Grad clip: 1
Eval interval: 500
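
As a sanity check, the tokens consumed per optimizer step follow from the batch size and context length above plus the gradAccumSteps value in the Training Config JSON below, and they line up with the reported throughput:

# Tokens per optimizer step, from the config values (batchSize and
# blockSize above, gradAccumSteps from the Training Config JSON below).
batch_size = 4
grad_accum_steps = 2
block_size = 256

tokens_per_step = batch_size * grad_accum_steps * block_size
print(tokens_per_step)  # 2048

# Consistent with the dashboard: 26,398 tok/s * 0.080 s/iter is about
# 2,112 tokens per iteration, i.e. roughly one 2,048-token optimizer
# step per 80 ms iteration (suggesting each logged iteration is a full
# optimizer step, not a gradient-accumulation micro-step).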
Charts

Throughput (tok/s)
Step Time (ms/iter)
GPU & VRAM
Perplexity
Train/Val Gap
Learning Rate
Grad Norm
Smoothed Loss (EMA)
Loss Velocity
Gradient Clipping
GPU Operations
Step Time Breakdown: no timing data
Timing Phase Lines: no timing data
Backward / Forward Ratio: no timing data
Transformer Layer Analysis
Gradient Norm Heatmap
Per-Layer Gradient Evolution
Sample Generations (1)
#  Checkpoint  Prompt (preview)                            Generated
1  -           <|user|> Hello, how are you? <|assistant|>  38d ago
Prompt
<|user|> Hello, how are you? <|assistant|>
Output
<|user|> Hello, how are you? <|assistant|>? ? it<|end_of_text|>? s <|end_of_text|>s ? it? <|end_of_text|><|end_of_text|>
Model Config (JSON)
{
"vocabSize": 4000,
"blockSize": 256,
"nLayer": 6,
"nEmbd": 256,
"nHead": 8,
"dropout": 0.1,
"ffnActivation": "gelu"
}

Training Config (JSON)
{
"iters": 15000,
"batchSize": 4,
"lr": 0.0001,
"lrMin": 0.00001,
"warmupIters": 500,
"beta1": 0.9,
"beta2": 0.95,
"eps": 1e-8,
"weightDecay": 0.1,
"gradClip": 1,
"evalInterval": 500,
"evalIters": 10,
"seed": 42,
"backend": "helios",
"tokenizer": "bpe-chat-4k",
"optimizer": "adamw",
"logLevel": "info",
"logEvery": 25,
"trace": false,
"gradAccumSteps": 2,
"sampleInterval": 500,
"spikeThreshold": 50,
"syncEvery": 0,
"gcEvery": 0,
"packed": true,
"symbio": false,
"symbioConfig": null
}
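
The lr, lrMin, and warmupIters fields are consistent with a linear-warmup-plus-cosine-decay schedule, a common default for AdamW training. The helios backend's actual schedule is not shown in this export (and the final displayed LR of 3.39e-6 sits below lrMin, so it likely differs), making the sketch below an assumption:

import math

# Hypothetical warmup + cosine schedule built from the Training Config
# JSON above; not confirmed as the schedule the helios backend uses.
LR, LR_MIN, WARMUP, ITERS = 1e-4, 1e-5, 500, 15_000

def lr_at(step: int) -> float:
    if step < WARMUP:                 # linear warmup from 0 up to LR
        return LR * (step + 1) / WARMUP
    # cosine decay from LR down to LR_MIN over the remaining steps
    progress = (step - WARMUP) / (ITERS - WARMUP)
    return LR_MIN + 0.5 * (LR - LR_MIN) * (1 + math.cos(math.pi * progress))

print(lr_at(0), lr_at(WARMUP), lr_at(ITERS - 1))
# 2e-07 at step 0, 1e-04 at the end of warmup, ~1e-05 at the last step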