Status: stale
super_chat_20260306213659_09e2
3.36M parameter super_chat model — bpe-chat-4k tokenizer, 4L/192D/6H
Overview
Parameters: 3.36M
Final Loss: 8.3448
Best Val Loss: -
Perplexity: 4,208.1
Tokens Processed: 448,512
Tokens/Param: 0.1
Avg Throughput: 29,151 tok/s
Training Time: 0s
Training Progress: 146 / 50,000 steps (0.3%)
Loss reduced by 0.0% from initial 8.3460
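The perplexity card is just the exponential of the reported loss: exp(8.3448) ≈ 4,208. The initial loss of 8.3460 also sits just above ln(4,000) ≈ 8.294, the uniform-guessing baseline for this 4,000-token vocabulary, which is what you'd expect only 146 steps in. A quick check, with values copied from the cards above:

```python
import math

final_loss = 8.3448                                  # Final Loss card
print(f"perplexity ≈ {math.exp(final_loss):,.1f}")   # ≈ 4,208, matching the Perplexity card
print(f"uniform baseline ≈ {math.log(4_000):.3f}")   # ≈ 8.294, near the initial 8.3460
```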
Dataset & Training
Domain: super_chat
Tokenizer: bpe-chat-4k
Total Iterations: 50,000
Batch Size: 12
Context Length: 256 tokens
Tokens per Batch: 3,072
Dataset Passes: ~1
Effective Tokens: 448,512
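These figures follow directly from the batch geometry: tokens per batch = batch size × context length = 12 × 256 = 3,072, and effective tokens = completed steps × tokens per batch = 146 × 3,072 = 448,512. Note the dashboard appears to count the micro-batch of 12, not the effective batch of 24. A sketch of the arithmetic:

```python
batch_size, context_length, steps_done = 12, 256, 146

tokens_per_batch = batch_size * context_length    # 3,072
effective_tokens = steps_done * tokens_per_batch  # 448,512
tokens_per_param = effective_tokens / 3_360_000   # ≈ 0.13, shown as 0.1 above
print(tokens_per_batch, effective_tokens, f"{tokens_per_param:.2f}")
```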
Training Pipeline
Warmup (steps 1–1): linear LR warmup with gradient clipping, as the model weights adjust to the data distribution. Loss: 8.346 → 8.346.
Training Metrics
Charts: Loss Curve (raw and smoothed), Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), and Timing Breakdown. No telemetry has been recorded for this run yet.
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 3.36M
Layers: 4
Embedding Dim: 192
Attention Heads: 6
Head Dim: 32
FFN Dim: 768
FFN Activation: gelu
Vocab Size: 4,000
Context Length: 256 tokens
Dropout: 0
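Head Dim and FFN Dim are the standard derived quantities for a GPT block: head dim = embedding dim / heads = 192 / 6 = 32, and FFN dim = 4 × embedding dim = 768. A one-line sanity check:

```python
embed_dim, n_heads, ffn_dim = 192, 6, 768

assert embed_dim // n_heads == 32  # head dim, as reported above
assert ffn_dim == 4 * embed_dim    # the conventional 4x FFN expansion
```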
Training Configuration
Optimizer: adamw
Learning Rate: 0.001
LR Min: 0.0001
LR Schedule: Cosine decay
Warmup Steps: 500
Batch Size: 12
Grad Accum Steps: 2
Effective Batch: 24
Grad Clip: 1
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
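Taken together, the schedule is 500 steps of linear warmup to the 0.001 peak, then cosine decay to the 0.0001 floor. A minimal sketch, assuming the decay runs to the full 50,000 iterations and a 1-indexed warmup ramp (neither is stated in the report):

```python
import math

def lr_at(step, peak=1e-3, floor=1e-4, warmup=500, total=50_000):
    """Linear warmup to `peak`, then cosine decay to `floor`."""
    if step < warmup:
        return peak * (step + 1) / warmup
    progress = (step - warmup) / (total - warmup)
    return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * progress))

# The run stopped at step 146, still mid-warmup and well below the peak:
print(f"{lr_at(145):.2e}")  # ≈ 2.92e-04
```

Since training halted inside warmup, the cosine phase and the LR floor were never reached, consistent with the essentially flat loss in the pipeline summary above.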
Layer Structure
Token Embed: 4,000×192
Pos Embed: 256×192
Block 0: Attn+FFN
Block 1: Attn+FFN
Block 2: Attn+FFN
Block 3: Attn+FFN
Final LayerNorm: 192
LM Head: 192×4,000
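The 3.36M parameter count is recoverable from these shapes. Counting the weight matrices alone, with the LM head untied from the token embedding (the separate 192×4,000 entry above suggests no weight tying), the total lands at ≈3.35M; biases and LayerNorm parameters add roughly another 10K, bringing it to ≈3.36M. A sketch:

```python
vocab, d, n_layers, ffn, ctx = 4_000, 192, 4, 768, 256

tok_embed = vocab * d   # 768,000
pos_embed = ctx * d     #  49,152
attn = 4 * d * d        # Q, K, V, and output projections: 147,456 per block
mlp = 2 * d * ffn       # FFN up- and down-projections: 294,912 per block
lm_head = d * vocab     # 768,000, untied from the token embedding

total = tok_embed + pos_embed + n_layers * (attn + mlp) + lm_head
print(f"{total:,}")     # 3,354,624 weights; ≈ 3.36M once biases/LayerNorms are added
```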
Generated Samples
No samples generated yet. Samples appear at configured intervals during training.
Checkpoints
No checkpoints saved yet.
Generated by Alpha Training System. Config hash: