super_chat_20260307004502_3end
6.84M parameter super_chat model — bpe-chat-4k tokenizer, 6L/256D/8H
Overview
Parameters: 6.84M
Final Loss: 5.0983
Best Val Loss: 6.8740
Perplexity: 163.7
Tokens Processed: 11,274,240
Tokens/Param: 1.6
Avg Throughput: 27,252 tok/s
Training Time: 13m 43s
Training Progress: 11,010 / 50,000 steps (22.0%)
Loss reduced by 38.9% from initial 8.3462
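These headline numbers are mutually consistent; a quick sanity check in plain Python, using only values from the table above:

import math

final_loss   = 5.0983
initial_loss = 8.3462
tokens       = 11_274_240
params       = 6_840_000   # 6.84M

print(math.exp(final_loss))           # ≈ 163.7; perplexity is exp(cross-entropy)
print(tokens / params)                # ≈ 1.65 tokens seen per parameter
print(1 - final_loss / initial_loss)  # ≈ 0.389, the 38.9% reduction above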
Dataset & Training
Domain: super_chat
Tokenizer: bpe-chat-4k
Total Iterations: 50,000
Batch Size: 4
Context Length: 256 tokens
Tokens per Batch: 1,024
Dataset Passes: ~28
Effective Tokens: 11,274,240
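The derived rows follow from the batch geometry and the step counter; a small sketch (the corpus-size line is an inference from the ~28 passes, not a value reported above):

batch_size, context_len, steps_done = 4, 256, 11_010

tokens_per_batch = batch_size * context_len        # 1,024
effective_tokens = steps_done * tokens_per_batch   # 11,274,240
# Assumption: if the ~28 passes count tokens processed so far,
# the corpus is roughly effective_tokens / 28 ≈ 403k tokens.
corpus_estimate  = effective_tokens / 28
print(tokens_per_batch, effective_tokens, round(corpus_estimate))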
Training Pipeline
Warmup: steps 1–500
Learning rate warmup; model weights adjust to the data distribution.
Loss: 8.346 → 6.063 (linear LR warmup, gradient clipping)
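A minimal sketch of this stage's schedule, assuming plain linear scaling of the learning rate (the stage lists steps 1–500 while the training configuration below reports 1,000 warmup steps, so the window is left as a parameter):

def linear_warmup_lr(step, base_lr=3e-4, warmup_steps=500):
    # Ramp linearly from near 0 up to base_lr over the warmup window;
    # after warmup the cosine decay schedule (see below) takes over.
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

# Gradient clipping (Grad Clip: 1.0 in the config) would wrap each backward
# pass, e.g. torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0).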
Training Metrics
[Metric charts unavailable in this export; panels: Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s)]
Timing Breakdown
No telemetry recorded.
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 6.84M
Layers: 6
Embedding Dim: 256
Attention Heads: 8
Head Dim: 32
FFN Dim: 1,024
FFN Activation: gelu
Vocab Size: 4,000
Context Length: 256 tokens
Dropout: 0
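The 6.84M parameter figure is consistent with this configuration. A rough count, assuming untied input/output embeddings, biased linear layers, and two LayerNorms per block (assumptions, not read from the checkpoint):

V, D, L, F, T = 4_000, 256, 6, 1_024, 256   # vocab, embed dim, layers, ffn dim, context

tok_emb = V * D                     # 1,024,000
pos_emb = T * D                     #    65,536
attn    = 4 * (D * D + D)           # Q, K, V, output projections with bias
ffn     = (D * F + F) + (F * D + D)
norms   = 2 * 2 * D                 # two LayerNorms (gain + bias) per block
block   = attn + ffn + norms        # 789,760
total   = tok_emb + pos_emb + L * block + 2 * D + D * V   # final LN + untied head
print(total)  # 6,852,608, close to the reported 6.84M; the small gap
              # depends on exact bias and weight-tying conventions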
Training Configuration
Optimizer: adamw
Learning Rate: 0.0003
LR Min: 0.00003
LR Schedule: cosine decay
Warmup Steps: 1,000
Batch Size: 4
Grad Accum Steps: 2
Effective Batch: 8
Grad Clip: 1.0
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
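Together these settings define a linear-warmup-into-cosine schedule; a sketch using the values above (the function name is illustrative):

import math

def lr_at(step, base=3e-4, lr_min=3e-5, warmup=1_000, total=50_000):
    # Linear warmup to the base rate, then cosine decay down to lr_min.
    if step < warmup:
        return base * (step + 1) / warmup
    progress = (step - warmup) / (total - warmup)
    return lr_min + 0.5 * (base - lr_min) * (1 + math.cos(math.pi * progress))

print(lr_at(999), lr_at(1_000), lr_at(49_999))   # 3e-4, 3e-4, ≈ 3e-5

With Grad Accum Steps: 2, gradients from two batches of 4 are accumulated before each optimizer step, giving the Effective Batch of 8.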
Layer Structure
Token Embed: 4,000×256
Pos Embed: 256×256
Blocks 0–5: Attn+FFN (6 identical transformer blocks)
Final LayerNorm: 256
LM Head: 256×4,000
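The stack above corresponds to a standard decoder-only forward pass; a minimal PyTorch sketch with the same shapes (module layout is illustrative, not taken from the training code):

import torch
import torch.nn as nn

class TinyGPT(nn.Module):
    def __init__(self, vocab=4_000, dim=256, heads=8, layers=6, ctx=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, dim)        # Token Embed 4,000×256
        self.pos_emb = nn.Embedding(ctx, dim)          # Pos Embed 256×256
        layer = nn.TransformerEncoderLayer(
            dim, heads, dim_feedforward=1_024, dropout=0.0,
            activation="gelu", batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=layers)  # Blocks 0–5
        self.ln_f = nn.LayerNorm(dim)                  # final LayerNorm
        self.head = nn.Linear(dim, vocab, bias=False)  # LM Head 256×4,000

    def forward(self, idx):
        t = idx.size(1)
        x = self.tok_emb(idx) + self.pos_emb(torch.arange(t, device=idx.device))
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(idx.device)
        return self.head(self.ln_f(self.blocks(x, mask=mask)))

# Smoke test: TinyGPT()(torch.zeros(1, 256, dtype=torch.long)) returns
# logits of shape (1, 256, 4000).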
Generated Samples
Step 0 — Mar 7, 2026 1:06 AM
Prompt: <|user|> Hello, how are you? <|assistant|>
<|user|> Hello, how are you? <|assistant|>? h? ? <|end_of_text|>? <|end_of_text|><|end_of_text|><|end_of_text|><|end_of_text|>? ? ? <|end_of_text|><|end_of_text|>
Checkpoints
No checkpoints saved yet.
Generated by Alpha Training System.