Status: stale
super_chat_20260306221525_82zs
1.85M-parameter super_chat model — bpe-chat-4k tokenizer, 4L/128D/4H
Overview
Parameters: 1.85M
Final Loss: 5.0094
Best Val Loss: 7.2927
Perplexity: 149.8
Tokens Processed: 19,968,000
Tokens/Param: 10.8
Avg Throughput: 51,356 tok/s
Training Time: 1h 18m
Training Progress: 4,875 / 50,000 steps (9.8%)
Loss reduced by 39.8% from the initial 8.3236
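These headline figures are mutually consistent. A quick Python check (assuming perplexity is the exponential of the cross-entropy loss in nats):

```python
import math

# Figures reported in the Overview above.
initial_loss, final_loss = 8.3236, 5.0094
steps, total_steps = 4_875, 50_000
tokens_processed, params = 19_968_000, 1.85e6

print(math.exp(final_loss))            # ~149.8  (Perplexity)
print(steps / total_steps)             # 0.0975  (9.8% progress)
print(1 - final_loss / initial_loss)   # ~0.398  (39.8% loss reduction)
print(tokens_processed / params)       # ~10.8   (Tokens/Param)
```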
Dataset & Training
Domain: super_chat
Tokenizer: bpe-chat-4k
Total Iterations: 50,000
Batch Size: 16
Context Length: 256 tokens
Tokens per Batch: 4,096
Dataset Passes: ~50
Effective Tokens: 19,968,000
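The token figures follow directly from the batch shape; a minimal sketch of the accounting:

```python
# Token accounting implied by the table above.
batch_size, context_len = 16, 256
steps_completed = 4_875

tokens_per_batch = batch_size * context_len           # 4,096
effective_tokens = steps_completed * tokens_per_batch
print(tokens_per_batch, effective_tokens)             # 4096 19968000
```

If Dataset Passes is derived from the effective tokens shown, the corpus would be on the order of 19.97M / 50 ≈ 0.4M tokens, though the report does not state the dataset size directly.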
Training Pipeline
Warmup (steps 1–500): linear LR warmup with gradient clipping, while the model weights adjust to the data distribution. Loss: 8.324 → 5.433.
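A minimal sketch of this schedule, assuming the cosine decay runs from the end of warmup to the final iteration (the decay horizon is not stated explicitly in the report):

```python
import math

LR_MAX, LR_MIN = 1e-3, 1e-4   # Learning Rate / LR Min from the config below
WARMUP, TOTAL = 500, 50_000

def lr_at(step: int) -> float:
    if step < WARMUP:                      # linear warmup from near zero
        return LR_MAX * (step + 1) / WARMUP
    progress = (step - WARMUP) / (TOTAL - WARMUP)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))
    return LR_MIN + (LR_MAX - LR_MIN) * cosine

print(lr_at(0), lr_at(499), lr_at(4_875))  # warmup start, warmup end, current step
```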
Training Metrics
Charts (not rendered in this export): Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s).
Timing Breakdown: no telemetry recorded.
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 1.85M
Layers: 4
Embedding Dim: 128
Attention Heads: 4
Head Dim: 32
FFN Dim: 512
FFN Activation: GELU
Vocab Size: 4,000
Context Length: 256 tokens
Dropout: 0
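The 1.85M figure is reproduced exactly by a standard GPT parameter count, assuming biased linear layers, learned positional embeddings, and an untied LM head (the Layer Structure below lists the token embedding and LM head as separate 4,000×128 and 128×4,000 matrices):

```python
V, D, FFN, CTX, N_LAYERS = 4_000, 128, 512, 256, 4

embeddings = V * D + CTX * D                    # token + positional tables
attn  = 4 * (D * D + D)                         # Q, K, V, output proj (+ bias)
ffn   = (D * FFN + FFN) + (FFN * D + D)         # two biased linear layers
norms = 2 * 2 * D                               # two LayerNorms per block
block = attn + ffn + norms

total = embeddings + N_LAYERS * block + 2 * D + D * V  # + final LN + untied head
print(f"{total:,}")                             # 1,850,112  (~1.85M)
```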
Training Configuration
Optimizer: AdamW
Learning Rate: 0.001
LR Min: 0.0001
LR Schedule: cosine decay
Warmup Steps: 500
Batch Size: 16
Grad Accum Steps: 2
Effective Batch: 32
Grad Clip: 1.0
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
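A minimal PyTorch-style sketch of how the accumulation and clipping settings compose into one optimizer update (the actual backend is helios, and the model here is a placeholder, not the real architecture):

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(128, 4_000)   # placeholder stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.1)
GRAD_ACCUM, GRAD_CLIP = 2, 1.0        # 16 x 2 = 32 effective batch

def optimizer_step(micro_batches):
    """One update over GRAD_ACCUM micro-batches of 16 sequences each."""
    optimizer.zero_grad()
    for x, y in micro_batches:
        loss = F.cross_entropy(model(x), y)
        (loss / GRAD_ACCUM).backward()  # average gradients across micro-batches
    torch.nn.utils.clip_grad_norm_(model.parameters(), GRAD_CLIP)
    optimizer.step()
```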
Layer Structure
Token Embed: 4,000 × 128
Pos Embed: 256 × 128
Block 0: Attn + FFN
Block 1: Attn + FFN
Block 2: Attn + FFN
Block 3: Attn + FFN
Final LayerNorm: 128
LM Head: 128 × 4,000
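The data flow this structure describes, as a shape-level PyTorch sketch (block internals are elided, since the report labels them only as Attn+FFN):

```python
import torch

B, T, D, V = 16, 256, 128, 4_000               # batch, context, embed dim, vocab

tok_embed = torch.nn.Embedding(V, D)            # 4,000 x 128
pos_embed = torch.nn.Embedding(T, D)            # 256 x 128
ln_f      = torch.nn.LayerNorm(D)               # final LayerNorm (128)
lm_head   = torch.nn.Linear(D, V, bias=False)   # 128 x 4,000 (untied)

idx = torch.randint(0, V, (B, T))
x = tok_embed(idx) + pos_embed(torch.arange(T))  # (B, T, D)
# ... Blocks 0-3 (Attn+FFN) would transform x here, keeping shape (B, T, D)
logits = lm_head(ln_f(x))                        # (B, T, V)
print(logits.shape)                              # torch.Size([16, 256, 4000])
```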
Generated Samples
Step 0 — Mar 6, 2026 11:21 PM
Prompt: <|user|> Hello, how are you? <|assistant|>
<|user|> Hello, how are you? <|assistant|>la ea a ea ea sema ied
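At step 0 the weights are untrained, so the garbled completion is expected. The sample also shows the chat template used for prompting; a minimal helper, assuming the special tokens are used exactly as they appear above (how bpe-chat-4k encodes them is an assumption):

```python
def format_prompt(user_message: str) -> str:
    # Template inferred from the sample above.
    return f"<|user|> {user_message} <|assistant|>"

print(format_prompt("Hello, how are you?"))
```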
Checkpoints
No checkpoints saved yet.
Generated by Alpha Training System.