Status: stale
Run: super_chat_20260306180812_47eq
55.77M parameter nanochat model — bpe-chat-4k tokenizer, 16L/512D/8H
Overview
Parameters: 55.77M
Final Loss: 6.7453
Best Val Loss: 6.8360
Perplexity: 850.1
Tokens Processed: 40,691,712
Tokens/Param: 0.7
Avg Throughput: 1,605 tok/s
Training Time: 3h 33m
Training Progress: 19,869 / 20,000 steps (99.3%)
Loss reduced by 2.2% from initial 6.8992
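The headline numbers above are related by simple arithmetic: perplexity is the exponential of the mean cross-entropy loss, and tokens/param is tokens processed divided by the parameter count. A minimal Python check using the reported values (the parameter count is rounded, so the ratio is approximate):

```python
import math

final_loss = 6.7453                          # mean cross-entropy in nats/token
print(round(math.exp(final_loss), 1))        # ~850, matching the Perplexity card above

tokens_processed = 40_691_712
params = 55_770_000                          # 55.77M, rounded
print(round(tokens_processed / params, 1))   # ~0.7 tokens per parameter
```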
Dataset & Training
Domain: nanochat
Tokenizer: bpe-chat-4k
Total Iterations: 20,000
Batch Size: 4
Context Length: 512 tokens
Tokens per Batch: 2,048
Dataset Passes: ~102
Effective Tokens: 40,691,712
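The token accounting follows directly from the batch shape: tokens per batch is batch size times context length, and effective tokens is completed steps times tokens per batch. A sketch of the arithmetic (the 19,869 completed steps come from the progress figure above; the corpus size implied by ~102 passes is an inference, not a reported number):

```python
batch_size = 4
context_length = 512
completed_steps = 19_869

tokens_per_batch = batch_size * context_length           # 2,048
effective_tokens = completed_steps * tokens_per_batch     # 40,691,712

# ~102 dataset passes would imply a corpus of roughly
# 40,691,712 / 102 ≈ 0.4M tokens (inferred, not reported in this summary).
implied_corpus_tokens = effective_tokens / 102
```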
Training Pipeline
Warmup: steps 2,001–2,001
Learning rate warmup — model weights adjusting to data distribution.
Loss: 6.899 → 6.899. Linear LR warmup, gradient clipping.
Rapid Descent: steps 2,001–6,001
Steepest loss reduction — model learning primary patterns.
Loss: 6.899 → 6.969. Cosine LR schedule, AdamW optimization.
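The pipeline pairs a linear LR warmup with a cosine decay schedule. A minimal sketch of such a schedule, assuming it ramps to the configured peak of 3e-4 and decays to the LR Min of 3e-5 over the 20,000 planned iterations (the exact schedule used by the training backend is not shown in this report):

```python
import math

LR_MAX, LR_MIN = 3e-4, 3e-5            # Learning Rate / LR Min from the config below
WARMUP_STEPS, TOTAL_STEPS = 500, 20_000

def lr_at(step: int) -> float:
    """Linear warmup to LR_MAX, then cosine decay down to LR_MIN."""
    if step < WARMUP_STEPS:
        return LR_MAX * (step + 1) / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return LR_MIN + 0.5 * (LR_MAX - LR_MIN) * (1.0 + math.cos(math.pi * progress))
```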
Training Metrics
Charts (data unavailable in this export): Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s)
Timing Breakdown: no telemetry recorded.
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 55.77M
Layers: 16
Embedding Dim: 512
Attention Heads: 8
Head Dim: 64
FFN Dim: 2048
FFN Activation: SwiGLU
Vocab Size: 4,000
Context Length: 512 tokens
Dropout: 0
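The table above maps naturally onto a small configuration object. A sketch with illustrative field names (not the actual nanochat config class) that also checks the head-dimension relation, embedding dim divided by attention heads:

```python
from dataclasses import dataclass

@dataclass
class GPTConfig:
    # Values copied from the Model Configuration table; names are illustrative.
    vocab_size: int = 4_000
    context_length: int = 512
    n_layers: int = 16
    d_model: int = 512
    n_heads: int = 8
    d_ffn: int = 2_048
    ffn_activation: str = "swiglu"
    dropout: float = 0.0

cfg = GPTConfig()
assert cfg.d_model % cfg.n_heads == 0
head_dim = cfg.d_model // cfg.n_heads   # 512 / 8 = 64, the Head Dim listed above
```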
Training Configuration
Optimizer: AdamW
Learning Rate: 0.0003
LR Min: 0.00003
LR Schedule: Cosine decay
Warmup Steps: 500
Batch Size: 4
Grad Accum Steps: 1
Effective Batch: 4
Grad Clip: 1.0
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
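The settings above describe a fairly standard AdamW setup with gradient clipping. The run itself used the helios backend; the sketch below shows a PyTorch equivalent purely for illustration, with a stand-in module in place of the real 55.77M-parameter model:

```python
import torch
import torch.nn as nn

torch.manual_seed(42)                        # Seed: 42

model = nn.Linear(512, 4_000)                # stand-in for the actual GPT
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.1)

# One illustrative step with gradient clipping at max-norm 1.0.
x = torch.randn(4, 512)                      # Batch Size: 4
loss = model(x).logsumexp(dim=-1).mean()     # dummy loss, only to produce gradients
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()
```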
Layer Structure
Token Embed: 4,000×512
Pos Embed: 512×512
Block 0: Attn+FFN
Block 1: Attn+FFN
Block 2: Attn+FFN
Block 3: Attn+FFN
Block 4: Attn+FFN
Block 5: Attn+FFN (blocks 6–15 not shown in the diagram; the configuration above lists 16 layers in total)
LayerNorm: 512
LM Head: 512×4,000
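The stack listed above (learned token and position embeddings, a run of Attn+FFN blocks, a final LayerNorm, and an untied LM head) can be sketched in PyTorch as below. This is an illustration only: it uses stock nn.TransformerEncoderLayer blocks with a causal mask, so the FFN activation is not the SwiGLU the real model uses, and the internal block layout is assumed rather than taken from this report:

```python
import torch
import torch.nn as nn

class TinyGPT(nn.Module):
    def __init__(self, vocab=4_000, d=512, n_layers=16, n_heads=8, d_ffn=2_048, ctx=512):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab, d)          # Token Embed: 4,000×512
        self.pos_embed = nn.Embedding(ctx, d)            # Pos Embed: 512×512
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d, n_heads, dim_feedforward=d_ffn,
                                       dropout=0.0, batch_first=True, norm_first=True)
            for _ in range(n_layers)
        )
        self.final_norm = nn.LayerNorm(d)                # LayerNorm: 512
        self.lm_head = nn.Linear(d, vocab, bias=False)   # LM Head: 512×4,000

    def forward(self, idx):                              # idx: (batch, seq) token ids
        seq_len = idx.shape[1]
        pos = torch.arange(seq_len, device=idx.device)
        x = self.tok_embed(idx) + self.pos_embed(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len, device=idx.device)
        for block in self.blocks:
            x = block(x, src_mask=mask)
        return self.lm_head(self.final_norm(x))          # (batch, seq, vocab) logits
```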
Generated Samples
Step 0 — Mar 7, 2026 12:06 AM
Prompt: <|user|> Hello! Who are you? <|assistant|>
<|user|> Hello! Who are you? <|assistant|><|assistant|>you yit s the i i<|user|><|assistant|>ing <|user|>a you <|assistant|>
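The prompt format in the sample above uses explicit role tokens. A tiny helper that reproduces it (the template is inferred from this one sample; the tokenizer's actual special-token handling may differ):

```python
def build_prompt(user_message: str) -> str:
    # Role tokens copied verbatim from the sample prompt above.
    return f"<|user|> {user_message} <|assistant|>"

print(build_prompt("Hello! Who are you?"))
# <|user|> Hello! Who are you? <|assistant|>
```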
Checkpoints
No checkpoints saved yet.
Generated by Alpha Training System.