Status: stale
super_chat_20260306143135_2qtt
55.77M parameter nanochat model — bpe-chat-4k tokenizer, 16L/512D/8H
Overview
Parameters: 55.77M
Final Loss: 7.4821
Best Val Loss: -
Perplexity: 1775.9
Tokens Processed: 81,920
Tokens/Param: 0.0
Avg Throughput: 1,330 tok/s
Training Time: 43s
Training Progress: 40 / 200 steps (20.0%)
Loss reduced by 10.7% from the initial 8.3814.
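The Overview figures above are mutually consistent. A quick sanity check in Python, with the constants copied from this report and the reported loss assumed to be per-token cross-entropy in nats:

```python
import math

# Values copied from the Overview above.
final_loss, initial_loss = 7.4821, 8.3814
steps_done, batch_size, context_length = 40, 4, 512
n_params = 55.77e6

perplexity = math.exp(final_loss)                             # ~1775.9
tokens_processed = steps_done * batch_size * context_length   # 40 * 2,048 = 81,920
loss_reduction = (initial_loss - final_loss) / initial_loss   # ~0.107 -> 10.7%
tokens_per_param = tokens_processed / n_params                # ~0.0015, shown as 0.0

print(f"ppl={perplexity:.1f}  tokens={tokens_processed:,}  "
      f"reduction={loss_reduction:.1%}  tok/param={tokens_per_param:.4f}")
```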
Dataset & Training
Domain: nanochat
Tokenizer: bpe-chat-4k
Total Iterations: 200
Batch Size: 4
Context Length: 512 tokens
Tokens per Batch: 2,048
Dataset Passes: ~0
Effective Tokens: 81,920
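Tokens per Batch is Batch Size × Context Length (4 × 512 = 2,048), and Effective Tokens is that figure times the number of completed steps. A minimal, hypothetical sketch of a dataloader that packs a flat token stream into such batches; this is not the project's actual dataloader, and the names and shapes are illustrative:

```python
import numpy as np

def iter_batches(token_ids: np.ndarray, batch_size: int = 4, context_length: int = 512):
    """Yield (inputs, targets) blocks of shape (batch_size, context_length)."""
    tokens_per_batch = batch_size * context_length   # 4 * 512 = 2,048
    for start in range(0, len(token_ids) - tokens_per_batch - 1, tokens_per_batch):
        chunk = token_ids[start : start + tokens_per_batch + 1]
        x = chunk[:-1].reshape(batch_size, context_length)   # inputs
        y = chunk[1:].reshape(batch_size, context_length)    # next-token targets
        yield x, y
```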
Training Pipeline
Warmup: steps 1–2
Learning rate warmup; model weights adjusting to the data distribution.
Loss: 8.381 → 8.402. Linear LR warmup, gradient clipping.
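A minimal sketch of a linear-warmup-then-cosine learning-rate schedule consistent with the warmup and decay settings listed under Training Configuration below (base LR 6e-4, LR Min 6e-5, cosine decay). The scheduler code itself is not part of this report, so the function below is illustrative only:

```python
import math

def lr_at_step(step: int, warmup_steps: int, total_steps: int,
               base_lr: float = 6e-4, min_lr: float = 6e-5) -> float:
    """Linear warmup to base_lr, then cosine decay from base_lr down to min_lr."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    progress = min(progress, 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

The warmup length and total step count are taken as parameters, since both appear as settings in the Training Configuration section.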
Training Metrics
[Charts not rendered in this export] Panels: Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s).
Timing Breakdown
No Telemetry
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 55.77M
Layers: 16
Embedding Dim: 512
Attention Heads: 8
Head Dim: 64
FFN Dim: 2048
FFN Activation: SwiGLU
Vocab Size: 4,000
Context Length: 512 tokens
Dropout: 0
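A minimal configuration sketch that mirrors the table above; the class and field names are illustrative rather than taken from the training code:

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    # Values copied from the Model Configuration table above; names are illustrative.
    n_layers: int = 16
    d_model: int = 512
    n_heads: int = 8
    head_dim: int = 64            # d_model // n_heads
    d_ffn: int = 2048
    ffn_activation: str = "swiglu"
    vocab_size: int = 4000
    context_length: int = 512
    dropout: float = 0.0
```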
Training Configuration
Optimizer: AdamW
Learning Rate: 0.0006
LR Min: 0.00006
LR Schedule: Cosine decay
Warmup Steps: 500
Batch Size: 4
Grad Accum Steps: 1
Effective Batch: 4
Grad Clip: 1.0
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
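A hedged sketch of a single training step using the optimizer settings above (AdamW, weight decay 0.1, gradient clipping at 1.0) and the lr_at_step schedule sketched in the Training Pipeline section. The model and batch here are stand-ins so the snippet runs on its own; the real model is the 55.77M-parameter GPT described above, fed from the bpe-chat-4k tokenized corpus:

```python
import torch
import torch.nn.functional as F

# Stand-in model and data for illustration only.
model = torch.nn.Sequential(torch.nn.Embedding(4000, 512), torch.nn.Linear(512, 4000))
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-4, weight_decay=0.1)

def get_batch(batch_size: int = 4, context_length: int = 512):
    x = torch.randint(0, 4000, (batch_size, context_length))
    return x, torch.roll(x, shifts=-1, dims=1)   # dummy next-token targets

def train_step(step: int) -> float:
    x, y = get_batch()                           # shapes (4, 512) per the config above
    for group in optimizer.param_groups:         # cosine schedule with warmup (see earlier sketch)
        group["lr"] = lr_at_step(step, warmup_steps=500, total_steps=200)
    logits = model(x)                            # (4, 512, 4000)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), y.reshape(-1))
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)   # Grad Clip: 1.0
    optimizer.step()
    return loss.item()
```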
Layer Structure
Token Embed: 4,000×512
Pos Embed: 512×512
Block 0: Attn+FFN
Block 1: Attn+FFN
Block 2: Attn+FFN
Block 3: Attn+FFN
Block 4: Attn+FFN
Block 5: Attn+FFN
LayerNorm: 512
LM Head: 512×4,000
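A rough PyTorch sketch of the shape flow implied by the layer structure above. nn.TransformerEncoderLayer is used only as a stand-in for each Attn+FFN block (the real blocks use SwiGLU and may differ in norm placement); all names are illustrative:

```python
import torch
import torch.nn as nn

class TinyGPT(nn.Module):
    def __init__(self, vocab_size=4000, d_model=512, n_layers=16, n_heads=8, context_length=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)           # Token Embed: 4,000×512
        self.pos_emb = nn.Embedding(context_length, d_model)       # Pos Embed:   512×512
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=2048,
                                       dropout=0.0, batch_first=True)
            for _ in range(n_layers)                                # stand-in Attn+FFN blocks
        ])
        self.ln_f = nn.LayerNorm(d_model)                           # LayerNorm: 512
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)   # LM Head: 512×4,000

    def forward(self, idx: torch.Tensor) -> torch.Tensor:
        b, t = idx.shape                                            # token ids, (batch, seq)
        x = self.tok_emb(idx) + self.pos_emb(torch.arange(t, device=idx.device))
        mask = torch.triu(torch.full((t, t), float("-inf"), device=idx.device), diagonal=1)
        for block in self.blocks:
            x = block(x, src_mask=mask)                             # causal Attn+FFN
        return self.lm_head(self.ln_f(x))                           # (b, t, vocab_size)
```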
Generated Samples
No samples generated yet. Samples appear at configured intervals during training.
Checkpoints
No checkpoints saved yet.
Generated by Alpha Training System.