Status: stale
soda_chat_20260306105222_j14k
55.77M parameter nanochat model — bpe-chat-4k tokenizer, 16L/512D/8H
Overview
Parameters: 55.77M
Final Loss: 6.9269
Best Val Loss: -
Perplexity: 1019.4
Tokens Processed: 477,184
Tokens/Param: 0.0
Avg Throughput: 1,646 tok/s
Training Time: 19m 56s
Training Progress: 466 / 20,000 steps (2.3%)
Loss reduced by 17.6% from initial 8.4055
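The perplexity and reduction figures above follow directly from the reported losses: perplexity is exp(loss), and the reduction is measured against the initial loss. A quick check with the values copied from the overview reproduces the dashboard's 1,019.4 and 17.6%:

```python
import math

final_loss, initial_loss = 6.9269, 8.4055
print(f"perplexity: {math.exp(final_loss):,.1f}")                           # ~1,019.4
print(f"loss reduction: {(initial_loss - final_loss) / initial_loss:.1%}")  # ~17.6%
```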
Dataset & Training
Domain: nanochat
Tokenizer: bpe-chat-4k
Total Iterations: 20,000
Batch Size: 2
Context Length: 512 tokens
Tokens per Batch: 1,024
Dataset Passes: ~1
Effective Tokens: 477,184
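The token accounting here is simple arithmetic: each step consumes batch size × context length tokens, and the effective total is steps completed × tokens per batch. The same arithmetic also explains the 0.0 tokens-per-parameter shown in the overview:

```python
batch_size, context_len, steps_done = 2, 512, 466
params = 55.77e6

tokens_per_batch = batch_size * context_len        # 1,024
effective_tokens = steps_done * tokens_per_batch   # 477,184
tokens_per_param = effective_tokens / params       # ~0.009, rounds to 0.0 on the dashboard
print(tokens_per_batch, effective_tokens, f"{tokens_per_param:.4f}")
```

Note that this counts the micro-batch of 2 sequences; with 4 gradient-accumulation steps (effective batch 8, per the training configuration below), each optimizer update sees 4,096 tokens.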
Training Pipeline
Warmup: steps 1–200
Learning rate warmup — model weights adjusting to data distribution
Loss: 8.406 → 7.170 (linear LR warmup, gradient clipping)
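A minimal sketch of what a warmup-phase update looks like, assuming a PyTorch-style model that returns a scalar loss; `model`, `optimizer`, and `batch` are placeholder names, and the warmup length and clip value are taken from the training configuration below, not from the actual training code:

```python
import torch

def warmup_step(model, optimizer, batch, step, warmup_steps=500, base_lr=1e-4):
    # Linear LR warmup: ramp from 0 up to the base learning rate.
    lr = base_lr * min(step / warmup_steps, 1.0)
    for group in optimizer.param_groups:
        group["lr"] = lr
    loss = model(**batch)               # assumes the model returns a scalar loss
    loss.backward()
    # Clip gradients at the configured norm of 1 before stepping.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```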
Training Metrics
Charts (no telemetry recorded): Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), Timing Breakdown.
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 55.77M
Layers: 16
Embedding Dim: 512
Attention Heads: 8
Head Dim: 64
FFN Dim: 2048
FFN Activation: swiglu
Vocab Size: 4,000
Context Length: 512 tokens
Dropout: 0
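As a rough cross-check of the 55.77M figure, the dimensions above can be turned into a parameter estimate. This is only an approximation: the exact count depends on biases, norm type, FFN variant, and whether the LM head is tied to the token embedding, so the numbers below bracket rather than reproduce the reported total.

```python
def approx_params(vocab=4_000, n_layer=16, d_model=512, d_ff=2048,
                  ctx=512, swiglu=True, tied_head=False):
    embed = vocab * d_model + ctx * d_model         # token + positional embeddings
    attn  = 4 * d_model * d_model                   # Q, K, V, output projections
    ffn   = (3 if swiglu else 2) * d_model * d_ff   # SwiGLU adds a gate matrix
    block = attn + ffn + 2 * d_model                # plus two norms per block
    head  = 0 if tied_head else d_model * vocab     # untied LM head
    return embed + n_layer * block + d_model + head # plus final norm

print(f"{approx_params(swiglu=True) / 1e6:.1f}M")   # ~71.5M
print(f"{approx_params(swiglu=False) / 1e6:.1f}M")  # ~54.7M
```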
Training Configuration
Optimizer: adamw
Learning Rate: 0.0001
LR Min: 0.00001
LR Schedule: Cosine decay
Warmup Steps: 500
Batch Size: 2
Grad Accum Steps: 4
Effective Batch: 8
Grad Clip: 1
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
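The schedule above (cosine decay from 1e-4 down to 1e-5 after 500 warmup steps, over 20,000 total steps) can be written as a single function. This is a sketch of the standard formulation, not necessarily the exact code the helios backend runs:

```python
import math

def lr_at(step, total_steps=20_000, warmup_steps=500, lr_max=1e-4, lr_min=1e-5):
    if step < warmup_steps:
        return lr_max * step / warmup_steps            # linear warmup
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

print(lr_at(466))      # ~9.3e-5, still warming up at the current step
print(lr_at(20_000))   # 1e-5, fully decayed at the final step
```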
Layer Structure
Token Embed: 4,000×512
Pos Embed: 512×512
Blocks 0–15: Attn+FFN (16 transformer blocks)
LayerNorm: 512
LM Head: 512×4,000
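Read as a data flow, the structure above maps token ids through the embeddings, 16 residual blocks, and the LM head. A shape walk-through with dummy tensors (batch size and context length taken from the configuration; the random weights stand in for the trained ones):

```python
import torch

B, T, D, V = 2, 512, 512, 4_000
tokens  = torch.randint(0, V, (B, T))           # (B, T) token ids
tok_emb = torch.randn(V, D)[tokens]             # (B, T, 512) token embeddings
pos_emb = torch.randn(T, D)[torch.arange(T)]    # (T, 512) positional embeddings
x = tok_emb + pos_emb                           # (B, T, 512) input to block 0
# ... each of the 16 Attn+FFN blocks keeps the shape at (B, T, 512) ...
x = torch.nn.LayerNorm(D)(x)                    # final LayerNorm
logits = x @ torch.randn(D, V)                  # (B, T, 4000) next-token logits
print(logits.shape)                             # torch.Size([2, 512, 4000])
```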
Generated Samples
No samples generated yet. Samples appear at configured intervals during training.
Checkpoints
No checkpoints saved yet.
Generated by Alpha Training System