Run: super_chat_20260306141701_ocys (status: completed)
88.33M parameter nanochat model — bpe-chat-4k tokenizer, 16L/512D/8H
Overview
Parameters: 88.33M
Final Loss: 6.9162
Best Val Loss: -
Perplexity: 1008.4
Tokens Processed: 409,600
Tokens/Param: 0.0
Avg Throughput: 1,355 tok/s
Training Time: 2m 19s
Training Progress: 200 / 200 steps (100.0%)
Loss reduced by 17.6% from initial 8.3939
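The derived metrics in the Overview follow directly from the reported loss and token counts. A minimal sketch (assuming the loss is natural-log cross-entropy, which the reported perplexity implies) that reproduces them:

```python
import math

# Values reported in the Overview section above.
final_loss = 6.9162        # mean cross-entropy per token (nats)
initial_loss = 8.3939
params = 88.33e6
tokens_processed = 409_600

# Perplexity is exp(cross-entropy): exp(6.9162) ≈ 1008.4
perplexity = math.exp(final_loss)

# Loss reduction relative to the initial value: ≈ 17.6%
loss_reduction_pct = 100 * (initial_loss - final_loss) / initial_loss

# The "Tokens/Param" card rounds to 0.0; the unrounded ratio is ≈ 0.0046
tokens_per_param = tokens_processed / params

print(f"perplexity ≈ {perplexity:.1f}")
print(f"loss reduction ≈ {loss_reduction_pct:.1f}%")
print(f"tokens/param ≈ {tokens_per_param:.4f}")
```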
Dataset & Training
Domain: nanochat
Tokenizer: bpe-chat-4k
Total Iterations: 200
Batch Size: 4
Context Length: 512 tokens
Tokens per Batch: 2,048
Dataset Passes: ~1
Effective Tokens: 409,600
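The token accounting above follows from batch size, context length, and iteration count; a quick sketch of the arithmetic:

```python
# Token accounting for this run, reproduced from the values above.
batch_size = 4
context_length = 512
iterations = 200

tokens_per_batch = batch_size * context_length     # 4 * 512 = 2,048
effective_tokens = tokens_per_batch * iterations    # 2,048 * 200 = 409,600

# "Dataset Passes ~1" implies the corpus is roughly effective_tokens
# in size; the exact corpus size is not reported.
print(tokens_per_batch, effective_tokens)
```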
Training Pipeline
Warmup (steps 1–2)
Learning rate warmup — model weights adjusting to the data distribution
Loss: 8.394 → 8.397 (linear LR warmup, gradient clipping)
Rapid Descent (steps 2–61)
Steepest loss reduction — model learning primary patterns
Loss: 8.397 → 6.778 (cosine LR schedule, AdamW optimization)
Refinement (steps 61–157)
Diminishing returns — model fine-tuning subtler patterns
Loss: 6.778 → 6.916 (lower LR, gradient accumulation)
Convergence (steps 157–200)
Approaching minimum — model capacity saturation
Loss: 6.916 → 6.916 (minimum LR, weight decay regularization)
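The pipeline describes a linear LR warmup followed by cosine decay to the minimum LR. The backend's actual schedule code is not shown, so the sketch below is only illustrative; note that the warmup length is ambiguous in this report (the pipeline shows steps 1–2, while the config table lists 500).

```python
import math

def lr_at(step, max_steps=200, lr_max=6e-4, lr_min=6e-5, warmup_steps=2):
    """Linear warmup followed by cosine decay to lr_min.

    warmup_steps=2 matches the pipeline summary above; treat this as an
    illustration of the schedule shape, not the exact training code.
    """
    if step < warmup_steps:
        return lr_max * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

# LR near the start, mid-run, and at the end of the 200-step run.
print(lr_at(0), lr_at(100), lr_at(199))
```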
Training Metrics
Charts (not reproduced in this export): Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), Timing Breakdown (no telemetry recorded)
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 88.33M
Layers: 16
Embedding Dim: 512
Attention Heads: 8
Head Dim: 64
FFN Dim: 2048
FFN Activation: swiglu
Vocab Size: 4,000
Context Length: 512 tokens
Dropout: 0
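For reference, the configuration rows map naturally onto a small config object. `ModelConfig` below is a hypothetical container (not the training system's actual config class) that also checks the derived head dimension (512 / 8 = 64):

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    # Hypothetical container mirroring the configuration table above.
    n_layers: int = 16
    d_model: int = 512
    n_heads: int = 8
    d_ffn: int = 2048
    vocab_size: int = 4_000
    context_length: int = 512
    dropout: float = 0.0

    @property
    def head_dim(self) -> int:
        # 512 / 8 = 64, matching the "Head Dim" row above.
        return self.d_model // self.n_heads

cfg = ModelConfig()
assert cfg.head_dim == 64
```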
Training Configuration
Optimizer: adamw
Learning Rate: 0.0006
LR Min: 0.00006
LR Schedule: Cosine decay
Warmup Steps: 500
Batch Size: 4
Grad Accum Steps: 1
Effective Batch: 4
Grad Clip: 1
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
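The optimizer settings (AdamW, weight decay 0.1, gradient clipping at 1.0) correspond to a standard training step. The run itself used the "helios" backend, whose API is not documented here, so the sketch below uses plain PyTorch as a stand-in:

```python
import torch

# Generic PyTorch equivalent of the settings listed above.
model = torch.nn.Linear(512, 4_000)  # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-4, weight_decay=0.1)

def train_step(batch_x, batch_y):
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(batch_x), batch_y)
    loss.backward()
    # Grad Clip: 1 — clip the global gradient norm before the update.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()
```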
Layer Structure
Token Embed: 4,000×512
Pos Embed: 512×512
Blocks 0–5: Attn+FFN (first 6 of the 16 transformer blocks shown)
LayerNorm: 512
LM Head: 512×4,000
Generated Samples
No samples were generated during this run; samples are emitted only at configured intervals during training.
Checkpoints
| Step | File | Size | Date |
|---|---|---|---|
| 200 | checkpoint-200.json | 303.4 MB | Mar 6, 2026 7:02 PM |
Generated by Alpha Training System