Alpha Training System
Status: stale

super_chat_20260306114153_wazi

55.77M parameter nanochat model — bpe-chat-4k tokenizer, 16L/512D/8H

Overview

Parameters: 55.77M
Final Loss: 6.7954
Best Val Loss: 6.6557
Perplexity: 893.7
Tokens Processed: 952,320
Tokens/Param: ~0.017
Avg Throughput: 389 tok/s
Training Time: 16m 51s
Training Progress: 465 / 500 steps (93.0%)
Loss reduced by 19.0% from the initial 8.3938
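The perplexity and loss-reduction figures follow directly from the reported losses; a quick sanity check in Python, with values copied from this report:

    import math

    initial_loss = 8.3938   # loss at step 0
    final_loss = 6.7954     # final training loss

    # Perplexity is exp(cross-entropy loss in nats).
    perplexity = math.exp(final_loss)                       # ~893.7
    reduction = (initial_loss - final_loss) / initial_loss  # ~0.190

    print(f"perplexity = {perplexity:.1f}, loss reduced by {reduction:.1%}")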

Dataset & Training

Domain: nanochat
Tokenizer: bpe-chat-4k
Total Iterations: 500
Batch Size: 4
Context Length: 512 tokens
Tokens per Batch: 2,048
Dataset Passes: ~2
Effective Tokens: 952,320
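The token accounting above is straight multiplication; a minimal sketch reproducing the reported figures, with the completed step count taken from the Overview:

    batch_size = 4
    context_length = 512
    completed_steps = 465                                   # 465 / 500 steps

    tokens_per_batch = batch_size * context_length          # 2,048
    effective_tokens = tokens_per_batch * completed_steps   # 952,320

    # ~2 dataset passes implies a corpus of roughly
    # effective_tokens / 2 = ~476K tokens (inferred, not reported).
    print(tokens_per_batch, effective_tokens)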

Training Pipeline

Warmup (steps 1–5)

Learning rate warmup: model weights adjusting to the data distribution.

Loss: 8.394 → 8.277. Linear LR warmup, gradient clipping.

Rapid Descent (steps 5–151)

Steepest loss reduction: model learning the primary patterns.

Loss: 8.277 → 6.589. Cosine LR schedule, AdamW optimization.
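A minimal sketch of the linear-warmup plus cosine-decay schedule named above, using the LR values from the Training Configuration section; note the detected warmup phase spans only the first few steps while the config lists 500 warmup steps, so the exact boundaries used by the actual trainer are an assumption:

    import math

    def lr_at(step, total_steps, warmup_steps, lr_max=6e-4, lr_min=6e-5):
        """Linear warmup to lr_max, then cosine decay down to lr_min."""
        if step < warmup_steps:
            return lr_max * (step + 1) / warmup_steps
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * progress))

    # e.g. with the ~5-step warmup phase detected above:
    print(lr_at(0, 500, 5), lr_at(250, 500, 5), lr_at(499, 500, 5))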

Training Metrics

[Charts unavailable: Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), Timing Breakdown. No telemetry was recorded for this run.]

Model Architecture

Model Configuration

Architecture: GPT (decoder-only transformer)
Parameters: 55.77M
Layers: 16
Embedding Dim: 512
Attention Heads: 8
Head Dim: 64
FFN Dim: 2048
FFN Activation: SwiGLU
Vocab Size: 4,000
Context Length: 512 tokens
Dropout: 0
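Since the config lists SwiGLU as the FFN activation, here is a minimal PyTorch sketch of a SwiGLU feed-forward block with the dimensions above; bias usage and the exact hidden width in the real nanochat implementation are assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SwiGLU(nn.Module):
        """SwiGLU feed-forward: (silu(x W1) * (x W3)) W2."""
        def __init__(self, dim: int = 512, hidden: int = 2048):
            super().__init__()
            self.w1 = nn.Linear(dim, hidden, bias=False)  # gate projection
            self.w3 = nn.Linear(dim, hidden, bias=False)  # up projection
            self.w2 = nn.Linear(hidden, dim, bias=False)  # down projection

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.w2(F.silu(self.w1(x)) * self.w3(x))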

Training Configuration

Optimizer: AdamW
Learning Rate: 0.0006
LR Min: 0.00006
LR Schedule: Cosine decay
Warmup Steps: 500
Batch Size: 4
Grad Accum Steps: 1
Effective Batch: 4
Grad Clip: 1.0
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
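A hedged sketch of how these settings wire into PyTorch; the model below is a stand-in (the real run trains the 16-layer GPT above), and the helios backend may organize the step differently:

    import torch
    import torch.nn as nn

    torch.manual_seed(42)                        # Seed: 42

    model = nn.Linear(512, 4_000)                # stand-in for the GPT
    optimizer = torch.optim.AdamW(model.parameters(),
                                  lr=6e-4,       # Learning Rate
                                  weight_decay=0.1)

    # One step with the configured gradient clipping (Grad Clip: 1.0).
    x = torch.randn(4, 512)                      # Batch Size: 4
    loss = model(x).logsumexp(dim=-1).mean()     # placeholder loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)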

Layer Structure

Token Embed: 4,000×512
Pos Embed: 512×512
Blocks 0–15: Attn + FFN (16 blocks)
Final LayerNorm: 512
LM Head: 512×4,000
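The embedding shapes above can be checked by hand (token embed: 4,000 × 512 = 2,048,000 weights; positional embed: 512 × 512 = 262,144), and the reported 55.77M total is the same sum taken over every tensor in the model. A generic PyTorch helper, not nanochat-specific:

    import torch.nn as nn

    def count_params(model: nn.Module) -> int:
        """Sum of all trainable parameter elements, as reported above."""
        return sum(p.numel() for p in model.parameters() if p.requires_grad)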

Generated Samples

No samples generated yet. Samples appear at configured intervals during training.

Checkpoints

No checkpoints saved yet.
