Alpha Training System
Run: soda_chat_20260306084529_jxyd (status: completed)

88.33M parameter nanochat model — bpe-chat-4k tokenizer, 16 layers / 512 embedding dim / 8 attention heads

Overview

Parameters: 88.33M
Final Loss: 6.8327
Best Val Loss: 6.7628
Perplexity: 927.7
Tokens Processed: 6,144
Tokens/Param: 0.0
Avg Throughput: 2,304 tok/s
Training Time: 2m 24s

Training Progress: 3 / 3 steps (100.0%)
Loss reduced by 18.6% from the initial 8.3954.
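
The headline figures above are mutually consistent and can be reproduced from one another. A minimal sketch of the arithmetic, using the standard convention that perplexity is the exponential of the mean cross-entropy loss in nats:

```python
import math

params = 88.33e6            # model parameters
final_loss = 6.8327         # mean cross-entropy, nats per token
initial_loss = 8.3954
tokens_processed = 6_144

perplexity = math.exp(final_loss)               # ~927.7, matches the card above
tokens_per_param = tokens_processed / params    # ~0.00007, displayed as 0.0
loss_reduction = 1 - final_loss / initial_loss  # ~18.6%

print(f"perplexity     = {perplexity:.1f}")
print(f"tokens/param   = {tokens_per_param:.5f}")
print(f"loss reduction = {loss_reduction:.1%}")
```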

Dataset & Training

Domain: nanochat
Tokenizer: bpe-chat-4k
Total Iterations: 3
Batch Size: 4
Context Length: 512 tokens
Tokens per Batch: 2,048
Dataset Passes: ~0
Effective Tokens: 6,144
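
The derived rows in this table follow directly from the batch settings; a quick sketch:

```python
batch_size = 4
context_length = 512
total_iterations = 3

tokens_per_batch = batch_size * context_length          # 4 * 512 = 2,048
effective_tokens = tokens_per_batch * total_iterations  # 2,048 * 3 = 6,144

# "Dataset Passes: ~0" because 6,144 tokens is a negligible fraction of any
# realistically sized corpus, so the run completes well inside one epoch.
```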

Training Pipeline

Warmup (step 1)

Learning-rate warmup — model weights adjusting to the data distribution.

Loss: 8.395 → 8.395. Linear LR warmup, gradient clipping.

Convergence (step 3)

Approaching the minimum — model capacity saturation.

Loss: 8.360 → 6.833. Minimum LR, weight decay regularization.

Training Metrics

Chart panels (Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput in tok/s, Timing Breakdown): no telemetry was recorded for this run.
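
For reference, a smoothed-loss panel is conventionally an exponential moving average of the raw per-step loss. A minimal sketch; the smoothing factor below is an assumption, not a value taken from this run:

```python
def ema_smooth(losses, alpha=0.1):
    """Exponential moving average, as typically plotted for smoothed loss."""
    smoothed, current = [], None
    for loss in losses:
        current = loss if current is None else alpha * loss + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

# With this run's three recorded losses:
# ema_smooth([8.395, 8.360, 6.833]) -> [8.395, 8.3915, 8.23565]
```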

Model Architecture

Model Configuration

Architecture: GPT (decoder-only transformer)
Parameters: 88.33M
Layers: 16
Embedding Dim: 512
Attention Heads: 8
Head Dim: 64
FFN Dim: 2048
FFN Activation: SwiGLU
Vocab Size: 4,000
Context Length: 512 tokens
Dropout: 0.0
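
The configuration above maps naturally onto a small dataclass. A sketch; the field names are illustrative and not taken from the trainer's actual config class:

```python
from dataclasses import dataclass

@dataclass
class GPTConfig:
    n_layer: int = 16       # Layers
    n_embd: int = 512       # Embedding Dim
    n_head: int = 8         # Attention Heads (head dim = 512 / 8 = 64)
    ffn_dim: int = 2048     # FFN inner dimension (SwiGLU activation)
    vocab_size: int = 4000
    block_size: int = 512   # context length in tokens
    dropout: float = 0.0
```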

Training Configuration

Optimizer: AdamW
Learning Rate: 0.0006
LR Min: 0.00006
LR Schedule: cosine decay
Warmup Steps: 500
Batch Size: 4
Grad Accum Steps: 1
Effective Batch: 4
Grad Clip: 1.0
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
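
In PyTorch terms, this configuration corresponds roughly to the loop below. Here `model` and `loader` are hypothetical stand-ins for the real trainer objects, and whether the trainer applies weight decay selectively (e.g. excluding norms and embeddings) is not recorded in this report:

```python
import torch

torch.manual_seed(42)  # Seed: 42

# `model` and `loader` are placeholders for the actual trainer objects.
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-4, weight_decay=0.1)

for step, (x, y) in enumerate(loader):   # loader yields (input, target) token batches
    optimizer.zero_grad(set_to_none=True)
    loss = model(x, y)                   # assumed to return mean cross-entropy
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # Grad Clip: 1.0
    optimizer.step()
```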

Layer Structure

Token Embed: 4,000×512
Pos Embed: 512×512
Blocks 0–15: Attn+FFN (16 identical transformer blocks)
Final LayerNorm: 512
LM Head: 512×4,000
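
A minimal PyTorch sketch consistent with the layer map above. Details the report does not record (pre- vs. post-norm placement, bias usage, weight tying, the exact attention implementation) are assumptions here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    """Pre-norm transformer block: causal attention + SwiGLU FFN, both residual."""
    def __init__(self, dim=512, heads=8, ffn_dim=2048):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.gate = nn.Linear(dim, ffn_dim, bias=False)
        self.up = nn.Linear(dim, ffn_dim, bias=False)
        self.down = nn.Linear(ffn_dim, dim, bias=False)

    def forward(self, x, mask):
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        x = x + a
        h = self.ln2(x)
        return x + self.down(F.silu(self.gate(h)) * self.up(h))  # SwiGLU FFN

class GPT(nn.Module):
    def __init__(self, vocab=4000, dim=512, layers=16, ctx=512):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)            # Token Embed 4,000x512
        self.pos = nn.Embedding(ctx, dim)              # Pos Embed 512x512
        self.blocks = nn.ModuleList(Block(dim) for _ in range(layers))
        self.ln_f = nn.LayerNorm(dim)                  # final LayerNorm
        self.head = nn.Linear(dim, vocab, bias=False)  # LM Head 512x4,000

    def forward(self, idx):
        T = idx.size(1)
        x = self.tok(idx) + self.pos(torch.arange(T, device=idx.device))
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=idx.device), 1)
        for blk in self.blocks:
            x = blk(x, mask)
        return self.head(self.ln_f(x))                 # logits over the vocabulary
```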

Generated Samples

No samples generated yet. Samples appear at configured intervals during training.

Checkpoints

Step: 3
File: checkpoint-3.json
Size: 303.3 MB
Date: Mar 6, 2026 1:05 PM
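
A hedged sketch for inspecting the checkpoint file. The internal layout of checkpoint-3.json is not documented in this report, so the key names below are guesses:

```python
import json

with open("checkpoint-3.json") as f:
    ckpt = json.load(f)

# "config" and "weights" are assumed keys; adjust to the actual schema.
config = ckpt.get("config", {})
weights = ckpt.get("weights", {})
print(f"checkpoint keys: {sorted(ckpt)}")
print(f"{len(weights)} weight entries loaded")
```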

Generated by the Alpha Training System.