fine_corpus_large_v7 (status: stale)
20.51M-parameter fine_corpus model; bpe-8k tokenizer, 8 layers / 384 dim / 6 heads
Overview
Parameters: 20.51M
Final Loss: 6.9837
Best Val Loss: 6.7943
Perplexity: 1078.9
Tokens Processed: 11,100,160
Tokens/Param: 0.5
Avg Throughput: 7,406 tok/s
Training Time: 25m 2s
Training Progress: 5,420 / 78,000 steps (6.9%)
Loss reduced by 22.9% from initial 9.0567
Dataset & Training
Domain: fine_corpus
Tokenizer: bpe-8k
Total Iterations: 78,000
Batch Size: 4
Context Length: 512 tokens
Tokens per Batch: 2,048
Dataset Passes: ~14
Effective Tokens: 11,100,160
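The token accounting above is pure arithmetic on the batch shape; a minimal sketch (the corpus size is inferred from the quoted ~14 passes rather than logged, so treat it as an assumption):

```python
batch_size, context_len = 4, 512
steps_done, total_steps = 5_420, 78_000

tokens_per_batch = batch_size * context_len         # 2,048, as listed
tokens_processed = steps_done * tokens_per_batch    # 11,100,160, as listed

# "Dataset Passes ~14" implies a corpus of roughly 0.79M tokens.
# This is an inference from the two numbers above, not a logged value.
approx_corpus_tokens = tokens_processed / 14        # ~793k

# What the full 78,000-step run would consume at this batch shape.
planned_tokens = total_steps * tokens_per_batch     # 159,744,000

print(tokens_per_batch, tokens_processed, round(approx_corpus_tokens), planned_tokens)
```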
Training Pipeline
Warmup: steps 1–780
Learning-rate warmup; model weights adjust to the data distribution.
Loss: 9.057 → 6.673
Techniques: linear LR warmup, gradient clipping
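The schedule named in the configuration (linear warmup into cosine decay) fits in one small function. Note the pipeline panel shows warmup over steps 1–780 while the configuration table lists 1,000 warmup steps; the sketch below takes the span as a parameter rather than resolving that discrepancy:

```python
import math

def lr_at(step, *, max_lr=3e-4, min_lr=3e-5, warmup=780, total=78_000):
    """Linear warmup to max_lr, then cosine decay to min_lr."""
    if step < warmup:
        return max_lr * (step + 1) / warmup           # linear ramp from ~0
    progress = (step - warmup) / (total - warmup)     # 0 -> 1 over the decay span
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_at(779), lr_at(39_000), lr_at(77_999))  # peak 3e-4, mid decay, ~3e-5
```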
Training Metrics
[Charts not rendered in this export: Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), Timing Breakdown. No telemetry recorded.]
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 20.51M
Layers: 8
Embedding Dim: 384
Attention Heads: 6
Head Dim: 64
FFN Dim: 1,536
FFN Activation: GELU
Vocab Size: 8,000
Context Length: 512 tokens
Dropout: 0
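The 20.51M figure can be reproduced from this table. The sketch below assumes bias-free linear layers, affine LayerNorms, and an untied LM head (none of which the report states explicitly), which lands exactly on the quoted count:

```python
def gpt_params(vocab=8_000, ctx=512, d=384, layers=8, ffn=1_536):
    """Parameter count under the assumptions stated in the lead-in."""
    tok_emb = vocab * d              # 3,072,000
    pos_emb = ctx * d                #   196,608
    attn = d * (3 * d) + d * d       # QKV projection + output projection
    mlp = d * ffn + ffn * d          # FFN up + down projections
    norms = 2 * (2 * d)              # two affine LayerNorms per block
    block = attn + mlp + norms       # 1,771,008 per block
    head = d * vocab                 # 3,072,000 (untied)
    return tok_emb + pos_emb + layers * block + 2 * d + head  # + final LayerNorm

print(gpt_params())  # 20,509,440 ~= 20.51M, matching the table
```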
Training Configuration
Optimizer: AdamW
Learning Rate: 0.0003
LR Min: 0.00003
LR Schedule: cosine decay
Warmup Steps: 1,000
Batch Size: 4
Grad Accum Steps: 1
Effective Batch: 4
Grad Clip: 10
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-8k
Seed: 42
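The `helios` backend is not public, so as an illustration here is how the same settings wire into a PyTorch training step (the stand-in model and random batch are assumptions; only the hyperparameters come from the table):

```python
import torch
from torch import nn

torch.manual_seed(42)  # seed from the table

# Stand-in LM so the sketch runs; the real run trains the GPT described below.
model = nn.Sequential(nn.Embedding(8_000, 384), nn.Linear(384, 8_000, bias=False))

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.1)

def train_step(tokens, targets, grad_clip=10.0):
    """One optimizer step with the gradient clipping from the table
    (grad accumulation is 1 here, so no accumulation loop is needed)."""
    logits = model(tokens)  # (batch, context, vocab)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return loss.item()

x = torch.randint(0, 8_000, (4, 512))  # batch size 4, context length 512
y = torch.randint(0, 8_000, (4, 512))
print(train_step(x, y))  # ~9.0 at init, near the run's initial loss of 9.0567
```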
Layer Structure
Token Embed: 8,000×384
Pos Embed: 512×384
Blocks 0–7: Attn+FFN (×8)
Final LayerNorm: 384
LM Head: 384×8,000
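Each "Attn+FFN" entry corresponds to a standard transformer block with the dimensions above (384-dim, 6 heads of 64, FFN 1536, GELU). A minimal PyTorch sketch follows; the pre-norm layout and causal mask are assumptions, since the report lists only the components:

```python
import torch
from torch import nn

class Block(nn.Module):
    """One Attn+FFN block: 384-dim, 6 heads (head dim 64), FFN 1536, GELU."""
    def __init__(self, d=384, heads=6, ffn=1_536):
        super().__init__()
        self.ln1 = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d)
        self.mlp = nn.Sequential(nn.Linear(d, ffn), nn.GELU(), nn.Linear(ffn, d))

    def forward(self, x):
        h = self.ln1(x)
        # Causal mask: each position attends only to itself and earlier ones.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        a, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + a                         # residual around attention
        return x + self.mlp(self.ln2(x))  # residual around the FFN

x = torch.randn(4, 512, 384)  # (batch, context, embedding dim)
print(Block()(x).shape)       # torch.Size([4, 512, 384])
```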
Generated Samples
Step 0 — Mar 8, 2026 11:46 AM
Prompt: The
The utk"", "sh= e. = p,,.N
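Noise at step 0 is expected: with randomly initialized weights the next-token distribution is close to uniform over the 8,000-token vocabulary. For reference, a generic temperature-sampling loop (the report does not show the sampler actually used, so all names here are illustrative):

```python
import torch

def sample(model, tokens, max_new=50, temperature=1.0, ctx=512):
    """Autoregressive sampling: feed the last ctx tokens, draw one, repeat."""
    for _ in range(max_new):
        logits = model(tokens[:, -ctx:])                       # (1, T, vocab)
        probs = torch.softmax(logits[:, -1] / temperature, -1)
        nxt = torch.multinomial(probs, num_samples=1)          # draw a token id
        tokens = torch.cat([tokens, nxt], dim=1)
    return tokens

# With an untrained stand-in model the output is noise, like the sample above.
lm = torch.nn.Sequential(torch.nn.Embedding(8_000, 384),
                         torch.nn.Linear(384, 8_000, bias=False))
print(sample(lm, torch.tensor([[1]]))[0, :10])
```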
Checkpoints
No checkpoints saved yet.
Generated by the Alpha Training System.