fine_corpus_l4_20260308_053318 (status: stale)
8.95M parameter fine_corpus model — bpe-8k tokenizer, 6L/256D/4H
Overview
Parameters: 8.95M
Final Loss: 7.9876
Best Val Loss: 7.9997
Perplexity: 2944.2
Tokens Processed: 1,122,304
Tokens/Param: 0.1
Avg Throughput: 10,902 tok/s
Training Time: 4m 22s
Training Progress: 1,096 / 78,000 steps (1.4%)
Loss reduced by 11.7% from the initial 9.0445
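The derived figures above are internally consistent; a minimal sketch of the arithmetic (the formulas are assumed from standard practice, not taken from the dashboard source):

```python
import math

params = 8.95e6
initial_loss, final_loss = 9.0445, 7.9876
tokens_processed = 1_122_304

# Perplexity is the exponential of the mean cross-entropy loss.
perplexity = math.exp(final_loss)                        # ~2944.2

# Tokens seen per parameter (0.125, displayed rounded as 0.1).
tokens_per_param = tokens_processed / params

# Relative loss reduction from the starting value.
loss_reduction = (initial_loss - final_loss) / initial_loss  # ~11.7%

print(f"{perplexity:.1f}  {tokens_per_param:.3f}  {loss_reduction:.1%}")
```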
Dataset & Training
Domain: fine_corpus
Tokenizer: bpe-8k
Total Iterations: 78,000
Batch Size: 2
Context Length: 512 tokens
Tokens per Batch: 1,024
Dataset Passes: ~1
Effective Tokens: 1,122,304
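The token accounting follows directly from these settings; a small sketch (the mapping of "steps" to micro-batches is an assumption, chosen because it reproduces the totals):

```python
batch_size = 2
context_length = 512
steps_completed = 1_096
total_iterations = 78_000

tokens_per_batch = batch_size * context_length          # 1,024
effective_tokens = steps_completed * tokens_per_batch   # 1,122,304 so far
planned_tokens = total_iterations * tokens_per_batch    # 79,872,000 for the full run,
                                                        # consistent with ~1 dataset pass
print(tokens_per_batch, effective_tokens, planned_tokens)
```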
Training Pipeline
Warmup: steps 7–780
Learning rate warmup — model weights adjusting to the data distribution
Loss: 9.044 → 7.903 (linear LR warmup, gradient clipping)
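This warmup stage corresponds to the LR schedule recorded in the Training Configuration below; a minimal sketch of that schedule, assuming a linear ramp into cosine decay (the exact ramp shape the training loop uses is an assumption):

```python
import math

def lr_at(step, lr_max=6e-4, lr_min=6e-5, warmup=500, total=78_000):
    """Linear warmup to lr_max, then cosine decay down to lr_min."""
    if step < warmup:
        return lr_max * (step + 1) / warmup
    progress = (step - warmup) / (total - warmup)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

print(lr_at(0), lr_at(500), lr_at(78_000 - 1))  # ramp start, peak, floor
```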
Training Metrics
[Charts: Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), Timing Breakdown; no telemetry recorded yet]
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 8.95M
Layers: 6
Embedding Dim: 256
Attention Heads: 4
Head Dim: 64
FFN Dim: 1024
FFN Activation: gelu
Vocab Size: 8,000
Context Length: 512 tokens
Dropout: 0
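For concreteness, a minimal PyTorch sketch of one decoder block at these dimensions (the class name, pre-norm placement, and use of nn.MultiheadAttention are illustrative assumptions, not the actual training code):

```python
import torch.nn as nn

class Block(nn.Module):
    """One pre-norm decoder block: causal self-attention + GELU FFN."""
    def __init__(self, d_model=256, n_heads=4, d_ffn=1024, dropout=0.0):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ffn),   # 256 -> 1024
            nn.GELU(),
            nn.Linear(d_ffn, d_model),   # 1024 -> 256
        )

    def forward(self, x, causal_mask):
        a = self.ln1(x)
        a, _ = self.attn(a, a, a, attn_mask=causal_mask, need_weights=False)
        x = x + a
        return x + self.ffn(self.ln2(x))
```

The head dim of 64 falls out of the split 256 / 4 heads; the pre-norm layout is assumed since the dashboard does not record norm placement.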
Training Configuration
Optimizer: adamw
Learning Rate: 0.0006
LR Min: 0.00006
LR Schedule: cosine decay
Warmup Steps: 500
Batch Size: 2
Grad Accum Steps: 2
Effective Batch: 4
Grad Clip: 1
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-8k
Seed: 42
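A minimal sketch of how these settings combine in the inner loop: gradient accumulation over 2 micro-batches yields the effective batch of 4, with the global norm clipped at 1.0 before each AdamW step (the model and data below are hypothetical stand-ins):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins; the real run uses the GPT model and fine_corpus loader.
model = nn.Linear(256, 8_000)
batches = [(torch.randn(2, 256), torch.randint(0, 8_000, (2,))) for _ in range(4)]

accum_steps = 2   # Grad Accum Steps -> effective batch = 2 * 2 = 4
grad_clip = 1.0
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-4, weight_decay=0.1)
loss_fn = nn.CrossEntropyLoss()

optimizer.zero_grad()
for i, (x, y) in enumerate(batches):
    loss = loss_fn(model(x), y) / accum_steps  # average across the window
    loss.backward()
    if (i + 1) % accum_steps == 0:             # one optimizer step per 2 micro-batches
        torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
        optimizer.step()
        optimizer.zero_grad()
```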
Layer Structure
Token Embed: 8,000 × 256
Pos Embed: 512 × 256
Block 0: Attn + FFN
Block 1: Attn + FFN
Block 2: Attn + FFN
Block 3: Attn + FFN
Block 4: Attn + FFN
Block 5: Attn + FFN
LayerNorm: 256
LM Head: 256 × 8,000
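These shapes reproduce the reported parameter count; a quick tally (bias-free attention/FFN projections and an untied LM head are assumptions, chosen because they make the arithmetic land on 8.95M):

```python
V, D, L, H_FFN, T = 8_000, 256, 6, 1_024, 512

token_embed = V * D                     # 2,048,000
pos_embed   = T * D                     #   131,072
per_block = (4 * D * D                  # Q, K, V, output projections
             + 2 * D * H_FFN            # FFN up + down projections
             + 2 * 2 * D)               # two LayerNorms (weight + bias)
final_ln = 2 * D
lm_head  = D * V                        # untied output head

total = token_embed + pos_embed + L * per_block + final_ln + lm_head
print(f"{total:,}")                     # 8,952,320 ~= 8.95M
```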
Generated Samples
No samples generated yet. Samples appear at configured intervals during training.
Checkpoints
No checkpoints saved yet.
Generated by Alpha Training System