historic_chat_v2_20260309194440_re2f (status: stale)
34.16M parameter concordance-chat model — bpe-8k-chat tokenizer, 8L/512D/8H
Overview
Parameters: 34.16M
Final Loss: 2.3098
Best Val Loss: -
Perplexity: 10.1
Tokens Processed: 139,345,920
Tokens/Param: 4.1
Avg Throughput: 9,047 tok/s
Training Time: 18s
Training Progress: 17,010 / 22,500 steps (75.6%)
Loss reduced by 5.3% from initial 2.4398
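The derived figures above follow directly from the raw counters; a minimal arithmetic sketch (the 34.16M parameter count is taken as a rounded figure, and perplexity assumes the loss is natural-log cross-entropy):

```python
import math

final_loss, initial_loss = 2.3098, 2.4398    # cross-entropy in nats/token
steps_done, steps_total = 17_010, 22_500
tokens_processed = 139_345_920
params = 34_160_000                          # rounded from "34.16M"

perplexity = math.exp(final_loss)                        # ~10.07 -> 10.1
progress = steps_done / steps_total                      # ~0.756 -> 75.6%
tokens_per_param = tokens_processed / params             # ~4.08 -> 4.1
loss_drop = (initial_loss - final_loss) / initial_loss   # ~0.053 -> 5.3%

print(f"ppl={perplexity:.1f}  progress={progress:.1%}  "
      f"tok/param={tokens_per_param:.1f}  loss drop={loss_drop:.1%}")
```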
Dataset & Training
Domain: concordance-chat
Tokenizer: bpe-8k-chat
Total Iterations: 22,500
Batch Size: 16
Context Length: 512 tokens
Tokens per Batch: 8,192
Dataset Passes: ~174
Effective Tokens: 139,345,920
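How the per-batch and cumulative token figures relate, sketched below; the corpus size is inferred from the reported ~174 passes rather than stated on the card:

```python
batch_size, context_length = 16, 512
steps_completed = 17_010

tokens_per_batch = batch_size * context_length           # 16 * 512 = 8,192
effective_tokens = steps_completed * tokens_per_batch    # 139,345,920

# ~174 passes over the training corpus implies roughly this many tokens per pass
approx_corpus_tokens = effective_tokens / 174             # ~800K tokens (inferred)
print(tokens_per_batch, effective_tokens, round(approx_corpus_tokens))
```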
Training Pipeline
Warmup: steps 17,001–17,001
Learning rate warmup — model weights adjusting to data distribution
Loss: 2.440 → 2.440 (linear LR warmup, gradient clipping)
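The warmup phase above feeds into the cosine decay listed in the training configuration (peak LR 5e-5, floor 5e-6, 50 warmup steps); a minimal sketch of that schedule, assuming warmup is counted from step 0 and decay runs to the final iteration:

```python
import math

def lr_at(step, peak_lr=5e-5, min_lr=5e-6, warmup_steps=50, total_steps=22_500):
    """Linear warmup to peak_lr, then cosine decay down to min_lr."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)  # 0 -> 1
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * t))

print(lr_at(0), lr_at(49), lr_at(11_250), lr_at(22_499))
```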
Training Metrics
Chart panels (not rendered in this export): Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), Timing Breakdown (no telemetry recorded).
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 34.16M
Layers: 8
Embedding Dim: 512
Attention Heads: 8
Head Dim: 64
FFN Dim: 2048
FFN Activation: swiglu
Vocab Size: 8,000
Context Length: 512 tokens
Dropout: 0
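A back-of-envelope parameter counter built from the fields above; it ignores bias terms and norm weights, and since the card does not say whether the LM head is tied or exactly how the SwiGLU hidden dimension is counted, no variant below reproduces the reported 34.16M exactly:

```python
def approx_params(vocab=8_000, d=512, ctx=512, n_layers=8, ffn=2_048,
                  gated_ffn=True, tied_head=True):
    """Rough count: embeddings + attention + FFN; biases and norms omitted."""
    tok_emb = vocab * d
    pos_emb = ctx * d
    attn = 4 * d * d                            # Wq, Wk, Wv, Wo
    ffn_p = (3 if gated_ffn else 2) * d * ffn   # SwiGLU adds a gate projection
    head = 0 if tied_head else vocab * d        # tied head reuses tok_emb
    return tok_emb + pos_emb + n_layers * (attn + ffn_p) + head

for gated in (True, False):
    for tied in (True, False):
        total = approx_params(gated_ffn=gated, tied_head=tied)
        print(f"gated_ffn={gated!s:5} tied_head={tied!s:5} -> {total/1e6:.2f}M")
```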
Training Configuration
Optimizer: adamw
Learning Rate: 0.00005
LR Min: 0.000005
LR Schedule: Cosine decay
Warmup Steps: 50
Batch Size: 16
Grad Accum Steps: 2
Effective Batch: 32
Grad Clip: 1
Weight Decay: 0.01
Backend: helios
Tokenizer: bpe-8k-chat
Seed: 42
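These optimizer settings map onto a standard accumulate-then-clip-then-step loop; a generic PyTorch-style sketch, assuming the helios backend follows this usual pattern (the actual trainer's API is not shown in this report):

```python
import torch
import torch.nn.functional as F

def train_step(model, micro_batches, optimizer, scheduler,
               accum_steps=2, grad_clip=1.0):
    """One optimizer update from accum_steps micro-batches (effective batch 32)."""
    optimizer.zero_grad(set_to_none=True)
    total_loss = 0.0
    for inputs, targets in micro_batches:       # accum_steps micro-batches of size 16
        logits = model(inputs)                  # (batch, seq, vocab)
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
        (loss / accum_steps).backward()         # average gradients across micro-batches
        total_loss += loss.item() / accum_steps
    torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)  # Grad Clip: 1
    optimizer.step()
    scheduler.step()                            # cosine LR schedule advances per update
    return total_loss

# Hyperparameters from the table above:
# optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
```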
Layer Structure
Token Embed: 8,000×512
Pos Embed: 512×512
Block 0: Attn+FFN
Block 1: Attn+FFN
Block 2: Attn+FFN
Block 3: Attn+FFN
Block 4: Attn+FFN
Block 5: Attn+FFN
LayerNorm: 512
LM Head: 512×8,000
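The layer list corresponds to a standard GPT forward pass; a minimal PyTorch sketch of the data flow using the card's dimensions. The block internals are simplified (GELU FFN instead of the listed SwiGLU), and while only six blocks are drawn above, the configuration reports eight layers, which is what this sketch uses:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Pre-norm attention + FFN block (FFN gating simplified to GELU here)."""
    def __init__(self, d, n_heads, ffn=2_048):
        super().__init__()
        self.ln1 = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d)
        self.ffn = nn.Sequential(nn.Linear(d, ffn), nn.GELU(), nn.Linear(ffn, d))

    def forward(self, x):
        h = self.ln1(x)
        # causal mask: each position attends only to itself and earlier positions
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attn_out
        return x + self.ffn(self.ln2(x))

class TinyGPT(nn.Module):
    """Skeleton matching the layer list: embeddings -> blocks -> norm -> LM head."""
    def __init__(self, vocab=8_000, d=512, ctx=512, n_layers=8, n_heads=8):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, d)            # Token Embed: 8,000 x 512
        self.pos_emb = nn.Embedding(ctx, d)              # Pos Embed: 512 x 512
        self.blocks = nn.ModuleList([Block(d, n_heads) for _ in range(n_layers)])
        self.norm = nn.LayerNorm(d)                      # final LayerNorm: 512
        self.lm_head = nn.Linear(d, vocab, bias=False)   # LM Head: 512 x 8,000

    def forward(self, idx):                              # idx: (batch, seq) token ids
        pos = torch.arange(idx.size(1), device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)        # (batch, seq, 512)
        for block in self.blocks:
            x = block(x)
        return self.lm_head(self.norm(x))                # (batch, seq, 8,000) logits
```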
Generated Samples
No samples generated yet. Samples appear at configured intervals during training.
Checkpoints
No checkpoints saved yet.
Generated by Alpha Training System