concordance-v2
Status: stale
97.65M-parameter concordance model (bpe-8k tokenizer, 12 layers / 768 embedding dim / 12 heads)
Overview
Parameters: 97.65M
Final Loss: 7.2877
Best Val Loss: -
Perplexity: 1462.1
Tokens Processed: 3,891,200
Tokens/Param: ~0.04
Avg Throughput: 1,996 tok/s
Training Time: 32m 29s

Training Progress: 475 / 100,000 steps (0.5%)
Loss reduced by 20.4% from initial 9.1592
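These headline figures are internally consistent: the perplexity is the exponential of the final loss (which confirms the loss is mean cross-entropy in nats per token), and the tokens-per-parameter ratio and quoted 20.4% reduction follow from the other cards. A minimal Python check using only values reported above:

    import math

    final_loss = 7.2877      # Final Loss (mean cross-entropy, nats/token)
    initial_loss = 9.1592    # initial loss quoted above
    params = 97.65e6         # Parameters
    tokens = 3_891_200       # Tokens Processed

    print(math.exp(final_loss))                        # ~1462.1 (Perplexity)
    print(tokens / params)                             # ~0.0398 (Tokens/Param)
    print((initial_loss - final_loss) / initial_loss)  # ~0.204  (20.4% loss reduction)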
Dataset & Training
Domain: concordance
Tokenizer: bpe-8k
Total Iterations: 100,000
Batch Size: 16
Context Length: 512 tokens
Tokens per Batch: 8,192
Dataset Passes: ~5
Effective Tokens: 3,891,200
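The token accounting follows directly from the batch shape; a short sketch (the corpus size at the end is an inference from the reported ~5 passes, not a figure from the report):

    batch_size = 16
    context_length = 512
    steps_completed = 475

    tokens_per_batch = batch_size * context_length         # 8,192
    effective_tokens = steps_completed * tokens_per_batch  # 3,891,200

    # Inferred only: ~5 dataset passes imply a corpus of roughly
    # 3,891,200 / 5 ≈ 778k tokens.
    print(tokens_per_batch, effective_tokens, effective_tokens // 5)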
Training Pipeline
Warmup: steps 1–500 (475 completed at this snapshot)
Linear learning-rate warmup with gradient clipping; model weights adjusting to the data distribution.
Loss so far: 9.159 → 7.288
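A sketch of the schedule this run is configured for, combining the linear warmup above with the cosine decay listed under Training Configuration below (the function is illustrative, not the training system's own code):

    import math

    LR_MAX, LR_MIN = 6e-4, 6e-5   # Learning Rate / LR Min
    WARMUP, TOTAL = 500, 100_000  # Warmup Steps / Total Iterations

    def lr_at(step):
        if step < WARMUP:
            # Linear ramp from 0 up to LR_MAX over the warmup window.
            return LR_MAX * (step + 1) / WARMUP
        # Cosine decay from LR_MAX down to LR_MIN over the remaining steps.
        progress = (step - WARMUP) / (TOTAL - WARMUP)
        return LR_MIN + 0.5 * (LR_MAX - LR_MIN) * (1 + math.cos(math.pi * progress))

    print(lr_at(474))     # ~5.7e-4: this run's 475th step, still inside warmup
    print(lr_at(99_999))  # ~6e-5: floor reached at the end of training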
Training Metrics
Chart panels (not rendered in this export): Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), Timing Breakdown.
No telemetry recorded.
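The panel titles indicate which per-step quantities the dashboard expects. A purely hypothetical sketch of how such telemetry could be recorded (the field names and JSON-lines format are assumptions, not the training system's actual schema):

    import json, math, time

    def log_step(path, step, loss, lr, grad_norm, tok_per_sec):
        """Append one training-step record as a JSON line (hypothetical schema)."""
        record = {
            "step": step,
            "time": time.time(),
            "loss": loss,
            "perplexity": math.exp(loss),  # derived from loss, as in the Overview
            "lr": lr,
            "grad_norm": grad_norm,
            "tok_per_sec": tok_per_sec,
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Example call; the gradient norm here is a placeholder value.
    log_step("telemetry.jsonl", step=475, loss=7.2877, lr=5.7e-4,
             grad_norm=1.0, tok_per_sec=1996.0)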
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 97.65M
Layers: 12
Embedding Dim: 768
Attention Heads: 12
Head Dim: 64
FFN Dim: 3072
FFN Activation: swiglu
Vocab Size: 8,000
Context Length: 512 tokens
Dropout: 0
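The reported 97.65M can be reproduced from these dimensions. The sketch below makes two assumptions the report does not state: the LM head is untied from the token embedding, and the FFN uses a standard two-matrix layout (a three-matrix gated SwiGLU at this hidden size would give a noticeably larger count); linear-layer biases are omitted:

    V, D, L, FFN, CTX = 8_000, 768, 12, 3_072, 512

    tok_embed = V * D            # 6,144,000
    pos_embed = CTX * D          #   393,216
    attn = 4 * D * D             # Q, K, V, and output projections per block
    ffn = 2 * D * FFN            # up- and down-projection (assumption: two matrices)
    norms = 2 * 2 * D            # two LayerNorms (scale + bias) per block
    block = attn + ffn + norms   # ~7.08M per block
    lm_head = D * V              # assumption: untied from tok_embed

    total = tok_embed + pos_embed + L * block + 2 * D + lm_head  # 2*D: final LayerNorm
    print(f"{total / 1e6:.2f}M")  # 97.65M, matching the reported count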
Training Configuration
Optimizer: adamw
Learning Rate: 0.0006
LR Min: 0.00006
LR Schedule: Cosine decay
Warmup Steps: 500
Batch Size: 16
Grad Accum Steps: 1
Effective Batch: 16
Grad Clip: 1
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-8k
Seed: 42
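Read concretely, this configuration maps onto a standard AdamW update with global gradient-norm clipping. A runnable PyTorch sketch (the actual backend is "helios", whose API is not shown in this report, so a stand-in model is used):

    import torch
    import torch.nn.functional as F

    torch.manual_seed(42)                    # Seed: 42
    model = torch.nn.Linear(768, 8_000)      # stand-in for the 97.65M-param GPT
    optimizer = torch.optim.AdamW(
        model.parameters(),
        lr=6e-4,                             # Learning Rate: 0.0006
        weight_decay=0.1,                    # Weight Decay: 0.1
    )

    x = torch.randn(16, 768)                 # Batch Size: 16 (Grad Accum Steps: 1)
    targets = torch.randint(0, 8_000, (16,))
    loss = F.cross_entropy(model(x), targets)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # Grad Clip: 1
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)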
Layer Structure
Token Embed: 8,000 × 768
Pos Embed: 512 × 768
Blocks 0–11: Attn + FFN (12 transformer blocks)
Final LayerNorm: 768
LM Head: 768 × 8,000
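A minimal PyTorch skeleton matching this stack shape (a sketch only: plain SiLU stands in for the configured SwiGLU gating, and masking and initialization details are simplified relative to whatever the training system actually builds):

    import torch
    import torch.nn as nn

    class Block(nn.Module):
        """One pre-norm transformer block: self-attention + FFN."""
        def __init__(self, d=768, heads=12, ffn=3_072):
            super().__init__()
            self.ln1 = nn.LayerNorm(d)
            self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d)
            self.ffn = nn.Sequential(nn.Linear(d, ffn), nn.SiLU(), nn.Linear(ffn, d))

        def forward(self, x):
            h = self.ln1(x)
            # Causal mask: each position attends only to itself and earlier positions.
            mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
            attn_out, _ = self.attn(h, h, h, attn_mask=mask)
            x = x + attn_out
            return x + self.ffn(self.ln2(x))

    class GPT(nn.Module):
        def __init__(self, vocab=8_000, d=768, layers=12, ctx=512):
            super().__init__()
            self.tok = nn.Embedding(vocab, d)            # Token Embed: 8,000 x 768
            self.pos = nn.Embedding(ctx, d)              # Pos Embed: 512 x 768
            self.blocks = nn.ModuleList(Block(d) for _ in range(layers))
            self.ln_f = nn.LayerNorm(d)                  # Final LayerNorm: 768
            self.head = nn.Linear(d, vocab, bias=False)  # LM Head: 768 x 8,000

        def forward(self, idx):
            x = self.tok(idx) + self.pos(torch.arange(idx.size(1)))
            for blk in self.blocks:
                x = blk(x)
            return self.head(self.ln_f(x))

    logits = GPT()(torch.randint(0, 8_000, (1, 16)))
    print(logits.shape)  # torch.Size([1, 16, 8000])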
Generated Samples
No samples generated yet. Samples appear at configured intervals during training.
Checkpoints
No checkpoints saved yet.
Generated by Alpha Training System. Config hash: