Alpha Training System — run status: stale

concordance-v1

119.51M parameter concordance model — bpe-64k tokenizer, 12L/768D/12H

Overview

Parameters: 119.51M
Final Loss: 8.6721
Best Val Loss: 8.5093
Perplexity: 5,838.0
Tokens Processed: 4,505,600
Tokens/Param: ~0.04
Avg Throughput: 1,813 tok/s
Training Time: 41m 21s
Training Progress: 550 / 100,000 steps (0.55%)
Loss reduced by 14.9% from the initial 10.1928.
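
The derived figures in this overview follow directly from the raw counters; for example, the reported perplexity is exp(final loss). A quick check in Python, using only numbers from this report:

    import math

    final_loss, initial_loss = 8.6721, 10.1928
    steps_done, total_steps = 550, 100_000
    tokens, params = 4_505_600, 119_510_000

    print(math.exp(final_loss))              # ~5,838 (reported perplexity)
    print(1 - final_loss / initial_loss)     # ~0.149 (14.9% loss reduction)
    print(steps_done / total_steps)          # 0.0055 (0.55% of training)
    print(tokens / params)                   # ~0.038 tokens per parameter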

Dataset & Training

Domain: concordance
Tokenizer: bpe-64k
Total Iterations: 100,000
Batch Size: 16
Context Length: 512 tokens
Tokens per Batch: 8,192
Dataset Passes: ~2
Effective Tokens: 4,505,600
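
The token totals above are simple products of these settings; a quick arithmetic check using only values from this report:

    batch_size, context_len, steps_completed = 16, 512, 550

    tokens_per_batch = batch_size * context_len              # 8,192
    effective_tokens = tokens_per_batch * steps_completed    # 4,505,600
    # ~2 dataset passes over 4.5M tokens suggests a corpus of roughly 2.3M tokens
    print(tokens_per_batch, effective_tokens)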

Training Pipeline

Warmup (first 500 steps)

Learning rate warmup — model weights are still adjusting to the data distribution.

Loss: 10.193. Linear LR warmup with gradient clipping (see the step sketch below).
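
In code terms, each step in this stage is a standard clipped AdamW update with the learning rate still ramping up. A minimal PyTorch-style sketch, not the actual helios backend code; the model, optimizer, and lr arguments are placeholders:

    import torch
    import torch.nn.functional as F

    def train_step(model, optimizer, x, y, lr):
        """Hypothetical single training step with gradient clipping."""
        logits = model(x)                                        # (batch, seq, vocab)
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))
        loss.backward()
        # Clip the global gradient norm at the configured value of 1.0
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        # lr comes from the warmup/cosine schedule (see Training Configuration below)
        for group in optimizer.param_groups:
            group["lr"] = lr
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)
        return loss.item()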

Training Metrics

Charts not rendered in this export: Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), and Timing Breakdown (no telemetry recorded).

Model Architecture

Model Configuration

Architecture: GPT (decoder-only transformer)
Parameters: 119.51M
Layers: 12
Embedding Dim: 768
Attention Heads: 12
Head Dim: 64
FFN Dim: 3072
FFN Activation: SwiGLU
Vocab Size: 22,226
Context Length: 512 tokens
Dropout: 0
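
As a rough sanity check, the reported parameter count can be reproduced from these numbers. The sketch below assumes untied input and output embeddings, a two-projection FFN of width 3072, and no bias terms; the actual SwiGLU layer may split the 3072 differently, so treat this as an approximation rather than the implementation's exact accounting:

    vocab, d_model, n_layers, ffn_dim, ctx = 22_226, 768, 12, 3072, 512

    embeds = vocab * d_model + ctx * d_model       # token + positional embeddings
    attn = 4 * d_model * d_model                   # Q, K, V, output projections
    ffn = 2 * d_model * ffn_dim                    # up + down projections (no gate)
    norms = 2 * (2 * d_model)                      # two LayerNorms per block
    block = attn + ffn + norms
    lm_head = d_model * vocab                      # untied output projection
    final_norm = 2 * d_model

    total = embeds + n_layers * block + final_norm + lm_head
    print(f"{total / 1e6:.2f}M parameters")        # prints 119.51M parameters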

Training Configuration

Optimizer: AdamW
Learning Rate: 0.0006
LR Min: 0.00006
LR Schedule: Cosine decay
Warmup Steps: 500
Batch Size: 16
Grad Accum Steps: 1
Effective Batch: 16
Grad Clip: 1.0
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-64k
Seed: 42
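
Taken together, these settings imply linear warmup over the first 500 steps followed by cosine decay from 6e-4 down to the 6e-5 floor at step 100,000. A small sketch of such a schedule (the lr_at name is illustrative, not the backend's API):

    import math

    LR_MAX, LR_MIN = 6e-4, 6e-5
    WARMUP, TOTAL = 500, 100_000

    def lr_at(step: int) -> float:
        if step < WARMUP:                          # linear warmup from 0 to LR_MAX
            return LR_MAX * (step + 1) / WARMUP
        # cosine decay from LR_MAX to LR_MIN over the remaining steps
        progress = (step - WARMUP) / (TOTAL - WARMUP)
        return LR_MIN + 0.5 * (LR_MAX - LR_MIN) * (1 + math.cos(math.pi * progress))

    print(lr_at(0), lr_at(499), lr_at(550), lr_at(99_999))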

Layer Structure

Token Embed: 22,226 × 768
Pos Embed: 512 × 768
Blocks 0–11: Attn + FFN (12 identical blocks)
Final LayerNorm: 768
LM Head: 768 × 22,226

Generated Samples

No samples generated yet. Samples appear at configured intervals during training.
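
When the configured interval is reached, sample generation is ordinary autoregressive decoding from the current weights. A hedged sketch, with temperature and prompt handling chosen arbitrarily rather than taken from the training system:

    import torch

    @torch.no_grad()
    def generate(model, prompt_ids, max_new_tokens=64, temperature=0.8):
        """Illustrative sampler; not the training system's actual generation code."""
        ids = prompt_ids
        for _ in range(max_new_tokens):
            logits = model(ids[:, -512:])                    # stay within the 512-token context
            probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
            next_id = torch.multinomial(probs, num_samples=1)
            ids = torch.cat([ids, next_id], dim=1)
        return ids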

Checkpoints

No checkpoints saved yet.

Generated by the Alpha Training System.