
concordance-v4

34.16M parameter concordance model — bpe-8k tokenizer, 8L/512D/8H

Overview

Parameters: 34.16M
Final Loss: 7.3125
Best Val Loss: 7.5364
Perplexity: 1498.9
Tokens Processed: 11,059,200
Tokens/Param: 0.3
Avg Throughput: 7,180 tok/s
Training Time: 25m 38s

Training Progress: 675 / 100,000 steps (0.7%)
Loss reduced by 19.6% from initial 9.0988
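
The derived figures above follow from the raw loss values by simple arithmetic; a minimal sketch of that calculation (variable names are illustrative, not taken from the training code):

```python
import math

initial_loss = 9.0988   # loss at the first logged step
final_loss = 7.3125     # training loss at step 675

# Perplexity is the exponential of the cross-entropy loss (natural log base).
perplexity = math.exp(final_loss)                        # ~1498.9

# Relative reduction of the loss from its initial value.
reduction = (initial_loss - final_loss) / initial_loss   # ~0.196, i.e. 19.6%

print(f"perplexity ~ {perplexity:.1f}, loss reduced by {reduction:.1%}")
```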

Dataset & Training

Domain: concordance
Tokenizer: bpe-8k
Total Iterations: 100,000
Batch Size: 32
Context Length: 512 tokens
Tokens per Batch: 16,384
Dataset Passes: ~14
Effective Tokens: 11,059,200
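
The token accounting in this table is straightforward multiplication; a short sketch under the reported settings (names are illustrative):

```python
batch_size = 32
context_length = 512
steps_completed = 675          # from the Training Progress line above
params = 34_160_000            # 34.16M parameters

tokens_per_batch = batch_size * context_length          # 16,384
effective_tokens = tokens_per_batch * steps_completed    # 11,059,200
tokens_per_param = effective_tokens / params             # ~0.32

print(tokens_per_batch, effective_tokens, round(tokens_per_param, 2))
```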

Training Pipeline

Warmup (steps 1–1)

Learning rate warmup — model weights adjusting to data distribution

Loss: 9.099 · Linear LR warmup, gradient clipping
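
The warmup stage card mentions linear LR warmup and gradient clipping; a minimal sketch of a training step that applies the configured max-norm clip of 1.0, written in PyTorch style for illustration (this is not the project's actual training loop):

```python
import torch
import torch.nn.functional as F

def training_step(model, batch, optimizer, grad_clip=1.0):
    """One optimizer step with max-norm gradient clipping (grad_clip matches the configured value of 1)."""
    inputs, targets = batch
    logits = model(inputs)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))

    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    # Rescale gradients so their global L2 norm does not exceed grad_clip.
    torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
    optimizer.step()
    return loss.item()
```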

Training Metrics

Loss Curve
?
?
?
?
Smoothed Loss
Perplexity
Learning Rate
Gradient Norm
Throughput (tok/s)
Timing Breakdown
No Telemetry

Model Architecture

Model Configuration

Architecture: GPT (decoder-only transformer)
Parameters: 34.16M
Layers: 8
Embedding Dim: 512
Attention Heads: 8
Head Dim: 64
FFN Dim: 2048
FFN Activation: SwiGLU
Vocab Size: 8,000
Context Length: 512 tokens
Dropout: 0
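
For reference, the architecture hyperparameters above can be captured in a small config object; a sketch with illustrative names (not the project's actual classes):

```python
from dataclasses import dataclass

@dataclass
class GPTConfig:
    n_layer: int = 8          # transformer blocks
    n_head: int = 8           # attention heads
    n_embd: int = 512         # embedding / model dimension
    ffn_dim: int = 2048       # feed-forward hidden size (SwiGLU activation)
    vocab_size: int = 8_000   # bpe-8k tokenizer
    block_size: int = 512     # context length in tokens
    dropout: float = 0.0

config = GPTConfig()
assert config.n_embd % config.n_head == 0   # head dim = 512 / 8 = 64
```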

Training Configuration

Optimizer: AdamW
Learning Rate: 0.003
LR Min: 0.0003
LR Schedule: Cosine decay
Warmup Steps: 200
Batch Size: 32
Grad Accum Steps: 1
Effective Batch: 32
Grad Clip: 1
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-8k
Seed: 42
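
The learning-rate settings describe a linear warmup over 200 steps followed by a cosine decay from 0.003 down to the 0.0003 floor over the 100,000-step run; a small sketch of one common way to compute the per-step rate (the trainer's exact formula may differ):

```python
import math

def learning_rate(step, max_lr=3e-3, min_lr=3e-4, warmup=200, total_steps=100_000):
    """Linear warmup for `warmup` steps, then cosine decay from max_lr to min_lr."""
    if step < warmup:
        return max_lr * (step + 1) / warmup
    progress = (step - warmup) / (total_steps - warmup)   # 0 -> 1 over the decay phase
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(learning_rate(0), learning_rate(199), learning_rate(100_000 - 1))
```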

Layer Structure

Token Embed: 8,000 × 512
Pos Embed: 512 × 512
Blocks 0–7: Attn + FFN (8 transformer blocks)
Final LayerNorm: 512
LM Head: 512 × 8,000
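
A minimal sketch of how tokens flow through that stack, assuming learned positional embeddings and pre-norm decoder blocks (module names are illustrative; nn.TransformerEncoderLayer is used only to keep the sketch short, whereas the actual model's FFN uses SwiGLU):

```python
import torch
import torch.nn as nn

class TinyGPT(nn.Module):
    def __init__(self, vocab_size=8_000, n_embd=512, n_layer=8, n_head=8,
                 ffn_dim=2048, block_size=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, n_embd)   # Token Embed: 8,000 x 512
        self.pos_emb = nn.Embedding(block_size, n_embd)   # Pos Embed:   512 x 512
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(n_embd, n_head, ffn_dim, dropout=0.0,
                                       batch_first=True, norm_first=True)
            for _ in range(n_layer)                        # Blocks 0-7: Attn + FFN
        ])
        self.ln_f = nn.LayerNorm(n_embd)                   # Final LayerNorm
        self.lm_head = nn.Linear(n_embd, vocab_size, bias=False)  # LM Head: 512 x 8,000

    def forward(self, idx):
        B, T = idx.shape
        pos = torch.arange(T, device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)
        # Causal mask: position t may only attend to positions <= t.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=idx.device), diagonal=1)
        for block in self.blocks:
            x = block(x, src_mask=mask)
        return self.lm_head(self.ln_f(x))                  # logits over the 8,000-token vocab
```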

Generated Samples

No samples generated yet. Samples appear at configured intervals during training.
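
When samples do appear, they are typically produced by autoregressive decoding from the current weights; a minimal temperature-sampling sketch (the function name and temperature value are illustrative, not the trainer's actual settings):

```python
import torch

@torch.no_grad()
def generate(model, idx, max_new_tokens=100, temperature=0.8, block_size=512):
    """Sample tokens one at a time, feeding each prediction back into the context."""
    for _ in range(max_new_tokens):
        idx_cond = idx[:, -block_size:]                    # crop to the context length
        logits = model(idx_cond)[:, -1, :] / temperature   # next-token logits
        probs = torch.softmax(logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)
        idx = torch.cat([idx, next_token], dim=1)
    return idx
```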

Checkpoints

No checkpoints saved yet.

Generated by Alpha Training System · Config hash: