
concordance-v3

97.65M-parameter concordance model (bpe-8k tokenizer, 12L/768D/12H)

Overview

Parameters: 97.65M
Final Loss: 7.2879
Best Val Loss: 7.5745
Perplexity: 1,462.5
Tokens Processed: 4,300,800
Tokens/Param: ~0.04
Avg Throughput: 2,003 tok/s
Training Time: 35m 49s
Training Progress: 525 / 100,000 steps (0.5%)
Loss reduced by 20.4% from initial 9.1599
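
The derived figures above follow from the raw counts. A quick check, assuming perplexity is the exponential of the reported cross-entropy loss (the convention that reproduces the 1,462.5 figure):

```python
import math

# Reported Overview values for this run
final_loss = 7.2879        # cross-entropy, nats per token
initial_loss = 9.1599
params = 97.65e6
tokens_processed = 4_300_800

print(f"perplexity     ~ {math.exp(final_loss):.1f}")            # ~1462.5
print(f"tokens/param   ~ {tokens_processed / params:.3f}")       # ~0.044
print(f"loss reduction ~ {1 - final_loss / initial_loss:.1%}")   # ~20.4%
```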

Dataset & Training

Domain: concordance
Tokenizer: bpe-8k
Total Iterations: 100,000
Batch Size: 16
Context Length: 512 tokens
Tokens per Batch: 8,192
Dataset Passes: ~5
Effective Tokens: 4,300,800
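
The token counts in this table are simple products of the settings above. The dataset size itself is not reported, so the last line below only back-solves an approximate figure from the ~5 passes:

```python
batch_size = 16
context_length = 512
steps_completed = 525
dataset_passes = 5                                          # reported as ~5

tokens_per_batch = batch_size * context_length              # 8,192
effective_tokens = steps_completed * tokens_per_batch       # 4,300,800
approx_dataset_tokens = effective_tokens / dataset_passes   # ~860K, implied rather than reported

print(tokens_per_batch, effective_tokens, round(approx_dataset_tokens))
```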

Training Pipeline

Warmup (steps 1-1)

Learning rate warmup: model weights adjusting to the data distribution.

Loss: 9.160. Linear LR warmup, gradient clipping.
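
A minimal sketch of the schedule this stage describes, using the values from the Training Configuration section below (linear warmup for 200 steps, then cosine decay from 0.001 to 0.0001 over 100,000 steps). The exact formula used by the helios backend is not shown here, so treat this as illustrative:

```python
import math

def lr_at(step, base_lr=1e-3, min_lr=1e-4, warmup_steps=200, total_steps=100_000):
    """Linear warmup to base_lr, then cosine decay to min_lr."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

# LR at the first step, end of warmup, the run's current step (525), and the final step
print(lr_at(0), lr_at(199), lr_at(525), lr_at(99_999))
```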

Training Metrics

Panels: Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), Timing Breakdown (no telemetry recorded).

Model Architecture

Model Configuration

Architecture: GPT (decoder-only transformer)
Parameters: 97.65M
Layers: 12
Embedding Dim: 768
Attention Heads: 12
Head Dim: 64
FFN Dim: 3072
FFN Activation: swiglu
Vocab Size: 8,000
Context Length: 512 tokens
Dropout: 0
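
The same hyperparameters as a config object, useful when reproducing the architecture elsewhere. Field names here are illustrative; they are not the helios backend's actual schema:

```python
from dataclasses import dataclass

@dataclass
class GPTConfig:
    n_layers: int = 12
    d_model: int = 768
    n_heads: int = 12
    d_head: int = 64            # d_model // n_heads
    d_ffn: int = 3072
    ffn_activation: str = "swiglu"
    vocab_size: int = 8_000
    context_length: int = 512
    dropout: float = 0.0

cfg = GPTConfig()
assert cfg.d_model == cfg.n_heads * cfg.d_head   # 768 = 12 x 64
```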

Training Configuration

Optimizer: adamw
Learning Rate: 0.001
LR Min: 0.0001
LR Schedule: Cosine decay
Warmup Steps: 200
Batch Size: 16
Grad Accum Steps: 1
Effective Batch: 16
Grad Clip: 1.0
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-8k
Seed: 42
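
A PyTorch-style sketch of these optimizer settings (AdamW, weight decay 0.1, gradient clipping at 1.0). The run's actual backend is helios, so this only illustrates the hyperparameters; the tiny linear layer stands in for the real 97.65M-parameter model:

```python
import torch

torch.manual_seed(42)                              # seed from the table above
model = torch.nn.Linear(768, 8_000)                # stand-in for the GPT model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.1)

x = torch.randn(16, 768)                           # batch size 16
targets = torch.randint(0, 8_000, (16,))
for step in range(3):                              # a few illustrative steps
    loss = torch.nn.functional.cross_entropy(model(x), targets)
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)   # grad clip 1.0
    optimizer.step()
```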

Layer Structure

Token Embed: 8,000×768
Pos Embed: 512×768
Blocks 0-11: Attn+FFN (12 transformer blocks)
Final LayerNorm: 768
LM Head: 768×8,000
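
A back-of-the-envelope parameter count for the stack above. This assumes untied input and output embeddings (consistent with the separate LM Head entry), no attention or FFN bias terms, and a plain two-matrix FFN; a three-matrix SwiGLU projection would give a larger total, so the breakdown is an estimate under those assumptions:

```python
vocab, d_model, d_ffn, n_layers, ctx = 8_000, 768, 3_072, 12, 512

token_embed = vocab * d_model                 # 6,144,000
pos_embed   = ctx * d_model                   #   393,216
attn        = 4 * d_model * d_model           # q, k, v, out projections
ffn         = 2 * d_model * d_ffn             # up + down projections
layernorms  = 2 * 2 * d_model                 # two LayerNorms per block (weight + bias)
blocks      = n_layers * (attn + ffn + layernorms)
final_ln    = 2 * d_model
lm_head     = d_model * vocab                 # 6,144,000, untied from the token embedding

total = token_embed + pos_embed + blocks + final_ln + lm_head
print(f"{total:,}")                           # 97,654,272, close to the reported 97.65M
```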

Generated Samples

No samples generated yet. Samples appear at configured intervals during training.

Checkpoints

No checkpoints saved yet.

Generated by the Alpha Training System.