Status: stale
concordance_en_55m_20260310175221_6ahj
97.65M-parameter GPT model (domain unknown), bpe-8k-chat tokenizer, 12 layers / 768 embedding dim / 12 attention heads
Overview
Parameters: 97.65M
Final Loss: 7.8335
Best Val Loss: -
Perplexity: 2523.7
Tokens Processed: 512,000
Tokens/Param: 0.0
Avg Throughput: 2,042 tok/s
Training Time: 16m 43s
Training Progress: 125 / 5,000 steps (2.5%)
Loss reduced by 14.3% from initial 9.1448
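The headline figures are internally consistent; a minimal sketch that re-derives them from the numbers shown above (not part of the training system itself):

```python
import math

# Values taken directly from the Overview panel.
final_loss = 7.8335
initial_loss = 9.1448
params = 97_650_000          # 97.65M
tokens_processed = 512_000

perplexity = math.exp(final_loss)                             # ~2524, matching the 2523.7 shown
tokens_per_param = tokens_processed / params                  # ~0.005, displayed rounded as 0.0
loss_reduction = (initial_loss - final_loss) / initial_loss   # ~14.3%

print(f"perplexity       ~ {perplexity:,.1f}")
print(f"tokens per param ~ {tokens_per_param:.4f}")
print(f"loss reduction   ~ {loss_reduction:.1%}")
```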
Dataset & Training
Domain: unknown
Tokenizer: bpe-8k-chat
Total Iterations: 5,000
Batch Size: 8
Context Length: 512 tokens
Tokens per Batch: 4,096
Dataset Passes: ~1
Effective Tokens: 512,000
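How the token counts relate, as a sketch using the values in this table (it assumes the step counter advances once per 4,096-token batch, i.e. gradient accumulation is not folded into these token counts):

```python
# Token arithmetic from the Dataset & Training values above.
batch_size = 8
context_length = 512
steps_so_far = 125
total_iterations = 5_000

tokens_per_batch = batch_size * context_length          # 8 * 512 = 4,096
tokens_so_far = steps_so_far * tokens_per_batch         # 125 * 4,096 = 512,000 (matches Effective Tokens)
tokens_if_completed = total_iterations * tokens_per_batch  # what the full 5,000-step plan would consume

print(tokens_per_batch, tokens_so_far, tokens_if_completed)
```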
Training Pipeline
Warmup: steps 1–50
Learning-rate warmup; model weights adjust to the data distribution.
Loss: 9.145 → 8.057 (linear LR warmup, gradient clipping)
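The pipeline lists gradient clipping alongside the LR warmup; below is a minimal PyTorch-style sketch of one training step with the clip applied before the optimizer update. The model, optimizer, and batch names are placeholders rather than this system's actual code (the backend here is helios), and the max norm of 1.0 comes from the Grad Clip setting in the Training Configuration below.

```python
import torch

def training_step(model, optimizer, input_ids, targets, max_grad_norm=1.0):
    """One training step with global-norm gradient clipping before the update."""
    logits = model(input_ids)                                # (batch, seq, vocab)
    loss = torch.nn.functional.cross_entropy(
        logits.view(-1, logits.size(-1)), targets.view(-1)
    )
    optimizer.zero_grad()
    loss.backward()
    # Clip the global gradient norm to 1.0 (the "Grad Clip: 1" setting).
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()
```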
Training Metrics
Charts (no telemetry recorded yet): Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), Timing Breakdown.
Model Architecture
Model Configuration
Architecture: GPT (decoder-only transformer)
Parameters: 97.65M
Layers: 12
Embedding Dim: 768
Attention Heads: 12
Head Dim: 64
FFN Dim: 3072
FFN Activation: GELU
Vocab Size: 8,000
Context Length: 512 tokens
Dropout: 0
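The 97.65M parameter count can be reproduced from this configuration, assuming bias-free linear layers, LayerNorms with scale and shift, and an untied LM head (assumptions, since the implementation is not shown here):

```python
# Parameter-count sketch from the Model Configuration values above.
vocab, d_model, n_layers, d_ff, ctx = 8_000, 768, 12, 3_072, 512

tok_embed = vocab * d_model            # 6,144,000
pos_embed = ctx * d_model              #   393,216
attn      = 4 * d_model * d_model      # Q, K, V, and output projections (no biases assumed)
ffn       = 2 * d_model * d_ff         # up- and down-projection (no biases assumed)
lns       = 2 * 2 * d_model            # two LayerNorms (weight + bias) per block
per_block = attn + ffn + lns           # 7,080,960
final_ln  = 2 * d_model
lm_head   = d_model * vocab            # 6,144,000 (untied head assumed)

total = tok_embed + pos_embed + n_layers * per_block + final_ln + lm_head
print(f"{total:,} parameters (~{total / 1e6:.2f}M)")   # 97,654,272 (~97.65M)
```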
Training Configuration
Optimizer: AdamW
Learning Rate: 0.0003
LR Min: 0.00003
LR Schedule: Cosine decay
Warmup Steps: 500
Batch Size: 8
Grad Accum Steps: 4
Effective Batch: 32
Grad Clip: 1
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-8k-chat
Seed: 42
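A sketch of the schedule these settings imply: linear warmup for 500 steps, then cosine decay from 3e-4 down to the 3e-5 floor over the remaining steps. This is one common formulation; the helios backend's exact schedule may differ.

```python
import math

def lr_at(step, max_lr=3e-4, min_lr=3e-5, warmup_steps=500, total_steps=5_000):
    """Linear warmup to max_lr, then cosine decay to min_lr."""
    if step < warmup_steps:
        return max_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

# Effective batch: 8 sequences per micro-batch x 4 gradient-accumulation steps = 32.
effective_batch = 8 * 4
print(lr_at(0), lr_at(500), lr_at(4_999), effective_batch)
```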
Layer Structure
Token Embed: 8,000 × 768
Pos Embed: 512 × 768
Blocks 0–11 (×12): Attn + FFN
Final LayerNorm: 768
LM Head: 768 × 8,000
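A rough shape trace through the layer structure above, as a sketch with random weights just to show how the listed tensor shapes chain together (the block internals are summarized in a comment):

```python
import numpy as np

batch, seq, d_model, vocab = 2, 512, 768, 8_000

token_ids = np.random.randint(0, vocab, size=(batch, seq))
tok_embed = np.random.randn(vocab, d_model)     # Token Embed: 8,000 x 768
pos_embed = np.random.randn(seq, d_model)       # Pos Embed:     512 x 768

x = tok_embed[token_ids] + pos_embed            # (batch, 512, 768)
# ... 12 blocks of Attn + FFN, each preserving the (batch, 512, 768) shape ...
lm_head = np.random.randn(d_model, vocab)       # LM Head: 768 x 8,000
logits = x @ lm_head                            # (batch, 512, 8000)
print(logits.shape)
```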
Generated Samples
No samples generated yet. Samples appear at configured intervals during training.
Checkpoints
No checkpoints saved yet.
Generated by the Alpha Training System.