Alpha Training System
Run status: stale

super_chat_20260306203249_2vw9

6.90M-parameter super_chat model — bpe-chat-4k tokenizer, 6L/256D/8H

Overview

Parameters: 6.90M
Final Loss: 6.5805
Best Val Loss: -
Perplexity: 720.9
Tokens Processed: 528,384
Tokens/Param: 0.1
Avg Throughput: 8,290 tok/s
Training Time: 4m 2s
Training Progress: 258 / 50,000 steps (0.5%)
Loss reduced by 21.2% from the initial 8.3531.
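
These figures are mutually consistent: perplexity is exp(cross-entropy loss), and the loss reduction follows from the two reported loss values. A quick check (illustrative Python, not part of the training pipeline):

    import math

    initial_loss = 8.3531
    final_loss = 6.5805

    perplexity = math.exp(final_loss)                       # exp(6.5805) ~ 720.9
    reduction = (initial_loss - final_loss) / initial_loss  # ~ 0.212

    print(f"perplexity: {perplexity:.1f}")     # 720.9
    print(f"loss reduction: {reduction:.1%}")  # 21.2%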

Dataset & Training

Domain: super_chat
Tokenizer: bpe-chat-4k
Total Iterations: 50,000
Batch Size: 4
Context Length: 512 tokens
Tokens per Batch: 2,048
Dataset Passes: ~1
Effective Tokens: 528,384
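
The token accounting follows directly from the batch geometry above; the short check below (illustrative, using only values from this table) reproduces the reported figures:

    batch_size = 4
    context_length = 512
    steps_completed = 258

    tokens_per_batch = batch_size * context_length         # 4 * 512 = 2,048
    effective_tokens = steps_completed * tokens_per_batch  # 258 * 2,048 = 528,384

    print(tokens_per_batch, effective_tokens)  # 2048 528384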

Training Pipeline

Warmup (steps 11)

Learning-rate warmup: model weights are adjusting to the data distribution.

Loss: 8.353. Linear LR warmup, gradient clipping.
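
At step 258 the run is still early in the 2,000-step warmup ramp. Below is a minimal sketch of the implied schedule: linear warmup to the peak LR, then cosine decay to the floor, using values from the training configuration further down (the function is illustrative; the helios backend's actual implementation is not shown in this report):

    import math

    def lr_at(step, max_lr=6e-4, min_lr=6e-5, warmup=2_000, total=50_000):
        """Linear warmup followed by cosine decay, per the run configuration."""
        if step < warmup:
            return max_lr * (step + 1) / warmup            # linear ramp
        progress = (step - warmup) / (total - warmup)      # 0 -> 1 over the decay
        return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

    print(f"{lr_at(258):.2e}")  # ~7.77e-05, still ramping toward 6e-4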

Training Metrics

Charts (not rendered in this export): Loss Curve, Smoothed Loss, Perplexity, Learning Rate, Gradient Norm, Throughput (tok/s), Timing Breakdown.
No telemetry recorded.

Model Architecture

Model Configuration

Architecture: GPT (decoder-only transformer)
Parameters: 6.90M
Layers: 6
Embedding Dim: 256
Attention Heads: 8
Head Dim: 32
FFN Dim: 1,024
FFN Activation: GELU
Vocab Size: 4,000
Context Length: 512 tokens
Dropout: 0
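
The parameter total can be reproduced from these dimensions. The sketch below assumes untied input/output embeddings and biased attention/FFN projections (assumptions; the report states neither):

    vocab, d, ctx, n_layers, ffn = 4_000, 256, 512, 6, 1_024

    tok_embed = vocab * d                    # 1,024,000
    pos_embed = ctx * d                      #   131,072
    attn  = 4 * (d * d + d)                  # Q, K, V, O projections with biases
    mlp   = (d * ffn + ffn) + (ffn * d + d)  # two FFN linear layers
    norms = 2 * 2 * d                        # two LayerNorms (weight + bias) per block
    block = attn + mlp + norms               #   789,760
    head  = d * vocab                        # 1,024,000 (untied, no bias)

    total = tok_embed + pos_embed + n_layers * block + 2 * d + head
    print(f"{total/1e6:.2f}M")  # 6.92M; close to the reported 6.90M,
                                # small gap comes from the bias/tying assumptions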

Training Configuration

Optimizer: AdamW
Learning Rate: 0.0006
LR Min: 0.00006
LR Schedule: cosine decay
Warmup Steps: 2,000
Batch Size: 4
Grad Accum Steps: 4
Effective Batch: 16
Grad Clip: 1.0
Weight Decay: 0.1
Backend: helios
Tokenizer: bpe-chat-4k
Seed: 42
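
Taken together, these settings mean each optimizer update averages gradients over 4 micro-batches (effective batch 16) and clips the global gradient norm at 1.0. A minimal PyTorch-style sketch of that update loop (an illustrative stand-in; the helios backend itself is not shown in this report):

    import torch
    from torch import nn

    model = nn.Linear(8, 8)  # toy stand-in for the real model
    data = [(torch.randn(4, 8), torch.randn(4, 8)) for _ in range(8)]

    accum_steps = 4  # 4 micro-batches of 4 -> effective batch 16
    opt = torch.optim.AdamW(model.parameters(), lr=6e-4, weight_decay=0.1)

    for i, (x, y) in enumerate(data):
        loss = nn.functional.mse_loss(model(x), y) / accum_steps  # average across accumulation
        loss.backward()
        if (i + 1) % accum_steps == 0:
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # grad clip 1.0
            opt.step()
            opt.zero_grad()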

Layer Structure

Token Embed: 4,000 × 256
Pos Embed: 512 × 256
Blocks 0–5: Attn + FFN (×6)
Final LayerNorm: 256
LM Head: 256 × 4,000
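
The diagram corresponds to a standard decoder-only stack. A minimal PyTorch sketch with the same shapes (pre-norm placement, an untied head, and the omitted causal mask are simplifying assumptions, not details confirmed by this report):

    import torch
    from torch import nn

    class Block(nn.Module):
        """One transformer block: self-attention + FFN, pre-norm."""
        def __init__(self, d=256, heads=8, ffn=1024):
            super().__init__()
            self.ln1 = nn.LayerNorm(d)
            self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d)
            self.mlp = nn.Sequential(nn.Linear(d, ffn), nn.GELU(), nn.Linear(ffn, d))

        def forward(self, x):
            h = self.ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]  # causal mask omitted for brevity
            return x + self.mlp(self.ln2(x))

    class GPT(nn.Module):
        def __init__(self, vocab=4_000, d=256, ctx=512, n_layers=6):
            super().__init__()
            self.tok = nn.Embedding(vocab, d)            # Token Embed: 4,000 x 256
            self.pos = nn.Embedding(ctx, d)              # Pos Embed: 512 x 256
            self.blocks = nn.ModuleList(Block(d) for _ in range(n_layers))
            self.ln_f = nn.LayerNorm(d)                  # final LayerNorm: 256
            self.head = nn.Linear(d, vocab, bias=False)  # LM Head: 256 x 4,000

        def forward(self, idx):
            x = self.tok(idx) + self.pos(torch.arange(idx.size(1)))
            for block in self.blocks:
                x = block(x)
            return self.head(self.ln_f(x))

    logits = GPT()(torch.randint(0, 4_000, (1, 16)))  # -> shape (1, 16, 4000)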

Generated Samples

No samples generated yet. Samples appear at configured intervals during training.

Checkpoints

No checkpoints saved yet.

Generated by Alpha Training System.