
Update eval (per-split sampling, verbose mode); align save_steps with eval; drop DeepSeek from README

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

+26 -21
+3 -12
README.md
````diff
@@ -3,10 +3,7 @@
 Fine-tuning vision-language models to transcribe mathematical expressions and
 document fragments into [Typst](https://typst.app) notation.
 
-Two model targets are supported:
-
-- **Gemma 4 E2B** (`src/train.py`) -- Unsloth QLoRA, faster iteration
-- **DeepSeek-OCR-2** (`src/train_deepseek.py`) -- 3B model, stronger baseline OCR
+Model: **Gemma 4 E2B** (`src/train.py`) -- Unsloth QLoRA fine-tuning.
 
 ---
@@ -42,7 +39,7 @@
 | `crohme_gen_2023` | ~2,682 | none | Generated CROHME-style images, 2023 grammar. |
 | `crohme_gen_syntactic` | ~15,653 | none | Syntactically diverse generated CROHME expressions. |
 
-Caps are applied in `train_deepseek.py` to prevent synthetic data from
+Caps are applied in `train.py` to prevent synthetic data from
 dominating training. After capping the effective mix is:
 
 - Real/semi-real handwriting: ~43k (34%)
@@ -149,17 +146,11 @@
 ### Train
 
 ```bash
-# DeepSeek-OCR-2 (recommended for 12 GB VRAM)
-uv run train-deepseek --smoke-test  # validate forward+backward first
-uv run train-deepseek
-
-# Gemma 4 E2B (via Unsloth)
 uv run train
 ```
 
 ### Evaluate
 
 ```bash
-uv run evaluate
-uv run probe-deepseek --n 10
+uv run evaluate --checkpoint checkpoints/gemma-4-e2b/final --n 100
 ```
````
+21 -7
src/eval.py
```diff
@@ -36,14 +36,20 @@
     return decoded.strip()
 
 
-def evaluate(checkpoint: str, batch_size: int = 8, n: int | None = None) -> float:
+def evaluate(checkpoint: str, batch_size: int = 8, n: int | None = None, verbose: bool = False) -> float:
     model, processor = FastVisionModel.from_pretrained(checkpoint, load_in_4bit=True)
     FastVisionModel.for_inference(model)
     model.eval()
 
-    records = load_records(TEST_SPLITS, dedupe=False)
+    import random
     if n is not None:
-        records = records[:n]
+        rng = random.Random(42)
+        records = []
+        for split in TEST_SPLITS:
+            split_recs = load_records([split], dedupe=False)
+            records.extend(rng.sample(split_recs, min(n, len(split_recs))))
+    else:
+        records = load_records(TEST_SPLITS, dedupe=False)
     correct = 0
 
     from PIL import Image
@@ -75,11 +81,18 @@
         decoded = processor.decode(out[0], skip_special_tokens=False)
         pred = extract_assistant(decoded)
 
-        if normalize(pred) == normalize(r["typst"]):
+        match = normalize(pred) == normalize(r["typst"])
+        if match:
             correct += 1
+        if verbose:
+            status = "OK" if match else "FAIL"
+            print(f"[{status}] split={r['split']}")
+            print(f"  GT:   {r['typst']}")
+            print(f"  PRED: {pred}")
 
     exprate = correct / len(records)
-    print(f"ExpRate: {exprate:.4f} ({correct}/{len(records)})")
+    split_label = f" ({n}/split)" if n is not None else ""
+    print(f"ExpRate{split_label}: {exprate:.4f} ({correct}/{len(records)})")
     return exprate
 
 
@@ -87,9 +100,10 @@
     parser = argparse.ArgumentParser()
     parser.add_argument("--checkpoint", default="checkpoints/baseline/final")
    parser.add_argument("--batch-size", type=int, default=8)
-    parser.add_argument("--n", type=int, default=None, help="Evaluate only first N examples")
+    parser.add_argument("--n", type=int, default=None, help="Evaluate N random examples per split")
+    parser.add_argument("--verbose", action="store_true", help="Print GT and predicted label for each example")
     args = parser.parse_args()
-    evaluate(args.checkpoint, args.batch_size, args.n)
+    evaluate(args.checkpoint, args.batch_size, args.n, args.verbose)
 
 
 if __name__ == "__main__":
```
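The per-split sampling this commit introduces can be sketched in isolation. The snippet below is illustrative only (`sample_per_split` and the split names are hypothetical, not from the repo): a fixed-seed `random.Random` draws up to `n` records from each split, so small splits contribute everything they have and runs are reproducible.

```python
import random

def sample_per_split(records_by_split: dict, n: int, seed: int = 42) -> list:
    """Draw up to n records from each split with a fixed seed,
    mirroring the per-split sampling added to evaluate()."""
    rng = random.Random(seed)
    sampled = []
    for split, recs in records_by_split.items():
        # rng.sample raises if n > len(recs), hence the min() clamp
        sampled.extend(rng.sample(recs, min(n, len(recs))))
    return sampled

# A split smaller than n is included in full rather than skipped.
splits = {"crohme_2019": list(range(100)), "tiny_split": list(range(5))}
picked = sample_per_split(splits, n=10)
print(len(picked))  # 15 (10 from the large split, all 5 from the tiny one)
```

Note the trade-off versus the old `records[:n]` slice: the reported ExpRate is now over a balanced sample, so it is not directly comparable to earlier unsampled runs — which is presumably why the printed label gains the `({n}/split)` suffix.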
+2 -2
src/train.py
```diff
@@ -109,8 +109,8 @@
         logging_steps=50,
         eval_strategy="steps",
         eval_steps=500,
-        save_steps=100,
-        save_total_limit=10,
+        save_steps=500,
+        save_total_limit=5,
         load_best_model_at_end=False,
         output_dir=out_dir,
         run_name="gemma-4-e2b",
```
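The point of aligning `save_steps` with `eval_steps` is that every saved checkpoint then lands on a step that also has an eval metric, so checkpoints can be ranked straight from the eval history. A quick sanity check of that arithmetic (illustrative, not repo code):

```python
# With save_steps a multiple of eval_steps, every checkpoint step is also
# an eval step, so each saved checkpoint has a metric logged at that step.
eval_steps, save_steps = 500, 500
total_steps = 3000  # hypothetical run length for illustration

save_points = {s for s in range(1, total_steps + 1) if s % save_steps == 0}
eval_points = {s for s in range(1, total_steps + 1) if s % eval_steps == 0}

assert save_points <= eval_points
print(sorted(save_points))  # [500, 1000, 1500, 2000, 2500, 3000]
```

The old settings (`save_steps=100`, `eval_steps=500`) saved four out of five checkpoints at steps with no eval metric; lowering `save_total_limit` to 5 keeps disk usage roughly constant despite the longer save interval.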