Why?
How is this any better than a rules-based engine like https://github.com/mixmark-io/turndown?
Why use something that's a few orders of magnitude slower, like reader-lm?
They also used turndown, and they answer precisely that question in their blog post: https://jina.ai/news/reader-lm-small-language-models-for-cleaning-and-converting-html-to-markdown/
> At first glance, using LLMs for data cleaning might seem excessive due to their low cost-efficiency and slower speeds. But what if we're considering a small language model (SLM), one with fewer than 1 billion parameters that can run efficiently on the edge? That sounds much more appealing, right? But is this truly feasible or just wishful thinking?
You can compare the LM and rule-based approaches side by side here: https://huggingface.co/spaces/maxiw/HTML-to-Markdown
I wonder if "hallucinations" being a failure mode for LLMs means the LLM-based and rules-based approaches hit drastically different edge cases in practice.
(I say this having tried the q4 1.5B model in Ollama and gotten hallucinated output for pasted HTML. I haven't tried the API version; perhaps the chat template mangled the input.)
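If anyone wants to rule out the chat template, Ollama's generate endpoint has a raw mode that skips prompt templating entirely. A minimal sketch, assuming the model was pulled under the tag reader-lm:1.5b (adjust to whatever tag you actually used):

    import requests

    # Sketch: post HTML straight to Ollama's /api/generate with raw=True,
    # bypassing the chat template so the model sees the HTML verbatim.
    # Assumes a local Ollama server; "reader-lm:1.5b" is an assumed tag.
    html = "<html><body><h1>Hello</h1><p>world</p></body></html>"

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "reader-lm:1.5b",  # assumed model tag
            "prompt": html,
            "raw": True,      # apply no template to the prompt
            "stream": False,  # return one JSON object instead of a stream
        },
        timeout=120,
    )
    print(resp.json()["response"])

If the raw call comes back clean, the mangling happened in the chat layer rather than in the model.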
It has a ton of use cases.
Existing approaches are fragile and a pain to maintain. If this works, being able to deploy a small model inside a container and run it basically anywhere with a few lines of code (see the sketch below)... that's much more attractive in lots of data-processing pipelines, especially if you don't massively care about speed.
Also, best of luck handling multiple languages The Old Way. It's a nightmare wrapped in pain.
I'm about to try it... will be interesting to see if it works.
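To make "a few lines of code" concrete, here's a rough sketch of a pipeline step, assuming the weights live on Hugging Face under jinaai/reader-lm-1.5b (an assumed repo id; swap in whatever checkpoint you actually deploy):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical pipeline step: raw HTML in, Markdown out, fully local.
    # The checkpoint id below is an assumption, not a confirmed repo name.
    MODEL = "jinaai/reader-lm-1.5b"

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL)

    def html_to_markdown(html: str) -> str:
        # The model is chat-tuned, so the HTML goes in as a user message.
        messages = [{"role": "user", "content": html}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        )
        # Greedy decoding (do_sample=False) keeps the output deterministic.
        outputs = model.generate(inputs, max_new_tokens=1024, do_sample=False)
        return tokenizer.decode(
            outputs[0][inputs.shape[-1]:], skip_special_tokens=True
        )

    print(html_to_markdown("<h1>Hello</h1><p>world</p>"))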
> (I say this having tried the q4 1.5B model in Ollama and gotten hallucinated output for pasted HTML. I haven't tried the API version; perhaps the chat template mangled the input.)
temperature = 0 ?