I’ve also heard very good things about these two in particular:
- LightOnOCR-2-1B: https://huggingface.co/lightonai/LightOnOCR-2-1B
- PaddleOCR-VL-1.5: https://huggingface.co/PaddlePaddle/PaddleOCR-VL-1.5
The OCR leaderboards I’ve seen leave a lot to be desired.
With the rapid release of so many of these models, I wish there were a better way to know which ones are actually the best.
I also feel like most/all of these models don’t handle charts, other than to maybe include a link to a cropped image. It would be nice for the OCR model to also convert charts into markdown tables, but this is obviously challenging.
philipkglass 34 minutes ago [-]
I have been trying to catch up with recent OCR developments too. My documents have enough special requirements that public benchmarks didn't tell me enough to decide. Instead I'm building a small document OCR project with visualization tools for comparing bounding boxes, extracted text, region classification, etc. GLM-OCR is my favorite so far [1]. Apple's VisionKit is very good at text recognition, and fast, but it doesn't do high level layout detection and it only works on Apple hardware. It's another useful source of data for cross-validation if you can run it.
This project has been pretty easy to build with agentic coding. It's a Frankenstein monster of glue code and handling my particular domain requirements, so it's not suitable for public release. I'd encourage some rapid prototyping after you've spent an afternoon or so catching up on what's new. I did a lot of document OCR and post-processing with commercial tools and custom code 15 years ago. The advent of small local VLMs has made it practical to achieve higher accuracy and more domain customization than I would have previously believed.
[1] If you're building an advanced document processing workflow, be sure to read the post-processing code in the GLM code repo. They're doing some non-trivial logic to fuse layout areas and transform text for smooth reading. You probably want to store the raw model results and customize your own post-processing for uncommon languages or uncommon domain vocabulary. Layout is also easier to validate if you bypass their post-processing; it can make some combined areas "disappear" from the layout data.
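For anyone starting from scratch: the cross-validation piece can be as simple as matching boxes from two engines by IoU and diffing the text. A minimal sketch (the box format here is made up; adapt it to whatever your engines emit):

    # Rough sketch: match boxes from two OCR engines by IoU and flag text mismatches.
    # Box format is assumed to be (x0, y0, x1, y1, text) in page-normalized coordinates.

    def iou(a, b):
        """Intersection-over-union of two (x0, y0, x1, y1) rectangles."""
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    def cross_validate(boxes_a, boxes_b, min_iou=0.5):
        """Yield (text_a, text_b) pairs where overlapping boxes disagree on text."""
        for xa0, ya0, xa1, ya1, text_a in boxes_a:
            rect_a = (xa0, ya0, xa1, ya1)
            best = max(boxes_b, key=lambda b: iou(rect_a, b[:4]), default=None)
            if best and iou(rect_a, best[:4]) >= min_iou and text_a != best[4]:
                yield text_a, best[4]

The disagreements are usually a small fraction of the boxes, so they are cheap to eyeball.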
dweekly 7 minutes ago [-]
I'm going to be the obnoxious person who asks you to please create this leaderboard because you care and have a modicum of knowledge in this space.
mixedmath 34 minutes ago [-]
Are there leaderboards that you follow or trust?
Also, do you have preferred OCR models in your experience? I've had some success with dots.OCR, but I'm only beginning to need to work with OCR.
coder543 30 minutes ago [-]
> Are there leaderboards that you follow or trust?
Not for OCR.
Regardless of how much some people complain about them, I really do appreciate the effort Artificial Analysis puts into consistently running standardized benchmarks for LLMs, rather than just aggregating unverified claims from the AI labs.
I don't think LMArena is that amazing at this point in time, but at least they provide error bars on the ELO and give models the same rank number when they're overlapping.
> Also, do you have preferred OCR models in your experience?
It's a subject I'm interested in, but I don't have enough experience to really put out strong opinions on specific models.
StableAlkyne 2 hours ago [-]
How do these compare to something like Tesseract?
I remember that one clearing the scoreboard for many years, and usually it's the one I grab for OCR needs due to its reputation.
kergonath 2 hours ago [-]
Tesseract does not understand layout. It’s fine for character recognition, but if I still have to pipe the output to an LLM to make sense of the layout and fix common transcription errors, I might as well use a single model. It’s also easier for a visual LLM to extract figures and tables in one pass.
chaps 2 hours ago [-]
For my workflows, layout extraction has been so inconsistent that I've stopped attempting to use it. It's simpler to just throw everything into postgis and run intersection checks on size-normalized pages.
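Roughly, the idea is something like this (schema and names are only illustrative; it assumes the PostGIS extension is installed and boxes are normalized by page width/height before insert):

    # Sketch of the PostGIS approach: store OCR boxes as geometries on pages
    # normalized to a unit square, then use ST_Intersects to find overlaps.
    import psycopg2

    conn = psycopg2.connect("dbname=ocr")
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE IF NOT EXISTS ocr_boxes (
            id serial PRIMARY KEY,
            doc_id text,
            page int,
            txt text,
            geom geometry(Polygon)   -- box in page-normalized [0,1] coordinates
        )
    """)

    def insert_box(doc_id, page, txt, x0, y0, x1, y1):
        # Normalize pixel coords by page width/height before calling this.
        cur.execute(
            "INSERT INTO ocr_boxes (doc_id, page, txt, geom) "
            "VALUES (%s, %s, %s, ST_MakeEnvelope(%s, %s, %s, %s))",
            (doc_id, page, txt, x0, y0, x1, y1),
        )

    def boxes_intersecting(doc_id, page, x0, y0, x1, y1):
        # e.g. "what text falls inside this region of the form?"
        cur.execute(
            "SELECT txt FROM ocr_boxes WHERE doc_id = %s AND page = %s "
            "AND ST_Intersects(geom, ST_MakeEnvelope(%s, %s, %s, %s))",
            (doc_id, page, x0, y0, x1, y1),
        )
        return [row[0] for row in cur.fetchall()]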
kergonath 1 hours ago [-]
Interesting. What kind of layout do you have?
My documents have one or two-column layouts, often inconsistently across pages or even within a page (which tripped older layout detection methods). Most models seem to understand that well enough so they are good enough for my use case.
chaps 52 minutes ago [-]
Documents that come from FOIA. So, some scanned, some not. Lots of forms and lots of handwriting adding info that the form format doesn't accommodate. Lots of repeated documents, but lots of one-off documents that have high signal.
chaps 2 hours ago [-]
Tesseract v4 when it was released was exceptionally good and blew everything out of the water. Have used it to OCR millions of pages. Tbh, I miss the simplicity of tesseract.
The new models are similarly better compared to Tesseract v4. But what I'll say is: don't expect new models to be a panacea for your OCR problems. The edge-case problems that you might be trying to solve (like identifying anchor points, or identifying shared field names across documents) are still pretty much all problematic. So you should still expect things like random spaces or unexpected characters to jam up your jams.
Also, some newer models tend to hallucinate incredibly aggressively. If you've ever seen an LLM get stuck in an infinite loop, think of that.
ELO scores for OCR don't really make much sense - it's trying to reduce accuracy to a single voting score without any real quality-control on the reviewer/judge.
I think a more accurate reflection of the current state of comparisons would be a real-world benchmark with messy/complex docs across industries and languages.
coder543 1 hours ago [-]
It is missing both models that I mentioned, so yes, I would say one reason it is not accurate is because it is so incomplete.
It also doesn't provide error bars on the ELO, so models that only have tens of battles are being listed alongside models that have thousands of battles with no indication of how confident those ELOs are, which I find rather unhelpful.
A lot of these models are also sensitive to how they are used, and offer multiple ways to be used. It's not clear how they are being invoked.
That leaderboard is definitely one of the ones that leaves a lot to be desired.
alaanor 2 hours ago [-]
There were so many OCR models released in the past few months, all of them VLMs, and yet none of them handle Korean well. Every time I try with a random screenshot (not an A4 document) they just fail at a "simple" task. And funnily enough, Qwen3 8B VL is the best model and usually gets it right (although I couldn't get the bboxes quite right). Even funnier, whatever is running locally on an iPhone CPU is insanely good, same with Google's OCR API. I don't know why we don't get more of the traditional OCR stuff. PaddlePaddle v5 is the closest I could find. At this point, I feel like I might be doing something wrong with those VLMs.
Stagnant 2 hours ago [-]
Chrome ships a local OCR model for text extraction from PDFs which is better than any of the VLM or open source OCR models I've tried. I had a few hundred gigs of old newspaper scans, and after trying all the other options I ended up building a wrapper around the DLL it uses to get the text and bboxes. Performance and accuracy are on another level compared to Tesseract, and while VLMs sometimes produced good results, they just seemed unreliable.
I've thought of open sourcing the wrapper but haven't gotten around to it yet. I bet Claude Code can build a functioning prototype if you just point it to the "screen_ai" dir under Chrome's user data.
mwcampbell 9 minutes ago [-]
What's the name of this DLL? I assume it's separate from the monster chrome.dll, and that the model is proprietary.
zzleeper 54 minutes ago [-]
Surprisingly, I have a few hundred gigs of old newspaper scans so am very curious.
How fast was it per page? Do you recall if it's CPU or GPU based? TY!
ghrl 2 hours ago [-]
I remember someone building a meme search engine for millions of images using a cluster of used iPhone SEs because of Apple's very good and fast OCR capabilities.
Quite an interesting read as well:
https://news.ycombinator.com/item?id=34315782
fzysingularity 2 hours ago [-]
Apple OCR, even on the Mac, is insanely good, in fact way better than AWS Textract or GCP Cloud Vision OCR.
Any idea what model is being used?
AlphaSite 2 hours ago [-]
Probably some custom model built for their hardware.
aliljet 4 hours ago [-]
This is actually the thing I really desperately need. I'm routinely analyzing contracts that were faxed to me, scanned with monstrously poor resolution, wet signed, all kinds of shit. The big LLM providers choke on this raw input and I burn up the entire context window for 30 pages of text. Understandable evals of the quality of these OCR systems (which are moving wicked fast) would be helpful...
And here's the kicker. I can't afford mistakes. Missing a single character or misinterpreting it could be catastrophic. 4 units vacant? 10 days to respond? Signature missing? Incredibly critical things. I can't find an eval that gives me confidence around this.
coder543 3 hours ago [-]
If you want OCR with the big LLM providers, you should probably be passing one page per request. Having the model focus on OCR for only a single page at a time seemed to help a lot in my anecdotal testing a few months ago. You can even pass all the pages in parallel in separate requests, and get the better quality response much faster too.
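As a rough sketch of what I mean (assuming an OpenAI-compatible vision endpoint; the model name and prompt are placeholders, and you'd pass base_url/api_key for your provider):

    # Sketch: OCR one page per request, fanned out in parallel.
    import base64
    from concurrent.futures import ThreadPoolExecutor
    from openai import OpenAI

    client = OpenAI()  # or OpenAI(base_url=..., api_key=...) for other providers

    def ocr_page(png_bytes: bytes) -> str:
        b64 = base64.b64encode(png_bytes).decode()
        resp = client.chat.completions.create(
            model="your-vision-model",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Transcribe this page exactly as written. Output Markdown only."},
                    {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

    def ocr_document(pages: list[bytes], workers: int = 8) -> list[str]:
        # One request per page; results come back in page order.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(ocr_page, pages))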
But, as others said, if you can't afford mistakes, then you're going to need a human in the loop to take responsibility.
staticman2 2 hours ago [-]
Gemini 3 Pro seems to be built for handling multi-page PDFs.
I can feed it a multi-page PDF and tell it to convert it to markdown, and it does this well. I don't need to load the pages one at a time as long as I use the PDF format. (This was tested in AI Studio, but I think the API works the same way.)
coder543 1 hours ago [-]
It's not that they can't do multiple pages... but did you compare against doing one page at a time?
How many pages did you try in a single request? 5? 50? 500?
I fully believe that 5 pages of input works just fine, but this does not scale up to larger documents, and the goal of OCR is usually to know what is actually written on the page... not what "should" have been written on the page. I think a larger number of pages makes it more likely for the LLM to hallucinate as it tries to "correct" errors that it sees, which is not the task. If that is a desirable task, I think it would be better to post-process the document with an LLM after it is converted to text, rather than asking the LLM to both read a large number of images and correct things at the same time, which is asking a lot.
Once the document gets long enough, current LLMs will get lazy and stop providing complete OCR for every page in their response.
One page at a time keeps the LLM focused on the task, and it's easy to parallelize so entire documents can be OCR'd quickly.
HPsquared 3 hours ago [-]
You could maybe then do a second pass on the whole text (as plain text not OCR) to look for likely mistakes.
kergonath 2 hours ago [-]
This is not always easy. The models I tried were too helpful and rewrote too much instead of fixing simple typos. When I tried I ended up with huge prompts and I still found sentences where the LLM was too enthusiastic. I ended up applying regexes with common typos and accepted some residual errors. It might be better now, though. But since then I’ve moved to all-in-one solutions like Mathpix and Mistral-OCR which are quite good for my purpose.
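To give an idea of the sort of regex pass I mean (the substitutions below are only examples; you would build the table from whatever errors your corpus actually produces):

    # Roughly what a regex cleanup pass looks like; the substitution table
    # is an example, not a definitive list of OCR confusions.
    import re

    COMMON_FIXES = [
        (re.compile(r"(\w)-\n(\w)"), r"\1\2"),    # re-join words hyphenated across lines
        (re.compile(r"(?<=\d)O(?=\d)"), "0"),     # 'O' between digits -> '0'
        (re.compile(r"\bl(?=\d)"), "1"),          # lone 'l' glued to digits -> '1'
        (re.compile(r"[ \t]{2,}"), " "),          # collapse runs of spaces/tabs
    ]

    def clean(text: str) -> str:
        for pattern, repl in COMMON_FIXES:
            text = pattern.sub(repl, text)
        return text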
chrsw 3 hours ago [-]
I'm keeping my eye on progress in this area as well. I need to free engineering design data from tens of thousands of PDF pages and make them easily and quickly accessible to LLMs.
aliljet 3 hours ago [-]
All of healthcare is crying. Trust me.
Imustaskforhelp 3 hours ago [-]
I suppose tears of joy?
fragmede 2 hours ago [-]
Of sadness because they're not allowed to use it yet.
daveguy 3 hours ago [-]
If your needs are that sensitive, I doubt you'll find anything anytime soon that doesn't require a human in the loop. Even SOTA models only average 95% accuracy on messy inputs. If that's a per-character accuracy (which OCR is generally measured by), that's going to be 5+ errors per page of 100+ words. If you really can't afford mistakes, you have to consider the OCR inaccurate. If you have key components like "days to respond" and "units vacant", you need to identify the presence of those specifically, with a bias in favor of false positives (over false negatives), and have a human confirm the OCR against the source.
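Concretely, something along these lines, where every critical field on every page (hit or miss) goes to a human; the patterns are placeholders for whatever fields matter in your documents:

    # Sketch: flag critical fields for human review, biased toward false positives.
    import re

    CRITICAL_PATTERNS = {
        "days_to_respond": re.compile(r"(\d+)\s*days?\s*to\s*respond", re.I),
        "units_vacant": re.compile(r"(\d+)\s*units?\s*vacant", re.I),
        "signature": re.compile(r"sign(ed|ature)", re.I),
    }

    def review_queue(pages: list[str]) -> list[dict]:
        """Every critical field on every page goes to a human, hit or miss."""
        queue = []
        for page_no, text in enumerate(pages, start=1):
            for field, pattern in CRITICAL_PATTERNS.items():
                match = pattern.search(text)
                queue.append({
                    "page": page_no,
                    "field": field,
                    "ocr_value": match.group(0) if match else None,  # None = not found
                    "needs_human": True,  # always confirm against the source image
                })
        return queue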
kergonath 2 hours ago [-]
> If you really can't afford mistakes you have to consider the OCR inaccurate.
Isn’t this close to the error rate of human transcription for messy input, though? I seem to remember a figure in that ballpark. I think if your use case is this sensitive, then any transcription is suspicious.
aliljet 1 hours ago [-]
This is precisely the real question. If you're exceeding human transcription, you may be generally pretty good. The question is what happens when you tell a human to be surgical about some part of the document; how does the comparison change then?
renewiltord 34 minutes ago [-]
I’m sure you’ve tried all this, but have you tried inter-rater agreement via multiple attempts on the same LLM vs. different LLMs? Perhaps your system would work better if you ran it through 5 models 3 times and then highlighted diffs for a human chooser.
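For the diff-highlighting step, a line-level sketch (run_model and MODELS are hypothetical placeholders for whatever inference calls you already have):

    # Sketch: run several models a few times each, then surface only the lines
    # where the transcripts disagree so a human can pick.
    from collections import Counter

    def disagreements(transcripts: list[str]) -> list[tuple[int, list[str]]]:
        """Return (line_number, variants) for every line the transcripts don't agree on."""
        split = [t.splitlines() for t in transcripts]
        out = []
        for i in range(max(len(lines) for lines in split)):
            variants = Counter(lines[i] if i < len(lines) else "" for lines in split)
            if len(variants) > 1:
                out.append((i + 1, [v for v, _ in variants.most_common()]))
        return out

    # Usage, roughly: transcripts = [run_model(m, page) for m in MODELS for _ in range(3)]
    # then hand disagreements(transcripts) to a reviewer.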
cinntaile 3 hours ago [-]
Deciphering fax messages? What is this, the 90s?
kergonath 2 hours ago [-]
We have decades of internal reports on film that we’d like to make accessible and searchable. We don’t do it with new documents, but we have a huge backlog.
xyproto 2 hours ago [-]
Fax is still hard to hack, so some organizations have kept it alive for security.
mikae1 1 hours ago [-]
Text me back when there's a working PDF to EPUB conversion tool. I've been waiting (and searching for one) long enough. :D
EDIT: https://github.com/overcuriousity/pdf2epub looks interesting.
I've been trying different OCR models on what should be very simple - subtitles (these are simple machine-rendered text). While all models do very well (95+% accuracy), I haven't seen a model not occasionally make very obvious mistakes. Maybe it will take a different approach to get the last 1%...
sinandrei 2 hours ago [-]
Has anyone experimented with using VLMs to detect "marks"? Thinking of pen/pencil-based markings like underlines, circles, checkmarks... Can these models do it?
leetharris 1 hours ago [-]
None of them do it well from our experience. We had to write our own custom pipeline with a mixture of legacy CV approaches to handle this (AI contract analysis). We constantly benchmark every new multimodal and VLM model that comes out and are consistently disappointed.
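To give a flavor of the classic-CV side of it: one common building block is differencing a filled scan against a blank copy of the form (this assumes you have a blank template and the pages are roughly aligned; not a description of any particular production pipeline):

    # Sketch: isolate pen/pencil marks by differencing a filled scan against a blank
    # copy of the same form, then pulling contours out of what remains.
    import cv2

    def find_marks(filled_path: str, blank_path: str, min_area: int = 50):
        filled = cv2.imread(filled_path, cv2.IMREAD_GRAYSCALE)
        blank = cv2.imread(blank_path, cv2.IMREAD_GRAYSCALE)
        diff = cv2.absdiff(filled, blank)                      # what was added by hand
        _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)            # join broken strokes
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Return bounding boxes of candidate marks (checkmarks, circles, underlines...)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

The hard part in practice is the alignment and the false positives from scanner noise, which is where the "mixture of approaches" comes in.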
coder543 1 hours ago [-]
If someone releases a benchmark/dataset, I'm sure that significantly increases the chances of one of these AI labs training on the task.
rdos 3 hours ago [-]
Is it possible for such a small model to outperform Gemini 3, or is this a case of benchmarks not showing the reality? I would love to be hopeful, but so far an open-source model has never been better than a closed one, even when the benchmarks said it was.
amluto 3 hours ago [-]
Off the top of my head: for a lot of OCR tasks, it’s kind of worse for the model to be smart. I don’t want my OCR to make stuff up or answer questions — I want it to recognize what is actually on the page.
rdos 3 hours ago [-]
Interesting. Won't stuff like entity extraction suffer? Especially in multilingual use cases. My worry is that a smaller model might not realize some text is actually a person's name because it is very unusual.
kergonath 60 minutes ago [-]
The model does not need to be that smart to understand that a name it does not know that starts with a capital letter is the name of a place or a person. It does not need to be aware of whom this refers to, it just needs to transcribe it.
Also, there are generalist models that have enough of a grasp of a dozen or so languages that fit comfortably in 7B parameters. Like the older Mistral, which had the best multi-lingual support at the time, but newer models around that size are probably good candidates. I am not surprised that a multilingual specialised model can fit in 8B or so.
I tested this pretty extensively and it has a common failure mode that prevents me from using it: extracting footnotes and similar material from the full text of academic works. For some reason, many of these models are trained in a way that results in these being excluded, despite these sections often containing important details and context. Both versions of DeepSeek-OCR have the same problem. Of the others I’ve tested, dots.OCR in layout mode works best (but is slow), and then Datalab’s Chandra model (which is larger and has bad license constraints).
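A cheap way to at least detect when this happens is to check the output for inline footnote markers that have no matching note lines; the marker patterns below are guesses and need tuning per corpus:

    # Heuristic sketch: flag pages where the OCR output references footnotes in the
    # body but contains no footnote-looking lines, i.e. the model probably dropped them.
    import re

    INLINE_MARKER = re.compile(r"\[\^?\d+\]|(?<=\w)[¹²³]")            # [1], [^1], superscript digits
    FOOTNOTE_LINE = re.compile(r"^\s*(\[\^?\d+\]|\d+[.)])\s+\S", re.M)  # "1. text", "[1] text", ...

    def missing_footnotes(page_text: str) -> bool:
        refs = len(INLINE_MARKER.findall(page_text))
        notes = len(FOOTNOTE_LINE.findall(page_text))
        return refs > 0 and notes == 0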
droidjj 2 hours ago [-]
I have been looking for an OCR model that can accurately handle footnotes. It’s essential for processing legal texts in particular, which often have footnotes that break across pages. Sadly I’ve yet to encounter a good solution.
kergonath 53 minutes ago [-]
I found Mathpix to be quite good with this type of document, including footnotes, but to be fair my documents did not have that many. It’s also proprietary.