Captchas Don't Prove You're Human
These days there are predominantly two types:
- a quiz-show-style picture board where you have to click on every image of, say, a truck, a bicycle or a taxi. These are now low-resolution, cropped images intended to defeat image-recognition software running against a library of object images. No longer are we looking at whole bikes, buses or taxis, but snipped segments.
- a scanned snippet of text which you have to re-type in the answer box. These are mostly highlighted sections of printed store checkout receipts, usually tilted so that standard OCR software can't decipher the string. The highlighted sections tend to be snippets from the middle of lines, which deprives OCR software of the context it needs to start interpreting them.
There are two problems for humans answering these.
Object images are now so low-res and poor that even we humans struggle to identify the objects in the picture squares. Add to that, not every country has yellow cabs, so international users may struggle to answer the challenge at all.
Scanned text images have to have a pre-entered or properly OCR'ed answer in the Captcha database against which to compare the human responses. These stored answers often seem to be wrong. Were they OCR'ed badly at source?
There's another issue: bots and AI can answer these too. It's possible for bots to dynamically add to libraries of known Captcha images, using simple pixel and pattern recognition. In some cases bots do better at answering Captchas than the humans do.
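To make that concrete, here is a minimal sketch of the kind of "simple pixel and pattern recognition" a bot could use: an average-hash fingerprint of each known tile, matched by Hamming distance. All names and the image format (grayscale pixels as 2D lists of 0-255 ints) are illustrative assumptions, not any real Captcha-solver's code.

```python
def average_hash(img, size=8):
    """Fingerprint a grayscale image: downscale to size x size by block
    averaging, then emit one bit per cell (brighter than the mean or not)."""
    h, w = len(img), len(img[0])
    cells = []
    for r in range(size):
        for c in range(size):
            rows = range(r * h // size, (r + 1) * h // size)
            cols = range(c * w // size, (c + 1) * w // size)
            block = [img[i][j] for i in rows for j in cols]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return tuple(v > mean for v in cells)

def hamming(h1, h2):
    """Count the bits where two fingerprints disagree."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny dynamically-grown "library" of known tiles, keyed by label.
library = {}

def remember(label, img):
    library[label] = average_hash(img)

def best_match(img, max_distance=10):
    """Return the closest known label, or None if nothing is near enough."""
    h = average_hash(img)
    label, d = min(((k, hamming(h, v)) for k, v in library.items()),
                   key=lambda kv: kv[1])
    return label if d <= max_distance else None
```

Because the hash only keeps coarse brightness structure, a slightly corrupted or re-cropped copy of a known tile still lands within a few bits of its stored fingerprint, which is why low-res, snipped challenge images are less of an obstacle to bots than they are to people.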
The day is coming when Skynet can send out a series of Captchas and eliminate the respondents with the poorest answer rates. Those will be the humans. RC