by Taniya Arya
You may have heard about the fatal crash involving Tesla’s Model S when it collided with a truck a year ago while in Autopilot mode. Along similar lines, Google’s self-driving cars experienced 272 failures between September 2014 and November 2015 and would have crashed at least 13 times had human test drivers not intervened, according to a report from the California Department of Motor Vehicles (DMV). Though autonomous systems can dramatically bring down traffic-related deaths, they can still be flummoxed in certain scenarios. Getting 99% correct results from a machine learning system can be relatively easy, but scaling its accuracy to 99.9999%, which is where it has to be, is quite difficult without the human-in-the-loop (HITL) factor. The same can be said about Artificial Intelligence (AI) and its branches, such as machine learning, cognitive computing, neural networks, and sentiment analysis. In fact, AI naysayers believe that AI accuracy is only about 80%!
Demystifying the Human-in-the-Loop Concept: The 80:20 Principle
Pure machine learning systems tend to fall short of acceptable accuracy rates. The combination of machine-driven classification enhanced by a human in the loop, on the other hand, lights the path towards acceptable accuracy. Take the case of self-driving cars: the car mostly drives itself, but when it senses something unusual on the road, say construction, it hands control back to the driver.
Many believe that Pareto’s famous 80:20 principle can be applied to machine learning systems as well: they can produce better results with 80% of the work being computer-driven AI and the rest being taken care of by humans. In other words, when the machine isn’t sure what the answer is, it relies on human judgment.
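The routing idea can be sketched in a few lines of Python. This is a toy illustration, not any particular product’s logic: the `classify` function, its length-based rule, and the 0.80 threshold are all invented for the example. High-confidence predictions are accepted automatically; everything else lands in a queue for a person to review.

```python
# Illustrative 80:20 routing: the machine keeps what it is confident
# about, and escalates the rest to a human review queue.
CONFIDENCE_THRESHOLD = 0.80  # illustrative cut-off, not a real product value

def classify(item):
    """Stand-in for a real model: returns (label, confidence)."""
    # Toy rule: long strings are "document", short ones "note".
    if len(item) > 10:
        return "document", 0.95
    return "note", 0.60

def route(items):
    machine_results, human_queue = {}, []
    for item in items:
        label, confidence = classify(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            machine_results[item] = label   # machine is sure enough
        else:
            human_queue.append(item)        # hand over to a person
    return machine_results, human_queue

machine_results, human_queue = route(["a very long report text", "memo"])
print(machine_results)  # {'a very long report text': 'document'}
print(human_queue)      # ['memo']
```

The single threshold is the whole design: raising it sends more work to humans and buys accuracy; lowering it saves labor at the cost of more machine mistakes.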
A Closer Look at the Problems That HITL Solves
Problem 1: Training Data
Factors like the quantity and quality of training data matter in making sure a machine learning algorithm works well, which is why technology companies deliberate over the data collection process. Machine learning models parse and learn from enormous amounts of data streaming in continuously, but without training data supplied by humans, these models cannot determine what to do with new data or how to make judgments. To put it simply, machines learn from data that humans create. So whether you are filling out a simple CAPTCHA online or tagging your friends on Facebook, it all ends up in a dataset that a machine learning algorithm will be trained on.
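A minimal sketch makes the point concrete: a model only “knows” what human-labelled examples teach it. The tiny 1-nearest-neighbour classifier below, with invented points and labels, predicts by finding the closest example a human tagged; with no labelled data it could say nothing at all.

```python
# Toy 1-nearest-neighbour classifier: every prediction traces back
# to an example a human labelled. Data and labels are invented.
human_labelled = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.1), "dog"),
]

def predict(point):
    """Classify by the closest human-labelled example (1-NN)."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = min(human_labelled, key=lambda ex: dist2(ex[0], point))
    return nearest[1]

print(predict((1.1, 0.9)))  # cat
print(predict((5.1, 5.0)))  # dog
```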
Problem 2: Accuracy
There are some scenarios in which machine learning algorithms figure out the right answer right away. However, there are confusing situations (like loopy handwriting) in which it is maddeningly difficult for an algorithm to perform well. This is why it is easy to get to 80% accuracy and really tough to reach 99% via machine learning algorithms alone. This is where HITL steps in: decisions where the algorithm’s judgment falls in a low-confidence zone are passed to humans. Involving humans in the decision-making process fills the gap in the machine’s understanding of the data.
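The feedback loop described above can be sketched as follows. Everything here is illustrative: the exact-match “model”, the 0.8 threshold, and the `ask_human` oracle are stand-ins for a real classifier and a real annotation step. The key move is the last line of `classify`: the human’s answer is added to the training set, so the same question is answered by the machine next time.

```python
# Sketch of a HITL feedback loop: low-confidence items go to a human,
# and the human's answer is remembered so the model improves.
training_set = {"hello": "greeting", "goodbye": "farewell"}

def predict_with_confidence(text):
    # Exact match in the training set -> confident; otherwise a
    # low-confidence fallback guess.
    if text in training_set:
        return training_set[text], 1.0
    return "greeting", 0.3

def ask_human(text):
    # Stand-in for a real human annotation step.
    return "farewell" if "bye" in text else "greeting"

def classify(text, threshold=0.8):
    label, confidence = predict_with_confidence(text)
    if confidence < threshold:
        label = ask_human(text)      # the human fills the gap...
        training_set[text] = label   # ...and the model remembers it
    return label

print(classify("bye now"))  # farewell (answered by the human)
print(classify("bye now"))  # farewell (now answered by the model)
```

This is the essence of active learning: human effort is spent only on the examples the machine finds hardest, which is also where human labels are most valuable.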
This is why the partnership between computers and humans is essential for the best and most accurate results. Along these lines, we at KritiKal Solutions develop machine learning products and applications by leveraging the HITL factor. We have two in-house products, KLiPR (KritiKal’s License Plate Reader) and TRAZER® (TRaffic AnalyZER), and have worked on a variety of machine learning algorithms for object detection and recognition, optical character recognition, face and skin detection and recognition, logo detection and recognition, and more.
To sum up: computers might be great at analyzing tough tactical situations, but they still defer to humans when it comes to understanding long-term strategy. Therefore, the world is safe, for now!