How computers started to read – ‘neurolinguistic’ programming.
Neurolinguistics is the study of the neural mechanisms in the human brain that control the comprehension, production, and acquisition of language. When a computer does this, we call it neurolinguistic programming.
Whoa! What does neurolinguistics really mean in practice? It's how we understand sarcasm and nuance when speaking with someone.
Fast modern computers try to mimic the human brain for neurolinguistic programming. The two parts of the brain with the biggest involvement in speech are Broca's area (purple) and Wernicke's area (orange), connected by a bundle of nerve fibres (green). Credit: Dorling Kindersley/Getty Images
Because of the awesome speed and power of modern computers, every day we come closer to building digital models that replicate some of the abilities of the human brain. We call this artificial intelligence.
The first step towards neurolinguistics was OCR, or optical character recognition. Developed to load letters and documents from paper into an electronic document, it required someone to check the computer's reading of the characters.
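At its heart, character recognition means comparing a scanned shape against stored letter shapes and picking the closest match. Here is a toy sketch of that idea, assuming letters arrive as tiny made-up black-and-white grids; real OCR uses far larger images and statistical models, but the matching principle is the same.

```python
# 3x3 pixel templates for two letters (1 = ink, 0 = paper).
TEMPLATES = {
    "T": (1, 1, 1,
          0, 1, 0,
          0, 1, 0),
    "L": (1, 0, 0,
          1, 0, 0,
          1, 1, 1),
}

def recognise(scan):
    """Return the template letter whose pixels best match the scan."""
    def score(letter):
        template = TEMPLATES[letter]
        # Count how many pixels agree with this letter's template.
        return sum(a == b for a, b in zip(scan, template))
    return max(TEMPLATES, key=score)

# A slightly smudged "T" (one pixel missing) is still recognised.
smudged_t = (1, 1, 1,
             0, 1, 0,
             0, 0, 0)
print(recognise(smudged_t))  # prints "T"
```

Because a smudge only changes a pixel or two, the correct letter usually still scores highest, which is why early OCR worked at all, and also why a human had to double-check the results.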
Today, we are used to spell checking and grammar checking on our PCs. This was the next step towards computers being able to read. But computers still made mistakes, because we don't always use words properly and we 'bend' their meaning. So, the next step was to look at what people said, or wrote, and decide what they meant.
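A simple spell checker just looks for the dictionary word closest to what you typed. The sketch below uses Python's standard library to do this; the word list here is a tiny stand-in, since a real checker uses a full dictionary with grammar rules on top.

```python
import difflib

# A tiny stand-in dictionary for the example.
DICTIONARY = ["computer", "reading", "language", "sentence", "meaning"]

def suggest(word):
    """Suggest the closest dictionary word, or the word itself if known."""
    if word in DICTIONARY:
        return word
    # get_close_matches ranks dictionary words by similarity to the typo.
    matches = difflib.get_close_matches(word, DICTIONARY, n=1)
    return matches[0] if matches else word

print(suggest("compuetr"))  # prints "computer"
```

Notice that this can only fix spelling, not meaning: "their" typed where "there" belongs sails straight through, which is exactly why the next step had to look at what people meant.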
Phrases and the meaning of words
Google created a programme to look at each word in the context of the string around it to determine what it meant. This allowed the programme to learn how to interpret the word; a great example is the common use of 'wicked' to mean 'really good' as well as 'bad', and other words used to mean their opposite.
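The idea of using context can be sketched very simply: look at the words around 'wicked' and let them vote on which sense is meant. This is a toy illustration with hand-written clue lists, not Google's actual method; real systems learn these clues automatically from huge amounts of text.

```python
# Hand-made clue words for this toy example only.
POSITIVE_CLUES = {"awesome", "fun", "brilliant", "party"}
NEGATIVE_CLUES = {"witch", "cruel", "evil", "villain"}

def sense_of_wicked(sentence):
    """Guess whether 'wicked' means 'good' or 'bad' from nearby words."""
    words = set(sentence.lower().replace("!", "").split())
    positive = len(words & POSITIVE_CLUES)
    negative = len(words & NEGATIVE_CLUES)
    return "good" if positive >= negative else "bad"

print(sense_of_wicked("That party was wicked fun!"))  # prints "good"
print(sense_of_wicked("The wicked witch was cruel"))  # prints "bad"
```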
IBM developed an Artificial Intelligence (AI) tool called Watson which can mimic the way humans process language. It has its own “AI Neurolinguistics”. Watson can go beyond the meaning of the words you write to determine your personality just through typed text!
The way it does this is by taking a sentence, breaking it down into smaller parts, and analysing the concepts and relationships used in the sentence. Once each part has been assessed, what's being conveyed, and the context, become clear.
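Breaking a sentence into labelled parts can be sketched like this, assuming a tiny hand-made word list. Watson's real analysis is far deeper and statistical, but the first step really is splitting text into pieces and attaching a concept to each one.

```python
# A tiny hand-made word list for this sketch.
WORD_TYPES = {
    "dog": "thing", "ball": "thing", "park": "place",
    "chased": "action", "ran": "action",
    "happy": "feeling", "angry": "feeling",
}

def break_down(sentence):
    """Split a sentence into words and label each with a concept type."""
    parts = []
    for word in sentence.lower().strip(".").split():
        parts.append((word, WORD_TYPES.get(word, "other")))
    return parts

print(break_down("The happy dog chased the ball"))
```

Each `(word, concept)` pair is one of the "smaller parts"; spotting that a 'feeling' sits next to a 'thing' doing an 'action' is the beginning of working out what the sentence conveys.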
IBM used the 'Big 5' personality test model to determine the persona of the individual who wrote the text analysed by the programme.
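One very rough way to picture trait scoring is to count words linked to a trait. The word list below is made up purely for illustration, and IBM's real model is statistical and trained on vast amounts of text; this only shows the flavour of the idea for one Big 5 trait, extraversion.

```python
# Made-up clue words for this illustration only.
EXTRAVERT_WORDS = {"party", "friends", "exciting", "fun", "everyone"}

def extraversion_score(text):
    """Fraction of words in the text that signal extraversion."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!") in EXTRAVERT_WORDS)
    return hits / len(words)

print(extraversion_score("I love a fun party with friends"))
```

The more text someone writes, the more reliable a count like this becomes, which is why these tools work best on a large sample of what you've posted.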
Humans do this too. As a child, you learned how to read and speak by grouping things into categories, like animals can be dogs, cats, or cows, and emotions can be happy, sad, or angry.
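That childhood grouping idea maps neatly onto the simplest data structure there is: a dictionary of categories. This sketch uses exactly the examples from the paragraph above.

```python
# A child's "mental dictionary", sorting words into learned categories.
CATEGORIES = {
    "animals": {"dog", "cat", "cow"},
    "emotions": {"happy", "sad", "angry"},
}

def category_of(word):
    """Return the category a word belongs to, if we know it."""
    for name, members in CATEGORIES.items():
        if word in members:
            return name
    return "unknown"

print(category_of("cow"))    # prints "animals"
print(category_of("happy"))  # prints "emotions"
```

Learning a new word is just adding it to the right set, which is much like how a language program grows its vocabulary.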
As you grew up your language improved and changed; you were able to understand inflections and tone when someone spoke to you, which gave context to a sentence. For example, kids don't understand sarcasm, but as they get older, they start to pick up on it.
As you matured you listened to someone speaking to identify aspects of their personality. The more you engaged with them the bigger the ‘database’ of information you had to decide what they were like.
Where are tools like Watson used? Social Media for one!
Everything you post on the internet can be processed, analysed, and used to provide the type of personality you have. Many companies do this now to gather data about their customers or potential employees.
Like Watson, the 'equals mirror' uses neurolinguistic programming to look at your online profile. Understanding how you 'come over' digitally when applying for a job will help you spot issues before a potential employer does. Two more tips: make sure your settings are private, and take care with what you say in public posts or tweets. Check out the equals mirror website and read the tips.