There is considerable confusion and misconception around the term Artificial Intelligence, or AI, and its uses. Often dismissed as too difficult to understand, AI has in fact been studied by researchers for decades, and they are now working to bring the idea into mainstream industrial applications.
The term AI was coined in 1956 by John McCarthy, who is also considered the father of the field, making the entire concept much older than many people think. But why has the term only recently begun swirling around technological circles and reaching a wider audience? Much of the hype around AI comes from recent media coverage of anthropomorphic robots that appear to display extraordinarily human-like behavior.
Where we fail is in differentiating between a machine's performance and its competence. For instance, if an AI system beats a human at chess, we may assume it possesses the same richness of understanding as a human, but in fact the system doesn't even know it's playing a game.
AI has undoubtedly become a buzzword in industrial circles, much as "robot" was before it. But amid all this buzz, people tend to forget what AI is and what it isn't. As Rodney Brooks, chairman and CTO of Rethink Robotics, notes, what was considered AI in the 1960s is today material for a first course in computer programming.
This simply means that as advancements are made in the field, the meaning of AI will keep expanding and shifting, sometimes developing subsets in the form of deep learning and reinforcement learning, or machine learning as a whole.
A vital distinction between machine intelligence and human intelligence is context. Put simply, as humans we have a much wider understanding of the world around us than AI does. Even after 60 years, AI has succeeded only in very narrow domains. Certainly, breakthroughs have been made in speech recognition and image processing (examples of the former being Amazon's Alexa and Apple's Siri), but even these intelligent assistants become clueless at some point.
AI Research in the Real World
There is a pressing need for diverse combinations of people and machines working in collaboration to solve problems and develop innovative solutions. This is especially important because AI applications need to be ready for the real world, rather than leaving system engineers unsure whether they can be relied upon.
It must also be recognized that, in the end, humans are the ones who reap the rewards. AI does not work for itself; its sole purpose is to make our lives easier, which in this context means more efficient processes and more reliable workflows. Once AI-powered machines are streamlined into everyday processes, the benefits will go beyond productivity: humans will be freed to work on more creative and demanding problems rather than on tasks of a repetitive nature.
Primarily, there are three directions in which AI is growing:
Supervised learning: the system learns pattern recognition from labeled examples, such as translating English sentences into Chinese.
Unsupervised learning: the system is fed data, such as images, without any labels, and must discover structure in that data on its own.
Reinforcement learning: the most difficult and demanding of the three, it involves giving the system a goal, e.g. a high score in a video game or assembling two parts, and letting it learn from a reward signal through trial and error.
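The three directions above can be illustrated with a minimal, self-contained Python sketch. The toy data, the function names, and the two-armed bandit setup are all illustrative assumptions made for this example, not the method of any particular AI system:

```python
import random

# --- Supervised learning: learn from labeled examples ---
# A 1-nearest-neighbor classifier on toy (feature, label) pairs.
def nearest_neighbor(train, x):
    """Return the label of the training point whose feature is closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

labeled = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
print(nearest_neighbor(labeled, 1.5))   # -> small
print(nearest_neighbor(labeled, 8.7))   # -> large

# --- Unsupervised learning: find structure without labels ---
# One-dimensional 2-means clustering: split unlabeled points into two groups.
def two_means(points, steps=10):
    a, b = min(points), max(points)                 # initial cluster centers
    for _ in range(steps):
        ga = [p for p in points if abs(p - a) <= abs(p - b)]
        gb = [p for p in points if abs(p - a) > abs(p - b)]
        a, b = sum(ga) / len(ga), sum(gb) / len(gb)  # recompute centers
    return ga, gb

print(two_means([1.0, 1.5, 2.0, 8.0, 8.5, 9.0]))
# -> ([1.0, 1.5, 2.0], [8.0, 8.5, 9.0])

# --- Reinforcement learning: learn from a reward signal ---
# A 2-armed bandit: the agent never sees a correct answer, only a score.
random.seed(0)
true_payout = [0.3, 0.7]            # hidden reward probabilities per arm
estimates, counts = [0.0, 0.0], [0, 0]
for _ in range(2000):
    # Epsilon-greedy: explore 10% of the time, otherwise pick the best so far.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = max((0, 1), key=lambda i: estimates[i])
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(max((0, 1), key=lambda i: estimates[i]))  # the arm it learned to prefer
```

Note the difference in what each learner is given: labeled answers, raw data, or only a goal expressed as a reward. That last, indirect form of feedback is exactly why reinforcement learning is the hardest of the three to get right.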
The last direction in particular can go wrong: a poorly specified goal may lead to faulty operation, loss of property, or, in the worst case, loss of life. The important thing here is to ensure that humans and AI do not grow in a vacuum from each other. As machines become smarter, our own capabilities should grow as well, maintaining a healthy balance and keeping us one step ahead of any untoward incident.
Interested in learning more? Visit our website www.premierautomation.com, or talk to one of our specialists today.