Artificial intelligence (AI) brings with it a promise of genuine human-to-machine interaction. When machines become intelligent, they can understand requests, connect data points, and draw conclusions. They can reason, observe, and plan. Consider:
Leaving for a business trip tomorrow? Your intelligent device will automatically offer weather reports and travel alerts for your destination city.
Planning a large birthday celebration? Your smart bot will help with invitations, make reservations, and remind you to pick up the cake.
Planning a direct marketing campaign? Your AI assistant can instinctively segment your customers into groups for targeted messaging and increased response rates.
We’re not talking about robotic butlers. This isn’t a Hollywood movie. But we are at a new level of cognition in artificial intelligence, one that has grown to be truly useful in our lives.
Where are we today with AI?
With AI, you can ask a machine questions – out loud – and get answers about sales, inventory, customer retention, fraud detection, and much more. The computer can also discover information that you never thought to ask. It will offer a narrative summary of your data and suggest other ways to analyze it. It will also share information related to similar questions you or others have asked before. You’ll get the answers on a screen or just conversationally.
How will this play out in the real world? In health care, treatment effectiveness can be more quickly determined. In retail, add-on items can be more quickly suggested. In finance, fraud can be prevented instead of just detected. And so much more.
In each of these examples, the machine understands what information is needed, looks at relationships between all the variables, formulates an answer – and automatically communicates it to you with options for follow-up queries.
We have decades of artificial intelligence research to thank for where we are today. And we have decades of intelligent human-to-machine interactions to come.
Companies across industries are using, investing in, or planning to invest in artificial intelligence (AI). AI is improving industry processes and making machines “smart.” It is expected to be one of the most disruptive technologies impacting industry and business. As the market for AI grows, boards should understand how this technology will affect their company’s strategy.
We’ve all heard of artificial intelligence. It’s in the movies and is now in our everyday lives at home and at work. AI is more than just a single independent technology. It’s making smart devices smarter, data more valuable, and cloud-based tools more efficient. It’s turning autonomous vehicles into reality. It will disrupt business models, create new ways of working, and facilitate digital transformation. In the not-so-distant future, most technology applications will likely incorporate or harness the output of some form of artificial intelligence.
So what is it, exactly? In a nutshell, AI enables computers and other devices to perceive, analyze, and adapt to their environments. Using software algorithms, these devices can perform tasks that would normally require human intelligence. AI enables machines to sense their environments, think, learn, and respond on their own, becoming increasingly autonomous. In effect, AI allows machines to contribute more intelligently to business activities. They do this by recognizing and interpreting digitized text, sound, and images, making it possible to answer questions, suggest solutions, and diagnose problems or take action. As a result, AI can reduce the amount of rote work humans face every day.
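To make that concrete, here is a minimal sketch, assuming Python with scikit-learn and a handful of hypothetical, hand-labeled examples, of a machine “interpreting digitized text”: a tiny classifier that learns which team a support message belongs to – the kind of rote triage work AI can take off people’s plates.

```python
# A minimal sketch (assuming scikit-learn is installed) of a machine "reading"
# digitized text: it learns from labeled examples how to route support messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-written training examples for illustration only.
messages = [
    "I was charged twice for my order",
    "My invoice total looks wrong",
    "The app crashes when I open settings",
    "I can't log in after the latest update",
]
teams = ["billing", "billing", "technical", "technical"]

# Turn raw text into numeric features, then learn a simple decision rule.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, teams)

# The trained model now "reads" a new message and suggests where it belongs.
print(model.predict(["I was charged two times for my invoice"]))  # likely ['billing']
```

With more (and more realistic) labeled data, the same pattern scales from a toy script to the triage, question-answering, and diagnosis tasks described above.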
What is Artificial Intelligence?
The concept of what defines AI has changed over time, but at the core, there has always been the idea of building machines that are capable of thinking like humans.
After all, human beings have proven uniquely capable of interpreting the world around us and using the information we pick up to effect change. If we want to build machines to help us do this more efficiently, then it makes sense to use ourselves as a blueprint!
AI, then, can be thought of as simulating the capacity for abstract, creative, deductive thought – and particularly the ability to learn – using the digital, binary logic of computers.
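As a toy illustration of that ability to learn, the sketch below (plain Python, no libraries) trains a single perceptron to reproduce the logical AND function purely from labeled examples rather than being programmed with the rule. Real AI systems are vastly larger, but the learn-from-examples loop is the same basic idea.

```python
# A minimal sketch of machine "learning" in binary terms: a perceptron that
# learns the logical AND function from labeled examples instead of being
# explicitly programmed with the rule. (Illustrative only.)

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0          # start knowing nothing
learning_rate = 0.1

for _ in range(20):           # repeatedly show the machine the examples
    for (x1, x2), target in examples:
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - prediction
        # nudge the weights toward answers that match the examples
        w1 += learning_rate * error * x1
        w2 += learning_rate * error * x2
        bias += learning_rate * error

# After training, the learned weights reproduce the AND rule.
for (x1, x2), _ in examples:
    print((x1, x2), "->", 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0)
```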
Research and development work in AI is split between two branches. One is labeled “applied AI,” which uses these principles of simulating human thought to carry out one specific task. The other is known as “generalized AI,” which seeks to develop machine intelligence that can turn its hand to any task, much like a person.
Research into applied, specialized AI is already providing breakthroughs in fields of study from quantum physics, where it is used to model and predict the behavior of systems comprising billions of subatomic particles, to medicine, where it is being used to diagnose patients based on genomic data.
In industry, it is employed in the financial world for uses ranging from fraud detection to improving customer service by predicting what services customers will need. In manufacturing, it is used to manage workforces and production processes, as well as to predict faults before they occur, thereby enabling predictive maintenance.
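As a rough illustration of how such fraud-detection and predictive-maintenance systems work, the sketch below (assuming Python with NumPy and scikit-learn, and using made-up readings) fits an anomaly detector that learns what “normal” values look like and flags the outliers worth investigating.

```python
# A minimal sketch (assuming scikit-learn and NumPy are installed) of the kind
# of anomaly detection behind fraud screening and predictive maintenance.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical sensor/transaction data: mostly normal values plus a few spikes.
normal = rng.normal(loc=100.0, scale=5.0, size=(200, 1))
anomalies = np.array([[160.0], [35.0], [155.0]])
readings = np.vstack([normal, anomalies])

# Fit on the observed data; the model isolates points that look unusual.
detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(readings)   # 1 = normal, -1 = flagged as anomalous

flagged = readings[labels == -1].ravel()
print("Flagged readings:", np.round(flagged, 1))
```

In practice the inputs would be transaction features or machine sensor streams rather than a single column of numbers, but the principle – learn the normal pattern, flag the deviations early – is the same.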
In the consumer world, more and more of the technology we adopt into our everyday lives is powered by AI – from smartphone assistants like Apple’s Siri and Google Assistant to self-driving cars, which many predict will outnumber manually driven cars within our lifetimes.
Generalized AI is a bit further off – to carry out a complete simulation of the human brain would require both a more complete understanding of the organ than we currently have and more computing power than is commonly available to researchers. But that may not be the case for long, given the speed with which computer technology is evolving. A new generation of computer chip technology known as neuromorphic processors is being designed to more efficiently run brain-simulator code. Systems such as IBM’s Watson cognitive computing platform use high-level simulations of human neurological processes to carry out an ever-growing range of tasks without being specifically taught how to do them.
What is the future of AI?
That depends on who you ask, and the answer will vary wildly!
Real fears have been voiced that the development of an intelligence which equals or exceeds our own, but which can work at far higher speeds, could have negative implications for the future of humanity – and not just in apocalyptic sci-fi such as The Matrix or The Terminator, but also by respected scientists like Stephen Hawking.
Even if robots don’t eradicate us or turn us into living batteries, a less dramatic but still nightmarish scenario is that automation of labor (mental as well as physical) will lead to profound societal change – perhaps for the better, or perhaps for the worse.
This understandable concern led several tech giants, including Google, IBM, Microsoft, Facebook, and Amazon, to found the Partnership on AI in 2016. The group researches and advocates for ethical implementations of AI and works to set guidelines for future research and deployment of robots and AI.