Artificial intelligence is often misunderstood in its current uses and misinterpreted in its future potential. We regularly read about AI in predictive, future-tense terms, but its applications are already widespread, many of which affect our everyday lives. (Think Alexa, video games, news generation, and fraud detection, to name a few.)
For many, hearing about AI elicits an emotional response, generally either fear or anticipation. Education and a realistic, mindful outlook on its future are necessary to come to a more balanced view of the topic.
What is Artificial Intelligence?
The definition seems to be elusive. The term appeared in the title of a summer research program proposed at Dartmouth in 1955, whose mission was based on the hypothesis that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Here's one definition that seems to fit AI's broad applications: "the technology enabling systems to encapsulate cognitive functions along with adaptive and learning capabilities — leading to self-improvement."
The key here is "adaptive." Fueled by data, AI systems learn and change based on input.
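That adaptive loop — predict, compare against new data, adjust — can be sketched in a few lines. This is a toy illustration of the idea only, not any production AI system; the learner, its single weight, and the learning rate are all invented for the example.

```python
# Toy illustration of "adaptive" behavior: a one-weight online learner
# that adjusts itself after every new observation.

def make_online_learner(learning_rate=0.1):
    state = {"weight": 0.0}  # the learner's entire "knowledge"

    def predict(x):
        return state["weight"] * x

    def update(x, target):
        # Compare the prediction to reality and nudge the weight
        # in the direction that reduces the error.
        error = target - predict(x)
        state["weight"] += learning_rate * error * x
        return error

    return predict, update

predict, update = make_online_learner()
for x, y in [(1.0, 2.0)] * 50:  # repeatedly observe the rule y = 2x
    update(x, y)

print(predict(1.0))  # the learner has adapted toward the true rule
```

The point is the feedback loop: the system's behavior after fifty observations differs from its behavior at the start, purely because of the data it was fed.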
Of the many current applications of AI, these are a few lesser known:
Suicide prevention is one use already in practice: Crisis Text Line analyzes word and emoji usage to gauge the likelihood of suicide risk.
Facebook has already been using algorithms to proactively detect suicide risk in its users based on posts and even videos, for example by analyzing spikes in emoji responses from viewers at specific points in a video.
Security surveillance is using AI for video analytics to spot potential threats. Human threat detection is often error-prone, so supplementing traditional surveillance with AI has improved security and made systems more efficient.
The hiring process is rarely known for its speed or efficiency on either side of the table, but with Google's recent rollout of Google Hire, AI is now being used to automate tasks like interview scheduling, resume review, and calling candidates.
With all the advances in Artificial Intelligence, there are inevitably controversial applications of the technology especially when it comes to personal privacy.
Facial recognition is one such AI tool where privacy concerns are paramount. Several researchers at the University of Toronto are building their own AI system to counteract the technology. It is trained to "dynamically disrupt" the AI process of facial recognition by making slight "disturbances" in a photo so that an accurate face identification can't be made.
In one study, the proportion of detectable faces was reduced from 100% to a mere 0.5%, a remarkable feat. What's fascinating is that the technique could be used as a simple photo filter with no noticeable changes from the user's perspective. The change is just enough, however, to counteract facial recognition and starve the system of face data, reducing its ability to adapt and learn from new information.
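The general idea behind such disruption is the adversarial perturbation: nudge each pixel by an amount too small to notice, but in exactly the direction that most hurts the detector. The sketch below uses a toy linear "detector" as a stand-in — it is not the Toronto team's actual system, and all names and numbers here are invented for illustration.

```python
# Toy sketch of an adversarial perturbation: shift each "pixel" by at
# most epsilon in the direction that lowers a detector's score, leaving
# the image visually almost unchanged. The linear detector is a
# stand-in for a real face-recognition model.

def detector_score(image, weights):
    """Higher score means the toy detector is more confident a face is present."""
    return sum(w * p for w, p in zip(weights, image))

def perturb(image, weights, epsilon=0.01):
    """Move each pixel +/- epsilon against the score's gradient (its weight's sign)."""
    return [p - epsilon * (1 if w > 0 else -1) for p, w in zip(image, weights)]

weights = [0.5, -0.3, 0.8, 0.1]   # toy detector parameters
image = [0.2, 0.4, 0.6, 0.8]      # toy "pixels"

adversarial = perturb(image, weights)
max_change = max(abs(a - b) for a, b in zip(image, adversarial))

print(detector_score(image, weights))        # original confidence
print(detector_score(adversarial, weights))  # strictly lower confidence
print(max_change)                            # no pixel moved more than epsilon
```

The key property is the asymmetry: a per-pixel change bounded by epsilon is imperceptible to a person, yet every one of those tiny changes pushes the detector's score the same way, so their combined effect on the model is large.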
With any advanced technology, there are ethical questions that arise. We as a society, particularly in the tech field, must regularly ask ourselves important questions to mindfully take steps toward a future where AI is used for the betterment of our world, collectively and individually.
The World Economic Forum published an article outlining "Top 9 ethical issues in artificial intelligence" which poses important questions that don't yet have clear answers.
Here are a few that are particularly thought-provoking:
How will we deal with unemployment caused by automation? This is by no means a new question as automation has been eliminating jobs for decades.
How will machines affect our behavior and interaction? AI technology can predict behavior, interpret facial expressions, and interact with us in intelligent ways. In addition, tech addiction is now recognized as a legitimate dependency, one that tech companies work to optimize in how and how often users interact with their devices.
How can we eliminate bias? We as humans don't have a great reputation for holding unbiased views or acting in unbiased ways, both consciously and, more notably, unconsciously. Are we training AI to be just as biased as we have a tendency to be?
How do we control AI so it doesn't control us? This question is sometimes wielded as a scare tactic by proponents of controlling AI, but it stands on its own: how can we manage a complex, intelligent system so that we remain in control?
Max Tegmark, a physics professor at MIT, gives a great TED Talk arguing that a bright future with AI is possible only if we carefully steer it in a conscious direction, asking important questions along the way.
AI “Beyond the Hype”
It seems trite to say there is untapped potential in the use of AI. However, there is a lot of hype on one side of that coin, and fear on the other, that will determine its future. Neither hype nor fear is a recommended decision-making tool moving forward.
This Hacker Noon article provides a strong outline of AI's current state, while acknowledging its future risk and potential.
While AI is changing the way we work, think, and interact, current laws, frameworks, and systems must be adapted to keep pace with these evolving technologies.
And while the future is bright from a technological standpoint, we must, as individuals and as a society, consciously be willing to learn and adapt in this changing world to maintain relevance and skills that match current needs.
Gain comes with a certain degree of loss as we shed the past and transform our future.