Artificial Intelligence has inspired works of fiction throughout the ages, sometimes as a helping hand and sometimes as the overlord of a dystopian nightmare. Recently, though, AI has become a reality, with great potential to help humanity. But what is Artificial Intelligence anyway? And is it too risky to use?
Artificial Intelligence works by feeding sets of data into an algorithm. After each round of processing, the algorithm checks its performance and learns from experience which method best accomplishes the goal set by its programmers. The data could be every possible scenario in a game of checkers, or it could be images of bread. Eventually the AI figures out which checkers moves to avoid, and the precise differences between images of white and brown bread. This machine learning process can be very resource intensive, so the right hardware matters: plenty of RAM and powerful CPUs and GPUs. Thousands of scenarios and equations can be evaluated in just seconds!
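The feed-data, check-performance, adjust loop described above can be sketched in a few lines. This is a deliberately tiny, illustrative model (all names and numbers here are made up): it learns a single threshold that separates two groups of values, a stand-in for something like telling white bread from brown bread.

```python
# A minimal sketch of the machine learning loop: process the data,
# check performance, and adjust. This toy "model" is just one number,
# a threshold, nudged toward fewer mistakes each round.

def train_threshold(examples, rounds=100):
    """examples: list of (value, label) pairs, where label is 0 or 1."""
    threshold = 0.0
    for _ in range(rounds):
        for value, label in examples:
            prediction = 1 if value > threshold else 0
            # Check performance on this example and nudge the threshold
            # in whichever direction reduces the error.
            if prediction < label:      # guessed 0, should be 1
                threshold -= 0.1
            elif prediction > label:    # guessed 1, should be 0
                threshold += 0.1
    return threshold

data = [(0.2, 0), (0.3, 0), (0.7, 1), (0.9, 1)]
t = train_threshold(data)
print(all((1 if v > t else 0) == label for v, label in data))  # True
```

Real systems adjust millions of numbers instead of one, but the shape of the loop is the same: predict, measure the error, correct, repeat.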
In 1956, Allen Newell, Cliff Shaw, and Herbert Simon created an artificial intelligence program called Logic Theorist, which many consider the first AI. It was designed to mimic human problem-solving and could prove theorems in symbolic logic. Logic Theorist inspired the AI research to come.
Machine learning can solve many problems for us. In a short amount of time, an AI can learn to identify and differentiate similar objects, spotting subtle differences and patterns far better than humans can. For example, AIs have been taught to recognize emotional tone in speech, to distinguish cancerous cells from healthy ones, and to identify human faces from any angle. Some AIs can also suggest the best course of action in a scenario or forecast future financial expenses. Imagine all the great uses this could be put to.
Setting aside people who worry about AIs developing emotions or becoming the next Skynet, there are many valid concerns about deploying AI in critical situations. What would happen if a constantly learning AI that holds serious power found an unorthodox solution to a problem? It might solve that problem, all right, while causing larger ones. It is difficult to shape an AI to solve complicated issues exactly the way we want; they can be unpredictable. And what if an AI is fed questionable statistics and develops a bias? Black New Yorkers are stopped by police twice as often as white New Yorkers, so an AI trained on that data could learn to associate race with crime. For now, putting moral decisions in the hands of AI is too risky.
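The bias problem can be shown with a few lines of code. In this sketch the data is synthetic and the numbers are hypothetical: two groups behave identically, but one appears in the stop records twice as often, so a naive frequency-based "risk score" inherits that skew.

```python
# Illustrative only: synthetic, made-up data showing how a model absorbs
# bias from skewed statistics. Group A and group B behave identically,
# but A was recorded as stopped twice as often.

from collections import Counter

records = ([("A", True)] * 200 + [("A", False)] * 800
         + [("B", True)] * 100 + [("B", False)] * 900)

def stop_rate(records, group):
    """Fraction of a group's records marked as a stop."""
    stops = Counter(g for g, stopped in records if stopped)
    totals = Counter(g for g, _ in records)
    return stops[group] / totals[group]

print(stop_rate(records, "A"))  # 0.2 -- the model "learns" A is riskier
print(stop_rate(records, "B"))  # 0.1 -- purely an artifact of biased data
```

Nothing in the code is malicious; the skew comes entirely from the input data, which is exactly why questionable statistics are dangerous to train on.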
Even in simple ways, though, AIs prove beneficial. Consider an advanced AI reliably diagnosing patients and prescribing exactly the right medicine, or AI-aided traffic lights learning the patterns of traffic, watching the road, and reacting so that no car waits longer than it needs to. We have a long way to go, but in the end, AI can benefit us greatly.
RESOURCES vvv
https://www.cser.ac.uk/research/risks-from-artificial-intelligence/#:~:text=AI%20also%20raises%20near%2Dterm,digitisation%20and%20nuclear%20weapons%20systems.