The Future of Computing
"How to quickly find photos" was a banner at the top of my iPhone recently.
As a person who (embarrassingly) has no sorting system for my photos beyond "well, I think I was still in grad school when this happened, so let me flip back to at least 2017," the idea that I could simply search for a photo based on what was in it excited me like a kid on Christmas morning.
I typed in "beach" and all the beach photos came up. I typed in "cat" and all the photos of my cat from the last eight years came up. I even typed in my kid's name and every photo of her, from when she was a newborn baby to a preschooler, came up.
It seems simple enough, but the engine behind this search-and-find task is one of the most transformative science and computing technologies of our time: artificial intelligence (AI).
AI is so ubiquitous, it's almost invisible, performing tasks ranging from spam-filtering to talking with our phones. It's also headline-making for good (AlphaGo, anyone?), for bad (perpetuating racism) and everywhere in between (self-driving cars). Wherever AI goes next, the one thing that's for sure is that it's not going anywhere.
In this Century of Science theme, Science News looks back on the history of computing and where we're going next. Throughout the last century, two imperatives formed the bedrock of computing advancements: faster and smarter. Faster has mostly been about hardware: How small can we make a processor, and how many can we pack onto a chip? As computers got faster, we figured out how to make them a lot smarter through AI.
AI is everywhere, but what actually is it? It's a computer algorithm that mirrors humans' flexible problem-solving skills. As the algorithm takes in new information, it adapts, changing how it solves a problem to give the most accurate, up-to-date answer possible.
There are a few different ways to design an AI algorithm, but some of the most powerful are designed to mimic the way our brains work. These brain-inspired algorithms belong to a sub-branch of AI called "machine learning," since they improve, or learn, from experience the way we do.
In our brains, neurons connect to each other and send information back and forth as we learn to complete new tasks. These tasks could be anything: catching a ball, doing a math problem, or figuring out where the U-505 is located in MSI. Engineers build machine learning algorithms out of similar modular units that pass information back and forth. These modular units and their connections are called a "neural network."
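To make the idea concrete, here's a minimal sketch of a single "modular unit," an artificial neuron, along with the learning loop that nudges its connections. This is a hypothetical toy, not how production AI systems are built, but the principle of adjusting connection strengths to reduce errors is the same.

```python
def neuron(inputs, weights, bias):
    # One modular unit: a weighted sum of inputs, passed through
    # a simple threshold "activation."
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def train_perceptron(examples, epochs=20, lr=0.1):
    # Start with neutral connections, then learn from experience.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - neuron(inputs, weights, bias)
            # Nudge each connection in the direction that reduces the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Teach the neuron the logical AND function from examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
print([neuron(x, weights, bias) for x, _ in data])  # [0, 0, 0, 1]
```

Real neural networks stack thousands or millions of these units in layers, but each one follows this same pattern: take in signals, weigh them, pass the result along, and adjust when wrong.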
In recent years, one application of machine learning has been circling the news cycle—sometimes on purpose and sometimes slipping in undetected at first. Deepfakes are AI-generated video and audio in which one person's voice and/or face has been replaced with another person's. Deepfakes are incredibly realistic, and have been used in everything from blackmail and political attacks to TikTok videos of Tom Cruise doing a coin trick.
Recently, a video appeared to show President Volodymyr Zelenskyy of Ukraine telling his troops to "lay down arms" and surrender to Russian soldiers. The video was boosted in Russian media and on social media until it was debunked as a deepfake.
Deepfakes come from not one, but two AI neural networks pitted against each other (the duo is called a "generative adversarial network," or GAN). The first neural network is charged with creating an image, while the second judges whether the image is real or fake. If the second correctly flags a fake, the first network goes back and improves it. Back and forth they go, the two neural networks duking it out until the first consistently outwits the second, producing incredibly realistic media.
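The back-and-forth structure can be caricatured in a few lines of code. In this toy sketch (not a real GAN, which would use two full neural networks trained by gradient descent), the "real" data are numbers near 10, the discriminator refines its model of what real data looks like, and the generator keeps adjusting its output to fool the discriminator:

```python
import random

random.seed(0)

# "Real" data: numbers clustered near 10.
real_samples = [10 + random.uniform(-0.5, 0.5) for _ in range(100)]

disc_estimate = 0.0  # discriminator's belief about what "real" looks like
gen_output = 0.0     # generator starts out producing obvious fakes

for step in range(200):
    # Discriminator's turn: refine its sense of "real" from a real sample.
    real = random.choice(real_samples)
    disc_estimate += 0.1 * (real - disc_estimate)

    # Generator's turn: move its output toward whatever the
    # discriminator currently accepts as real.
    gen_output += 0.1 * (disc_estimate - gen_output)

print(gen_output)  # ends up near 10: the fakes now pass for real
```

The key idea survives the simplification: each side's improvement forces the other to improve, and the loop ends with a generator whose fakes are hard to tell from the real thing.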
AI algorithms can be incredibly intelligent, outperforming humans on a huge range of tasks, but they can also miss really obvious things that humans would never miss. Simply adding stickers to a stop sign can make it unintelligible to an AI while having no impact on a human's understanding. To improve on these mishaps, researchers are developing intelligence tests for AI. Many of these tests involve finding patterns that humans can readily see (like some of the patterns in Numbers in Nature) but that AI algorithms tend to struggle with.
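A toy example hints at why a few stickers can fool a machine. Suppose a (completely made-up) classifier decides "stop sign" by counting bright pixels against a threshold. Darkening just two pixels, a change a human would barely notice, can push it across the threshold and flip the answer:

```python
def classify(pixels, threshold=7):
    # Hypothetical, deliberately brittle classifier: counts how many
    # pixels are bright and compares against a fixed threshold.
    bright = sum(p > 0.5 for p in pixels)
    return "stop sign" if bright >= threshold else "not a stop sign"

sign = [0.9, 0.8, 0.9, 0.7, 0.8, 0.9, 0.6, 0.9]
print(classify(sign))  # stop sign

# "Sticker": darken two pixels. A human still sees the sign clearly.
stickered = sign[:]
stickered[2] = 0.1
stickered[5] = 0.1
print(classify(stickered))  # not a stop sign
```

Real image classifiers are far more sophisticated, but the failure mode is analogous: small, targeted changes exploit the rigid statistics the model relies on, while human perception shrugs them off.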
Improving AI intelligence is critical to ensuring that when our safety is in the hands of an AI algorithm, we know we can trust it. While improvements in computing hardware and AI algorithms will continue to boost machine intelligence, some researchers think the best way to improve AI is to give it experiences that most resemble human experiences, such as loading an AI into a robot and letting the robot interact with the physical world around it. Sounds like the next guest star at the Robot Block Party!