Here’s a snippet of an article I just read at The Verge:
...the dream of a fully autonomous car may be further than we realize. There’s growing concern among AI experts that it may be years, if not decades, before self-driving systems can reliably avoid accidents. As self-trained systems grapple with the chaos of the real world, experts like NYU’s Gary Marcus are bracing for a painful recalibration in expectations, a correction sometimes called “AI winter.” That delay could have disastrous consequences for companies banking on self-driving technology, putting full autonomy out of reach for an entire generation.
This highlights a mistake I’ve seen growing for the past year or so: the author assumes that autonomous cars, or any other sufficiently complex program, represent Artificial Intelligence. But AI is something pretty specific: the hoped-for ability of a computer program to learn new facts and then use them to make better decisions. Ideally it would be able to modify its own programming, which is (so to speak) what humans do; as an approximation of that goal, I think AI researchers would be happy to achieve code that stores new facts in a database and refers back to that database, and to call that “intelligence,” although strictly speaking it may not be.
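To make that loose definition concrete, here’s a toy sketch of my own (not anything from the article, and the class and method names are invented): a program that “learns” only in the weak sense of storing new facts and consulting them when deciding.

```python
# Toy illustration of the weak "fact database" sense of learning:
# the program records observed facts and refers back to them later,
# but never modifies its own decision-making code.
class FactLearner:
    def __init__(self):
        self.facts = {}  # the "database" of learned facts

    def learn(self, situation, outcome):
        # Store a new fact gained from experience.
        self.facts[situation] = outcome

    def decide(self, situation, default="proceed cautiously"):
        # Consult stored facts; fall back to a default when none apply.
        return self.facts.get(situation, default)

learner = FactLearner()
print(learner.decide("icy road"))   # no fact yet: "proceed cautiously"
learner.learn("icy road", "slow down")
print(learner.decide("icy road"))   # now: "slow down"
```

Whether that deserves to be called intelligence is exactly the question; the point is that even this weak notion of learning is more than a chess engine or a self-driving stack that simply executes the rules its programmers gave it.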
But a complex program that plays chess, or attempts (badly) to play bridge, or that drives a car without human interaction, that’s not AI; that’s just an impressive program.
I’ll bet trying to correct this error is about as hopeless a task as getting people to remember that “hacker” doesn’t mean someone who breaks into others’ computers... or didn’t originally.
Thu Jul 5