In the mid-1990s, Bill Gates wrote a major bestselling book called The Road Ahead. If it mentioned the internet, I must have missed it.
Around the same time, Jeff Bezos visited The Washington Post to tell the personal tech columnist about his plan to sell books online. The columnist later acknowledged that he had shown Bezos to the elevator.
In that decade, Mark Zuckerberg was still in grade school. At Harvard, “The Facebook” was how incoming students glimpsed pictures of other freshmen, an early analog of today’s practice of swiping through photos on dating apps.
We have been transformed by these things we did not know about and certainly could not have understood. Since then, in the twenty-first century, we have been told that blockchain, the Metaverse, and lesser breakthroughs like Google Glass were the next big deal.
Now, ai, yai, yai, it is all about AI, artificial intelligence.
AI is amazing, frightening, revolutionary, and -- to what I imagine is a great many of us -- just confusing. Here is a sample of headlines from a single week or so this summer on Bloomberg:
AI Won’t Supercharge the Economy
There’s Too Much Money Going to AI Doomers
Do Oppenheimer’s Warnings About Nuclear Weapons Apply to AI?
CEOs Must Soldier On Even As AI Anxieties Loom
Rather than attempt to decipher AI for myself (let alone anyone else), I’ve been looking back at other transformative developments in the twentieth and twenty-first centuries and how society accommodated their impact: electricity, telephones, automobiles, aviation, radio and television, nuclear fission, the space race, and, of course, the internet. Each of these has come with its own consequences, context, and economics. For example, the automobile has taken millions of lives in crashes, and it was decades before seat belts and other safety devices were mandated.
Now it’s all about electric and self-driving cars. Planes -- prop, jet, supersonic -- are remarkably safe by comparison, but every time one goes down it is terrifying and there is a new round of scrutiny and regulation.
Television swept across the country in the 1950s. From the outset, regulation of the TV sets themselves as appliances was separate from oversight of the content. What was called “a vast wasteland” on broadcast in 1961 is now more pervasive than ever on cable, streaming, and, of course, the internet. The New York Times has helpfully compiled timelines for government regulation of these advances in civilization, to make the point that they take time, often decades.
For more than a quarter century, the internet has been a different matter. We use devices to access digital materials – computers of various kinds and what are universally called “phones” – yet we still haven’t figured out how to manage the content: how to monitor it or monetize it for the good of all, and what to do with something that is indispensable and still out of control.
And then there is Elon Musk and his Tesla, SpaceX, and X (formerly known as Twitter). How has a person so mercurial and so erratic become so dominant? Who actually manages these companies as Musk goes through his bizarre hourly peregrinations? And now he too is after AI.
In 2009, I wrote a lengthy essay for the Columbia Journalism Review on the issue of the day, titled “What’s a Fair Share in the Age of Google?” Given the (print) publication it appeared in, the piece dealt primarily with news in the “link economy” – barely more than a decade after we had first encountered the term.
I raised three questions that the essay tried to answer: how to measure fair use, fair compensation, and fair conduct. The answers were tentative and incomplete, and they still are. But the questions remain the same as we embark on the AI era. ChatGPT and its many variants draw on a seemingly inexhaustible amount of digitally transmitted information and data. Determining who should get credit and/or payment for the effort of creating that data is only beginning to be considered.
Say there’s a mistake, or worse, in the material that appears: where does a person go to complain and get a correction?
Disinformation or deliberately corrupted data is inevitable. What do we do if we can’t identify its source or even measure its consequences?
I manage to write and publish these Substack pieces, which suggests some familiarity with today’s communication technology and protocols. Do I understand the algorithms of readership? No. Do I have a clue what it means to have Facebook friends or LinkedIn connections? I almost never hear from them.
Why? Because, I’ve been told, I don’t chase after them with assiduous posts. So, it’s my fault...
I use the internet. I rely on it all the time. How many of us these days have really figured it out?
So, bring on AI. It will definitely make a difference, on a scale TBD. History would indicate – and there is reason to believe the major AI mavens recognize this – that we should take the time to understand exactly what artificial intelligence is.
Stumbling into the future is perilous.