

May 2, 2023
5/2/2023 | 55m 38s
Connor Leahy and Marietje Schaake; Yo-Yo Ma; Ben Smith
Connor Leahy and Marietje Schaake discuss the dangers of A.I. In his latest project, "Our Common Nature," cellist Yo-Yo Ma seeks to enhance our humanity by deepening our ties with the natural world. Ben Smith, founding editor-in-chief of Buzzfeed News, explores the history of online journalism in his new book, "Traffic."
>> Hello everyone and welcome.
Here's what's coming up.
The godfather of artificial intelligence sounds the alarm about his own dangerous creation.
Is AI a major threat to humanity or a world-saving breakthrough?
I ask a senior AI researcher and the head of cyber policy at Stanford University.
Also.
♪ An ode to mother nature.
World-renowned cellist Yo-Yo Ma tells me about his new project, the harmony between music and our natural world.
Then.
>> You didn't want to write things that weren't true.
We were trying to do traditional journalism in a new form.
>> Asking Ben Smith where the race to go viral went wrong.
♪ >> Amanpour and Company is made possible by the Anderson Family Fund, the Leila and Mickey Straus Family Charitable Trust.
Mark J. Blechner.
Bernard and Denise Schwartz.
Committed to bridging cultural differences in our communities.
Barbara Hope Zuckerberg.
We try to live in the moment, to not miss what's right in front of us.
At mutual of America, we believe taking care of tomorrow can help you make the most of today.
Retirement services and investments.
Additional support provided by these funders.
And by contributions to your PBS station from viewers like you.
Thank you.
>> welcome to the program everyone.
The man known as the godfather of artificial intelligence is now scared of the very technology he helped pioneer.
Geoffrey Hinton has left Google to warn the world about the dangers of AI.
His decades-long research has shaped the AI products and systems that we use today.
In 2018, he was a co-winner of the Turing Award, the Nobel Prize of computer science.
Now he says he regrets his work.
Here he is speaking to the BBC.
Geoffrey: We discovered it works better than we expected.
What do we do to mitigate the long-term risks of things more intelligent than us taking control?
Christiane: He joins a chorus of experts worrying that bad AI could conceivably even lead to the extinction of the human race.
Today, Samsung banned its staff from using tools like ChatGPT, citing security concerns.
Meanwhile, IBM announced that it will pause hiring for roles that AI could potentially fill.
That puts 8000 jobs at risk in the next five years.
How do we innovate and protect our future by ensuring the so-called moral alignment of this expanding technology?
First, to an expert: the CEO of the AI company Conjecture, Connor Leahy.
Welcome.
Thank you very much indeed.
Do you share Geoffrey Hinton's worries?
>> Absolutely.
>> Do you believe that it's not inconceivable that it could lead to the extinction of the human race?
Connor: I think it's quite likely, unfortunately.
I'm not the only one saying this.
More and more people, such as Hinton, the closest we have to an Einstein in the field of AI, are now taking these risks seriously and going public to speak about them.
Christiane: This is very dystopian.
You say, not just conceivably.
How?
What is the current danger and the nature of this technology that is so dangerous for us?
Connor: The companies working on this technology explicitly state that what they are trying to build is godlike intelligence.
They are not trying to build an autocomplete system.
It is explicitly stated in their founding doctrine.
Christiane: What does that mean?
Connor: That it outstrips humans in every form of capability: every type of reasoning, every physical task, every type of skill-based task.
I believe that if we create a system of any kind that is vastly more intelligent than the human race, I don't expect that to end well.
Christiane: So, what can be done now?
Some of these people, big AI and tech giants, names that we recognize, signed a letter, 2,000 of them, calling for a pause.
Do you remember that?
What were they saying, and what happened?
Connor: The point of that letter was to call for a six-month moratorium on the development of AI systems larger and more powerful than those that have been built so far.
It's quite important to quickly explain the difference between AI systems and traditional software systems.
With a software system, you write code.
A programmer writes code which solves a problem.
You have a problem, you want it to do something, and you write the code to make it do that.
AI is different.
AI's are not written.
They are grown.
You have a sample of data of what you want to accomplish.
You don't know how it will solve the problem.
You just have a description or a sample.
You use supercomputers to crunch these numbers and organically grow a program that solves these problems.
Importantly, we have no idea how these programs work internally.
They are black boxes.
We don't understand how their internals work.
This is an unsolved scientific problem.
We don't know how to control these things.
Christiane: This is the bit I don't understand.
Human beings are making the stuff, the hardware, the bits.
How do you not know?
This is the part I find difficult to comprehend.
How are you not able to control it?
Connor: It's a great question.
We can take examples from synthetic evolution in biology.
Sometimes you would like a bacterium that produces better milk, for example.
We don't really know how all of the genes work in the bacteria.
We can select for good milk bacteria.
We can try different bacteria and keep the ones who make good milk.
Then we breed those and we get some more and so on.
It is similar to this.
It's not exactly like this.
Basically, instead of us writing a program, we try an incredible number of programs and search for the ones that work best.
But the way these programs are written is not in human language.
It's not in code.
It's in neural weights.
You can imagine it as a massive list of numbers, billions of numbers, like knobs on a box.
You have a supercomputer that twiddles all the knobs.
Really fast.
Eventually, it finds a setting of the knobs that works.
What do they mean?
It's unclear.
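To make Leahy's knob-twiddling picture concrete, here is a minimal, hypothetical sketch in Python of "growing" a program by blind search over numeric knobs, judged only by a score on sample data. The toy task and all names are invented for illustration; real systems tune billions of weights with gradient descent, not two knobs with random nudges.

```python
# Toy illustration of "growing" a program instead of writing one.
# Illustrative only: real AI training uses gradient descent over
# billions of weights, not random search over two knobs.
import random

def make_data():
    # A sample of what we want: pairs (x, 2x + 1). We never tell the
    # search HOW to compute this, only what good outputs look like.
    return [(x, 2 * x + 1) for x in range(-5, 6)]

def error(knobs, data):
    # Score a candidate program: mean squared error on the sample.
    a, b = knobs
    return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

def grow_program(steps=20000):
    data = make_data()
    knobs = [random.uniform(-5, 5), random.uniform(-5, 5)]
    best_err = error(knobs, data)
    for _ in range(steps):
        # "Twiddle the knobs": nudge them and keep whatever scores better.
        candidate = [k + random.gauss(0, 0.1) for k in knobs]
        cand_err = error(candidate, data)
        if cand_err < best_err:
            knobs, best_err = candidate, cand_err
    return knobs, best_err

if __name__ == "__main__":
    knobs, err = grow_program()
    # The search lands near (2, 1) without ever being told the rule;
    # we only see the final knob settings, not a human-readable reason.
    print(f"knobs={knobs}, error={err:.6f}")
```

The point of the sketch is the last line: what comes out is a list of numbers that happens to work, which is exactly why such systems become black boxes at scale.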
Christiane: For those who are not critics of this, can you give an idea of how it could be used for the betterment of humanity?
Can AI solve world peace?
Connor: At this point, definitely not.
I think it's important to be clear.
What makes humanity great is our intelligence.
The reason we are where we are is that we have intelligence.
We developed technology.
We have language and culture.
We developed societies and art and beautiful things.
These are wonderful things.
I love intelligence.
I love being human.
I love my human friends.
AI can help us with this, of course.
We are seeing a revolution in tools that allow us to automate simple tasks and complex tasks, new art and media, and tools that let us translate text much better than any previous method allowed.
We are breaking down the barriers between languages.
So can intelligence at some point solve these problems you described?
Christiane: Like global hunger.
Connor: I mean, probably.
I don't know obviously.
Not current systems.
If we had a system which is superior to humans in every conceivable facet, I would expect it to be capable of solving problems that humans currently can't solve.
Christiane: Currently, what is its main positive?
We hear the word efficiency.
To many, that means replacing humans, as we just saw IBM might.
Connor: Yeah.
I wish I had a positive story but there isn't one.
This is a classic risk that happens when you get modern technology and better tools.
Some people get replaced.
Usually new jobs are created until they aren't.
At some point, we will run out of things for humans to do.
When we created machines, it allowed people to do more cognitive labor because the machines could do the heavy lifting.
But if the machines do all the talking and thinking, what's left?
I don't know.
Currently, they are still very useful.
There are many applications in science and medicine that benefit greatly from artificial intelligence technology.
There are therapeutics, generating art or code.
Many developers nowadays use products that aid them, answer their questions, and make writing code faster, which is convenient.
Christiane: In some of the reading I've done, it appears that what is scary is that the amount of resources put into the capability of this AI outstrips the resources put into the safety aspect of it, what they call moral alignment, making sure it's not bad and disruptive.
Can you see that continuing like that?
Connor: it seems unsustainable to me.
Billions of dollars, tens of thousands of our brightest engineers and scientists are working to create ever more powerful systems.
The number of people who work full-time on the alignment problem is less than 200, if I had to guess.
Christiane: Making it safe, the moral alignment.
Connor: The AI safety field in general, including other concerns, is larger, but not by much.
The AI alignment field addresses the question: if we build superintelligence, how do we make that go well?
It's an important scientific and engineering problem that we have to understand.
The number of people working on this, and the amount of funding accessible to them, is extraordinarily small.
Christiane: Can you put the genie back into the bottle?
How do you regulate?
What does your company do on this?
Connor: Great question.
My feeling on regulation here is, can you put the genie back in the bottle?
The answer is, I don't know.
I hope you can.
I think this is going to be necessary to some degree.
If we continue at this pace and continue to let the smoke billow out of the bottle, it won't end well.
The first step I would advocate for is that the public deserves to know what's going on.
This is a topic that people have been talking about for years.
Heads of the labs have stated that they think there are extinction risks with these things.
These are all discussions that the public isn't informed about.
I think parliaments and Congress should call upon these labs to testify under oath and state what's going on, how risky they think these things are, and what we can do about it.
And then we also have to think about how we put the genie back in the bottle, how we progress safely.
There are ways to do this.
Christiane: I wonder what your company is doing.
The CERN model: the biggest particle physics lab in the world operates not necessarily on a profit model.
They do experiments and research in isolation, not in public, until they've developed the right things.
Connor: I would love this.
I think this would be fantastic.
I would love if governments could come together and control AI resources in particular.
There are many small applications which do not pose significant risks.
The superintelligence research, though, let me be frank here.
There is currently more regulation on selling a sandwich to the public than there is on building godlike intelligence at private companies.
There is no regulatory oversight.
There are no audits.
There are no licensing processes.
There's nothing.
Anyone can grab a billion dollars and start doing cutting-edge work on this and releasing it on the Internet and no one can stop them.
Christiane: Why did that call for the six-month pause go nowhere?
What happened?
Connor: That's a good question.
I would like to ask this question to the regulators.
A lot of people aren't informed.
There's a very funny dynamic that happens very often when we talk to other people in the field: people assume that we can't stop this, that people don't care.
I think people do care.
This is something that affects all of us.
It's not something that a few tech people should be able to decide upon.
This is something that affects all of us.
It affects all governments, all people.
It is not something far away.
Geoffrey Hinton himself has said that he used to think this was decades away.
He no longer thinks this.
This is something that will affect you and me and our children.
Christiane: Connor Leahy, thank you so much indeed.
Let's turn now to a deeper dive into regulation.
How can we do this?
Marietje Schaake works with the Stanford Cyber Policy Center, and she says tech is facing a regulation revolution.
She joins me from Stanford.
Welcome to the program.
You heard us talking about this.
How it's not something in the distant future.
First and foremost, what can you do?
It goes back to the splitting of the atom: people will pursue whatever progress and innovation they can.
What needs to be done, and what can be done, to regulate?
Marietje: Right.
What we are seeing is this race between companies that are really looking more at their competitors, at how quickly they can churn out products and updates, and are releasing very under-researched AI applications into society just like that.
I think the companies, Microsoft, Google, are losing track of the real issue here, which is the societal risk that we need to focus on.
The fact that these companies have so much power and agency to experiment in real-time with all the risk that the experts are pointing to is unacceptable.
It's important that democratic lawmakers step up, starting with which laws already apply.
I don't agree with the notion that it's entirely a lawless space.
Discrimination is illegal.
When AI discriminates, it is still illegal.
But will we find out?
Can we look into the inner workings of the apps that companies build, to see whether we have been mistreated?
Part of that is known.
There are systemic biases built into the way that data sets are formed in the way products are built on top of those data sets, discriminating against black people for example.
Part of it we may never know.
All of this powerful technology, all the insights into it are in the hands of private companies.
That is a risk to the rule of law.
Christiane: Let me play this from Steve Wozniak, who spoke on CNN this morning.
He's one of the cofounders of Apple, and he is speaking out about this.
I will play this little bit.
>> Look at how many bad people out there are hitting us with spam, trying to get our passwords and take over our accounts and mess up our lives.
Now AI is another more powerful tool.
It will be used by those people for basically evil purposes.
I hate to see technology being used that way.
It shouldn't be.
Probably some types of regulation are needed.
Christiane: So it's really interesting that the actual developers of all of this are the ones sounding the loudest alarm.
Is there anything underway right now, by countries, by intergovernmental bodies, or by individual actors like the EU? Is anything happening to regulate right now?
Marietje: Absolutely.
The EU is in the final stages of concluding an AI Act, a law that applies to AI applications and the way they are used, for example in screening people's CVs when they apply for a job or for college.
But also when there might be fraud detection through AI systems.
Very complicated and consequential applications, where the EU says there's a risk attached to how AI can make decisions about one's liberties, one's rights, and one's access to information, education, or employment.
In that context, there need to be mitigating measures in place, depending on the level of risk.
So I think the question is, what will the final AI Act look like?
It is in the final stages of negotiation, not least because of the generative AI developments that have happened since this law was first drafted.
But the EU is definitely leading when it comes to developing laws specific to AI.
Christiane: You are head of the cyber policy unit at Stanford University.
What is the United States doing?
It is, frankly, the leader in all of this tech innovation.
Marietje: well, I don't think the U.S. Congress or American leadership is doing enough in the interest of the public, in the interest of the rule of law in preserving democracy.
There's been a long-term trend in the United States to trust market forces.
That may explain why there's no federal data protection law, and data is a very important resource for AI.
If you don't have laws governing the use of data, that will bear on the way data can be fed into the training of these AI applications.
So you see almost a domino effect now happening in the United States.
A lack of regulations.
The political climate is so polarized.
Very few people have high expectations of what Congress can achieve.
I do have to say that the concerns about AI are now making for coalitions of concerned politicians that I've never seen before.
There are Democrats and Republicans concerned.
People in Europe and the United States are concerned.
Maybe, just maybe, the urgency that we hear expressed by the experts working in these companies should make us all, including public leaders, wonder: what do they know that we don't know?
How do we find out exactly what's going on?
We have to realize what people are talking about here.
The destruction of the human race.
The end of human civilization.
Who would want to continue playing with that risk?
It's preposterous.
I think it's almost absurd when you think about it.
But it's happening today.
Companies are continuing.
Letters may be written to say, we need to pause.
People may raise alarm bells and resign their jobs.
But there's not enough investment and real action by the experts to say, we need to change our behavior in the interest of protecting humanity.
Christiane: I mean, it's extraordinary.
It sounds absurd that serious people like yourself, that these tech people, can talk about the end of the human race.
It really concentrates the mind.
In the meantime, the threat to democracy.
You've seen the deepfake, AI-generated Republican ad that was launched in response to Joe Biden's presidential reelection campaign.
You may have come across some of these things apparently commissioned by the president of Venezuela or his people.
There were deepfake accounts trying to confuse everyone, with fake anchors and fake news about how wonderful Venezuela is and how great the economy is, everything that it is not right now in terms of infrastructure and political dysfunction.
YouTube took them down.
The company in question then said that they would ban people using their AI for that kind of behavior.
Is that enough self-regulation?
Marietje: Unfortunately, it's not.
Many companies are offering synthetic media options.
Ways in which people can start creating things at home.
Many viewers will have experimented with generating images or text.
We've heard the concerns about ChatGPT doing people's homework or academic papers, or writing code.
It's increasingly easy to generate synthetic media and the quality will become better and better.
Indeed, it will erode trust.
It can amplify and make it much easier to generate a lot of misinformation at a moment where we don't need more undermining of trust or confusion in our democratic society.
So I think that's definitely a source of concern.
We heard people talking about how we have to be careful that bad actors don't get their hands on this technology.
It's a very political question.
What is good, what is bad?
Those are political questions that are being answered by companies.
I take issue with the notion of calling it a godlike capability.
We shouldn't forget, these are not artifacts that fall from the heavens above.
These are artifacts that are sought after, designed, improved, and tested by people, over and over again, day in and day out.
It's concerning that a lot of the people who have invested in this technology actually pushed the frontier to the point where we are now, only to suddenly realize what a risk it is to so many things, including democracy.
It's definitely concerning.
I wish I had a good solution to make sure that people could detect synthetic media, but it will be incredibly difficult for people to discern the authentic from the computer-generated.
Christiane: it's extraordinary.
We will keep our spotlight on this.
Presumably you are doing stuff on your powerful platform in the heart of technology land at Stanford University.
Marietje Schaake, thank you very much.
I want to read this by Robert Oppenheimer who led the U.S. effort to develop the atomic bomb.
He said: when you see something technically sweet, you do it, and you argue about what to do about it only after you've achieved success.
We heard why that's a very dangerous pattern to follow.
Let's turn now to how we can enhance our humanity by actually deepening our ties with the natural world, through music even.
The ever-innovating, world-class cellist Yo-Yo Ma is exploring this unity in his latest project, "Our Common Nature."
He has swapped playing for presidents and royalty to perform in, and for, nature.
Here's a clip of him playing Bach at the foothills of the Great Smoky Mountains.
♪ That's beautiful in sound and vision.
Yo-Yo Ma joins me now from Chicago.
Welcome back.
I wonder, before we talk about your antidote to this craziness that we've been experiencing, has it come across your desk as well, the threat of AI?
Yo-yo" your last interview, we talked about the erosion of trust.
That makes me think about what it is that we as humans, what our purpose is.
From looking at nature or talking about AI or music, all the technical means that we have to achieve something in music, we talk about, music starts to happen when we transcend technique.
Right now, we are talking about the technique of AI.
But what we aren't talking about is, what is our common human nature and purpose?
Another value that comes to me from music is the idea that you are working towards something that's bigger than yourself.
Christiane: That is the title.
That's your project, isn't it? The "Our Common Nature" project.
You've been performing in these amazing landscapes: the Grand Canyon, the Smoky Mountains, and elsewhere.
What motivated you?
What made you think of going out and doing that, there, now?
Yo-Yo: Well, I have to admit, I'm a city boy.
I'm an urban dweller.
I lived in Paris, New York, Boston.
Lately, I've realized that the time that I spend in nature is what brings me back to something much bigger than myself.
I'm going to ask you a question.
It brings me to wonder.
Here's a question for you.
Who said this: a shaman, a scientist, or an artist?
Nature has the greatest imagination but she guards her secrets jealously.
Christiane: I'm going to say it was a scientist.
Yo-Yo: You're so right.
[LAUGHTER] You are so right.
Christiane: What is your message?
Yo-Yo: There are two groups of people that hold new knowledge and old knowledge, and I'm fascinated by them.
These people are indigenous folk, natives, and scientists.
I think we know so much.
We have such capacity, but what is the purpose of that capacity?
If it is to advance humanity, that's one thing.
If, as your last interview said, there is a distinct erosion of trust, then we ask why we are living, what our purpose is, what we are to care for, and what our job is as individuals, citizens, and family members, to ourselves as well as to the world around us.
If we find ourselves to be part of nature, we start to care for it the way we try to care for ourselves.
Christiane: We have some of your performance in Kentucky this past weekend.
Let's see you there at Mammoth Cave National Park.
♪ ♪ Christiane: It is extraordinary looking at this.
It's all dark, and you have the lights over the music, and you can see the audience behind you.
You have said that this is not transactional for you.
You are making relationships, and you are not going to end these relationships and just go on to other places.
What are you getting from the people you encounter in these outdoor, natural environments?
Yo-Yo: Community building.
Everybody we talked to, Teddy Abrams, Zach the staging director, the park rangers, the citizens, the guides, said, oh my gosh, you must do this, with 1,500 people standing around.
You need to tell the story of those caves: millions of years old, with 5,000 years of human history, from natives and indigenous people onward.
Its story is written right in there.
But it takes a musical narrative to bring it into the hearts and minds of the people who are listening.
In the War of 1812, all the ammunition, Jefferson said, would be available from the saltpeter dug out of that cave.
It was the second-largest visitor site in the United States in the 1800s, after Niagara Falls.
The descendants of the owners of the land and of slaves, seven generations of slaves, are the guides who are friends and who are leading the thousands of people who go into the caves every month, and it tells a story of our country's history.
But much more so.
It goes way beyond.
That is one way of concretely using culture to show and to make us feel what a country's history is, but in relationship to our planet.
And I think, you know, to have that in concrete form, I think, changes lives and gives us a different perspective.
Christiane: You are known for performing on many occasions, whether at inaugurations, in times of global mourning, or in great celebration; you bring the U.S. and the world together.
Now you are doing it in a completely different environment.
Are you trying to bring us all together for the planet, for culture, or for what?
Yo-yo: First of all, I'm trying to explore what I'm interested in.
And I think at my age, I am very much interested in meaning and purpose.
If we go back to the founding of nations, which is a human invention, we need to examine what our purpose is, what meaning is, and what our relationship is to each other as well as to the world around us.
If we can find that, then we can solve the problems of AI.
But it is about building trust and searching for truth and making sure that what we discover is in the service of all of us.
Very much like the model you talked about before.
Christiane: Amazing.
Thank you so much.
Now, to the social media revolution.
BuzzFeed News, one of the first of its kind, is shutting down, while Vice Media struggles.
Ben Smith was its founding editor-in-chief, and he joins Walter Isaacson to discuss what the last decade has shown us.
Walter: Thank you, Christiane. And Ben Smith, welcome to the show.
Your great book "Traffic" is all about BuzzFeed and Gawker, when everyone was chasing traffic.
It feels that era may be ending, now that BuzzFeed News, which you were a part of, has closed down.
Is this the end of an era and what is happening now?
Ben: The era was defined in the 2010s.
I would say that when Joe Biden got elected, that was a sign that people were tired of the drama and conflict that defined that era.
But in the last few weeks, it has really felt like this is drawing to a close, and we are figuring out what is next.
Walter: You helped found BuzzFeed's news division.
Why did it close?
Ben: There are a lot of reasons.
The biggest reason is that our goal was to build a new news channel for the social web.
We imagined that these new platforms, Facebook and Twitter, were the new distribution channels, and we would do news on them.
I don't think they are enduring.
The whole era is changing, and consumers are moving away from them.
The biggest problem is that we were building for an age that never arrived, that came and went.
Media companies imagined they would be the ones that made the money off of this, but Facebook in particular was the only company that did.
Walter: You talk in your book about the guy who starts BuzzFeed and his rival who starts Gawker.
Tell me about them.
Ben: When I went back to figure out the origin moment, it was in Manhattan in the early 2000s.
Two guys who were optimistic about the kind of internet that would produce Barack Obama.
That is Jonah Peretti, who started BuzzFeed, believing the kinds of things that people were going to share on Facebook would be more constructive and more positive.
And Nick Denton, who founded Gawker, wanting people to express the things they wouldn't express before.
Not the polite old truisms, but the real things that journalists would say to each other in bars.
The porn that they really wanted.
Walter: What was Jonah's insight about going viral?
Ben: The core insight was that where media had been distributed through cable and broadcast towers, on the internet it spread hand to hand: we were our own distributors, and people shared the things that they wanted to share.
That was the core insight.
And the insight is neutral: that can be pictures of kittens, or that can be antisemitic propaganda.
Walter: Did it turn out to be neutral, or, as Steve Bannon says, does more enragement mean more engagement?
Ben: It played out in a lot of different ways, and it began with a lot of harmless stuff.
By the mid-2010s, Facebook and Twitter had been set up, and the rules of the game were clear: what was most successful was the most engaging thing.
If I say something unbelievable, and you reply telling me to kill myself, and we have a 15-minute comment exchange, the platforms say, fabulous, these people are engaged.
Walter: Is that because they had to inflame us and enrage us, or could the platforms have been built in a way that, as Jonah would have wanted, connected us and made us feel better?
Ben: I think those were technical choices, though elements of human nature are not avoidable.
Some of us, Jonah and me among them, thought people were basically better than they are, that people would never publicly say the sorts of things you now see on the internet.
Walter: The relationship between BuzzFeed and Facebook seems to drive this book.
How did Facebook affect the decade?
Ben: Facebook's engineers were trying to get people to use their platform.
For a while, that felt kind of delightful to consumers.
Then Facebook freaked out about it and started taking criticism from people like you and me, and started to try to figure out: how can we keep our business, keep it sticky, and keep people engaged?
Walter: What was the biggest mistake?
Ben: After Donald Trump was elected, they said, you know what?
People are engaging in ways that are not meaningful to them, and they feel bad about it. So they switched to a technical measure called meaningful social interaction, which is about signs, like writing a comment, that you really care about this thing that you saw.
And what it did was inflame the absolute worst.
And there is an email that Jonah sent saying, I don't know if you see what you are doing here, but we are finding that the things that spread most on Facebook are inside jokes, about race in particular, that escape that inside context.
There was a post, like "things white people like to do," that was a funny joke among friends, but if it spread widely enough, people found it offensive and insulting.
And Facebook said, this is meaningful, let's show it to more people.
And it was amplifying the most racially divisive content of the time.
Walter: When you look at the BuzzFeed crowd, it was generally trying to find a way to extend that Barack Obama hope moment, and yet it ends up producing not only populism but a Donald Trump.
How did that happen?
Ben: We got someone like Obama and thought that represented the culmination of this internet.
Obama visits Facebook; Facebook is a Democratic Party thing, young progressive people.
And yet, all along, there were the people who would found the new far right.
Andrew Breitbart was among the founders of The Huffington Post, and he was in the BuzzFeed offices, and they were learning from all of these techniques that we were creating.
But we were very constrained. We didn't want to write things that weren't true, and we were trying to do traditional journalism in a new form, with all of the caveats and questions about fairness that came with that.
In 2016, I sat down with Steve Bannon in trump tower and he was totally perplexed we didn't turn into a Bernie sanders' propaganda outlet.
He said that's where the traffic was and the signal he had followed.
They were bent on tearing down the system, and they were much more successful within the system than anyone else.
Walter: In your book there is a wonderful chapter on Matt Drudge.
He is the godfather of aggregation, of giving people things to click on, but with a political slant.
And you talked about a seminal moment in internet history when I was at "Time" magazine.
Nobody was publishing the story on Monica Lewinsky.
Drudge publishes it.
There were no longer gatekeepers.
Tell me how that affected all of this.
Ben: This assault on the gatekeepers: in a way, that is how I saw my work at BuzzFeed.
This was soon after the Iraq war, and gatekeepers were seen as corrupt, as having really profoundly messed up the most important story of that generation.
And so there was this real kind of positive energy around the idea that we have to build a new media that is more transparent, open to outside voices, and listens to people.
The weapons of mass destruction failure was feeding a lot of that energy.
I think if you look back now, you say, we did a number on these institutions, they are in terrible shape, and is the project now to buttress the remaining ones?
Walter: You think maybe this helped undermine our institutions, and you're kind of sorry for that?
Ben: I do.
I think that the institutions, well, it's complicated.
These institutions earned their undermining.
The anger after the Iraq war was totally justified, and it was healthy for them to face a challenge.
That said, 15 years later, I don't think it was just a few blogs attacking them.
All institutions in society have been shaken by a number of factors.
But I do think, if you think about where we are now, the project is about building new institutions and buttressing and strengthening the existing ones that came under assault, in part from social media.
Walter: One of the most self-reflective chapters is about the Steele dossier.
I'd like to turn to this.
BuzzFeed News published the dossier. Tell me whether you did that right or not.
Ben: I do.
I think we should have published it.
I think the specifics matter.
Probably the reason we published it is that we did have this instinct and tendency to say, we are not gatekeepers.
Walter: Let me push back. It was wrong, and it was misinformation.
Ben: Nobody thinks that if I send you an email with false allegations, you should tweet it.
But this document had been influencing American politics. Harry Reid had written a letter to James Comey saying, I know you have these secrets, release them.
And James Comey briefed it to two presidents, the sitting president, Barack Obama, and President-elect Donald Trump, and CNN reported that there is a document that has been briefed and is affecting policy, and it says the president of the United States has been compromised by the Russians.
You can't sit there and say, I have in my hand a list of communists, and not show the list.
The notion that we should sit there and say to viewers and readers, we have seen it, it would burn your eyes, we don't trust you to look at it: as the court later found, I think that is not tenable.
That said, when we published it, we wrote that we had been trying to verify it and had found errors in it, descriptive things that were wrong, and we published the story with those caveats.
The caveats were cast aside, and the document became a symbolic article of gospel.
I don't know if it would have made that much of a difference if we had stapled them together, but I regret that.
Walter: The traffic era, we'll call it, was about trying to capture people's attention; there is a certain number of eyeballs in the whole world that you can capture and sell advertising against.
Was there something structurally wrong about this business model?
Was there something structurally wrong about this business model?
Ben: There was a core mistake about traffic.
People thought, if we can get clicks on the website, we have struck oil: the more clicks, the more money we are going to make.
In 2003, they were selling ads for $9,000, and now sites sell 1,000 times more of them, per view.
But oil is scarce, and traffic turned out to be plentiful, a commodity.
The price today is the price of the kind of ads they were selling in 2003, not adjusted for inflation.
And so the core notion that you could sell limited attention was simply swallowed by the scale, in particular by Facebook, which had infinite inventory.
Walter: And your new publication seems to be trying a new way, one that people will pay for, not beholden to clicks and advertising revenue.
Explain what you are doing and why a few others are saying this is the next way.
Ben: In this new moment, in the rubble of social media and the things that we built, we are asking what consumers want to see.
We hire journalists who know what they are talking about and are transparent about their own opinions, so you can say: here's what I reported, here's what I think about it, and here's what someone else thinks about it.
And we bring in views from around the world and pull it all together, so you don't have to read a story and then Google other stories to try to get to the truth, which is how people navigate this moment.
Walter: Thank you for being with us.
Christiane: And finally, remembering Gordon Lightfoot, whose tales of heartbreak, loneliness, and adventure made him one of the most successful artists of his generation.
He has died at age 84.
He provided a soundtrack for generations; a number of his songs were recorded by Johnny Cash, and Bob Dylan said he wished Lightfoot's songs would last forever.
And we leave you now with the sounds of "Sundown."
♪ ♪