Send a spy to spread rumors on the other side of the front line. Drop leaflets into enemy territory. Debilitate the enemy using its own people, in their own language - Lord Haw-Haw, Tokyo Rose - over their own radios. The tactics of demoralization are as old as politics - as old as war - and now we know what the second-decade-of-the-21st-century version looks like, too.
Pushed by a congressional investigation, Facebook has finally turned over some 3,000 advertisements and links to pages created and paid for by Russian trolls.
Among them was "Secured Borders," a fake, Kremlin-backed "organization" that appeared to be based in Idaho. It pumped out messages about immigrant "scum" and attracted 133,000 followers before it was shut down. In August 2016, its Russian backers actually promoted a rally in Twin Falls to protest an alleged "upsurge of violence against American citizens."
At the same time, a different set of Russian operatives sponsored and advertised two black rappers who bashed "racist b----" Hillary Clinton.
They also borrowed the identity of a Muslim group that claimed Clinton "created, funded and armed" al-Qaida and the Islamic State. Meanwhile, thousands of computerized bots pushed repetitive pro-Trump messages on Twitter, persuading many actual humans to respond.
All these games are familiar: Russians have used similar tactics for years in Europe, where pro-Russian social-media users on Facebook, Twitter and many other platforms have long sought to amplify support for parties of the far left and the far right.
During Germany's recent elections, official Russian media and networks of Russian bots tweeted and posted messages warning of immigration's dire threat to Germany and pushing the cause of Alternative for Germany, an anti-immigrant party.
As in the past, the Russian advertisements did not create ethnic strife or political divisions, either in the United States or in Europe. Instead, they used divisive language and emotive messages to exacerbate existing divisions. As in the past, it's enormously misleading to name "Russia" as the source of the problem.
The old KGB had whole departments devoted to the invention of rumors and the creation of fake extremists; the KGB's institutional descendants simply realized, sooner than most, that social-media campaigns are a cheap way for an impoverished ex-superpower to meddle in other countries' politics. But in 2016, they were one of many groups who built targeted Facebook groups and bought divisive advertisements aimed at carefully sliced and segmented bits of the population.
The real problem is far broader than Russia: Who will use these methods next - and how?
If Russians worked out how to create fake "Black Lives Matter" Twitter accounts, why can't others?
I can imagine multiple groups, many of them proudly American, who might well want to manipulate a range of fake accounts during a riot or disaster to increase anxiety or fear.
I can imagine a lot of people who might want to take control of Defense Department accounts, as Russian hackers also tried to do, to send false information during a military conflict.
There is no big barrier to entry in this game: It doesn't cost much, it doesn't take much time, it isn't particularly high-tech, and it requires no special equipment. Facebook, Google and Twitter, not Russia, have provided the technology to create fake accounts and false advertisements, as well as the technology to direct them at particular parts of the population. Many other countries and political groups - on the left, the right, you name it - will quickly figure out how to use them.
In part, this malicious world grew so quickly out of ignorance - people simply didn't know how any of this worked - but that's no longer an excuse. There is no reason existing laws on transparency in political advertising, on truth in advertising or indeed on libel should not apply to social media as well as to traditional media.
There is a better case than ever against anonymity, at least in the public forums of social media and comment sections, and for the elimination of social-media bots.
Facebook's own experiments have shown that conversations are more civilized when people use their own names. The right to free speech is something that is granted to humans, not bits of computer code.
The alternative is a dystopia in which election-year dirty tricks become a way of life for everyone.