I use pictures to illustrate the mechanics of "Bayes' rule," a mathematical theorem about how to update your beliefs as you encounter new evidence. Then I tell three stories from my life that show how I use Bayes' rule to improve my thinking.
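The mechanics of Bayes' rule can be sketched in a few lines of code. This is a minimal illustration, not anything from the episode itself; the function name and the example numbers are mine:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' rule:
    P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Example: evidence that's 9x more likely if H is true than if it's
# false, applied to a hypothesis you initially give 1% credence.
posterior = bayes_update(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.1)
print(round(posterior, 3))  # → 0.083
```

Even strong evidence moves a low prior only so far, which is the intuition the pictures in the video are meant to convey.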
A lot of people struggle with "falling off the wagon" when they try to start some new habit (like going to the gym), and get discouraged and give up. Here's an idea that might help fix the problem, inspired by meditation.
Julia complains about something she's seen skeptics do: apply principles of science and reason (like "anecdotes aren't data!") blindly, or in order to dismiss claims they'd rather not believe.
Have you ever expressed disagreement with some viewpoint (X) by saying "I can't understand how anyone could think X"?
I explain why I think that phrase is bad form, and complain about the skeptic version of this trope: "I can't understand how people could think science detracts from the beauty of the physical world."
The Megan McArdle post I reference is here:
http://www.bloombergview.com/articles/2014-08-12/only-stupid-people-call-people-stupid
Julia Galef from http://measureofdoubt.com talks about why rationalists are more likely to abandon social norms like marriage, monogamy, standard gender roles, having children, and so on. Is that a rational attitude to take?
Expanding on my podcast with Scott Aaronson (http://bit.ly/1MFeJej).
I introduce Aumann's Agreement Theorem, which says that ideally rational agents with common priors can't "agree to disagree," and describe the process my friends and I use to update our beliefs based on each other's updates.
A discussion of the panel I moderated at TAM9 this year, titled "The Ethics of Paranormal Investigation." Topics include: Is it acceptable to use deception in the course of unmasking someone else's deception? What do you do when the person claiming paranormal powers is a child? How much do you feel obligated to ensure the confidentiality of the people you're investigating? And do you have any responsibility, as a mentalist or magician, to make sure your audience understands what you're doing isn't real?
(ETA: Changed the title, since it was misleading!)
How is rationality like artificial intelligence? One connection is that both fields are interested in how to handle interdependent beliefs. In this video I explain why your brain is *not* like an AI, and why that means you end up believing contradictory things.
Julia Galef from http://measureofdoubt.com talks about tricks for spotting your own rationalizations: stories that we tell ourselves about why we think or act a certain way.