AI
My philosophy on responsible AI use.
You → Me
Generally, if you write something to me in your own words, I’ll read it with my own eyes. That means it won’t be summarized by any large language models (LLMs) before it gets to me.
I will, however, use LLMs to summarize particularly long articles, posts, or other content that struggles with brevity, unless you’ve sent it to me and explicitly asked me to read it in its entirety.
Me → You
The words I write are my own. I use AI as a research assistant, as a critic of my drafts, as a notetaker, for art, and with AI proofing tools. Generally speaking, that means that unless I tell you otherwise, if you get something from me, I wrote it!
- "As a research assistant" - means, for example, using LLMs to search through academic papers, summarize them, then comparing that summary to my notes and what I understood. It also could mean finding primary sources for opposing viewpoints. This makes what I write more accurate and grounded in science.
- "As a critic" - means, for example, feeding drafts of topics into an LLM and asking it to generate opposing views. This allows for more substantive topics that hold up to scrutiny.
- "As a notetaker" - means to transcribe and summarize thoughts that I have (about potential topics). This allows me to capture unique ideas quickly.
- "For art" - means using tools like DALL·E 3 to make graphics for a topic. The art these text-to-image systems create can be pretty beautiful, and I enjoy experimenting with it. This creates compelling graphics to accompany topics.
- "AI proofing tools" - means, for example, software like Grammarly (or other similar software), that catches writing mistakes (like duplicate words, missing commas/apostrophes, or incorrect tenses). This makes what I write more readable.
I subscribe to the philosophy of mindful AI use, and you should too.