Category Archives: Productivity

How I Use Voice and AI to Turn Messy Thoughts Into Clear Plans

When I was a teenager, I got really into philosophy. I’d sit at my desk with blank paper (this was before smartphones), scribbling down every half-baked thought about existence, consciousness, or whatever rabbit hole I’d fallen into that week.

I realized that brainstorming on paper forced me to actually think. All those “profound” ideas bouncing around my head? Half of them were nonsense after I’d written them down. The other half started making more sense than I expected.

But I kept trying to organize my thoughts while brainstorming, which defeated the whole purpose. I needed that messy exploration phase, but the structure kept getting in the way.

So I started talking through ideas out loud. I could work through ideas while biking or driving, no structure needed. Just raw thoughts. No stopping to fix sentences, no fiddling with formatting.

Problem was, what do I do with 30 minutes of rambling? Record, listen back and take notes? Those recordings just sat there, full of a few good ideas I never actually used.

Then transcription and AI came along.

Now I can have the same stream-of-consciousness voice sessions, dump the transcript into Claude or ChatGPT, and get a structured plan back. Talk freely, get organized output.

How I Actually Do It

Here’s what I do when I need to work through something:

  1. Hit record and brain dump: Apple’s voice recorder, usually a few minutes but sometimes as long as an hour. Start with the problem, then just go. Questions, angles, contradictions, all of it.
  2. Let it wander: I start talking about some ideas and often end up somewhere unexpected. Ideas build on each other. What starts as chaos usually ends with clarity.
  3. Feed the transcript to AI: Apple transcribes it, I give it to Claude or ChatGPT. The AI follows my rambling and pulls out what matters.
  4. Quick cleanup: Sometimes I’ll record myself reviewing the output with changes. Or just make a few quick edits. Usually minimal.
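
Step 3 doesn’t need anything fancy. If you ever want to script the handoff, the useful part is the prompt wrapper, not the API call. Here’s a minimal Python sketch; `build_plan_prompt` and its instruction wording are my own invention, not a prescribed prompt, and you’d paste the result into Claude or ChatGPT (or send it through their APIs):

```python
def build_plan_prompt(transcript: str) -> str:
    """Wrap a raw voice-memo transcript in instructions asking an LLM
    to add structure without inventing ideas of its own."""
    instructions = (
        "Below is an unedited transcript of me thinking out loud.\n"
        "1. Pull out the distinct ideas, questions, and decisions.\n"
        "2. Group them into a structured plan with headings.\n"
        "3. List contradictions and open questions separately.\n"
        "Keep my wording where possible; do not add your own ideas.\n"
    )
    return (
        f"{instructions}\n"
        f"--- TRANSCRIPT ---\n{transcript.strip()}\n--- END ---"
    )

prompt = build_plan_prompt("maybe we ship v1 without sync... no wait...")
print(prompt)
```

The point of the explicit “do not add your own ideas” line is that the plan should come from the rambling, with the AI only supplying the structure.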

Team Brainstorming Gets Crazy Good

This gets even better with teams. Record a team brainstorming session (with permission, obviously). Not for meeting notes, but for AI to turn the raw thoughts into a comprehensive plan.

Weird thing happens when everyone knows AI will form the first draft of the plan: people actually explain their thinking. We spell out assumptions. We say why we’re making decisions. Someone will literally say “Hey AI, make sure you catch this part…” and we all laugh, but then we realize we should be this clear all the time.

No one’s frantically taking notes. No one’s trying to remember who said what. We just talk, explore tangents, disagree, figure things out. The AI sorts it out later.

Where It Gets Wild: Voice-to-Code

Real example: On an open source project recently, we were discussing background processing in iOS. Background tasks? Silent push? Background fetch? Everyone’s got ideas, no one actually knows. Usually this ends with “let’s spike on it,” and a week later we’ve explored one or two of the options, we’re already committed to whichever we tried first, and we’re still not really sure.

This time we recorded the whole messy discussion. All our dumb questions: How often does BGAppRefreshTask actually fire? What’s the real time limit? Does anything work when the app’s killed?

Fed the transcript to Claude, asking for a demo app covering everything we discussed plus anything we missed. The idea was to create a demo that confirms assumptions. We really don’t care what the AI’s opinion is of how things may work. Give us something real we can test against.

An hour later we had a working sample app, each tab demonstrating a different approach with detailed event logging in the UI. We installed it and watched what actually happened.

After a few hours experimenting with the app and reading the code, we understood how these APIs actually work, their limitations, and which approach made sense.

Why This Works So Well

I get clarity this way that doesn’t happen otherwise. Talking forces me to think linearly but lets ideas evolve. AI adds structure without killing the exploration.

Might work if you:

  • Get ideas while walking or driving
  • Find talking easier than writing
  • Find that editing while writing kills your flow
  • Need to explore without committing

A Simple Checklist for Debugging Regressions

I’ve been thinking about a process I’ve used for resolving regressions that may be useful to share. I’ve never explicitly written the steps down before, but figured it was worth capturing—both for myself and others.

When a regression shows up, there are three questions that I’ve found you have to answer. Skipping any of them usually leads to confusion, wasted time, or a fix that doesn’t actually solve the real problem. But if you take the time to work through them, you can usually land on the right answer with confidence.

1. Can you reproduce the problem?

A lot of engineers want to jump straight into the code. That’s the fun part, right? Digging through logic, inspecting diffs, reasoning your way to a fix. But if you can’t reliably reproduce the issue, studying the code is usually a waste of time. You don’t even know where to look yet.

Reproducing the problem is the first real step. It’s not glamorous, and it can feel a little silly—especially when you’re trying the same steps over and over with tiny variations. But this is one of the most valuable things you can do when a bug shows up.

As engineers, we have a special vantage point. We know how the code works, and we often have a gut instinct about what kinds of conditions might trigger strange behavior. That gives us a real edge in uncovering subtle issues—so don’t think you’re above tapping on an iPad for hours or running the same test over and over. It’s our duty to chase it down.

Once you have a reliable repro, everything gets easier. You can try fixes, stress other paths, and most importantly, build real confidence that your solution works.

Some useful tricks:

  • Adjust timing, inputs, or state to help provoke the bug
  • Script setup steps or test data to save time
  • Loop the behavior or stress threads to make edge cases more likely
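
That last trick can be made mechanical: loop over many generated inputs and stop at the first one that breaks an invariant. A sketch of the idea, where `parse_range` is a hypothetical stand-in for whatever code you suspect, with a deliberately planted bug:

```python
def parse_range(text: str) -> tuple[int, int]:
    """Stand-in for suspect code: parses '2-5' into (2, 5)."""
    start, _, end = text.partition("-")
    lo, hi = int(start), int(end or start)
    if lo == hi:
        hi -= 1  # planted bug: collapses single-value ranges
    return lo, hi

def find_repro(candidates):
    """Brute-force inputs until one violates an invariant we expect
    to always hold; that input is the reliable repro."""
    for text in candidates:
        lo, hi = parse_range(text)
        if not lo <= hi:
            return text
    return None

# Generate many small variations instead of trying them by hand.
inputs = [f"{a}-{b}" for a in range(5) for b in range(a, 5)]
print(find_repro(inputs))  # prints 0-0, the first input that triggers the bug
```

Once the loop hands you a failing input, you have the reliable repro the rest of the process depends on.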

2. What changed?

This step is often skipped. People jump into debugging without first understanding what changed. But the fastest way to track down a regression is to compare working code to broken code and see what’s different.

This question can feel sensitive. It gets close to specific contributions that may have introduced instability. I’ve seen plenty of cases where the discussion gets deflected into vague root causes or long-term issues—anything but the specific change. That’s understandable. We’ve all been there. But avoiding the question doesn’t help. It puts the fix at risk and slows everyone down.

Some go-to techniques:

  • Review pull requests and diffs
  • Trace from crash logs or error messages to recently changed code
  • Use git bisect to find the breaking commit
  • Try reverting the suspected change and see if the issue disappears

Once you find the change, test your theory. If undoing it makes the problem go away, you’re on the right track.
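
Under the hood, git bisect is just binary search over an ordered history. A sketch of that idea in Python, assuming `is_broken(commit)` stands in for whatever check you can run at any commit (build it, run the repro) and that the history really is ordered good-to-bad:

```python
def first_bad(commits, is_broken):
    """Binary-search an ordered commit list for the first bad commit.
    Assumes commits[0] is good and commits[-1] is bad, like git bisect."""
    good, bad = 0, len(commits) - 1
    while bad - good > 1:
        mid = (good + bad) // 2
        if is_broken(commits[mid]):
            bad = mid   # bug exists at mid; first bad commit is at or before it
        else:
            good = mid  # still working at mid; first bad commit is after it
    return commits[bad]

# Toy history: 16 commits, and (hypothetically) c9 introduced the bug.
history = [f"c{i}" for i in range(16)]
breaking = history.index("c9")
print(first_bad(history, lambda c: history.index(c) >= breaking))  # prints c9
```

This is why bisect is so fast: sixteen commits take four checks, a thousand take about ten. The expensive part is making `is_broken` reliable, which is exactly what the repro work in step 1 buys you.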

3. Why does it happen?

Knowing what changed isn’t enough. You need to understand why that change caused the issue. Otherwise, you’re just fixing symptoms, and might miss deeper problems.

This is where the real problem solving happens:

  • Read documentation for the APIs or system behavior involved
  • Think through the interaction between components or timing
  • Build a mental model and prove it with experiments or targeted tests

You don’t want to ship a fix that works by accident. You want one that works because you actually understand the problem. That’s what prevents repeat issues and edge cases slipping through.

Wrapping up

These three questions — Can you reproduce it? What changed? Why does it happen? — have helped me find and fix bugs more reliably than anything else.

It’s easy to skip them under pressure. It’s tempting to merge the first thing that seems to work. But without answering all three, you’re flying blind. You might get lucky. Or you might end up wasting hours chasing your tail or shipping the wrong fix.

New Month’s Resolution

I’m skeptical of New Year’s resolutions, at least in the traditional way they are framed. The statistics are bleak; only 8% of people stick with their resolutions. I think a year is just too long.

Let’s consider the resolution to go to the gym 5 days a week. Things will be going well the first few days or weeks. But suppose your new gym rat friends let you know your plan is flawed. They suggest you should only train 4 days per week. What would this change mean for the resolution? Are you compromising if you cut back a day? Or let’s assume you have a minor injury, requiring a few weeks of rest. Is it game over now that you took some time off?

Whenever you start something new, you need to make many adjustments. A rigid plan made during the holidays probably isn’t going to hold up for the year. Your brain was likely in a planning fog anyway from too many cookies and bad holiday films.

As an alternative, consider monthly resolutions. These work just like New Year’s resolutions: add a calendar reminder once a month to pick a few important goals, then work hard to stick with the plan for the next four weeks. When the new month arrives, celebrate your successes and think about what could be improved. You can roll the same strategy over into the new month, adjust it based on what you learned, or scrap it entirely and try something else.

I experimented with monthly resolutions this month. As I write this, I’m excited to conduct a post-mortem on the past month and fold what I learned into my January goals.