CLI-Driven Development: Building AI-Friendly iOS and Mac Apps

I’ve been using Claude Code for several months now on many personal projects, and I’ve settled on some practices that work really well for me as an iOS and Mac developer. This CLI-driven development approach has fundamentally changed how I build applications with AI assistance.

The Problem with GUI-Based Development

One of the most powerful aspects of AI coding assistants is their ability to receive feedback and iterate on solutions. Give an AI a goal, and it can refine and improve until it gets there. However, if you’re an iOS or Mac developer, you’ve likely hit a wall: GUI interfaces are opaque to AI systems.

You could set up tooling to capture simulator screenshots, but this approach is slow and error-prone. By the time the AI gets a screenshot, analyzes it, and suggests changes, you’ve lost the rapid iteration cycle that makes AI assistants so valuable in the first place.

The other option is writing comprehensive unit tests. If you’re not doing test-driven development with AI yet, CLI-driven development is a nice stepping stone toward that goal. It has the added flexibility of interacting with real data—somewhat like an end-to-end test. Tests are still important, but this is another tool in your toolbelt for those not ready to go full TDD.

The goal is to give the AI the full context of your running application so it can fully interact with it.

Note: See the caveats section below regarding respecting user privacy and security when giving AI access to application data.


The Solution: CLI-Driven Development

CLI-driven development means architecting your application so that every use case accessible via the UI is also accessible via a command-line interface. The UI and CLI become access points to these use cases rather than containing the business logic themselves.

This isn’t a new idea. We’ve been told for years not to put business logic in view controllers or SwiftUI views. When working with AI, this separation becomes critical.

Benefits

  1. Better Architecture: Enforces separation between UI and business logic
  2. Faster Debugging: AI can identify and fix issues more quickly
  3. Faster Feature Development: AI has a way to give itself feedback
  4. Improved Testability: These principles make your app more unit-testable too
  5. Easier Data Migrations: AI can access and transform your real data

Architecture: Three Targets

I’m vastly oversimplifying the architectural requirements for a real application here. Folks will have their choice of patterns, frameworks, and approaches. I’m focusing on the minimal Swift package setup that will allow for this flow to work.

Think of your application as having three distinct targets within a Swift package:

  1. UI Target: Contains your SwiftUI views, view controllers, and UI interactions
  2. CLI Target: Handles command-line input/output
  3. Core Target: Contains all business logic, services, data interactions, and workflows

Both the UI and CLI targets are thin layers that simply pass data to the Core. When a user taps a button, the UI sends data to a service. When you run a CLI command, it does the same thing.
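
Here’s a minimal sketch of what that package layout might look like. The target and product names are illustrative, and a real project will likely need platform settings and dependencies beyond this:

Package.swift:

// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",
    products: [
        .library(name: "MyAppCore", targets: ["MyAppCore"]),
        .library(name: "MyAppUI", targets: ["MyAppUI"]),
        .executable(name: "MyAppCLI", targets: ["MyAppCLI"])
    ],
    targets: [
        // Core: all business logic, services, data access, and workflows
        .target(name: "MyAppCore"),
        // UI: SwiftUI views that are thin wrappers around Core services
        .target(name: "MyAppUI", dependencies: ["MyAppCore"]),
        // CLI: an executable exposing the same Core services from the command line
        .executableTarget(name: "MyAppCLI", dependencies: ["MyAppCore"])
    ]
)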

Shared Services in Core

This approach requires discipline you probably want anyway: every new feature or interaction needs a single method that can be called with all necessary parameters, with all the important logic happening outside your views. These shared services live in your Core target.

Here’s a brief example of a service to add a TODO:

Core Target – TodoService.swift:

public class TodoService {
    // Persistence and list lookup are assumed to live elsewhere in Core;
    // the concrete type here is illustrative
    private let persistenceManager = PersistenceManager()

    public func addTodo(listName: String, item: String) throws {
        // All business logic lives here
        let list = try getOrCreateList(name: listName)
        let todo = Todo(title: item, completed: false)
        list.todos.append(todo)
        try persistenceManager.save(list)
    }
}

CLI Target – main.swift:

// Parse command: ./MyAppCLI add-todo --list "Grocery" --item "Buy milk"
let service = TodoService()
try service.addTodo(listName: listName, item: item)
print("Added: \(item)")
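
The listName and item values above come from parsing the command-line flags, which the snippet leaves out. One way to handle that is Apple’s swift-argument-parser package – a rough sketch, with the MyAppCore module name and command setup as illustrative assumptions:

import ArgumentParser
import MyAppCore // hypothetical name for the Core target's module

struct AddTodo: ParsableCommand {
    static let configuration = CommandConfiguration(commandName: "add-todo")

    @Option var list: String
    @Option var item: String

    func run() throws {
        // Same thin pass-through to the shared service
        let service = TodoService()
        try service.addTodo(listName: list, item: item)
        print("Added: \(item)")
    }
}

AddTodo.main()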

Mac App Target – AddTodoView.swift:

struct AddTodoView: View {
    @State private var itemName = ""
    let listName: String
    let service = TodoService()

    var body: some View {
        TextField("Item", text: $itemName)
        Button("Add") {
            try? service.addTodo(listName: listName, item: itemName)
        }
    }
}

Both the CLI and Mac app call the exact same TodoService.addTodo() method. The business logic is shared, and both interfaces benefit from any improvements or fixes.


What Can You Do With This?

Debugging with AI

Let’s say you see an issue when adding a TODO to your Grocery list in the Mac app. Here’s how the conversation might go:

> I see a bug when adding a TODO to my Grocery list. Check it out.

Let me test this using the CLI:

$ ./MyAppCLI add-todo --list "Grocery" --item "Buy milk"

$ ./MyAppCLI list-todos --list "Grocery"
Grocery List:
  - Buy milk
  - Buy milk

I see the issue - the TODO is being added twice. Let me check the add-todo implementation...

[AI fixes the duplicate insertion bug]

$ ./MyAppCLI list-todos --list "Grocery"
Grocery List:
  - Buy milk

Fixed! The issue was in the data persistence layer where we were saving twice.

The AI can use the CLI to reproduce the issue, see what’s happening, and iterate on fixes until it’s working correctly.

Building Features Faster

When building a feature, you can give the AI requirements and have it verify its work. Here’s an example:

> Add a feature to mark TODOs as complete. When I mark a TODO as complete,
> it should show with a checkmark when I list them. Verify it with the CLI
> and make sure it works before you're done.

I'll implement the complete-todo feature. Let me build it and test it:

[AI implements the feature in Core target]
[AI adds CLI command for marking complete]
[AI updates list display to show completion status]

Now let me test it:

$ ./MyAppCLI add-todo --list "Grocery" --item "Buy milk"
Added: Buy milk

$ ./MyAppCLI add-todo --list "Grocery" --item "Buy eggs"
Added: Buy eggs

$ ./MyAppCLI list-todos --list "Grocery"
Grocery List:
  - [ ] Buy milk
  - [ ] Buy eggs

$ ./MyAppCLI complete-todo --list "Grocery" --item "Buy milk"
Marked complete: Buy milk

$ ./MyAppCLI list-todos --list "Grocery"
Grocery List:
  - [x] Buy milk
  - [ ] Buy eggs

Perfect! The feature is working. TODOs now track completion status and
display with checkmarks when completed.

The AI builds the feature and verifies it works before reporting back. In theory, you’ll see the same results in the UI application.
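
The Core-side change for a feature like this stays small, which is part of the point. Here’s a rough sketch of what the new service method might look like; the list lookup, persistence call, and error type mirror the earlier example and are illustrative assumptions:

Core Target – TodoService.swift:

public func completeTodo(listName: String, item: String) throws {
    let list = try getOrCreateList(name: listName)
    guard let index = list.todos.firstIndex(where: { $0.title == item }) else {
        throw TodoError.itemNotFound(item) // hypothetical error type
    }
    list.todos[index].completed = true
    try persistenceManager.save(list)
}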

Data Migration and Transformation

During early iterations, you frequently need to migrate data when properties change. With a CLI, you can ask the AI to transform all your proof-of-concept data because it has access to all the APIs necessary to read and write data.

> Migrate my TODO data from the old JSON format to the new format.
> Make a backup first, then verify the results match exactly.

$ ./MyAppCLI export-data --output todos_backup.json
Exported 127 TODOs to todos_backup.json

[AI implements migration from old format to new nested format with IDs]

$ ./MyAppCLI migrate-data --from todos_backup.json --to-new-format
Migrating 127 TODOs...
Migration complete.

$ ./MyAppCLI verify-migration --old todos_backup.json --new current
✅ All 127 TODOs migrated successfully
✅ Data integrity verified

Migration complete. Backup saved at todos_backup.json.
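
For context, export-data and migrate-data above are just thin CLI wrappers over Core-level APIs along these lines. This is a rough sketch: TodoService.allLists() and the LegacyTodoList shape are assumptions for illustration, not part of the service shown earlier:

import Foundation

// Illustrative old-format shape; your proof-of-concept data will differ
struct LegacyTodoList: Codable {
    let name: String
    let items: [String]
}

public struct MigrationService {
    let todoService = TodoService()

    // Backing for export-data: write the current lists out as JSON before touching anything
    public func exportData(to url: URL) throws {
        let lists = try todoService.allLists() // hypothetical accessor
        let data = try JSONEncoder().encode(lists)
        try data.write(to: url)
    }

    // Backing for migrate-data: read the old JSON and re-create items
    // through the same TodoService the UI and CLI already use
    public func migrate(from url: URL) throws {
        let data = try Data(contentsOf: url)
        let oldLists = try JSONDecoder().decode([LegacyTodoList].self, from: data)
        for list in oldLists {
            for item in list.items {
                try todoService.addTodo(listName: list.name, item: item)
            }
        }
    }
}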

Important Caveats

This is Not a Replacement for Unit Tests

Unit testing leads you toward similar practices of keeping services separate from UI. Following CLI-driven development actually makes your app more testable. I suggest using this approach for both unit testing and CLI-driven development. Don’t focus on CLI-driven development alone.

Security and Privacy Considerations

You must respect your customers’ privacy and security. Be mindful when giving AI access to data and understand your privacy requirements. The CLI should be treated as a development tool with the same security considerations as direct database access.

The Dangers of Mixing Locks with Core Data

Codebases accumulate patterns over time that don’t match up with current best practices. These patterns might have made sense when they were written, or they might just reflect how our understanding has evolved. Before you can address these patterns, you need to spot them and understand why they’re risky.

One particularly dangerous combination in iOS codebases is mixing locks (@synchronized, NSLock, etc.) with Core Data’s performAndWait. Both were adopted to keep operations synchronous, but together they create hidden cross-thread dependencies that lead to deadlocks and freeze your app.

This article shows exactly how these deadlocks occur, so you can recognize and avoid them in your own code.

A Simple Shared Class

Let’s start with a basic class that manages some shared state. This reflects a common pattern from before Swift concurrency, when dispatch queues were used to manage background work. This class might be accessed from multiple threads:

  • Main thread: reads the operation status description
  • Background thread: starts a background operation

class DataProcessor {
    var currentOperationIdentifier: String = ""
    var currentOperationStatus: String = ""

    // Called from main thread
    func getDescription() -> String {
        return "Operation \(currentOperationIdentifier) has status: \(currentOperationStatus)"
    }

    // Called from background thread
    func startBackgroundOperation() {
        currentOperationIdentifier = "DataSync"
        currentOperationStatus = "Processing"
        // Do processing
    }
}

The Problem – Race Conditions

When dealing with multiple threads, execution can interleave unpredictably. One thread executes some code, then another thread slips in and executes its code, then back to the first thread – you have no way of knowing the order.

Here’s what can happen:

Background Thread:  currentOperationIdentifier = “DataSync”
Background Thread:  (about to update status…)
Main Thread:        getDescription()
Main Thread:        reads identifier → “DataSync” ✓
Main Thread:        reads status → “Idle” ❌ (old value!)
Background Thread:  currentOperationStatus = “Processing”
                    ❌ too late – main thread already read the old value

The main thread ends up with the new identifier but the old status – a mismatch that leads to inconsistent data.

There are better solutions to this problem – like bundling related state in one immutable structure, or using actors in modern Swift. But in legacy codebases, synchronous locks were a common strategy to protect shared state.
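
As a point of comparison, here’s a minimal sketch of the actor-based version in modern Swift (the type name is illustrative). Because the actor serializes access, the identifier and status are always read and written together:

actor OperationState {
    private var currentOperationIdentifier: String = ""
    private var currentOperationStatus: String = ""

    // Callers await these methods; the actor runs them one at a time,
    // so a reader can never observe a half-updated pair of values
    func description() -> String {
        "Operation \(currentOperationIdentifier) has status: \(currentOperationStatus)"
    }

    func startBackgroundOperation() {
        currentOperationIdentifier = "DataSync"
        currentOperationStatus = "Processing"
        // Do processing
    }
}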

Adding Locks for Thread Safety

The lock creates “critical sections” – ensuring we either write to both properties OR read from both without other threads interfering.

class DataProcessor {
    private let lock = NSLock()
    var currentOperationIdentifier: String = ""
    var currentOperationStatus: String = ""

    func getDescription() -> String {
        lock.lock()
        defer { lock.unlock() }
        
        return "Operation \(currentOperationIdentifier) has status: \(currentOperationStatus)"
    }

    func startBackgroundOperation() {
        lock.lock()
        defer { lock.unlock() }

        currentOperationIdentifier = "DataSync"
        currentOperationStatus = "Processing"
    }
}

So far, this works fine. The locks protect our shared state, and both threads can safely access the properties.

The Deadlock – When Locks Meet Core Data

Now let’s assume we want to store this data to Core Data. This is where things get interesting.

When sharing Core Data across threads, you can run into race conditions just like we had earlier. So you need to use the right APIs to protect the critical sections too.

Your go-to is perform (performBlock in Objective-C) – it runs the work asynchronously and safely. However, there are cases in legacy code where the caller needs to do something synchronously and uses performAndWait. When you call performAndWait on a main-queue context, it blocks the calling thread until the block executes on the main thread. Think of waiting on the main queue as our “lock”.

Let’s assume some developer in the past (who definitely isn’t you) decided to use performAndWait here:

func startBackgroundOperation(with context: NSManagedObjectContext) {
    lock.lock()
    defer { lock.unlock() }
    
    // Assume the main thread tries to call
    // getDescription() at this point. 
    // It is blocked as we are holding the lock

    currentOperationIdentifier = "DataSync"
    currentOperationStatus = "Processing"

    // 💀 DEADLOCK HAPPENS HERE
    context.performAndWait {
        saveDataToStore(context: context)
    }
}

Why Does This Deadlock?

There’s a problem:

  • performAndWait needs the MAIN THREAD to execute this block
  • The MAIN THREAD is blocked waiting for our lock (in getDescription)
  • We’re holding that lock and won’t release until performAndWait completes

CIRCULAR WAIT = DEADLOCK

Timeline of the Deadlock

Background Thread:  lock.lock() ✅
Background Thread:  updates properties
Main Thread:        calls getDescription()
Background Thread:  still holding the lock…
Main Thread:        lock.lock() ❌ WAITING…
Background Thread:  waiting on performAndWait() (needs the main thread)
Main Thread:        can’t process the block – stuck waiting on the lock!

  • Main thread: stuck in lock.lock() waiting for background thread
  • Background thread: stuck in performAndWait waiting for main thread

How to Fix This Deadlock

The best solution is to eliminate performAndWait entirely and use the asynchronous perform instead. This breaks the circular dependency because the background thread no longer waits for the main thread:

func startBackgroundOperation(with context: NSManagedObjectContext) {
    lock.lock()
    defer { lock.unlock() }

    currentOperationIdentifier = "DataSync"
    currentOperationStatus = "Processing"

    // ✅ No deadlock
    // doesn't block waiting for main thread
    context.perform {
        self.saveDataToStore(context: context)
    }
}

If you absolutely cannot eliminate performAndWait, you’ll need to carefully analyze all lock dependencies, but this is error-prone and hard to maintain. The real fix is embracing asynchronous patterns.
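
If you do keep performAndWait, one mitigation is to never hold your own lock while calling it, for example by finishing the property updates and releasing the lock before touching Core Data. This is a rough sketch of that idea, not a recommendation – it removes this particular circular wait but stays fragile for the reasons above:

func startBackgroundOperation(with context: NSManagedObjectContext) {
    lock.lock()
    currentOperationIdentifier = "DataSync"
    currentOperationStatus = "Processing"
    lock.unlock() // release before waiting on the main thread

    // No lock is held here, so the main thread can still run getDescription()
    // while we wait for it to execute this block
    context.performAndWait {
        saveDataToStore(context: context)
    }
}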

What We Learned

In this article, we’ve seen how mixing locks with Core Data’s performAndWait creates a classic deadlock scenario:

  1. Race conditions can occur when multiple threads access shared mutable state
  2. Locks were traditionally used to protect this shared state with critical sections
  3. performAndWait works like a lock, but it requires the main thread to execute its block
  4. When a background thread holds a lock and calls performAndWait, while the main thread is waiting for that same lock, we get a circular dependency – neither thread can proceed

Coming Up

Future articles will explore other ways you can hit or avoid these deadlocks:

  • Child contexts with read operations – Why using a child context doesn’t save you from deadlocks during fetch operations
  • Child contexts with write operations – How save operations on child contexts create the same circular dependencies
  • Private Contexts – Why private contexts with direct store connections are less likely to lock up

How I Use Voice and AI to Turn Messy Thoughts Into Clear Plans

When I was a teenager, I got really into philosophy. I’d sit at my desk with blank paper (this was before smartphones), scribbling down every half-baked thought about existence and consciousness. Whatever rabbit hole I’d fallen into that week.

I realized that brainstorming on paper forced me to actually think. All those “profound” ideas bouncing around my head? Half of them were nonsense after I’d written them down. The other half started making more sense than I expected.

But I kept trying to organize my thoughts while brainstorming, which defeated the whole purpose. I needed that messy exploration phase, but the structure kept getting in the way.

So I started talking through ideas out loud. I could work through ideas while biking or driving, no structure needed. Just raw thoughts. No stopping to fix sentences, no fiddling with formatting.

Problem was, what do I do with 30 minutes of rambling? Record, listen back and take notes? Those recordings just sat there, full of a few good ideas I never actually used.

Then transcription and AI came along.

Now I can have the same stream-of-consciousness voice sessions, dump the transcript into Claude or ChatGPT, and get a structured plan back. Talk freely, get organized output.

How I Actually Do It

Here’s what I do when I need to work through something:

  1. Hit record and brain dump: Apple’s voice recorder, usually a few minutes but sometimes as long as an hour. Start with the problem, then just go. Questions, angles, contradictions, all of it.
  2. Let it wander: I start talking about some ideas and often end up somewhere unexpected. Ideas build on each other. What starts as chaos usually ends with clarity.
  3. Feed the transcript to AI: Apple transcribes it, I give it to Claude or ChatGPT. The AI follows my rambling and pulls out what matters.
  4. Quick cleanup: Sometimes I’ll record myself reviewing the output with changes. Or just make a few quick edits. Usually minimal.

Team Brainstorming Gets Crazy Good

This gets even better with teams. Record a team brainstorming session (with permission, obviously). Not for meeting notes, but for AI to turn the raw thoughts into a comprehensive plan.

Weird thing happens when everyone knows AI will form the first draft of the plan: people actually explain their thinking. We spell out assumptions. We say why we’re making decisions. Someone will literally say “Hey AI, make sure you catch this part…” and we all laugh, but then we realize we should be this clear all the time.

No one’s frantically taking notes. No one’s trying to remember who said what. We just talk, explore tangents, disagree, figure things out. The AI sorts it out later.

Where It Gets Wild: Voice-to-Code

Real example: on an open source project recently, we were discussing background processing in iOS. Background tasks? Silent push? Background fetch? Everyone’s got ideas, no one actually knows. Usually this ends with “let’s spike on it,” and a week later we’ve explored one or two of the concepts, we’re already committed to whichever idea we tried first, and we’re still not really sure.

This time we recorded the whole messy discussion. All our dumb questions: How often does BGAppRefreshTask actually fire? What’s the real time limit? Does anything work when the app’s killed?

Fed the transcript to Claude asking for a demo app covering everything we discussed plus anything we missed. The idea was to create a demo that confirms assumptions. We really don’t care what the AI’s opinion is of how things may work – give us something real we can use to confirm it.

An hour later we had a working sample app, each tab demonstrating a different approach with detailed event logging in the UI. We installed it and watched what actually happened.

After a few hours experimenting with the app and reading the code, we understood how these APIs actually work, their limitations, and which approach made sense.

Why This Works So Well

I get clarity this way that doesn’t happen otherwise. Talking forces me to think linearly but lets ideas evolve. AI adds structure without killing the exploration.

Might work if you:

  • Get ideas while walking or driving
  • Find talking easier than writing
  • Find that editing while writing kills your flow
  • Need to explore without committing

Shortcut to Upload Screenshots to Jira

Apple’s commitment this year to promote the Shortcuts app, formerly known as Workflow, has spurred excitement among a growing group in the iOS community. Shortcuts allows users to automate iPhone/iPad tasks, analogous to the Mac’s Automator. While this app is accessible to those without any development experience, I’d like to share the experience of developing my first Shortcut as an iOS developer. Additionally, I hope the resulting Shortcut will save some Jira users some time. First, here is the background on the process I chose to optimize with Shortcuts.

Nearly everybody involved in creating software needs tools for documenting software bugs. The most ubiquitous tool for tracking bugs is Jira, which I use daily. To track a new bug that I stumble upon on an iOS device, I follow these steps:

  • Capture a screenshot or video of the bug on my iPad/iPhone.
  • Create a case in Jira documenting the bug.
  • Annotate or trim the aforementioned media.
  • Upload and attach the media to the Jira case.
  • Delete the media from the iPad/iPhone.

This process seems simple enough in theory but has some minor challenges. My iPhone and iPad Photos libraries are littered with software screenshots, some months old, that I neglected to delete after uploading to Jira. Attaching the screenshot to the case can also be cumbersome, as I need to move the media to the device that has the related Jira case open. The Jira app for iOS can make this simpler; however, I prefer to create cases on my Mac for speed of entering the details.

My aim was to optimize this with a Shortcut that walks me through uploading my most recent media to my most recent Jira case and then deleting that media from the device. I was excited that Shortcuts was up for the task and jumped right in.

Shortcuts First Impressions

I’ll admit that I spent no time reading documentation or watching tutorials for Shortcuts before getting started. I think that speaks to the approachability of Shortcuts (and my impatience to play).

The Shortcuts interface offers many options that require no background in development. Using its drag-and-drop interface, one can compose actions to play a song, send a text message or search the web, for example, without any familiarity with programming languages. But the tool offers much more power: features that will be familiar to developers and an excellent primer for those who want to learn more about development.

Simple Shortcut To Send Text Message

Some of the features familiar to developers are variables, conditional and control constructs, arrays/dictionaries and code comments — just to name a few. There are also powerful APIs available to fetch data from the web, parse JSON and even SSH to other devices (and much more). I was delighted to find so much flexibility, so I did not plan to limit the scope of this Shortcut initially.

More Complicated Shortcuts Using Web Services

Lessons Learned

Like many complex apps, Shortcuts is more efficient to develop with on an iPad than on an iPhone. The larger form factor and persistent sidebar make it much easier to navigate the interface and add actions. An external keyboard can help as well, and I’m curious how the Apple Pencil would improve drag-and-drop functionality.

But the iPhone makes it very convenient to develop Shortcuts during small idle periods when your iPad may not be on hand (an Uber ride, the bank line, etc.). This was a new experience for me, and I got hooked on it the way many play games on their phones. I have some mixed feelings about the distractibility factor, but developing in these spare moments was interesting.

This Jira Shortcut morphed into a more complex tool than I expected. It makes several API calls to Jira, processes the results, requests input from the user, shows error messages and more. As a developer, my inclination was to break these individual components into smaller Shortcuts for modularity and reusability. This was clearly a wrong turn once I decided to share the Shortcut, since it would have required a user to download multiple Shortcuts for one feature. So I ended up joining three different Shortcuts into one as I neared completion.

Security also required thought on this project. The Shortcut needs access to a user’s Jira credentials (username + password or token). All the actions and parameters for a Shortcut are stored in the Shortcuts app’s sandbox. While storing usernames and passwords is not ideal, the primary risk I see here is a user accidentally sharing those credentials if they ever redistribute their Shortcut (“Hello World… oops, here is my login!”). I attempted to work around this by using import questions, which request sensitive user information on installation and should not share those inputs if the Shortcut is later shared. I have not verified this, so I caution against sharing any Shortcuts that have personal information captured.

Conclusion

Nearing the end of this project, I realized there is a limit to how much functionality you want to pack into a Shortcut before you should consider writing an iOS app instead. Editing a Shortcut can be tricky since it lacks copy/paste support for actions, and the lack of functions makes it difficult to reuse logic. I also missed simple common language features such as “else if” and “break/continue”. Finally, all the work that goes into a Shortcut cannot be ported outside of Shortcuts (to another iOS app, the Mac, etc.).

But Shortcuts is not designed to be a replacement for traditional software development or apps. It is, however, an excellent tool for automating tasks on the iOS platform, even for iOS developers. I’d certainly use Shortcuts again when the task to automate is at an appropriate complexity level; if it gets too complicated, I’ll consider writing an iOS app instead. I also hope non-developers use Shortcuts as an introduction to software development in a familiar environment.

If you are a Jira user and wish to fiddle with this Shortcut using your account, you can download it from my GitHub. I welcome feedback and pull requests.