Fun with Swift Numbers

Over the years I’ve spent a lot of time poking at Swift’s numeric types — writing little experiments, hitting unexpected results, and going down rabbit holes. Here are ten things that surprised me.


1. Print Lies (A Little)

Before diving in, there’s something important to know: print doesn’t show the full precision of the stored value. Swift uses an algorithm that finds the shortest decimal string that uniquely identifies the stored bits — which means it often looks cleaner than the actual value. If you’re exploring how numbers are stored, this can be misleading:

let x: Double = 0.1
print(x)  // 0.1  ← looks exact

print(String(format: "%.30f", x))  // 0.100000000000000005551115123126

That’s the actual value a Double stores for 0.1—not exactly 0.1, but the nearest representable value in binary floating-point. Throughout this article, we’ll use String(format: "%.Nf") when we need to see what’s actually in memory.


2. Division by Zero Depends on the Type

Integer division by zero is a fatal runtime crash — and you can’t catch it with do/catch. Floating point doesn’t crash. Instead it produces special values:

let zero = 0
print(1 / zero)   // Fatal error: Division by zero (with a literal 0, the compiler rejects it at build time)

print(1 / 0.0)    //  inf
print(0.0 / 0.0)  //  nan

The asymmetry is intentional — integers have no way to represent infinity, so Swift traps with a fatal error. Floating point types (Float, Double) have dedicated bit patterns for these cases.
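Both Float and Double expose properties to test for these special values — and nan has one more surprise up its sleeve: it compares unequal to everything, including itself:

```swift
let notANumber = 0.0 / 0.0
print(notANumber.isNaN)          // true
print((1 / 0.0).isInfinite)      // true
print(notANumber == notANumber)  // false — NaN is never equal to anything, itself included
```

That last line is why you should always check with isNaN rather than == Double.nan.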


3. Negative Zero Is a Thing

Floating point has two zeros: 0.0 and -0.0. They compare as equal, but they’re not the same.

You can’t create negative zero with an integer literal.

let negFloatZero: Float = -0
print(negFloatZero) // 0.0, not -0.0

Here -0 is an integer literal, and in integer arithmetic -0 equals 0 — so the Float is initialized from plain 0, not from a signed float literal, and the sign is lost.

So how do you get one? Through an operation:

let negFloatZeroTwo: Float = -3 * 0
print(negFloatZeroTwo) // -0.0

// true — equal, but not the same
print(0.0 == negFloatZeroTwo)

So why does -0.0 exist?

A negative number can get so small that Double can no longer store it — it rounds to zero. Without -0.0, that zero would look positive and 1 / result would give +inf instead of -inf. The sign is preserved via -0.0:

let tinyNegative = -Double.leastNonzeroMagnitude / 2
print(tinyNegative)       // -0.0  ← too small to store, rounds to negative zero
print(1 / tinyNegative)   // -inf  ← sign was preserved, correct result

-0.0 is just a flag that says “this zero came from the negative side.” The sign of the infinity you get from dividing follows the same rules as multiplication — same signs give positive, opposite signs give negative:

print(1 / 0.0)    //  inf  (positive ÷ positive zero)
print(-1 / -0.0)  //  inf  (negative ÷ negative zero — negatives cancel)
print(1 / -0.0)   // -inf  (positive ÷ negative zero)
print(-1 / 0.0)   // -inf  (negative ÷ positive zero)
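Since == can’t tell the two zeros apart, the sign property is one way to distinguish them explicitly:

```swift
let posZero = 0.0
let negZero = -0.0

print(posZero == negZero)  // true — they compare as equal
print(posZero.sign)        // plus
print(negZero.sign)        // minus — the sign bit survives
```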

4. The Classic Floating Point Gotcha

print(0.1 + 0.2 == 0.3)  // false

Every float has a hidden tail of digits. 0.1 is really stored as 0.100000000000000006... — the nearest value the hardware can represent. Add two of these approximations together and the rounded sum doesn’t land exactly on the stored approximation of 0.3 — so the comparison fails.

The fix is Decimal, which stores numbers in base-10 the way humans write them, so 0.1 is actually 0.1 — not a binary approximation of it:

let result: Decimal = 0.1 + 0.2
print(result == 0.3)  // true — Decimal arithmetic is exact in base-10

Use Decimal anywhere exact decimal math matters: money, measurements, user-facing values.
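One caveat worth knowing: a Decimal written as a float literal is still converted through Double first, and for some literals that intermediate conversion is itself inexact. Building from a string sidesteps binary floating point entirely:

```swift
import Foundation

// Decimal(string:) parses the base-10 digits directly — no Double in the middle
let a = Decimal(string: "0.1")!
let b = Decimal(string: "0.2")!
print(a + b)                             // 0.3
print(a + b == Decimal(string: "0.3")!)  // true
```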


5. Float’s Number Line Has Gaps

You might assume every decimal value is representable in Float — that after 1.0000004 comes 1.0000005, and so on. It doesn’t work that way. Float only has 6–7 significant decimal digits of precision. Beyond that, Float simply can’t distinguish between nearby values — some get skipped entirely.

nextUp returns the very next representable value above a number with nothing in between. Watch what happens:

var f: Float = 1.0
print(f.nextUp)             // 1.0000001
print(f.nextUp.nextUp)      // 1.0000002
print(f.nextUp.nextUp.nextUp) // 1.0000004  ← skipped 3

The Float nearest to 1.0000003 displays as 1.0000004 — both decimal values round to the same stored bit pattern. Like a hotel that skips floor labels: the floor exists, it just has an unexpected number on the door.
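The size of the gap is exposed directly through the ulp property — “unit in the last place”, the distance from a value to the next representable one:

```swift
print(Float(1.0).ulp)           // 1.1920929e-07 — the gap just above 1.0
print(Float(16_777_216.0).ulp)  // 2.0 — by 2²⁴ the gap has grown to 2
print(Double(1.0).ulp)          // 2.220446049250313e-16 — Double's gaps are far smaller
```

The gaps grow with magnitude — that’s why big Floats skip integers, as we’ll see in section 8.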


6. Converting to Float and Back Is a One-Way Trip

Not all decimal numbers can be represented exactly in binary floating point — and this includes simple-looking values like 15.2. When you write 15.2, both Float and Double store the nearest binary value they can manage. They each have a different nearest value, because they have different precision. Neither one is actually 15.2.

You can see this by printing with enough decimal places to bypass Swift’s default shortest-representation output:

let originalDouble: Double = 15.2
print(String(format: "%.25f", originalDouble))    // 15.1999999999999992894572...  ← what Double actually stores

let convertedToFloat = Float(originalDouble)
print(String(format: "%.25f", convertedToFloat))  // 15.1999998092651367187500...  ← what Float actually stores

let backToDouble = Double(convertedToFloat)
print(String(format: "%.25f", backToDouble))      // 15.1999998092651367187500...  ← precision is gone

print(originalDouble == backToDouble)             // false

The ugly digits in backToDouble aren’t new damage — they were always there in Float, just hidden. Double has enough precision to expose them.


7. Casting Between Numeric Types Can Crash Your App

Converting an out-of-range Double to Int, or putting a negative number into a UInt, hits a fatal error with no way to catch it. The safe alternative is exactly: — a failable initializer that returns nil instead of crashing. What exactly: really asks is whether the value survives the conversion with no rounding at all. If Float has to approximate, it returns nil:

// 1.1 fails — Float and Double round it differently, so the bits don't match:
let d1: Double = 1.1
print(String(format: "%.30f", d1))   // 1.100000000000000088817841970013
print(Float(exactly: d1))            // nil

// 1.5 succeeds — it's a power-of-2 fraction (1 + 2⁻¹), exactly representable in both types:
let d2: Double = 1.5
print(String(format: "%.30f", d2))   // 1.500000000000000000000000000000
print(Float(exactly: d2))            // Optional(1.5)

// 1234.5 also succeeds — 1234 is an integer (exact in Float) and 0.5 is exact, so the sum is too:
let d3: Double = 1234.5
print(String(format: "%.30f", d3))   // 1234.500000000000000000000000000000
print(Float(exactly: d3))            // Optional(1234.5)

// Going Float → Double always succeeds — widening never loses information:
let fWidened: Float = 1.333333
print(Double(exactly: fWidened)!)    // 1.3333330154418945 ← more digits revealed, nothing lost

The surprise is values like 1234.5 passing while 1.1 fails — it’s not about the size of the number, it’s purely whether the value lands exactly on a binary fraction.
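The same exactly: pattern extends to float-to-integer and integer-to-integer conversions — anywhere a crash-prone cast exists, there’s a nil-returning sibling:

```swift
print(Int(exactly: 3.0) as Any)    // Optional(3) — whole number, fits
print(Int(exactly: 3.5) as Any)    // nil — would need rounding
print(UInt8(exactly: -1) as Any)   // nil — negative value can't fit an unsigned type
print(Int8(exactly: 300) as Any)   // nil — out of range for Int8
```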


8. Float Starts Skipping Integers Above 16,777,216

Float can only represent integers without gaps up to 16,777,216. Beyond that, consecutive integers start sharing the same value — adding 1 does nothing:

let limit: Float = 16_777_216.0
print(limit.nextUp)  // 1.6777218e+07 — skipped 16777217

print(Float(16_777_217) == Float(16_777_216))  // true — they're the same Float

If you’re storing large integers in a Float, they silently lose uniqueness.


9. Float Overflow Becomes Infinity, Not a Crash

Unlike integers, floats don’t crash on overflow — they overflow to infinity:

let huge = Double.greatestFiniteMagnitude    // ~1.8e+308 — largest positive finite Double
print(Float(huge))  // inf

let mostNegative = -Double.greatestFiniteMagnitude  // ~-1.8e+308 — largest negative finite Double
print(Float(mostNegative))  // -inf

10. Hex Floats Use p, Not e

Just like decimal floats use e (1.5e2 = 150.0), hex floats use p, meaning “times 2 to the power of”. Honestly, this is probably one of those things you’ll never remember because you’ll never use it — but it’s good to recognize when you see it:

print(0xFp2)    // 15 × 2² = 60.0
print(0xFp-2)   // 15 × 2⁻² = 3.75
print(0x1.1p1)  // (1 + 1/16) × 2¹ = 2.125

A hex float without an exponent is a compile error — the p is required. You probably won’t write these by hand, but they show up in generated code and low-level bit manipulation.

The Dangers of Mixing Locks with Core Data

Codebases accumulate patterns over time that don’t match current best practices. These patterns might have made sense when they were written, or they might just reflect how our understanding has evolved. Before you can address these patterns, you need to spot them and understand why they’re risky.

One particularly dangerous combination in iOS codebases is mixing locks (@synchronized, NSLock, etc.) with Core Data’s performAndWait. Each was adopted to keep operations synchronous, but together they create hidden cross-thread dependencies that lead to deadlocks, freezing your app.

This article shows exactly how these deadlocks occur, so you can recognize and avoid them in your own code.

A Simple Shared Class

Let’s start with a basic class that manages some shared state. This is a common pattern from before Swift concurrency, when background work was typically dispatched to background queues. This class might be accessed from multiple threads:

  • Main thread: reads the operation status description
  • Background thread: starts a background operation

class DataProcessor {
    var currentOperationIdentifier: String = ""
    var currentOperationStatus: String = ""

    // Called from main thread
    func getDescription() -> String {
        return "Operation \(currentOperationIdentifier) has status: \(currentOperationStatus)"
    }
    
    // Called from background thread
    func startBackgroundOperation() {
        currentOperationIdentifier = "DataSync"
        currentOperationStatus = "Processing"
        // Do processing
    }
}

The Problem – Race Conditions

When dealing with multiple threads, execution can interleave unpredictably. One thread executes some code, then another thread slips in and executes its code, then back to the first thread – you have no way of knowing the order.

Here’s what can happen:

Background Thread                                Main Thread
currentOperationIdentifier = "DataSync"
(about to update status…)
                                                 getDescription()
                                                 reads identifier → "DataSync" ✓
                                                 reads status → "Idle" ❌ (old value!)
currentOperationStatus = "Processing"
❌ too late – main thread already read the old value

The main thread ends up with the new identifier but the old status – a mismatch that leads to inconsistent data.

There are better solutions to this problem – like bundling related state in one immutable structure, or using actors in modern Swift. But in legacy codebases, synchronous locks were a common strategy to protect shared state.
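For completeness, here’s a sketch of the actor approach mentioned above (Swift 5.5+). The names are hypothetical, not from the original code — the point is that the actor serializes all access to its state, so no manual locking is needed:

```swift
// An actor serializes all access to its state — no manual locks required
actor OperationState {
    private var identifier = ""
    private var status = ""

    func startBackgroundOperation() {
        identifier = "DataSync"
        status = "Processing"
    }

    func description() -> String {
        "Operation \(identifier) has status: \(status)"
    }
}
```

Callers hop onto the actor with await instead of taking a lock, e.g. `await state.description()`.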

Adding Locks for Thread Safety

The lock creates “critical sections” – ensuring we either write both properties or read both, without other threads interleaving in between.

class DataProcessor {
    private let lock = NSLock()
    var currentOperationIdentifier: String = ""
    var currentOperationStatus: String = ""

    func getDescription() -> String {
        lock.lock()
        defer { lock.unlock() }
        
        return "Operation \(currentOperationIdentifier) has status: \(currentOperationStatus)"
    }

    func startBackgroundOperation() {
        lock.lock()
        defer { lock.unlock() }

        currentOperationIdentifier = "DataSync"
        currentOperationStatus = "Processing"
    }
}

So far, this works fine. The locks protect our shared state, and both threads can safely access the properties.

The Deadlock – When Locks Meet Core Data

Now let’s assume we want to store this data to Core Data. This is where things get interesting.

When sharing Core Data across threads, you can run into race conditions just like we had earlier. So you need to use the right APIs to protect the critical sections too.

Your go-to is perform – it runs the work asynchronously on the context’s queue, safely. However, there are cases in legacy code where the caller needs to do something synchronously using performAndWait. When you call performAndWait on a main queue context, it blocks the calling thread until the block executes on the main thread. Think of waiting on the main queue as our “lock”.

Let’s assume some developer in the past (who definitely isn’t you) decided to use performAndWait here:

func startBackgroundOperation(with context: NSManagedObjectContext) {
    lock.lock()
    defer { lock.unlock() }
    
    // Assume the main thread tries to call
    // getDescription() at this point. 
    // It is blocked as we are holding the lock

    currentOperationIdentifier = "DataSync"
    currentOperationStatus = "Processing"

    // 💀 DEADLOCK HAPPENS HERE
    context.performAndWait {
        saveDataToStore(context: context)
    }
}

Why Does This Deadlock?

There’s a problem:

  • performAndWait needs the MAIN THREAD to execute this block
  • The MAIN THREAD is blocked waiting for our lock (in getDescription)
  • We’re holding that lock and won’t release until performAndWait completes

CIRCULAR WAIT = DEADLOCK

Timeline of the Deadlock

Background Thread                                Main Thread
lock.lock() ✅
Updates properties                               Calls getDescription()
Still holding lock…                              lock.lock() ❌ WAITING…
Waiting in performAndWait (needs main thread)    Can’t run the block – stuck waiting on the lock!
  • Main thread: stuck in lock.lock() waiting for background thread
  • Background thread: stuck in performAndWait waiting for main thread

How to Fix This Deadlock

The best solution is to eliminate performAndWait entirely and use the asynchronous perform instead. This breaks the circular dependency because the background thread no longer waits for the main thread:

func startBackgroundOperation(with context: NSManagedObjectContext) {
    lock.lock()
    defer { lock.unlock() }

    currentOperationIdentifier = "DataSync"
    currentOperationStatus = "Processing"

    // ✅ No deadlock
    // doesn't block waiting for main thread
    context.perform {
        self.saveDataToStore(context: context)
    }
}

If you absolutely cannot eliminate performAndWait, you’ll need to carefully analyze all lock dependencies, but this is error-prone and hard to maintain. The real fix is embracing asynchronous patterns.

What We Learned

In this article, we’ve seen how mixing locks with Core Data’s performAndWait creates a classic deadlock scenario:

  1. Race conditions can occur when multiple threads access shared mutable state
  2. Locks were traditionally used to protect this shared state with critical sections
  3. performAndWait works like a lock, but one that requires the main thread to execute its block
  4. When a background thread holds a lock and calls performAndWait, while the main thread is waiting for that same lock, we get a circular dependency – neither thread can proceed

Coming Up

Future articles will explore other ways you can hit or avoid these deadlocks:

  • Child contexts with read operations – Why using a child context doesn’t save you from deadlocks during fetch operations
  • Child contexts with write operations – How save operations on child contexts create the same circular dependencies
  • Private Contexts – Why private contexts with direct store connections are less likely to lock up

How I Use Voice and AI to Turn Messy Thoughts Into Clear Plans

When I was a teenager, I got really into philosophy. I’d sit at my desk with blank paper (this was before smartphones), scribbling down every half-baked thought about existence and consciousness. Whatever rabbit hole I’d fallen into that week.

I realized that brainstorming on paper forced me to actually think. All those “profound” ideas bouncing around my head? Half of them were nonsense after I’d written them down. The other half started making more sense than I expected.

But I kept trying to organize my thoughts while brainstorming, which defeated the whole purpose. I needed that messy exploration phase, but the structure kept getting in the way.

So I started talking through ideas out loud. I could work through ideas while biking or driving, no structure needed. Just raw thoughts. No stopping to fix sentences, no fiddling with formatting.

Problem was, what do I do with 30 minutes of rambling? Record, listen back and take notes? Those recordings just sat there, full of a few good ideas I never actually used.

Then transcription and AI came along.

Now I can have the same stream-of-consciousness voice sessions, dump the transcript into Claude or ChatGPT, and get a structured plan back. Talk freely, get organized output.

How I Actually Do It

Here’s what I do when I need to work through something:

  1. Hit record and brain dump: Apple’s voice recorder, a few minutes but sometimes as long as 1 hour. Start with the problem, then just go. Questions, angles, contradictions, all of it.
  2. Let it wander: I start talking about some ideas and often end up somewhere unexpected. Ideas build on each other. What starts as chaos usually ends with clarity.
  3. Feed the transcript to AI: Apple transcribes it, I give it to Claude or ChatGPT. The AI follows my rambling and pulls out what matters.
  4. Quick cleanup: Sometimes I’ll record myself reviewing the output with changes. Or just make a few quick edits. Usually minimal.

Team Brainstorming Gets Crazy Good

This gets even better with teams. Record a team brainstorming session (with permission, obviously). Not for meeting notes, but for AI to turn the raw thoughts into a comprehensive plan.

Weird thing happens when everyone knows AI will form the first draft of the plan: people actually explain their thinking. We spell out assumptions. We say why we’re making decisions. Someone will literally say “Hey AI, make sure you catch this part…” and we all laugh, but then we realize we should be this clear all the time.

No one’s frantically taking notes. No one’s trying to remember who said what. We just talk, explore tangents, disagree, figure things out. The AI sorts it out later.

Where It Gets Wild: Voice-to-Code

Real example: On an open source project recently, we were discussing background processing in iOS. Background tasks? Silent push? Background fetch? Everyone’s got ideas, no one actually knows. Usually this ends with “let’s spike on it” and one week later, we’ve explored one or two of the concepts, we’re already committed to the first or second idea and not really sure.

This time we recorded the whole messy discussion. All our dumb questions: How often does BGAppRefreshTask actually fire? What’s the real time limit? Does anything work when the app’s killed?

Fed the transcript to Claude asking for a demo app covering everything we discussed plus anything we missed. The idea was to create a demo that confirms assumptions. We really don’t care what the AI’s opinion is of how things may work – give us something real we can confirm it with.

An hour later we had a working sample app. Each tab demonstrating a different approach with detailed event logging in the UI. We install it, we watch what actually happens.

After a few hours experimenting with the app and reading the code, we understood how these APIs actually work, their limitations, and which approach made sense.

Why This Works so Well

I get clarity this way that doesn’t happen otherwise. Talking forces me to think linearly but lets ideas evolve. AI adds structure without killing the exploration.

Might work if you:

  • Get ideas while walking or driving
  • Find talking easier than writing
  • Find that editing while writing kills your flow
  • Need to explore without committing

Why You Should Learn Server-Side Swift

If you’ve been watching the server-side Swift changes this year, you may have noticed the building momentum. The Vapor community reinforced their commitment to server-side Swift by pushing significant updates in Vapor version 4. That was followed by a new server-side Swift platform — the Swift AWS Lambda Runtime, complete with a WWDC video. Apple also dropped Vapor’s name during a State of the Union demo. Finally, the Swift Server work group has been expanding, so we should expect to see more server-side Swift features trickling out in the near future.

I hope some of these recent positive developments encourage you to consider why Swift on the server may be a good choice for your next server project. But I’d really like to encourage iOS developers to write Swift server apps, even as experiments, to make better iOS apps. There is a synergy between iOS development and Swift server development — the two complement one another and justify the investment.

Modularizing your App

To leverage the benefits of Swift server development, you will likely want to share some existing iOS app code with the server. The server-side solutions are built around the Swift Package Manager. If you are not already using packages to modularize your iOS app, you will need to spend some time moving code into a package. This requires separating the iOS-specific parts (ex: UIKit) from the things you want to reuse on the server. The scope of this effort depends on how intertwined the project code is. But once you move even a portion of your app code into a package, you will likely be thrilled to watch it run on a server. Your code is now more modular, opening up opportunities to extend it even to other iOS apps.
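As a sketch of what that looks like — with hypothetical names, not a prescription — a minimal Package.swift for the shared code might be:

```swift
// swift-tools-version:5.3
import PackageDescription

let package = Package(
    name: "SharedCore",
    products: [
        // The library both the iOS app and the server target can depend on
        .library(name: "SharedCore", targets: ["SharedCore"]),
    ],
    targets: [
        .target(name: "SharedCore"),
        .testTarget(name: "SharedCoreTests", dependencies: ["SharedCore"]),
    ]
)
```

Anything UIKit-dependent stays out of this package; only the platform-neutral models and logic move in.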

Become more proficient with Swift

To learn a new development skill, we often need to experiment with technologies that add little value to our primary skill sets. Take for example my desire to expand to server-side development several years ago. My day job consisted of iOS development but I wanted to learn about the web. So I ventured into the world of Node.js and created a few simple web apps for personal use. While I don’t regret doing that, the results could have been better. These web apps got very little of my attention as I didn’t have a good reason to maintain my Node.js skills and I’d cringe at the idea of jumping back into unfamiliar code. 

I can contrast that experience with the time I took recently to convert those apps to Swift Vapor apps. In this case I’m working with Swift — a language I’m familiar with. If I return to that project in a year, I’ll at least be comfortable with the language. Additionally, each time I work with Swift on the server, there is a chance I will learn something new about Swift. That value will translate into better apps on both sides and help justify the time spent.

Time To Learn Something Really New

As I mentioned, it is compelling to learn a new skill that strengthens your existing skills — like cross-platform Swift. But I think we all crave an unfamiliar challenge too. Apple offers a stream of new APIs to keep us busy. But the environment for server-side Swift is a different animal. Working with technologies like Docker, Linux, AWS and Heroku is unlike anything you will see in the Xcode editor. That shift in paradigms may widen your perspective on development possibilities for your company/app and build some confidence to take on even bolder solutions.

How to Get Started

I suggest starting with small experiments to get comfortable in this space. Maybe write a Hello World app with Vapor 4 and contrast that experience with running Swift code on a server with an AWS Lambda deployment. Once you are comfortable with the basics, consider migrating parts of your app to a dedicated Swift Package that can be used by Vapor or AWS. I think you won’t regret the time spent here, and at a minimum you will learn some new Swift skills, end up with a more modular app, and have some fun taking on a new challenge.

Vapor Resources

Vapor 4 Getting Started

Vapor 4 Tutorial – Tim Condon

AWS Lambdas Resources

AWS Swift Lambda Announcement – Tom Doron

WWDC Tutorial – Tom Doron

Getting Started With Swift on AWS Lambda – Fabian Fett

HTTP Endpoint With AWS Lambdas – Fabian Fett

Developer Experience Using AWS Swift Lambdas – Adam Fowler