Concurrent programming in Swift involves managing multiple tasks that execute simultaneously to boost performance and keep the user experience responsive. However, without careful consideration, working in a multithreaded environment opens the door to subtle bugs. In this post, we’ll discuss how to ensure concurrency safety in Swift applications, arming you with the tools to write robust and reliable code.

While developing iOS apps, developers frequently encounter tasks such as updating shared data structures, accessing network resources, or performing long-running calculations. These tasks become tricky once multiple threads are involved. Without careful synchronization, issues like the following can emerge:

  • Race Conditions: Occur when two or more tasks attempt to modify a shared resource simultaneously without proper synchronization, leading to unpredictable outcomes (see the sketch after this list).
  • Deadlocks: Happen when two or more tasks wait on each other to release resources they each hold, causing the app to stall indefinitely.
  • Priority Inversion: Arises when a lower-priority task holds a resource needed by a higher-priority task, so the higher-priority work ends up indirectly blocked by lower-priority work.
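
To make the first of these concrete, here’s a minimal sketch of a data race. UnsafeCounter is a hypothetical class used only for illustration, and the snippet assumes the Swift 5 language mode (Swift 6’s strict concurrency checking would reject it at compile time, which is exactly the point). Several threads perform an unsynchronized read-modify-write on the same property, so increments are lost and the final count is unpredictable:

import Foundation

// Hypothetical class used only for illustration; it has no synchronization at all.
final class UnsafeCounter {
    var value = 0
}

let counter = UnsafeCounter()

// 1,000 concurrent increments, each a non-atomic read-modify-write.
DispatchQueue.concurrentPerform(iterations: 1_000) { _ in
    counter.value += 1
}

print(counter.value)   // frequently less than 1,000: some increments were lost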

The engineers at Apple recognize the importance of addressing concurrency head-on, providing developers with a suite of tools to ensure safety. Let’s walk through some key techniques and strategies that we can leverage in our Swift applications.

1. Structured Concurrency

Swift 5.5 introduced structured concurrency, a paradigm shift from the manual concurrency management of GCD and OperationQueue to a more intuitive and safer model. It brings structure to concurrent execution, much like structured programming does for sequential code, by ensuring that concurrent tasks are well-organized and their lifecycles are managed systematically. There are several key mechanisms behind this paradigm:

1.1. Hierarchical Task Management

Structured concurrency organizes tasks into a hierarchy. This hierarchy prevents the parent task from finishing and releasing resources that its child might still be using. It also ensures that related tasks are grouped together, making it easier to understand and manage their execution. Here’s an example:

func loadUserProfile() async throws -> UserProfile {
    // Both child tasks start running concurrently with the parent.
    async let profile = fetchProfile()
    async let avatar = fetchAvatarImage()

    // Awaiting the results; an error thrown by either child is rethrown here.
    var userProfile = try await profile
    userProfile.avatar = try await avatar
    return userProfile
}

In this code snippet, both fetchProfile and fetchAvatarImage are child tasks of loadUserProfile. The function waits for both operations to complete before proceeding, effectively managing task lifecycles hierarchically. The use of async/await also makes the code more readable and maintainable than GCD’s callback-based equivalent.

Another nice thing about Swift concurrency is that the runtime handles switching between execution contexts for you: when an await suspends a function that is isolated to the main actor, the work after the suspension point resumes back on the main actor, which removes the manual hops to the main queue that GCD requires and the mistakes that come with them.
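
Here’s a minimal sketch of that behavior (ItemListViewModel and fetchItems() are hypothetical names used only for illustration). Because refresh() is isolated to the main actor, the await leaves the main actor for the fetch and hops back before items is assigned, with no DispatchQueue.main.async in sight:

@MainActor
final class ItemListViewModel {
    private(set) var items: [String] = []

    func refresh() async {
        // Suspension point: the fetch may run off the main actor.
        let fetched = await fetchItems()
        // After the await, execution resumes on the main actor automatically.
        items = fetched
    }
}

// Hypothetical network call used only for illustration.
func fetchItems() async -> [String] {
    try? await Task.sleep(nanoseconds: 100_000_000)
    return ["one", "two", "three"]
}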

1.2. Automatic Task Cancellation

In structured concurrency, cancellation signals propagate automatically down the task hierarchy: if a parent task is cancelled, all of its child tasks are marked as cancelled as well. Cancellation in Swift is cooperative, so each task still has to check the flag and stop its own work, but the propagation itself requires no manual bookkeeping, which helps stop unnecessary work early, conserve resources, and keep the application responsive.

Let’s look at an example of how this works:

func performTimeSensitiveOperation() async {
    let task = Task {
        await withTaskGroup(of: Void.self) { group in
            group.addTask { await performPartOne() }
            group.addTask { await performPartTwo() }
        }
    }

    // Cancel the parent task if needed, based on some condition (e.g., a timeout
    // or user action); both child tasks are then marked as cancelled automatically.
    task.cancel()
}

In this scenario, performTimeSensitiveOperation initiates a cancellable parent task that encompasses two child tasks. If the parent task is cancelled (e.g., due to a timeout or user action), both child tasks are marked as cancelled as well.
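
Because cancellation is cooperative, a child task such as performPartOne above should periodically check whether it has been cancelled and bail out. Here’s a minimal sketch, with processChunk standing in as a hypothetical unit of work:

func performPartOne() async {
    for index in 0..<100 {
        // Stop early once the parent task (and therefore this child) is cancelled.
        guard !Task.isCancelled else { return }
        await processChunk(index)
    }
}

// Hypothetical unit of work used only for illustration.
func processChunk(_ index: Int) async {
    try? await Task.sleep(nanoseconds: 10_000_000)
}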

1.3. Error Propagation

Structured concurrency simplifies error handling in concurrent code. Errors thrown by child tasks are propagated to the parent task, allowing developers to handle errors in a centralized manner rather than dealing with them individually in each child task.

The example below illustrates how a function uses a throwing task group to execute two child tasks. If either child task throws an error, it’s propagated to the parent task, where it can be caught and handled appropriately.

func loadData() async {
    do {
        try await withThrowingTaskGroup(of: Void.self) { group in
            group.addTask { try await loadDataPartOne() }
            group.addTask { try await loadDataPartTwo() }

            // Waiting on the group rethrows the first error thrown by either child task.
            try await group.waitForAll()
        }
    } catch {
        // Handle errors from child tasks here
        print("Failed to load data: \(error)")
    }
}

2. State Isolation with Actors

The cornerstone of concurrency lies in managing access to shared resources. Actors introduce a powerful paradigm to safely access and mutate state across multiple threads. They abstract away the complexity of managing concurrency, making the code simpler and preventing race conditions.

Let’s examine a scenario where we have an Account class that manages a user’s account balance. Operations on the balance, such as deposits and withdrawals, must be thread-safe to prevent concurrent modifications that could lead to an incorrect balance. Back in the old days, GCD and serial dispatch queues were the common way to synchronize access to the balance. Here’s how you can implement it:

class Account {
    private var balance: Double = 0.0
    // A serial queue so that all reads and writes of `balance` happen one at a time.
    private let queue = DispatchQueue(label: "io.grokkingswift.account.serialQueue")

    func deposit(amount: Double) {
        // Fire-and-forget write; the caller does not need a result.
        queue.async {
            self.balance += amount
        }
    }

    func withdraw(amount: Double) -> Bool {
        var success = false
        // Synchronous so the caller learns whether the withdrawal succeeded.
        queue.sync {
            guard self.balance >= amount else { return }
            self.balance -= amount
            success = true
        }
        return success
    }

    func getBalance() -> Double {
        var currentBalance: Double = 0.0
        queue.sync {
            currentBalance = self.balance
        }
        return currentBalance
    }
}

This approach ensures thread safety but requires manual management of dispatch queues. Furthermore, GCD and serial queues can lead to performance issues, such as excessive thread switching or blocked threads, if not handled carefully.

Actors are designed to minimize these issues. Here’s how you can achieve the same thread-safe Account using an actor:

actor Account {
    private var balance: Double = 0.0

    func deposit(amount: Double) {
        balance += amount
    }

    func withdraw(amount: Double) -> Bool {
        guard balance >= amount else { return false }
        balance -= amount
        return true
    }

    func getBalance() -> Double {
        return balance
    }
}

Under the hood, actors automate the synchronization of state access. Only the actor’s own methods can access and modify its state. This isolation guarantees that concurrent access to the state is controlled and serialized. Simultaneously, the Swift runtime ensures that only one piece of code accesses the actor’s state at a time, without the boilerplate of managing queues.

let account = Account()

Task {
    await account.deposit(amount: 100)
    let balance = await account.getBalance()
    print("Balance after deposit: \(balance)")
}

Task {
    let success = await account.withdraw(amount: 50)
    print(success ? "Withdrawal successful" : "Withdrawal failed")
    let balance = await account.getBalance()
    print("Balance after withdrawal: \(balance)")
}

In the scenario above, even though the two tasks run concurrently, access to the account actor’s state is automatically serialized. This simpler syntax lets developers focus on the logic of their applications rather than on the mechanics of concurrency management.
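
To convince yourself that the actor really does serialize access, here’s a minimal sketch that performs 1,000 concurrent deposits. Because every call to deposit runs in isolation on the actor, no increment is lost and the final balance is exact, unlike the unsynchronized counter shown earlier:

func stressTestAccount() async {
    let account = Account()

    await withTaskGroup(of: Void.self) { group in
        for _ in 0..<1_000 {
            group.addTask { await account.deposit(amount: 1) }
        }
    }

    // Prints 1000.0: the actor serializes every deposit, so no update is lost.
    print(await account.getBalance())
}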

3. Task Prioritization

Task prioritization is a critical aspect of Swift’s structured concurrency model, allowing developers to specify the relative importance of tasks. By assigning priorities, Swift ensures that system resources are allocated efficiently, prioritizing the execution of more critical tasks and thereby improving the application’s overall performance and responsiveness. This mechanism helps manage system load and prevent situations where low-priority tasks starve high-priority tasks of resources.

Swift defines several priority levels for tasks through the TaskPriority type, including .background, .utility, .low, .medium, .high, and .userInitiated (where .userInitiated is equivalent to .high, and .utility to .low). These priorities are designed to mirror user experience expectations:

  • .userInitiated (or .high) is intended for work the user is actively waiting on, which must finish quickly to keep the experience smooth.
  • .utility and .background are suited for work that runs behind the scenes, where execution time is less critical.

Prioritize tasks that directly impact the user experience to ensure that your application remains responsive and interactive. Operations like fetching data for the current screen should have higher priority over background tasks.
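
As a minimal sketch of this guideline (the two fetch functions are hypothetical), data for the visible screen is created at .userInitiated priority while prefetching runs at .background:

func refreshCurrentScreen() {
    // Work the user is waiting on right now.
    Task(priority: .userInitiated) {
        await fetchVisibleContent()
    }

    // Speculative work that can yield to more important tasks.
    Task(priority: .background) {
        await prefetchNextPage()
    }
}

// Hypothetical helpers used only for illustration.
func fetchVisibleContent() async { /* ... */ }
func prefetchNextPage() async { /* ... */ }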

One critical point is that you should avoid assigning high priorities to all tasks to make everything seem fast. Overuse of high priorities can lead to resource contention and potentially degrade overall system performance. Instead, consider adjusting task priorities dynamically based on the current application state or user actions. For example, increase the priority of data prefetching tasks when the network is idle or lower the priority of certain background tasks during user-intensive operations.
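
One simple way to do this is to pick the priority at the moment the task is created, based on the current state. In this sketch, isUserInteracting is a hypothetical flag and prefetchNextPage is the hypothetical helper from the previous example; prefetching is demoted to .background while the user is actively working with the app:

func schedulePrefetch(isUserInteracting: Bool) {
    // Demote speculative work while the user is busy with the app.
    let priority: TaskPriority = isUserInteracting ? .background : .utility
    Task(priority: priority) {
        await prefetchNextPage()
    }
}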

Conclusion

Concurrency safety is extremely important in modern software development, as it directly impacts the reliability, performance, and user experience of applications. Swift’s modern concurrency model inherently promotes safety by abstracting away the complexities of thread management and synchronization, empowering developers to build efficient and safe concurrent applications with ease and confidence.

Thanks for reading! 🚀
