Using Blocks in Objective-C by Example

Overview

Say we have a class called SPFPriceFetcher that uses JCDHTTPConnection.

SPFPriceFetcher
|
|—JCDHTTPConnection

JCDHTTPConnection wraps an NSURLConnection. Whenever data arrives, loading finishes, or a response is received, it has to give feedback BACK to SPFPriceFetcher. We give that feedback by retaining and calling block definitions passed in from SPFPriceFetcher.

Passing block definitions from parent (SPFPriceFetcher) to child (JCDHTTPConnection)

Basically what happens is that SPFPriceFetcher provides block definitions for JCDHTTPConnection to retain and use:

SPFPriceFetcher — OnSuccess block definition —-> JCDHTTPConnection
SPFPriceFetcher — OnFailure block definition —-> JCDHTTPConnection
SPFPriceFetcher — OnDidSendData block definition —-> JCDHTTPConnection

where the block interface is defined in JCDHTTPConnection.h as:
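A sketch of what those typedefs might look like; the block names come from the post, but the exact parameter lists are assumptions based on how the blocks are used later:

```objc
// JCDHTTPConnection.h -- block typedefs (parameter lists assumed)
typedef void (^OnSuccess)(NSHTTPURLResponse *response, NSString *body);
typedef void (^OnFailure)(NSHTTPURLResponse *response, NSString *body, NSError *error);
typedef void (^OnDidSendData)(NSInteger bytesWritten,
                              NSInteger totalBytesWritten,
                              NSInteger totalBytesExpectedToWrite);
```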

Using blocks

1) When you declare a method interface that takes such block definitions, you need to declare the block parameters using the block types defined by the typedefs above.

JCDHTTPConnection.h
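For example, a method declaration along these lines (the method name and parameter order are assumptions; the point is that each block parameter uses one of the typedefs):

```objc
// JCDHTTPConnection.h -- a method that accepts the block definitions (signature assumed)
- (void)executeRequest:(NSURLRequest *)request
             onSuccess:(OnSuccess)onSuccess
             onFailure:(OnFailure)onFailure
         onDidSendData:(OnDidSendData)onDidSendData;
```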

Then, in your implementation, you treat the passed-in block definitions as variables. In our case, we retain them by assigning them to copy properties:

JCDHTTPConnection.m
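A sketch of the implementation side; the copy properties are what "retain" the blocks (the property names, and the response/body properties used later, are assumptions):

```objc
// JCDHTTPConnection.m -- keep the blocks around so the delegate callbacks can call them later
@interface JCDHTTPConnection () <NSURLConnectionDataDelegate>
@property (nonatomic, copy) OnSuccess onSuccess;        // copy retains the block
@property (nonatomic, copy) OnFailure onFailure;
@property (nonatomic, copy) OnDidSendData onDidSendData;
@property (nonatomic, strong) NSURLConnection *urlConnection;
@property (nonatomic, strong) NSHTTPURLResponse *response;   // filled in by the delegate callbacks
@property (nonatomic, copy) NSString *body;
@end

@implementation JCDHTTPConnection

- (void)executeRequest:(NSURLRequest *)request
             onSuccess:(OnSuccess)onSuccess
             onFailure:(OnFailure)onFailure
         onDidSendData:(OnDidSendData)onDidSendData {
    self.onSuccess = onSuccess;
    self.onFailure = onFailure;
    self.onDidSendData = onDidSendData;
    self.urlConnection = [[NSURLConnection alloc] initWithRequest:request
                                                         delegate:self
                                                 startImmediately:YES];
}

// ... NSURLConnection delegate methods go here ...

@end
```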

Now, when the NSURLConnection reaches connectionDidFinishLoading:, we use our retained block definitions and call them:
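A sketch of that delegate callback, assuming self.response and self.body were populated by the other delegate methods:

```objc
// JCDHTTPConnection.m
- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    if (self.onSuccess) {
        // hand the collected response and body back to whoever provided the block
        self.onSuccess(self.response, self.body);
    }
}
```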

For example, self.onSuccess will call the block definition you retained, which was passed in from SPFPriceFetcher earlier. It passes in the needed parameters, NSHTTPURLResponse (self.response) and NSString * (self.body), for the block to process.

In SPFPriceFetcher.m, we pass in the block definition like so:
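A sketch of that call, using the hypothetical executeRequest:onSuccess:onFailure:onDidSendData: method from above (self.connection is an assumed JCDHTTPConnection property on SPFPriceFetcher):

```objc
// SPFPriceFetcher.m -- providing the block definitions
NSURLRequest *request =
    [NSURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/price"]];

[self.connection executeRequest:request
    onSuccess:^(NSHTTPURLResponse *response, NSString *body) {
        NSLog(@"success (%ld): %@", (long)response.statusCode, body);
    }
    onFailure:^(NSHTTPURLResponse *response, NSString *body, NSError *error) {
        NSLog(@"failure: %@", error.localizedDescription);
    }
    onDidSendData:^(NSInteger bytesWritten, NSInteger totalBytesWritten, NSInteger totalBytesExpectedToWrite) {
        NSLog(@"sent %ld of %ld bytes", (long)totalBytesWritten, (long)totalBytesExpectedToWrite);
    }];
```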

As you can see, when we provide block definitions, we simply use "^" to denote that it's a block, then follow the block interface and continue with the code implementation. We just have to make sure we match the block interface.

For example, in our case we know that the OnSuccess block is defined as:

OnSuccess block definition
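That is, the (assumed) typedef from JCDHTTPConnection.h shown earlier:

```objc
typedef void (^OnSuccess)(NSHTTPURLResponse *response, NSString *body);
```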

and thus, we first write "^" to denote that it's a block, then match the block interface by having the parameters be NSHTTPURLResponse and NSString. We then write the block's implementation.

OnSuccess block definition we pass in:
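Something like this; the body is purely illustrative:

```objc
^(NSHTTPURLResponse *response, NSString *body) {
    // matches the OnSuccess interface; do whatever SPFPriceFetcher needs with the result
    NSLog(@"got %ld with body: %@", (long)response.statusCode, body);
}
```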

OnFailure block definition

The same goes for the OnFailure block definition.
We first write "^" to denote that it's a block definition, then we provide the interface required by the OnFailure block, as shown:

OnFailure block definition we pass in:
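For example (again using the assumed OnFailure parameter list from the typedef above):

```objc
^(NSHTTPURLResponse *response, NSString *body, NSError *error) {
    NSLog(@"request failed (%ld): %@", (long)response.statusCode, error.localizedDescription);
}
```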

copy and mutableCopy of NSMutableString

For NSMutableString AND NSString

say we have:
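For example (the name temp is just for illustration):

```objc
NSMutableString *temp = [NSMutableString stringWithString:@"Hello"];
```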

If you use [temp copy], it will return an immutable object (NSString *) with its own address. This means you cannot change the data.
Hence, even if you assign the result to an NSMutableString pointer and use setString: to change the data, the runtime will throw an exception (attempt to mutate an immutable object).

If you use [temp mutableCopy], it will return a mutable object (NSMutableString *) with its own address.
However, be sure to assign it to an NSMutableString * so that you can use setString: to change the data and see the difference.
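A small demo of the difference; a sketch you can step through while watching the addresses:

```objc
NSMutableString *temp = [NSMutableString stringWithString:@"Hello"];

NSString *copied = [temp copy];                        // immutable, its own address
NSMutableString *mutableCopied = [temp mutableCopy];   // mutable, its own address

NSLog(@"temp: %p, copy: %p, mutableCopy: %p", temp, copied, mutableCopied);

[mutableCopied setString:@"Goodbye"];                  // fine, this one is genuinely mutable
NSLog(@"temp = %@, mutableCopy = %@", temp, mutableCopied);

// NSMutableString *bad = (NSMutableString *)[temp copy];
// [bad setString:@"Goodbye"];   // compiles, but throws at runtime: attempt to mutate immutable object
```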

Shallow and deep copy of NSArray

copying NSArray

If we use [array copy] on an immutable NSArray, the source array and the copy point to the same address (copying an immutable array simply returns the same object).

If we use [array mutableCopy], a new array object is created, so the source array and the copy have different addresses.

However, the User pointers inside the copy still point to the same User objects as the source array (this is what the diagram below shows). Thus, after [array mutableCopy], array and copyArray may be different array objects, but they both point to the same User objects.
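A sketch of what the sample code is doing (User is the post's example class; userWithName: is a hypothetical convenience constructor):

```objc
NSArray *array = @[[User userWithName:@"ricky"], [User userWithName:@"john"]];

NSArray *copyArray = [array copy];                        // same address as array
NSMutableArray *mutableCopyArray = [array mutableCopy];   // a brand-new array object

NSLog(@"array: %p, copy: %p, mutableCopy: %p", array, copyArray, mutableCopyArray);

// shallow: both arrays still point at the very same User objects
NSLog(@"array[0]: %p, mutableCopy[0]: %p", array[0], mutableCopyArray[0]);
```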

Simple pointing

Run the source code and debug it. Analyze the variables and their addresses.

Basically, this is what’s going on:

(Diagram "nsarray_copy": with copy, array and copyArray are the same object; with mutableCopy, they are separate arrays whose elements still point to the same User objects.)

When we're doing a shallow copy, changing the name 'ricky' to 'rocky' will also be reflected in copyArray, because the source array and copyArray point to the same User objects.

Shallow Copy

If you want a shallow copy that is a distinct array object (but still shares its elements), change copy to mutableCopy.

With mutableCopy, copyArray gets its own array (the mutableCopy case in the diagram above). So if you use an NSMutableArray to receive the return value from mutableCopy, you can add and remove objects. In the example I assigned it to an NSArray pointer, and since that type is immutable, I can't add or remove elements through it. It's up to you to decide which one to use.

In other words, you end up with two distinct arrays, so if you were to remove or add items from one array, it wouldn’t affect the other array. However, the items in the two arrays are identical right after the copy.

Therefore, notice that the User objects the source array and its mutable copy point to are the same.


In order to do a deep copy, you would have to make a new array, and each element of the new array would be a deep copy of the corresponding element of the old array.
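One way to sketch that, assuming User adopts NSCopying (shown in the next section):

```objc
// copyItems:YES sends -copyWithZone: to every element, so each User gets its own copy
NSArray *deepCopyArray = [[NSArray alloc] initWithArray:array copyItems:YES];

// the equivalent manual version
NSMutableArray *manualDeepCopy = [NSMutableArray arrayWithCapacity:array.count];
for (User *user in array) {
    [manualDeepCopy addObject:[user copy]];
}
```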

Deep Copy

Now, if you look at the User addresses, they are all different between the two distinct arrays. Changing the User at index 0 in array 1 will not affect the User at index 0 in array 2.


Implementing NSCopying

REF: http://stackoverflow.com/questions/4089238/implementing-nscopying

User.m
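A minimal sketch of that implementation, following the Stack Overflow reference above (the name and age properties are assumed):

```objc
// User.m
- (id)copyWithZone:(NSZone *)zone {
    User *copy = [[[self class] allocWithZone:zone] init];
    copy.name = [self.name copy];   // copy the string so the new User owns its own data
    copy.age = self.age;
    return copy;
}
```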

NSNotification addObserver object differentiation

In your init
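A sketch of the observer registration; SELECTED_RADIO_BUTTON_CHANGED and sexGroup come from the post, and sexGroupUpdated: is the handler shown next:

```objc
// MainView.m -- in init: observe the notification, but only when it comes from self.sexGroup
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(sexGroupUpdated:)
                                             name:SELECTED_RADIO_BUTTON_CHANGED
                                           object:self.sexGroup];
```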

what to do when the message is sent
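The handler might look like this (the body is illustrative):

```objc
- (void)sexGroupUpdated:(NSNotification *)notification {
    // notification.object is the TNRadioButtonGroup that posted the notification
    NSLog(@"sex group changed: %@", notification.object);
}
```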

Sending the message
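And the sender posts the notification, passing itself as the object; a sketch of what TNRadioButtonGroup does:

```objc
// inside TNRadioButtonGroup.m (sketch)
[[NSNotificationCenter defaultCenter] postNotificationName:SELECTED_RADIO_BUTTON_CHANGED
                                                    object:self];
```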

When adding observers to receive notifications for a certain unique id, we can choose which object to receive them from. In our example, the method sexGroupUpdated: will be executed once a notification with the unique id SELECTED_RADIO_BUTTON_CHANGED is received, but only if that notification was sent from the self.sexGroup object.

Extending our example, let's say that in MainView.m we have multiple objects of the same class TNRadioButtonGroup. Those objects all send notifications with the unique id SELECTED_RADIO_BUTTON_CHANGED. We can differentiate which handler method a notification will hit by specifying, in the object: parameter, the object that sends it:
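A sketch of those registrations (the selector names follow the post's naming):

```objc
// MainView.m -- same notification name, but a different object: for each group
NSNotificationCenter *center = [NSNotificationCenter defaultCenter];

[center addObserver:self selector:@selector(sexGroupUpdated:)
               name:SELECTED_RADIO_BUTTON_CHANGED object:self.sexGroup];
[center addObserver:self selector:@selector(hobbiesGroupUpdated:)
               name:SELECTED_RADIO_BUTTON_CHANGED object:self.hobbiesGroup];
[center addObserver:self selector:@selector(temperatureGroupUpdated:)
               name:SELECTED_RADIO_BUTTON_CHANGED object:self.temperatureGroup];
```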

The sending of those notifications happens in TNRadioButtonGroup.m, as sketched above under "Sending the message".

It posts a notification with the constant string name SELECTED_RADIO_BUTTON_CHANGED.

Let’s analyze in detail.

In our MainView.m, we’re adding an observer for notifications for string id SELECTED_RADIO_BUTTON_CHANGED.

But we're receiving it from three different instances of TNRadioButtonGroup:
sexGroup, hobbiesGroup, and temperatureGroup. They all send notifications with the id SELECTED_RADIO_BUTTON_CHANGED.

In other words, sexGroup, hobbiesGroup, and temperatureGroup all send those notifications. How do we differentiate them?

That's where the object parameter comes in. As you noticed, we specified, for each observer, the object from which we want to receive these notifications (see the addObserver calls above).

It simply means we will execute temperatureGroupUpdated: when we receive SELECTED_RADIO_BUTTON_CHANGED notifications, but only from the object self.temperatureGroup.

UIKeyboardTypeDecimalPad

dispatch_async vs dispatch_sync

ref: http://stackoverflow.com/questions/4360591/help-with-multi-threading-on-ios

Xcode 7.3 sample code

The main reason why you want to use concurrent or serial queues over the main queue is to run tasks in the background.

Sync on a Serial Queue

dispatch_sync –

1) dispatch_sync means that the block is enqueued, and the calling code will NOT continue (or enqueue further tasks) UNTIL that block has been executed.
2) dispatch_sync is a blocking operation. It DOES NOT RETURN until its current task has been executed.
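A minimal sketch of such a case; the queue name serialQueue2 comes from the post, and runTask: is a hypothetical helper standing in for the long task that logs "TASK B – 0" through "TASK B – 9":

```objc
// hypothetical helper used throughout the GCD examples below
- (void)runTask:(NSString *)name {
    NSLog(@"---  Task %@ start  ---", name);
    for (int i = 0; i < 10; i++) {
        [NSThread sleepForTimeInterval:1];   // simulate slow work
        NSLog(@"%@ - %d", name, i);
    }
    NSLog(@"^^^ Task %@ END ^^^", name);
}
```

Then the example itself:

```objc
dispatch_queue_t serialQueue2 = dispatch_queue_create("com.example.serialQueue2", DISPATCH_QUEUE_SERIAL);

NSLog(@"--- START ---");
dispatch_sync(serialQueue2, ^{
    NSLog(@"--- dispatch start ---");
    [self runTask:@"TASK B"];
    NSLog(@"--- dispatch end ---");
});
NSLog(@"--- END ---");
```

Step by step: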

1) prints START
2) puts the block of code onto serialQueue2, then blocks, i.e. does not return
3) the block of code executes
4) when the block of code finishes, dispatch_sync returns, and we move on to the next instruction, which prints END

output:

— START —
— dispatch start —
—  Task TASK B start  —
TASK B – 0
TASK B – 1


TASK B – 8
TASK B – 9
^^^ Task TASK B END ^^^
— dispatch end —
— END —

Because dispatch_sync does not return immediately, it also blocks the main queue (we called it from the main thread). Try playing around with your UISlider while the task runs:
it is not responsive.

dispatch_async means that the block is enqueued and the call RETURNS IMMEDIATELY, letting the next instructions execute and the main thread keep processing.

ASYNC on a Serial Queue

Let’s start with a very simple example:
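A sketch, reusing serialQueue2 and the runTask: helper from above:

```objc
NSLog(@"--- START ---");
dispatch_async(serialQueue2, ^{
    NSLog(@"--- dispatch start ---");
    [self runTask:@"TASK B"];
    NSLog(@"--- dispatch end ---");
});
NSLog(@"--- END ---");
```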

This means:

1) prints START
2) we dispatch a block onto serialQueue2, then return control immediately.
3) Because dispatch_async returns immediately, we continue down to the next instruction, which prints END
4) the dispatched block starts executing

— START —
— END —
— dispatch start —
—  Task TASK B start  —
TASK B – 0


TASK B – 9
^^^ Task TASK B END ^^^
— dispatch end —

If you look at your UISlider, it is still responsive.

Slider code
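The slider hookup is just a plain IBAction (a sketch); if the main thread is free, dragging it keeps logging values:

```objc
- (IBAction)sliderMoved:(UISlider *)sender {
    NSLog(@"slider value: %f", sender.value);
}
```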

sync-ed

dispatch_sync means that the block is enqueued, and the calling code will NOT continue (or enqueue further tasks) UNTIL that block has been executed.

Now let’s dispatch the first task (printing numerics) in a sync-ed fashion. This means that we put the task on the queue. Then while the queue is processing that task, it WILL NOT queue further tasks until this current task is finished.
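A sketch of that arrangement, again on serialQueue2:

```objc
// first task: print the numbers, sleeping between prints (dispatch_sync, so we wait here)
dispatch_sync(serialQueue2, ^{
    for (int i = 0; i < 10; i++) {
        [NSThread sleepForTimeInterval:1];
        NSLog(@"number %d", i);
    }
});

// only after the numbers finish do we reach this line and queue the next task (the letters)
dispatch_sync(serialQueue2, ^{
    for (char c = 'a'; c <= 'j'; c++) {
        NSLog(@"letter %c", c);
    }
});
```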

The sleep call keeps the current task running, so nothing else on that serial queue gets to execute. Only when we finish printing out all the numbers does the queue move on to the next task, which prints out the letters.

Let’s throw a button in there, and have it display numbers. We’ll do one dispatch_sync first.
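A sketch of the button action, assuming serialQueue2 is kept in a property:

```objc
- (IBAction)buttonTapped:(UIButton *)sender {
    dispatch_sync(self.serialQueue2, ^{
        [self runTask:@"TASK A"];   // prints the numbers; the main thread waits here
    });
}
```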

You'll notice that the slider is unresponsive. That's because dispatch_sync blocks the calling thread (here the main thread, so the main queue can't process UI events) until the dispatched block has been executed. It does not return until that block has finished.

Async, Sync, on Concurrent Queue

Now let's dispatch_async first, then dispatch_sync.
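A sketch, this time on a concurrent queue, again reusing the runTask: helper:

```objc
dispatch_queue_t concurrentQueue = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

NSLog(@"--- START ---");
dispatch_async(concurrentQueue, ^{
    NSLog(@"=== A start ===");
    [self runTask:@"TASK A"];
    NSLog(@"=== A end ===");
});
dispatch_sync(concurrentQueue, ^{
    NSLog(@"=== B start ===");
    [self runTask:@"TASK B"];
    NSLog(@"=== B end ===");
});
NSLog(@"--- END ---");
```

What happens here is that: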

1) prints — START —
2) we dispatch the block TASK A onto the concurrent queue, control returns immediately. prints === A start ===

3) Then we dispatch_sync TASK B on the same queue, it does not return and blocks because this task needs to complete before we relinquish control.
task B starts, prints === B start ===

The main UI is now blocked by Task B’s dispatch_sync.

4) Since Task A was executing before Task B, it will run along with B. Both will run at the same time because our queue is concurrent.

5) both tasks finish and print === A end === and === B end ===

6) control returns ONLY WHEN Task B finishes; Task B's dispatch_sync then returns control, and we can move on to the next instruction, which logs — END —

output:

— START —
2016-08-25 14:26:13.638 sync_async[29186:3817447] === A start ===
2016-08-25 14:26:13.638 sync_async[29186:3817414] === B start ===
2016-08-25 14:26:13.638 sync_async[29186:3817447] —  Task TASK A start  —
2016-08-25 14:26:13.638 sync_async[29186:3817414] —  Task TASK B start  —
2016-08-25 14:26:14.640 sync_async[29186:3817414] TASK B – 0
2016-08-25 14:26:14.640 sync_async[29186:3817447] TASK A – 0


2016-08-25 14:26:21.668 sync_async[29186:3817447] TASK A – 7
2016-08-25 14:26:21.668 sync_async[29186:3817414] TASK B – 7
2016-08-25 14:26:21.668 sync_async[29186:3817447] ^^^ Task TASK A END ^^^
2016-08-25 14:26:21.668 sync_async[29186:3817414] ^^^ Task TASK B END ^^^
2016-08-25 14:26:21.668 sync_async[29186:3817447] === A end ===
2016-08-25 14:26:21.668 sync_async[29186:3817414] === B end ===
2016-08-25 14:26:21.668 sync_async[29186:3817414] — END —

Sync, Async, on Concurrent Queue

If we were to run it sync, then async:
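A sketch, same concurrent queue and runTask: helper, with the order of the two dispatch calls swapped:

```objc
NSLog(@"--- START ---");
dispatch_sync(concurrentQueue, ^{
    NSLog(@"=== A start ===");
    [self runTask:@"TASK A"];
    NSLog(@"=== A end ===");
});
dispatch_async(concurrentQueue, ^{
    NSLog(@"=== B start ===");
    [self runTask:@"TASK B"];
    NSLog(@"=== B end ===");
});
NSLog(@"--- END ---");
```

Step by step: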

1) prints START
2) dispatch_sync on the concurrent queue. The sync blocks, i.e. does NOT return until the task finishes. Thus, at this point the UI is unresponsive.
3) prints === A start ===
4) Task A executes.

5) prints === A end ===
6) Now we dispatch another task via dispatch_async onto the concurrent queue. Control returns immediately, and we move on to the next instruction. At this point the UI is responsive again.

7) Due to control returning immediately at 6)'s dispatch_async, we print — END —
8) the dispatched task starts, prints === B start ===, and Task B executes.
9) Task B finishes, and we print === B end ===

output:

— START —
=== A start ===
—  Task TASK A start  —
TASK A – 0

TASK A – 7
^^^ Task TASK A END ^^^
=== A end ===
— END —
=== B start ===
—  Task TASK B start  —
TASK B – 0
TASK B – 1

TASK B – 6
TASK B – 7
^^^ Task TASK B END ^^^
=== B end ===

Async on Serial Queues

However, if we were to use a serial queue, each task would finish executing before the queue moves on to the next one.
Hence Task A would have to finish before Task B can start.
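A sketch on the serial queue with a single task:

```objc
NSLog(@"--- START ---");
dispatch_async(serialQueue2, ^{
    NSLog(@"== START ==");
    [self runTask:@"TASK A"];
    NSLog(@"== END ==");
});
NSLog(@"--- END ---");
```

Step by step: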

1) prints — START —
2) dispatch task block onto serial queue via dispatch_async. Returns immediately. UI responsive.
3) prints — END — due to execution continuing
4) prints == START == as this block starts to execute
5) Task A executes
6) prints == END == task block finishes

output:

— START —
— END —
== START ==
—  Task TASK A start  —
TASK A – 0
TASK A – 1
TASK A – 2
TASK A – 3
TASK A – 4
TASK A – 5
TASK A – 6
TASK A – 7
TASK A – 8
TASK A – 9
^^^ Task TASK A END ^^^
== END ==

Sync on Serial Queue
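The same sketch, but dispatched with dispatch_sync:

```objc
NSLog(@"--- START ---");
dispatch_sync(serialQueue2, ^{
    NSLog(@"== START ==");
    [self runTask:@"TASK A"];
    NSLog(@"== END ==");
});
NSLog(@"--- END ---");
```

Step by step: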

1) log — START —
2) puts the task block onto the serial queue via dispatch_sync. Does not return control until the task finishes. Thus, the UI (main thread) is blocked.
3) log == START == as the task block starts
4) Task A executes
5) log == END == as the task block ends
6) Task block is finished, so dispatch_sync relinquishes control, thus UI is responsive again. log — END –.

— START —
== START ==
—  Task TASK A start  —
TASK A – 0
TASK A – 1
TASK A – 2

TASK A – 9
^^^ Task TASK A END ^^^
== END ==
— END —

Serial Queue – Async, then Sync

What actually happens is that the serial queue schedules Task A and Task B to execute one by one. The effects of dispatch_async and dispatch_sync themselves are immediate:

1) dispatch_async Task A – Task A gets queued. The serial queue starts planning threads to work on this task A. Execution control continues because dispatch_async returns right away.

2) dispatch_sync Task B – Task B gets queued. The serial queue is still working on Task A, and by definition of a serial queue, Task B must wait for Task A to finish before it can run. However, dispatch_sync takes effect immediately: it blocks the calling (main) thread, so the main queue can't process anything, and no further tasks behind Task B get submitted.

Hence, in the situation created by 1) and 2), Task A is executing, Task B is waiting for Task A to finish, and the dispatch_sync is blocking the main thread. That is why your UISlider is not responsive.
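A sketch of this async-then-sync arrangement on the serial queue:

```objc
NSLog(@"--- START ---");
dispatch_async(serialQueue2, ^{
    NSLog(@"== START ==");
    [self runTask:@"TASK A"];
    NSLog(@"== END ==");
});
dispatch_sync(serialQueue2, ^{
    NSLog(@"== START ==");
    [self runTask:@"TASK B"];
    NSLog(@"== END ==");
});
NSLog(@"--- END ---");
```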

output:

— START —
== START ==
—  Task TASK A start  —
TASK A – 0
TASK A – 1
TASK A – 2
TASK A – 3
TASK A – 4

TASK A – 9
^^^ Task TASK A END ^^^
== END ==
== START ==
—  Task TASK A start  —
TASK A – 0

TASK A – 9
^^^ Task TASK A END ^^^
== END ==
— END —

Serial Queue – Sync, then Async

The first dispatch_sync blocks the calling (main) thread and everything behind it, so the UI is unresponsive while Task A runs.
When Task A finishes, the dispatch_sync returns. Task B is then submitted via dispatch_async, which returns immediately.
Thus, the UI is NOT responsive while Task A is running. Then, when Task A finishes, the serial queue lets Task B run; because Task B was started via dispatch_async, the UI is responsive again while it runs.
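A sketch of the sync-then-async arrangement:

```objc
NSLog(@"--- START ---");
dispatch_sync(serialQueue2, ^{
    NSLog(@"== START ==");
    [self runTask:@"TASK A"];   // UI blocked while this runs
    NSLog(@"== END ==");
});
dispatch_async(serialQueue2, ^{
    NSLog(@"== START ==");
    [self runTask:@"TASK B"];   // UI responsive again while this runs
    NSLog(@"== END ==");
});
NSLog(@"--- END ---");
```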

Nested dispatches

Async nest Async on a Serial Queue
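A sketch: an async block that dispatches another async block onto the same serial queue:

```objc
NSLog(@"--- START ---");
dispatch_async(serialQueue2, ^{
    NSLog(@"--- OUTER BLOCK START ---");
    [self runTask:@"TASK A"];
    dispatch_async(serialQueue2, ^{
        NSLog(@"--- INNER BLOCK START ---");
        [self runTask:@"TASK B"];
        NSLog(@"--- INNER BLOCK END ---");
    });
    NSLog(@"--- OUTER BLOCK END ---");
});
NSLog(@"--- END ---");
```

Step by step: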

1) prints — START —
2) dispatch async a block task onto the serial queue. It returns right away, does not block UI. Execution continues.
3) Execution continues, prints — END —

4) the block task starts to execute. prints — OUTER BLOCK START —
5) Task A executes and prints its stuff

6) dispatch async another block onto the same serial queue. It returns execution right away, does not block UI. Execution continues.
7) Execution continues and prints — OUTER BLOCK END —

8) The inner block starts processing on the serial queue. prints — INNER BLOCK START —
9) Task B executes and prints stuff
10) prints — INNER BLOCK END —

Result:

— START —
— OUTER BLOCK START —
— END —
—  Task TASK A start  —
TASK A – 0

TASK A – 9
^^^ Task TASK A END ^^^
— OUTER BLOCK END —
— INNER BLOCK START —
—  Task TASK B start  —
TASK B – 0

TASK B – 9
^^^ Task TASK B END ^^^
— INNER BLOCK END —

Async nest Sync on a Serial Queue – DEADLOCK!

deadlock
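A sketch of the code that deadlocks; the numbered comments match the explanation below:

```objc
NSLog(@"--- START ---");                              // 1
dispatch_async(serialQueue2, ^{                       // 2: the outer block now occupies the serial queue
    NSLog(@"--- OUTER BLOCK START ---");              // 3
    [self runTask:@"TASK A"];                         // 4
    NSLog(@"--- OUTER BLOCK END ---");                // 5
    dispatch_sync(serialQueue2, ^{                    // 6: waits for a block that can never start,
        NSLog(@"--- INNER BLOCK START ---");          //    because the block from // 2 never finishes
        [self runTask:@"TASK B"];
        NSLog(@"--- INNER BLOCK END ---");
    });
});
NSLog(@"--- END ---");
```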

Notice that we’re on a serial queue. Which means the queue must finish the current task, before moving on to the next one.
The key idea here is that the task block that’s being queued at //2, must complete before any other tasks on the queue can start.

At // 6, we put another task block onto the queue, but due to dispatch_sync, we don't return. We only return once the block queued at // 6 finishes executing.

But how can the 1st task block from // 2 finish, if it's stuck waiting on the 2nd task block from // 6?

This is what leads to the deadlock.

Sync nest Async on a Serial Queue
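A sketch: the outer block goes onto the serial queue with dispatch_sync, and from inside it we dispatch_async the inner block onto the same queue:

```objc
NSLog(@"--- START ---");
dispatch_sync(serialQueue2, ^{
    NSLog(@"--- OUTER BLOCK START ---");
    [self runTask:@"TASK A"];
    dispatch_async(serialQueue2, ^{            // queued; runs after the outer block finishes
        NSLog(@"--- INNER BLOCK START ---");
        [self runTask:@"TASK B"];
        NSLog(@"--- INNER BLOCK END ---");
    });
    NSLog(@"--- OUTER BLOCK END ---");
});
NSLog(@"--- END ---");
```

Step by step: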

1) log — START —
2) sync task block onto the queue, blocks UI
3) log — OUTER BLOCK START —
4) Task A processes and finishes
5) dispatch_async another task block onto the queue. The UI is still blocked from 2)'s sync, but execution moves forward within the outer block because dispatch_async
returns immediately.
6) execution moves forward and log — OUTER BLOCK END —
7) outer block finishes execution, dispatch_sync returns. UI has control again. logs — END —
8) log — INNER BLOCK START —
9) Task B executes
10) log — INNER BLOCK END —

Async nest Async on Concurrent Queue
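A sketch, reusing concurrentQueue and runTask: from earlier:

```objc
NSLog(@"--- START ---");
dispatch_async(concurrentQueue, ^{
    NSLog(@"--- OUTER BLOCK START ---");
    [self runTask:@"TASK A"];
    dispatch_async(concurrentQueue, ^{
        NSLog(@"--- INNER BLOCK START ---");
        [self runTask:@"TASK B"];
        NSLog(@"--- INNER BLOCK END ---");
    });
    NSLog(@"--- OUTER BLOCK END ---");
});
NSLog(@"--- END ---");
```

Step by step: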

1) log –START–
2) dispatch_async puts block task onto the concurrent queue. Does not block, returns immediately.
3) execution continues, and we log — END —
4) queue starts processing the task block from //2. prints — OUTER BLOCK START —
5) Task A executes
6) dispatch_async puts another block task onto the concurrent queue. Now there are two blocks on the queue. It does not block and returns immediately.
7) prints — OUTER BLOCK END –, task block #1 is done and de-queued.
8) prints — INNER BLOCK START —
9) Task B executes
10) prints — INNER BLOCK END —

Async nest Sync on Concurrent Queue
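A sketch; this time the inner dispatch is synchronous, but it only blocks the background thread running the outer block:

```objc
NSLog(@"--- START ---");
dispatch_async(concurrentQueue, ^{
    NSLog(@"--- OUTER BLOCK START ---");
    [self runTask:@"TASK A"];
    dispatch_sync(concurrentQueue, ^{          // blocks this background thread, not the main thread
        NSLog(@"--- INNER BLOCK START ---");
        [self runTask:@"TASK B"];
        NSLog(@"--- INNER BLOCK END ---");
    });
    NSLog(@"--- OUTER BLOCK END ---");
});
NSLog(@"--- END ---");
```

Step by step: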

1) prints –START–
2) puts block task on concurrent queue. returns immediately so UI and other queues can process
3) since execution immediately returns, we print — END —

4) prints — OUTER BLOCK START —
5) Task A executes

6) dispatch_sync puts another task block onto the concurrent queue. It returns ONLY when that block has finished.
Note that it blocks only the current execution context, i.e. the background thread running the outer block.
The outer scope, including the main thread, can still process. That's why the UI is still responsive.

7) While that dispatch_sync waits, the inner block runs and prints — INNER BLOCK START —
8) Task B executes
9) prints — INNER BLOCK END —
10) prints — OUTER BLOCK END —

Sync nest Async on Concurrent Queue
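A sketch: the outer block is synchronous (so the main thread waits for it), and the inner block is asynchronous:

```objc
NSLog(@"--- START ---");
dispatch_sync(concurrentQueue, ^{
    NSLog(@"--- OUTER BLOCK START ---");
    [self runTask:@"TASK A"];
    dispatch_async(concurrentQueue, ^{
        NSLog(@"--- INNER BLOCK START ---");
        [self runTask:@"TASK B"];
        NSLog(@"--- INNER BLOCK END ---");
    });
    NSLog(@"--- OUTER BLOCK END ---");
});
NSLog(@"--- END ---");
```

Step by step: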

1) logs — START —
2) dispatch_sync a block task onto the concurrent queue, we do not return until this whole thing is done. UI not responsive
3) prints — OUTER BLOCK START —
4) Task A executes
5) dispatch_async a 2nd block onto the concurrent queue. The async returns immediately.
6) prints — OUTER BLOCK END –.
7) The 1st task block finishes, and dispatch_sync returns.
8) prints — END —
9) prints — INNER BLOCK START —
10) Task B executes
11) prints — INNER BLOCK END —

2 serial queues

Say it takes 10 seconds to complete a DB operation.

Say I have 1st serial queue. I use dispatch_async to quickly throw tasks on there without waiting.
Then I have a 2nd serial queue. I do the same.

When they execute, the 2 serial queues will be executing at the same time. In a situation where you have
a DB resource, having ONE serial queue makes it thread safe as all threads will be in queue.

But what if someone else spawns a SECOND serial queue? Those two serial queues will be accessing the DB resource
at the same time!
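A sketch of the two-queue situation; runTask: stands in for the 10-second DB operation:

```objc
dispatch_queue_t dbQueueOne = dispatch_queue_create("com.example.dbQueueOne", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t dbQueueTwo = dispatch_queue_create("com.example.dbQueueTwo", DISPATCH_QUEUE_SERIAL);

// each queue is serial on its own, but the two queues run independently of each other,
// so the two "DB writes" end up interleaved
dispatch_async(dbQueueOne, ^{
    [self runTask:@"TASK A"];
});
dispatch_async(dbQueueTwo, ^{
    [self runTask:@"TASK B"];
});
```

The interleaved output looks something like this: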

—  Task TASK A start  —
—  Task TASK B start  —
TASK B – 0
TASK A – 0
TASK B – 1
TASK A – 1
TASK B – 2

As you can see both operations are writing to the DB at the same time.

If you were to use dispatch_sync instead:
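A sketch with the same two queues:

```objc
dispatch_sync(dbQueueOne, ^{
    [self runTask:@"TASK A"];   // the calling thread waits here...
});
dispatch_sync(dbQueueTwo, ^{
    [self runTask:@"TASK B"];   // ...so this block isn't even submitted until TASK A is done
});
```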

The dispatch_sync will not return until the current task block is finished. The good thing about this is that the DB operation on serial queue ONE can finish
without the DB operation on serial queue TWO starting.

The dispatch_sync on serial queue ONE blocks the calling thread, so the block destined for serial queue TWO isn't even submitted until the first DB operation finishes.

TASK A – 6
TASK A – 7
TASK A – 8
TASK A – 9
^^^ Task TASK A END ^^^
—  Task TASK B start  —
TASK B – 0
TASK B – 1
TASK B – 2

However, we are also blocking the main thread, because we're dispatching synchronously from the main queue! -.-
In order to not block the main thread, we want to do the waiting on another queue that runs concurrently with the main queue.
Thus, we just throw everything inside of a concurrent queue:
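A sketch of that wrapping, reusing the concurrent queue and the two DB queues from above:

```objc
dispatch_async(concurrentQueue, ^{
    // the main thread is free; the waiting now happens on this background thread instead
    dispatch_sync(dbQueueOne, ^{
        [self runTask:@"TASK A"];
    });
    dispatch_sync(dbQueueTwo, ^{
        [self runTask:@"TASK B"];
    });
});
```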

Our concurrent queue runs alongside the main queue, so the UI stays responsive.
The blocking for our DB tasks happens within the context of the concurrent queue's worker thread. It blocks processing there,
but doesn't touch the main queue, and thus doesn't block the main thread.