Binary Tree Concepts

Basics

tree1

Why height of a tree?

We want to figure out the height of a tree because it is the worst-case cost of a search: the maximum number of steps it takes to find an item.

For example, if we find the item at the root node, great! It took only 1 step. If we find another item in 2 or 3 steps, that’s about average. But what if the key we’re looking for is at the very bottom of the tree, at a leaf node? Then we’d have to step h times, where h is the height of the tree.

In the example below, where the height is 3, the worst case is to step 3 times down to a leaf node to either find a match or conclude that the key is not in the tree.

Proof

The number of nodes in a perfect binary tree is denoted by n:

1 + 2^1 + 2^2 + 2^3 + 2^4 … + 2^h = n

This is also true:

n = 2^(h+1) – 1

binarytree1

Reminder: the height of a leaf node is 0, and so on up the tree, as shown in the first image of this article.


Hence

1 + 2^1 + 2^2 + 2^3 + 2^4 … + 2^h = 2^(h+1) – 1

n = 2^(h+1) – 1
n + 1 = 2^(h + 1)

note that lg has base of 2

We apply lg on both sides:
lg (n + 1) = (h + 1) lg 2

Hence, because lg 2 = 1, then lg (n+1) = h + 1
lg (n+1) – 1 = h

Hence, by the definition of O notation, where f(n) ≤ c * g(n) for some constant c,

we can say that h = O(lg n).

Usage

To find the maximum number of steps from the root to the bottom of the tree, we use the equation from the proof above:
h = lg(n + 1) – 1, where n is the total number of nodes in a balanced tree.

Reminder. The total number of nodes n = 2 ^ (h + 1) – 1 as shown in the image above.

If we want to find the total number of steps it takes from root to bottom, we simply solve the equation for the height h.

Calculating height of tree

Given the total number of nodes n:
n = 2 ^ (h + 1) – 1
as shown in the image above.

2 ^ (h + 1) = n + 1
(h + 1) lg 2 = lg (n + 1 )
since lg 2 = 1

then h = lg (n + 1 ) – 1

For example, given the number of nodes n = 31, the height is h = lg(31 + 1) – 1 = 4 steps.
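The formula can be sanity-checked with a few lines of code (Python here, purely for illustration; `tree_height` is an invented helper name):

```python
import math

def tree_height(n):
    """Height (edges from the root down to a leaf) of a perfect binary tree with n nodes."""
    return int(math.log2(n + 1)) - 1

print(tree_height(31))  # 4: a perfect tree with 31 nodes is 4 steps deep
```

Trying n = 1 gives height 0, matching the convention above that a lone root/leaf node has height 0.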

Calculating terms, or node levels

If you want to find the terms, or levels of nodes, in the tree, we use the geometric series, defined as:

S = a + ar + ar^2 + ar^3 + … + ar^(n-1)

where ‘a’ is the initial number of nodes.
‘r’ is the rate of growth.

term1 + term2 + term3 + term4 + term5
1 + 1(2^1) + 1(2^2) + 1(2^3) + 1(2 ^4) = 1 + 2 + 4 + 8 + 16 = 31

A term represents a node level.

The closed form is
S = l_0 * (q^n – 1)/(q – 1)
where q is the rate of growth, n is the number of terms (node levels),
and l_0 is the initial number of nodes.

hence, l_0 = 1, q = 2
S = 1*(2^n – 1)/(2-1) = 2^n – 1

S = 2^n – 1
S + 1 = 2 ^ n
lg (S+1) = n * lg 2

since lg 2 = 1
lg (S+1) = n

Hence number of node levels is defined as:
n = lg(S + 1)

NOTE: node levels and height are VERY different things.
Height counts the edges (steps) from the root down to the deepest leaf, whereas node levels count each floor of the tree; a tree of height h has h + 1 levels.
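The node-level formula can be checked the same way (Python for illustration; `node_levels` is an invented helper name):

```python
import math

def node_levels(s):
    """Number of levels (floors) in a perfect binary tree with s total nodes."""
    return int(math.log2(s + 1))

print(node_levels(31))  # 5 levels, versus a height of 4
```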

Balanced Tree

In order to keep a tree balanced, so that search stays O(log n), we can use:

  • AVL tree
  • Red-black tree

MVC for iOS


Data Models – objects that you use to maintain, manipulate, and retrieve data.

The data model represents the core information that your application is being used to access and manipulate.

For example, User is a piece of data. Then you would want a class UserDB, with methods specifically for passing user data in and retrieving user data from the database. All of that combined is a very basic data model. As you add more objects to represent all the information that interacts, you get a more complex data model.

The Data Model works strictly between the controller and the database storage.
They’re kind of like the warehouse guys at retail stores. A manager tells them what stuff to bring out from the warehouse and where to stock it. Bringing the stuff out of the warehouse is like retrieving data from the database. When the warehouse guys stock it, they are giving the needed data back to the controller, or manager.

Controller – A controller can send commands to the model to update the model’s state. It can also send commands to its associated view to change the view’s presentation of the model.

Essentially, it’s like a manager on a sales floor. Salespeople let the manager know what’s going on and which items have sold out, are overstocked, or whatever the customers need. The manager then communicates with the warehouse guys about what kind of items to bring out, take back, restock, overstock, etc.

Similarly, the controller sees what the views want or request, communicates with the data model to do it, and then gives any response back to the views.

View – These are like the salespeople. They communicate with the customers and take care of all customer input.

The View takes in all user input, then gives it to the controller to be processed.

From an iOS perspective:

mvc

Rotating CALayers around Y axis using Core Animation and Controlling it with Slider

ref:

http://stackoverflow.com/questions/3831867/trying-to-make-a-card-from-calayers-that-can-flip-over

http://www.binpress.com/tutorial/objectivec-lesson-13-keyvalue-coding/79

Setting up the Layers

Basically, we have two layers. One is the front, with some text. The other is the ‘back’ layer, with some text, which we place underneath the front.

However, you rotate the ‘back’ layer 180 degrees so that its text faces backwards. That way, when we rotate the whole thing, we can see the text of both the front and the back.

But first, let’s make a container layer to contain the front and back layers. Notice that it’s a CATransformLayer. This is a layer that’s just for containing other layers (it can’t have things like backgroundColor or borderWidth). It does, however, maintain the true 3D relationship of its child layers (i.e., it doesn’t flatten them like a regular CALayer does). So you can make two layers, flip one, offset its zPosition just a hair, and put them both in a CATransformLayer; now you can flip this parent layer around and the child layers stay locked together and always render properly.

This method creates and returns the ‘back’ CALayer.
Notice at the end that we flip this back layer 180 degrees so that it faces the other way.

Now create the front layer. We leave front layer as is.

The basic setup: first we create the container layer. Then we add the back layer, and then the front on top of it. Then we have self’s layer add this container layer. After that, we have a nice card: when we rotate this container layer, we can see the front and back as expected.

Create the Animation

valueForKeyPath or Key-Value Coding.

Accessing ivars is usually done through accessor methods or dot-notation. But as a means of indirection, there is another way—Key-Value Coding, or KVC. This is a means of setting and getting the values by using strings—specifically, NSString, and its corresponding methods, including the very useful stringWithFormat:.

The key is in fact the same name as your variable. It follows a specific sequence of steps:

1) First, it looks for a getter method with the same name. In the above example, it looks for the -name or -isName (for boolean values) methods; if they are found, it invokes that method and returns that result.
2) If the getter methods are not found, the method looks for an instance variable by that name: name or _name.
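Python’s getattr gives a rough feel for this kind of string-keyed lookup (a loose analogue only; Python’s attribute lookup rules differ from KVC’s two-step sequence, and the `Card`/`name` names are illustrative):

```python
class Card:
    """Toy class; 'name' is an illustrative property, not from the article's code."""
    def __init__(self):
        self._name = "ace"

    @property
    def name(self):
        # Plays the role of the -name getter that KVC would find first.
        return self._name

card = Card()
# Look the value up by string key, loosely like [card valueForKey:@"name"]
print(getattr(card, "name"))  # ace
```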

Hence we get the value for rotating around y via the key. Then we specify the animation we want.

Have the container layer add the Animation

We add the animation to our CATransformLayer and give it the name rotateYright.

Create the slider and add it to our self.view

Notice the minimum is set to 0 and max to 1. This is so that we can give decimal values from 0 to 1 and insert it into iVar timeOffset for our CATransformLayer container.

Implement the slider action method

Then you set the slider value to a decimal between 0 and 1 and feed it to the CATransformLayer’s timeOffset. Hence, you can now control the animation of flipping a card around the Y axis.

Using @synchronized to solve Mutual Exclusion

ref – http://stackoverflow.com/questions/28005734/making-a-class-thread-safe-in-ios

Mutual exclusion refers to the requirement of ensuring that no two concurrent processes are in their critical section at the same time; it is a basic requirement in concurrency control, to prevent race conditions.

A race condition or race hazard is the behavior of an electronic, software or other system where the output is dependent on the sequence or timing of other uncontrollable events. It becomes a bug when events do not happen in the order the programmer intended. The term originates with the idea of two signals racing each other to influence the output first.

Example using @synchronized:

member variables

method definitions

applicationDidFinishLaunch

result:
2015-04-05 11:30:38.080 YonoApp[336:27030] incrementing player A gems
……
2015-04-05 11:30:38.084 YonoApp[336:27030] incrementing player A gems
2015-04-05 11:30:38.084 YonoApp[336:27030] decrementing player B gems
….
2015-04-05 11:30:38.091 YonoApp[336:27030] decrementing player B gems
2015-04-05 11:30:38.091 YonoApp[336:27030] transferToPlayerA – player A now has 120 gems
2015-04-05 11:30:38.091 YonoApp[336:27030] transferToPlayerA – player B now has 80 gems

2015-04-05 11:30:38.093 YonoApp[336:27031] decrementing player A gems
……..
2015-04-05 11:30:38.099 YonoApp[336:27031] decrementing player A gems
2015-04-05 11:30:38.099 YonoApp[336:27031] incrementing player B gems
……
2015-04-05 11:30:38.106 YonoApp[336:27031] incrementing player B gems
2015-04-05 11:30:38.107 YonoApp[336:27031] transferToPlayerB – player A now has 100 gems
2015-04-05 11:30:38.107 YonoApp[336:27031] transferToPlayerB – player B now has 100 gems

Using self.mutex, if thread 1 is running transferToPlayerA while holding the lock on self.mutex, thread 2 cannot run transferToPlayerB because it sees that self.mutex is locked.

Once thread 1 finishes running, it unlocks self.mutex, at which point thread 2 can lock it and run transferToPlayerB.

This behavior allows thread 1 to modify both playerAGems and playerBGems atomically, that is, without someone else mucking around in between. When thread 1 is done, @synchronized unlocks self.mutex, which then allows thread 2 to continue, now holding the lock on the mutex.

Thread safety means that the data can be accessed and/or modified by multiple threads without becoming corrupt.

If you were to take away the @synchronized, you’d get both threads changing self.playerAGems and self.playerBGems at the same time, which is, by definition, a race condition: the output (number of gems) depends on the sequence or timing of uncontrollable events, such as how fast each of our threads happens to run.
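The same locking pattern can be sketched outside Objective-C; here is a conceptual Python analogue (the `transfer` function and gem counts are illustrative, not the original code):

```python
import threading

lock = threading.Lock()        # plays the role of self.mutex
gems = {"A": 100, "B": 100}    # illustrative starting balances

def transfer(src, dst, amount):
    # Like @synchronized(self.mutex): only one thread may move gems at a time,
    # so both balances are updated atomically as a pair.
    with lock:
        for _ in range(amount):
            gems[src] -= 1
            gems[dst] += 1

t1 = threading.Thread(target=transfer, args=("B", "A", 20))
t2 = threading.Thread(target=transfer, args=("A", "B", 20))
t1.start(); t2.start()
t1.join(); t2.join()
print(gems["A"] + gems["B"])  # 200: the total is conserved, no race
```

Without the lock, the two threads could interleave their decrements and increments, and the final balances would depend on timing.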

Safe Queue

One simple and classic approach is to use Objective-C’s @synchronized capability.

In this case, @synchronized(self.data) around all of your accesses to the array will ensure that only a single thread can access the array at a time.

Even though length doesn’t modify the array, you still need to protect its access, because another thread could be modifying the array at the same moment you read it.
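A sketch of such a lock-protected collection, in Python for illustration (`SafeList` is an invented name standing in for the synchronized array):

```python
import threading

class SafeList:
    """All access, including the read-only length(), goes through one lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def append(self, item):
        with self._lock:
            self._items.append(item)

    def length(self):
        with self._lock:  # reads are protected too: another thread may be writing
            return len(self._items)

q = SafeList()

def writer():
    for i in range(100):
        q.append(i)

threads = [threading.Thread(target=writer) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(q.length())  # 400
```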

dispatch_get_global_queue vs dispatch_get_main_queue

Dispatch your tasks on dispatch_get_main_queue() for UI changes.

The main queue is a special serial queue. Unlike other serial queues, which are uncommitted, in that they are “dating” many threads but only one at a time, the main queue is “married” to the main thread and all tasks are performed on it. Jobs on the main queue need to behave nicely with the runloop so that small operations don’t block the UI and other important bits. Like all serial queues, tasks are completed in FIFO order. You get it with dispatch_get_main_queue.

Dispatch your tasks on dispatch_get_global_queue (a background queue) to run background tasks asynchronously (i.e., without blocking your user interface). If you end up submitting multiple blocks to the global queues, these jobs can operate concurrently.

NOTE: if you have multiple blocks of code that you want to submit to a background queue and that must run sequentially in the background, you can create your own serial background queue and dispatch to that.

Hence dispatch_get_global_queue is concurrent in nature.

Serial vs Concurrent Queue

Concurrent vs. serial determines how submitted tasks are to be run. A concurrent queue allows the tasks to run concurrently with one another. A serial queue only allows one of its tasks to run at a time.

Concurrent Queue

Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in the order in which they were added to the queue. The currently executing tasks run on distinct threads that are managed by the dispatch queue. The exact number of tasks executing at any given point is variable and depends on system conditions.

To create a concurrent queue:

In iOS 5 and later, you can create concurrent dispatch queues yourself by specifying DISPATCH_QUEUE_CONCURRENT as the queue type. In addition, there are four predefined global concurrent queues for your application to use. For more information, see Apple’s documentation on getting the global concurrent queues.

Tasks are executed in Parallel

“Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in the order in which they were added to the queue.”

Here we have an example of running Concurrent with async dispatching
A concurrent queue means blocks may execute in parallel. Hence while block A is processing, blocks B, C, etc. may be executing at the same time. In other words, the currently executing block can’t assume that it’s the only block running on that queue.

Also, because it’s a concurrent queue, async dispatching lets each block run as soon as a thread is ready for it. Hence the queue’s blocks may appear to run out of sequence.

dispatch_async means control returns immediately. In other words, “DON’T wait for me to finish my task, just go on with the next task…”. It DOES NOT BLOCK, which means the calling thread (for example the main/UI thread) keeps running and stays responsive to user touches. All other threads likewise keep going about their work, because we are not blocking anything.

dispatch_sync means the call blocks until the block finishes processing. In other words, “WAIT for me to finish my task, then you can take over”. It BLOCKS the calling thread, the one that submitted the block, until the block is done. So if you dispatch_sync from the main thread, the UI thread sits waiting and does not respond to user touches in the meantime.
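The difference can be sketched with plain threads (a Python analogue of the idea, not GCD itself):

```python
import threading, time

done = []

def task():
    time.sleep(0.1)    # simulate some work
    done.append("A")

# "dispatch_async": start the work and return immediately, without waiting.
t = threading.Thread(target=task)
t.start()
print(done)  # control came back before the task finished, so typically []

# "dispatch_sync": the calling thread blocks until the work is finished.
t.join()
print(done)  # ['A']
```

Note that join() here blocks only the thread that calls it, which mirrors how dispatch_sync blocks the thread that submits the block.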

Concurrent Async example

In the code, we’re simply simulating concurrently spawned tasks, each doing certain work for a certain number of time units.


^^^^^^^^^^^^^^ TASK C started ^^^^^^^^^^^^^^^
2015-08-01 00:36:28.321 YonoApp[4189:180141] ^^^^^^^^^^^^^^ TASK A started ^^^^^^^^^^^^^^^
2015-08-01 00:36:28.321 YonoApp[4189:180144] ^^^^^^^^^^^^^^ TASK B started ^^^^^^^^^^^^^^^
2015-08-01 00:36:28.321 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.321 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.321 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.321 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.321 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.321 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.322 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.323 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.323 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.323 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
CalendarViewController.m -initWithTabBar 1
2015-08-01 00:36:28.323 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.323 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.324 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.328 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.329 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.329 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.329 YonoApp[4189:180142] Task C UPDATE BBT TENDERNESS
2015-08-01 00:36:28.329 YonoApp[4189:180144] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:36:28.330 YonoApp[4189:180142] ———-> Task C is done with UPDATE BBT TENDERNESS <——————
2015-08-01 00:36:28.330 YonoApp[4189:180141] Task A UPDATE SIGN TENDERNESS
…A gets done, then B is done

So as you can see, even though we dispatched the tasks concurrently via dispatch_async, Task C started first because tasks A and B were not ready. After C started, then A, and then B.

While A is working, B is working, etc., and they keep mingling. This happens among all the threads.

In other words, your code will not wait for execution to complete. Both blocks will dispatch (and be enqueued) to the queue and the rest of your code will continue executing on that thread. Then at some point in the future, (depending on what else has been dispatched to your queue), Task A will execute and then Task B will execute.

Concurrent Sync example

In the dispatch_sync example, however, you won’t dispatch TASK n+1 until after TASK n has been dispatched and executed. This is called “blocking”. Your code waits (or “blocks”) until the task executes. If we were to change dispatch_async to dispatch_sync, the result would be like this:

2015-08-01 00:40:09.076 YonoApp[4231:181577] ^^^^^^^^^^^^^^ TASK A started ^^^^^^^^^^^^^^^
2015-08-01 00:40:09.076 YonoApp[4231:181577] Task A UPDATE SIGN TENDERNESS
….
2015-08-01 00:40:09.084 YonoApp[4231:181577] Task A UPDATE SIGN TENDERNESS
2015-08-01 00:40:09.084 YonoApp[4231:181577] ———-> Task A is done

2015-08-01 00:40:09.084 YonoApp[4231:181577] ^^^^^^^^^^^^^^ TASK B started ^^^^^^^^^^^^^^^
2015-08-01 00:40:09.084 YonoApp[4231:181577] Task B UPDATE SIGN TENDERNESS
………
2015-08-01 00:40:09.090 YonoApp[4231:181577] Task B UPDATE SIGN TENDERNESS
2015-08-01 00:40:09.091 YonoApp[4231:181577] ———-> Task B is done

2015-08-01 00:40:09.091 YonoApp[4231:181577] ^^^^^^^^^^^^^^ TASK C started ^^^^^^^^^^^^^^^
2015-08-01 00:40:09.091 YonoApp[4231:181577] Task C UPDATE BBT TENDERNESS
…………
2015-08-01 00:40:09.092 YonoApp[4231:181577] Task C UPDATE BBT TENDERNESS
2015-08-01 00:40:09.092 YonoApp[4231:181577] ———-> Task C is done

Serial Queues

Serial queues are monogamous, but uncommitted. If you give a bunch of tasks to each serial queue, it will run them one at a time, using only one thread at a time. The uncommitted aspect is that serial queues may switch to a different thread between tasks.

Serial queues always wait for a task to finish before going to the next one.

Thus tasks are completed in FIFO order. You can make as many serial queues as you need with dispatch_queue_create.

By definition, a serial queue runs only one block at a time, and the blocks are executed in order.

So if we add blocks A, B, C, D, then they are started and ended in order. Also notice that since we use dispatch_async, control returns to the main thread, leaving other threads free to do their own work.

To create a serial queue:

If we dispatched async for tasks A, B, and C

The result would be:

Task A started
Task A ended
Task B started
Task B ended
Task C started
Task C ended

Because, by definition, serial only allows one task to be running at a time. Async means the dispatch does not block the submitting thread, so nothing else is held up while the queue works; hence another serial queue may be running its own task at the same time.

If we dispatched sync for tasks A, B, and C, we’d get the same result, except that now the submitting thread waits for each task to finish before it dispatches the next one.
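The one-at-a-time, FIFO behavior of a serial queue can be modeled with a single-worker executor (a Python analogue, not GCD):

```python
from concurrent.futures import ThreadPoolExecutor

order = []

def task(name):
    order.append(f"Task {name} started")
    order.append(f"Task {name} ended")

# max_workers=1 behaves like a serial dispatch queue: one block at a time, FIFO.
with ThreadPoolExecutor(max_workers=1) as serial_queue:
    for name in ("A", "B", "C"):
        serial_queue.submit(task, name)  # returns immediately, like dispatch_async

print(order)
# ['Task A started', 'Task A ended', 'Task B started', 'Task B ended',
#  'Task C started', 'Task C ended']
```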

Multiple Serial Queues

However, if you create four serial queues, each queue executes only one task at a time, but up to four tasks could still execute concurrently, one from each queue.

Let’s see what happens when we get 2 serial queues together.

2 Serial queues, using Async

serialQueue1 – Task A
serialQueue2 – Task B C

2015-08-01 01:23:47.782 YonoApp[4451:196545] ^^^^^^^^^^^^^^ TASK A1 started ^^^^^^^^^^^^^^^
2015-08-01 01:23:47.782 YonoApp[4451:196544] ^^^^^^^^^^^^^^ TASK B2 started ^^^^^^^^^^^^^^^
2015-08-01 01:23:47.786 YonoApp[4451:196545] Task A1 UPDATE SIGN TENDERNESS
2015-08-01 01:23:47.786 YonoApp[4451:196544] Task B2 UPDATE SIGN TENDERNESS

RIGHT HERE: task A1 and task B2 are executing at the same time. With respect to their own queues, each is running one task at a time; but from a multiple-queue standpoint, they are running their one-task-at-a-time workloads simultaneously with each other.

2015-08-01 01:23:47.794 YonoApp[4451:196544] Task B2 UPDATE SIGN TENDERNESS
2015-08-01 01:23:47.795 YonoApp[4451:196544] Task B2 UPDATE SIGN TENDERNESS
2015-08-01 01:23:47.795 YonoApp[4451:196544] ———-> Task B2 is done with UPDATE SIGN TENDERNESS <——————
2015-08-01 01:23:47.795 YonoApp[4451:196544] ^^^^^^^^^^^^^^ TASK C2 started ^^^^^^^^^^^^^^^
2015-08-01 01:23:47.797 YonoApp[4451:196544] Task C2 UPDATE BBT TENDERNESS
2015-08-01 01:23:47.797 YonoApp[4451:196544] Task C2 UPDATE BBT TENDERNESS
2015-08-01 01:23:47.797 YonoApp[4451:196544] ———-> Task C2 is done with UPDATE BBT TENDERNESS <——————

Hence, having multiple serial queues simply means each queue’s blocks work individually and in order, but the serial queues themselves run in parallel.
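This “each queue serial, queues parallel” behavior can be modeled with two single-worker executors (a Python sketch; q1/q2 are illustrative names):

```python
from concurrent.futures import ThreadPoolExecutor
import time

events = []

def task(queue_name, n):
    events.append((queue_name, n))  # list.append is safe enough here (GIL)
    time.sleep(0.01)                # simulate some work

# Two serial queues: each runs its own tasks one at a time,
# but the two queues run concurrently with each other.
q1 = ThreadPoolExecutor(max_workers=1)
q2 = ThreadPoolExecutor(max_workers=1)
for n in range(3):
    q1.submit(task, "q1", n)
    q2.submit(task, "q2", n)
q1.shutdown(wait=True)
q2.shutdown(wait=True)

# The global interleaving varies run to run, but per-queue order never does.
print([n for name, n in events if name == "q1"])  # [0, 1, 2]
print([n for name, n in events if name == "q2"])  # [0, 1, 2]
```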

Multiple Serial Queues with Sync

When we have multiple serial queues and we want their blocks to run in order with respect to each other, we can use dispatch_sync.

dispatch_sync means the submitting thread waits for each block to finish before going on, so it won’t dispatch the next block until the current one is done. Before, when we had multiple serial queues doing their reads and writes asynchronously, you saw read/write overlaps: even though each serial queue runs one block at a time, the other serial queue(s) were doing their one block at a time as well, resulting in the “one and only working blocks” from multiple serial queues doing their work simultaneously.

To solve this, we use dispatch_sync, which makes the submitting thread hold until the current block finishes; only then is the next block dispatched.

When you apply dispatch_sync to the code, you’ll see that all tasks are done in order and without overlapping:

^^^^^^^^^^^^^^ TASK A1 started ^^^^^^^^^^^^^^^
2015-08-01 01:27:26.061 YonoApp[4489:197886] Task A1 UPDATE SIGN TENDERNESS
2015-08-01 01:27:26.136 YonoApp[4489:197886] Task A1 UPDATE SIGN TENDERNESS
2015-08-01 01:27:26.136 YonoApp[4489:197886] ———-> Task A1 is done with UPDATE SIGN TENDERNESS

2015-08-01 01:27:26.137 YonoApp[4489:197886] ^^^^^^^^^^^^^^ TASK B2 started ^^^^^^^^^^^^^^^
2015-08-01 01:27:26.137 YonoApp[4489:197886] Task B2 UPDATE SIGN TENDERNESS
2015-08-01 01:27:26.217 YonoApp[4489:197886] Task B2 UPDATE SIGN TENDERNESS
2015-08-01 01:27:26.218 YonoApp[4489:197886] Task B2 UPDATE SIGN TENDERNESS
2015-08-01 01:27:26.218 YonoApp[4489:197886] ———-> Task B2 is done with UPDATE SIGN TENDERNESS

2015-08-01 01:27:26.218 YonoApp[4489:197886] ^^^^^^^^^^^^^^ TASK C2 started ^^^^^^^^^^^^^^^
2015-08-01 01:27:26.218 YonoApp[4489:197886] Task C2 UPDATE BBT TENDERNESS
2015-08-01 01:27:26.219 YonoApp[4489:197886] Task C2 UPDATE BBT TENDERNESS
2015-08-01 01:27:26.221 YonoApp[4489:197886] Task C2 UPDATE BBT TENDERNESS
2015-08-01 01:27:26.221 YonoApp[4489:197886] ———-> Task C2 is done with UPDATE BBT TENDERNESS

Other Notes

Serial queues (also known as private dispatch queues) execute one task at a time in the order in which they are added to the queue. The currently executing task runs on a distinct thread (which can vary from task to task) that is managed by the dispatch queue. Serial queues are often used to synchronize access to a specific resource.

You can create as many serial queues as you need, and each queue operates concurrently with respect to all other queues. In other words, if you create four serial queues, each queue executes only one task at a time but up to four tasks could still execute concurrently, one from each queue.

Serial means the tasks are executed in order. The block that is executing can assume IT IS THE ONLY BLOCK RUNNING ON THAT QUEUE. However, blocks from other queues may be running concurrently with this queue. That’s why, in the example above, we used dispatch_sync: the submitting thread waits for each block, so from a multiple-queue standpoint only one block ends up running at a time.

Concurrent means the tasks may execute in parallel. This means that the block of the queue that is executing CAN’T assume that it’s the only block running on that queue.

Threads Basics (using objective c)

ref: http://stackoverflow.com/questions/79923/what-and-where-are-the-stack-and-heap

Local variables are stored in each thread’s own stack. That means that local variables are never shared between threads.

The OS allocates the stack for each system-level thread when the thread is created.
That also means that all local primitive variables are thread safe.

For example, in objective C:

The result is:
Thread B is looking at local variable with address 0x102b87c30
Thread A is looking at local variable with address 0x102affc30

Each thread gets a stack, while there’s typically only one heap for the application.
Typically the OS is called by the language runtime to allocate the heap for the application.

The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.

The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).

The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or free. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor’s cache, making it very fast. Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be multi-threading safe, i.e. each allocation and deallocation needs to be – typically – synchronized with “all” other heap accesses in the program.

In a multi-threaded environment each thread will have its own completely independent stack but they will share the heap. Concurrent access has to be controlled on the heap and is not possible on the stack.
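The same point can be demonstrated in Python: locals are per-thread, while an object created once is shared by every thread (object identity standing in for an address; a sketch, not the article’s Objective-C code):

```python
import threading

shared = object()   # created once; lives on the heap, visible to every thread
seen = {}

def worker(name):
    local = object()             # a fresh local object per thread
    seen[name] = (local, shared)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads: t.start()
for t in threads: t.join()

print(seen["A"][1] is seen["B"][1])  # True: both threads see the same shared object
print(seen["A"][0] is seen["B"][0])  # False: each thread had its own local
```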

Example, let’s have a member variable created on the heap like so:

result is:

Thread A is looking at Logic object in heap with address 0xb000000000000002
Thread B is looking at Logic object in heap with address 0xb000000000000002
Thread A is looking at local variable with address 0x102ab3c44
Thread B is looking at local variable with address 0x102b3bc44

So as you can see, the local variable is unique to each thread. However, the object created on the heap, given that it was made previously and created once, is seen by BOTH threads.

Number object example (part 1) – sharing of same heap objects

For example, in the beginning, thread A creates NSNumber 0 with address 0xb000000000000002.

Then thread C comes in and creates NSNumber 0 but it points to the same object with the address 0xb000000000000002

What is actually happening here is an optimization: small NSNumber values are stored as tagged pointers. The value is encoded directly in the pointer bits themselves, so there is no separate heap allocation at all, and every NSNumber holding the same small value is represented by the same pointer pattern.

This saves space and execution time: no matter how many threads create an NSNumber with value 0, there is nothing to allocate, retain, or release; the “address” 0xb000000000000002 is the value.

Thus, you will see that as threads ‘A’, ‘C’, and ‘B’ come in to create NSNumber with value 0, they all get the same pointer 0xb000000000000002.
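CPython happens to have an analogous optimization: small integers are cached and shared, so every thread asking for the same small value gets the very same object (an implementation detail of CPython, mentioned only as an analogy):

```python
import threading

ids = []

def make_zero():
    n = 0                # small ints are cached by the CPython runtime
    ids.append(id(n))

threads = [threading.Thread(target=make_zero) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()

print(len(set(ids)))  # 1: every thread got the very same cached object
```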

Difference from part 2

The reason why, in part 2, every newly allocated object has a different address is that I used random to generate a different starting number. Thus, for each thread, there are no repeated values; each thread works off of a different starting number.

Thread ‘A’ started from 16807, while thread ‘B’ started from 282475249. Thus NSNumber objects with different (large) values obviously need to be allocated with their own address space.

—- BEGIN DO IT: address of identifier A is: 0x1009c01e8 —-
—- BEGIN DO IT: address of identifier C is: 0x1009c0228 —-
Thread A changed number in heap to: 0, object in heap address 0xb000000000000002
Thread C changed number in heap to: 0, object in heap address 0xb000000000000002
—- BEGIN DO IT: address of identifier B is: 0x1009c0208 —-
Thread A ‘s number object address 0x7f9771501cc0
Thread C ‘s number object address 0x7f9771501cc0
Thread B changed number in heap to: 0, object in heap address 0xb000000000000002
Thread A changed number in heap to: 1, object in heap address 0xb000000000000012
Thread B ‘s number object address 0x7f9771501cc0
Thread A ‘s number object address 0x7f9771501cc0
Thread C changed number in heap to: 1, object in heap address 0xb000000000000012
Thread B changed number in heap to: 1, object in heap address 0xb000000000000012
—- BEGIN DO IT: address of identifier D is: 0x1009c0248 —-
Thread A changed number in heap to: 2, object in heap address 0xb000000000000022
Thread C ‘s number object address 0x7f9771501cc0
Thread B ‘s number object address 0x7f9771501cc0
Thread D changed number in heap to: 0, object in heap address 0xb000000000000002
Thread A ‘s number object address 0x7f9771501cc0
Thread C changed number in heap to: 2, object in heap address 0xb000000000000022
Thread B changed number in heap to: 2, object in heap address 0xb000000000000022
Thread D ‘s number object address 0x7f9771501cc0
Thread A changed number in heap to: 3, object in heap address 0xb000000000000032

Number object example (part 2) – Another example

Say we have an instance object’s method like so:

We spawn multiple threads to run over that method:

Here is what’s happening:

1) The first thread comes into the method and pushes the identifier onto its stack. Remember, all parameters and local variables get pushed onto the thread’s own stack. You can verify this by looking at the address of the parameter ‘identifier’: both threads have different ‘identifier’ variables.

Thread ‘A’ uses the random function to get a start value of 16807

It then manipulates our class member variable, _objectInHeap, whose scope is shared by everyone who uses this class. You can look at _objectInHeap’s address at each thread’s manipulation to see that they are all manipulating the same _objectInHeap.

When thread ‘A’ points _objectInHeap to a new object in heap, that NSNumber object is allocated and auto-released.

(Image: thread_basics_objectInHeap)

2) You will then see the second thread, named 'B', come in. We see that the identifier pushed onto this thread's stack is different from the one on thread 'A's stack.

Thread ‘B’ uses the random function to get a start value of 282475249.

3) Next, thread 'A' re-points the _objectInHeap pointer to another object in the heap. Notice that _objectInHeap's own address stays the same, while the object in the heap has a different address. That's because the heap allocated a new object at a new address for _objectInHeap to point to. The previous NSNumber object, assuming nothing else points to it, will be released once its autorelease pool drains.

4) Thread ‘B’ now manipulates _objectInHeap to point to a newly allocated NSNumber with value 282475249.

Hence threads 'A' and 'B' use the same class member pointer _objectInHeap, and each keeps re-pointing it at its own freshly allocated objects in the heap.

result:

2015-05-20 09:09:11.721 NSNotificationExample[3621:483901] ---- BEGIN DO IT: address of identifier A is: 0x10346d210 ----
2015-05-20 09:09:11.722 NSNotificationExample[3621:483901] start number 16807 assigned to thread A, should end with: 16907
2015-05-20 09:09:11.722 NSNotificationExample[3621:483901] Thread A changed number in heap to: 16807, object in heap address 0xb000000000041a72
2015-05-20 09:09:11.722 NSNotificationExample[3621:483901] Thread A 's number object address 0x7fd7e0730630
2015-05-20 09:09:11.722 NSNotificationExample[3621:483902] ---- BEGIN DO IT: address of identifier B is: 0x10346d230 ----
2015-05-20 09:09:11.723 NSNotificationExample[3621:483901] Thread A changed number in heap to: 16808, object in heap address 0xb000000000041a82
2015-05-20 09:09:11.723 NSNotificationExample[3621:483902] start number 282475249 assigned to thread B, should end with: 282475349
2015-05-20 09:09:11.723 NSNotificationExample[3621:483901] Thread A 's number object address 0x7fd7e0730630
2015-05-20 09:09:11.723 NSNotificationExample[3621:483902] Thread B changed number in heap to: 282475249, object in heap address 0xb00000010d63af12
2015-05-20 09:09:11.723 NSNotificationExample[3621:483901] Thread A changed number in heap to: 16809, object in heap address 0xb000000000041a92

Implicit vs Explicit

In computer programming, a mutex (mutual exclusion object) is a program object that allows multiple threads to share the same resource, such as file access, but not simultaneously. When a program starts, a mutex is created with a unique name.

IMPLICIT – implied though not plainly expressed.
In other words, implied, hinted at, suggested, insinuated.

IMPLICIT LOCK:
Objectivity will implicitly obtain the appropriate locks for your application at the point at which they are needed. An operation that reads an object will obtain a read lock; an operation that modifies an object will obtain a write lock.

Objective-C's language-level synchronization uses a mutex.

The @synchronized directive is a convenient way to create mutex locks on the fly in Objective-C code. The @synchronized directive does what any other mutex lock would do—it prevents different threads from acquiring the same lock at the same time. In this case, however, you do not have to create the mutex or lock object directly. Instead, you simply use any Objective-C object as a lock token, as shown in the following example:
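The example referred to here comes from Apple's Threading Programming Guide; the anObj parameter is the lock token discussed in the next paragraphs:

```objc
- (void)myMethod:(id)anObj
{
    @synchronized(anObj)
    {
        // Everything between the braces is protected by the @synchronized directive.
    }
}
```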

The object passed to the @synchronized directive is a unique identifier used to distinguish the protected block. If you execute the preceding method in two different threads, passing a different object for the anObj parameter on each thread, each would take its lock and continue processing without being blocked by the other.

If you pass the same object in both cases, however, one of the threads would acquire the lock first and the other would block until the first thread completed the critical section.

As a precautionary measure, the @synchronized block implicitly adds an exception handler to the protected code. This handler automatically releases the mutex in the event that an exception is thrown. This means that in order to use the @synchronized directive, you must also enable Objective-C exception handling in your code. If you do not want the additional overhead caused by the implicit exception handler, you should consider using the lock classes instead.

EXPLICIT – stated clearly and in detail, leaving no room for confusion or doubt.

EXPLICIT LOCK:
Some applications, however, may need to reserve access to all required resources in advance. Reasons for doing so might be to secure required access rights to the necessary objects before beginning an operation, or to prevent other sessions from modifying objects critical to the operation.

An application needing to reserve access to all required objects in advance can explicitly lock objects. Suppose an application needs to calculate a value based upon the state of many objects at a specific point in time. Although the application cannot check all of the necessary objects simultaneously, it can achieve the same effect by “freezing the state of the objects” and then checking them in sequence. Explicit locking “effectively freezes the objects”, because no other session can modify them as long as they are locked.

For example, NSLock:
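The NSLock listing did not survive here; a minimal sketch of explicit locking might look like the following, where the shared array and the method name are illustrative:

```objc
// Assume _lock is an NSLock created once, e.g. in init:
//     _lock = [[NSLock alloc] init];

- (void)appendItem:(id)item {
    [_lock lock];                   // explicitly reserve access up front
    [_sharedItems addObject:item];  // critical section: shared mutable state
    [_lock unlock];                 // release so other threads may proceed
}
```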

Thread Safe Class

http://stackoverflow.com/questions/28005734/making-a-class-thread-safe-in-ios

Thread safety means that the data structure can be accessed and/or modified by multiple threads without becoming corrupt.

One simple approach is to use Objective-C’s @synchronized capability.

In this case, @synchronized(self.data) around all of your accesses to the array will ensure that only a single thread can access the array at a time.

Even though length doesn't modify the array, you still need to protect its access, because another thread could be modifying the array at the same moment it is read.
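A sketch of such a class, assuming an NSMutableArray property named data as in the discussion above (class and method names are illustrative):

```objc
@interface ThreadSafeList : NSObject
@property (nonatomic, strong) NSMutableArray *data;
@end

@implementation ThreadSafeList

- (instancetype)init {
    if ((self = [super init])) {
        _data = [NSMutableArray new];
    }
    return self;
}

- (void)addObject:(id)obj {
    @synchronized (self.data) {   // only one thread may touch the array at a time
        [self.data addObject:obj];
    }
}

- (NSUInteger)length {
    @synchronized (self.data) {   // reads take the same lock as writes
        return [self.data count];
    }
}

@end
```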

Better Solution

Instead of using @synchronized, we can use serial queues.

1) Create private property

get: self.privateFoo
set: self.privateFoo = someObject

2) Create custom set and get methods.

Use a serial queue inside them to access the private property. The get must be sync, since it has to return a value; the set can be async.
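Following those two steps, the accessor pair might be sketched like this (the foo/privateFoo property names and the queue label are assumptions; the getter is synchronous because the caller needs the returned value):

```objc
// Assume a serial queue created once, e.g. in init:
//     _queue = dispatch_queue_create("com.example.foo", DISPATCH_QUEUE_SERIAL);

- (NSObject *)foo {
    __block NSObject *result;
    dispatch_sync(_queue, ^{      // sync: we must wait for the value to come back
        result = self.privateFoo;
    });
    return result;
}

- (void)setFoo:(NSObject *)foo {
    dispatch_async(_queue, ^{     // async: the caller does not have to wait for the write
        self.privateFoo = foo;
    });
}
```

Because the queue is serial, every read and write of privateFoo is executed one at a time, in order, which is what makes the property thread safe.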

http://shanghaiseagull.com/index.php/2015/05/22/thread-safety-in-ios/