Monthly Archives: November 2015

Big O notation basics

Hardware execution time

Let’s say an assignment, such as inserting an item at the end of an unsorted array, is one step.

This requires the same amount of time no matter how long the array is. We can say that the time T to insert an item into an unsorted array is a constant K:

T = K

In detail, the actual time (in ms or whatever unit of measure) required by the insertion depends on:

  • the speed of the microprocessor
  • how efficiently the compiler has generated the program code
  • other factors
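As a quick sketch of why this is constant time (a Python illustration, treating a list as the unordered array; not the original post’s code):

```python
def insert(items, value):
    # One step: drop the new item at the end of the unordered array.
    # The cost is the same whether the list holds 10 or 10 million items.
    items.append(value)

data = [5, 2, 9]
insert(data, 7)
print(data)  # [5, 2, 9, 7]
```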
Linear Search

The number of comparisons that must be made to find a specified item is, ON AVERAGE, half of the total number of items. Hence, if N is the total number of items, the search time T is

N = number of items
T = time
K = execution time of hardware

T = K * (N / 2)

For a handier formula, we can lump the 1/2 into the K, because we are dealing with constant numbers. Hence we can define a new constant K that represents the execution time of the hardware divided by 2.

Hence, T = K * N

Time = hardware execution time * Number of items

The reason we do this is that we’re not interested in the actual execution time. We’re just interested in how fast the algorithm runs WITH RESPECT TO THE NUMBER OF ELEMENTS N.

Thus, whatever the execution time is for the particular processor and compiler does not matter to us. Whether it’s a 333 MHz processor or a 3 GHz processor, it doesn’t matter.

We just lump those constants together and pull out the number of elements N, SO THAT we can convey how running times are affected by the number of items.

This says that average linear search times are proportional to the size of the array.

For example,

In an unordered array, search takes O(N) time because you need to go through N/2 elements on average. Multiply that by the processing-time constant K: T = (N/2) * K, and after folding the 1/2 into K, T = K * N = O(N), where we don’t care about the literal processing time K.
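A minimal Python sketch of that counting (illustrative; the comparison count is returned alongside the index):

```python
def linear_search(items, target):
    # Walk the unordered array front to back, counting comparisons.
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

# Worst case is N comparisons; averaged over all positions it is about N/2.
index, steps = linear_search([4, 8, 15, 16, 23, 42], 16)
print(index, steps)  # 3 4
```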

In an unordered array, deletion takes O(N) time because once you use search to find the element to delete, you still need to shift elements down to fill the empty spot. On average it’s N/2 elements that you need to shift down. Multiply that by the processing-time constant K: T = (N/2) * K, which again gives T = K * N = O(N), where we don’t care about the processing time K.
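Deletion can be sketched the same way (Python, illustrative): after finding the element, every later element shifts down one slot to fill the gap.

```python
def delete(items, target):
    # O(N) search for the element, then an O(N) shift to fill the gap.
    for i in range(len(items)):
        if items[i] == target:
            for j in range(i, len(items) - 1):
                items[j] = items[j + 1]  # shift later elements down one slot
            items.pop()  # drop the duplicated last slot
            return True
    return False

data = [3, 1, 4, 1, 5]
delete(data, 4)
print(data)  # [3, 1, 1, 5]
```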

Binary Searching

T is time
K is hardware execution time
N is number of items

T = K * log (base 2) N

But whether we’re dealing with a binary tree, a ternary tree, or whatever, the base (2, 3, etc.) only contributes a constant factor. For example, the log base 2 of any number is 3.322 times its log base 10. That 3.322 is a numeric we can fold into the constant K.

Hence, T = K * log N.
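That base-conversion constant is easy to check numerically (Python; the 3.322 is just 1 / log10(2)):

```python
import math

n = 1_000_000
# Changing log base only multiplies by a constant: log2(n) = log10(n) / log10(2)
factor = 1 / math.log10(2)
print(round(factor, 3))  # 3.322
print(math.isclose(math.log2(n), factor * math.log10(n)))  # True
```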

Eliminating the Constant K for Big O

When comparing algorithms you don’t really care about the particular microprocessor chip or compiler; all you want to compare is how T changes for different values of N. Therefore, the constant isn’t needed.

Big O notation uses the uppercase letter O, which you can think of as meaning “order of”. In Big O notation, we would say that a linear search takes O(N) time, binary search takes O(log N) time. Insertion into an unordered array takes O(1), or constant time.

The idea in Big O notation isn’t to give an actual figure for running time, but to convey how running times are affected by the number of items.
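A binary search sketch in Python makes the O(log N) claim concrete (illustrative; the halving count is returned alongside the index):

```python
def binary_search(sorted_items, target):
    # Each pass halves the remaining search space, so steps grow as log2(N).
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, steps
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

index, steps = binary_search(list(range(1024)), 777)
print(index)        # 777
print(steps <= 11)  # True: log2(1024) = 10, so at most about 10 halvings
```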


In the case of binary search, you are trying to find the maximum number of iterations, and therefore the maximum number of times the search space can be split in half.

This is accomplished by dividing the size of the search space, n, by 2 repeatedly until you get down to 1.

Let’s give the number of times you need to divide n by 2 the label x. Since dividing by 2 x times is equivalent to dividing by 2^x, you end up having to solve for this equation:

n total number of items
x number of times you need to divide by 2 to get the result 1

n / 2^x = 1

x is the number of times you can split a space of size n in half before it is narrowed down to size 1.
The solution to this equation can be found by first multiplying both sides by 2^x:

2^x = n

and then taking the logarithm of both sides:

log (2^x) = log (n)

x log 2 = log n

x = log ( n ) / log ( 2 ) = log base 2 (n)


x = log base 2 (n)

means that given n numbers, x is the number of times we must split those n numbers in half in order to get down to 1.
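This can be confirmed with a quick Python loop (illustrative):

```python
def halvings(n):
    # Count how many times n can be split in half before reaching 1.
    x = 0
    while n > 1:
        n //= 2
        x += 1
    return x

print(halvings(1024))  # 10, i.e. log base 2 of 1024
```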

Core Data #3 – sibling Main/Private MOC on a PSC

Core Data Stack #3, Xcode 7.3


In iOS 5, Apple introduced

  • 1) parent/child contexts – setting a context as a child of a parent context means that whenever the child saves, it doesn’t persist its changes to disk. Instead, it pushes its changes up to its parent so that the parent can save them (and the save posts an NSManagedObjectContextDidSaveNotification message).
  • 2) a concurrency queue that is tied to a context – it guarantees that if you execute your code block using that context (via performBlock: or performBlockAndWait:), it will execute your code block on its assigned queue.

Thus, we can use parent/child contexts and concurrency queues to set up our Core Data stack.

Say we create a parent context. It will act as the app’s Task Master. Its children will push their changes up so that the task master can execute the saves on a background queue.

When creating this Task Master context, we init it with the concurrency type NSPrivateQueueConcurrencyType to let it know that the queue is a private one. Because it’s a private queue, the tasks run on this queue are executed on background threads.

Now, let’s create a child context: a temporary MOC whose parent context is our Task Master.
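The stack described so far might be sketched like this (a hypothetical reconstruction in Swift 2.x syntax to match Xcode 7.3; `coordinator` stands in for your NSPersistentStoreCoordinator, and the variable names are my assumptions, not the original listing):

```swift
import CoreData

// Assumes an existing NSPersistentStoreCoordinator called `coordinator`.

// The "task master": a private-queue context attached directly to the PSC.
// Its saves write to disk on a background queue.
let masterContext = NSManagedObjectContext(concurrencyType: .PrivateQueueConcurrencyType)
masterContext.persistentStoreCoordinator = coordinator

// The UI context: a main-queue SIBLING on the same PSC (not a child of the master).
let uiContext = NSManagedObjectContext(concurrencyType: .MainQueueConcurrencyType)
uiContext.persistentStoreCoordinator = coordinator

// A temporary worker MOC whose parentContext is the task master.
// Saving it pushes changes up to the master instead of writing to disk.
let tempContext = NSManagedObjectContext(concurrencyType: .PrivateQueueConcurrencyType)
tempContext.parentContext = masterContext
```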

Notice its parentContext is assigned to our Task Master.

This implies that whenever one of our temporary MOCs finishes its task (whether it’s getting data from the net, reading in loads of data from somewhere, or any other task that takes quite a bit of time) and we call that MOC’s save, two things happen:

1) save sends the NSManagedObjectContextDidSaveNotification message, which our notification center observes. We go to the handleMOCDidSaveNotification method, where we ask the parent context (task master) to do the save.

2) The background context’s changes are pushed into its parent context (task master).

Here is the handler that handles every NSManagedObjectContextDidSaveNotification message triggered by our child temporary MOCs,

where handleMOCDidSaveNotification is:

In handleMOCDidSaveNotification, we ignore other contexts and take care of these temporary MOCs only. We RETURN and do nothing if the temp MOC is NOT tracked in our NSHashTable directory, or if it’s the UI context.

If the temp MOC IS tracked, we go ahead and merge its changes into the UI MOC. We have to merge manually because the UI MOC is not a parent context of the temp MOCs; the UI MOC is a separate, sibling context.

When a MOC saves, its changes are pushed into its parent context; they are not immediately written to disk. The write to disk happens after the background MOC saves and sends the NSManagedObjectContextDidSaveNotification: we go into handleMOCDidSaveNotification and call the method saveMasterContext. saveMasterContext saves our parent private context, thus making it write to disk, and if that takes a long time, it’s okay, because it runs on a private background queue.
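The flow above could be sketched as follows (again a hypothetical reconstruction, since the original listing isn’t shown; `uiContext`, `trackedContexts`, and `saveMasterContext` are assumed names on the owning class):

```swift
// Assumes: uiContext (main-queue MOC), trackedContexts (an NSHashTable of
// temp MOCs), and saveMasterContext() on the same class.
func handleMOCDidSaveNotification(notification: NSNotification) {
    guard let savedContext = notification.object as? NSManagedObjectContext else { return }

    // Ignore the UI context itself and any context we are not tracking.
    if savedContext === uiContext { return }
    if !trackedContexts.containsObject(savedContext) { return }

    // The UI MOC is a sibling, not a parent, so merge the changes in manually.
    uiContext.performBlock {
        self.uiContext.mergeChangesFromContextDidSaveNotification(notification)
    }

    // Finally, ask the task master to save, which writes to disk on its private queue.
    saveMasterContext()
}
```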

In short,

To clear up the confusion: after creating the child context from a parent context, that child context has the same “state” as the parent. Only if the two contexts do different things (create, modify, delete objects) will the content of the two contexts diverge.

So for your setup, proceed as follows:

create the child context
do the work you want to do (modifying or creating objects from the downloaded data),
save the child context
At this stage, nothing is saved to the persistent store yet. With the child save, the changes are just “pushed up” to the parent context. You can now

save the parent context
to write the new data to the persistent store. Then

update your UI,
best via notifications (e.g. the NSManagedObjectContextDidSaveNotification).