Serializing Data – domain model to persistence

http://nshipster.com/nscoding/

There are many ways to serialize data, or convert a domain model into a persistent form:

Core Data
NSKeyedArchiver
pList
NSUserDefaults
etc…

Core Data may not be your answer because:

Not all apps need to query data.
Not all apps need automatic migrations.
Not all apps work with large or complex object graphs.

NSKeyedArchiver and NSCoding

Thus, using NSKeyedArchiver and NSCoding may be a great solution in some cases.

NSCoding is a simple protocol, with two methods: -initWithCoder: and -encodeWithCoder:. Classes that conform to NSCoding can be serialized and deserialized into data that can either be archived to disk or distributed across a network.

NSKeyedArchiver serializes NSCoding-compliant classes to and from a data representation.

NSCoding methods

In -encodeWithCoder: you tell the coder which properties to encode. Encode means to “convert into a coded form”.
Hence, we hand our properties to the coder’s encodeObject:forKey: calls.

Decode means to convert a coded message back into something readable.
Thus, in -initWithCoder: we have our properties take on the data returned by the decoder.

Each property is encoded or decoded as an object or scalar type, using the name of the property as the key each time.
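As a minimal sketch (the Person class and its properties below are hypothetical, not from the original post), an NSCoding implementation might look like this:

#import <Foundation/Foundation.h>

// Hypothetical model class conforming to NSCoding; the property names double as the keys.
@interface Person : NSObject <NSCoding>
@property (nonatomic, copy) NSString *name;
@property (nonatomic, assign) NSInteger age;
@end

@implementation Person

// Encode: tell the coder which properties to write, keyed by property name.
- (void)encodeWithCoder:(NSCoder *)aCoder {
    [aCoder encodeObject:self.name forKey:@"name"];
    [aCoder encodeInteger:self.age forKey:@"age"];
}

// Decode: read each property back out using the same keys.
- (instancetype)initWithCoder:(NSCoder *)aDecoder {
    self = [super init];
    if (self) {
        _name = [aDecoder decodeObjectForKey:@"name"];
        _age  = [aDecoder decodeIntegerForKey:@"age"];
    }
    return self;
}

@end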

File System

NSKeyedArchiver and NSKeyedUnarchiver provide a convenient API to read / write objects directly to / from disk.

You first get the path of the directory you want. Then you append a file name to that path. The resulting NSString, appFile, is the full path of the file you are saving to.

Then, take your array or dictionary and pass it to NSKeyedArchiver’s archiveRootObject:toFile: method in order to save.
When you want to load, use unarchiveObjectWithFile: with appFile.
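A minimal sketch of both steps (the file name and the array contents here are assumptions):

#import <Foundation/Foundation.h>

// Build the full path: documents directory + an assumed file name.
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *appFile = [[paths firstObject] stringByAppendingPathComponent:@"people.archive"];

// Save: archive the root object (an array/dictionary of NSCoding-compliant objects) to disk.
NSArray *people = @[@"Alice", @"Bob"];   // NSString already conforms to NSCoding
BOOL saved = [NSKeyedArchiver archiveRootObject:people toFile:appFile];

// Load: unarchive the root object back from disk.
NSArray *loaded = [NSKeyedUnarchiver unarchiveObjectWithFile:appFile];
NSLog(@"saved: %d, loaded: %@", saved, loaded);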

pLists

A property list (a.k.a. pList) is a structured data representation used by Cocoa and Core Foundation as a convenient way to store, organize, and access standard types of data. You can build slightly more complex structures, but not much more: NSString, NSData, NSNumber, NSDate, NSArray, and NSDictionary are the only Objective-C data types that property lists support. Sub-entries are only allowed inside an NSDictionary or NSArray. A property list is meant to store “less than a few hundred kilobytes” of data. A very significant and popular use of property lists is to define application settings.

If you open a pList with Xcode, it shows a very comfortable view of key-value pairs. But if you open the pList with a text editor you will see the actual format, which is XML; the XML property list format is a constrained subset of XML.
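For example (the file name and keys here are made up), writing and reading a small settings dictionary as a pList:

#import <Foundation/Foundation.h>

NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *plistPath = [[paths firstObject] stringByAppendingPathComponent:@"Settings.plist"];

// Only plist types are allowed: NSString, NSData, NSNumber, NSDate, NSArray, NSDictionary.
NSDictionary *settings = @{ @"soundOn"   : @YES,
                            @"volume"    : @0.8,
                            @"lastOpened": [NSDate date] };

// Write the dictionary out as a property list file.
BOOL wrote = [settings writeToFile:plistPath atomically:YES];

// Read it back.
NSDictionary *loaded = [NSDictionary dictionaryWithContentsOfFile:plistPath];
NSLog(@"wrote: %d, loaded: %@", wrote, loaded);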

ARC vs MRC

https://www.quora.com/How-does-garbage-collection-happen-in-iOS
http://stackoverflow.com/questions/6385285/why-doesnt-ios-have-automatic-garbage-collection

Manual Reference Counting has the developer take care of the allocation and release of objects over their life cycle, using alloc/retain and release. You have to make sure every alloc (or retain) is matched with a release.

MRC under the hood:
1. Every time you create and allocate an object, its reference count starts at 1. So suppose you created object foo; the reference count is now 1.
2. If any other object also retains this object, the count increases to 2.
3. Now, if I decide to give up my ownership and remove my reference, the reference count drops by 1.

So technically, every retain of an object is a referenceCount++ and every release of the object is a referenceCount--.

Once the reference count reaches zero, that’s when the object foo is deallocated.
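Under MRC (i.e. with ARC turned off), the bookkeeping looks like this sketch:

#import <Foundation/Foundation.h>

// Compiled with ARC disabled (-fno-objc-arc); purely illustrative.
NSMutableArray *foo = [[NSMutableArray alloc] init];   // reference count: 1

NSMutableArray *other = [foo retain];                  // another owner -> reference count: 2

[other release];                                       // reference count: 1
[foo release];                                         // reference count: 0 -> foo is deallocated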

Automatic Reference Counting – the compiler looks after the memory management. Thus, you don’t have to worry about releasing objects. You just allocate and that’s it.

In other words, it is a memory management enhancement where the task of keeping track of reference counts for objects is moved from the programmer to the compiler.

ARC differs from Cocoa’s garbage collection in that there is no background process doing the deallocation of objects. Unlike garbage collection, ARC does not handle reference cycles automatically; it is up to the program to break cycles using weak references.

Developers no longer have to worry about retaining / releasing objects, you don’t have a garbage collector process slowing execution at random, and you still maintain fairly tight control over memory usage.
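For example, a parent/child pair would form a retain cycle if both references were strong; marking the back-reference weak breaks the cycle (the class names are illustrative):

#import <Foundation/Foundation.h>

@class Child;

@interface Parent : NSObject
@property (nonatomic, strong) Child *child;    // Parent owns Child
@end

@interface Child : NSObject
@property (nonatomic, weak) Parent *parent;    // weak back-reference: does not keep Parent alive
@end

@implementation Parent
@end

@implementation Child
@end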

How ARC works

ARC works its magic at compile time to do the reference counting for you, thereby making it unnecessary (and in fact not allowed) to use any other sort of memory management.

What ARC does is look at your code, see where the scope of each variable ends, and insert the appropriate retain, release, and autorelease calls for you (calling release yourself is no longer allowed).
What autorelease does is add the object to an AUTORELEASE POOL, which basically means that once the scope/block ends and the pool is drained, the objects in it are sent release messages.
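You can see the pool at work with an explicit @autoreleasepool block (a small sketch):

#import <Foundation/Foundation.h>

@autoreleasepool {
    // Autoreleased objects created in this block are added to this pool...
    NSString *temp = [NSString stringWithFormat:@"temp %d", 42];
    NSLog(@"%@", temp);
}   // ...and are sent release messages here, when the pool is drained.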

Why it does not use Garbage Collection

At WWDC 2011, Apple explained that they didn’t want garbage collection on their mobile devices because they want apps to be able to run with the best use of the provided resources, and with great determinism. The problem with garbage collection is not only that objects build up over time until the garbage collector kicks in, but that you don’t have any control over when the garbage collector will kick in. This causes non-deterministic behavior that can lead to slowdowns at exactly the moments you don’t want them.

Bottom Line: You can’t say, “OK. I know that these objects will be freed at X point in time, and it won’t collide with other events that are occurring.”

Height of Tree

The height of a tree gives the maximum number of steps needed to search for an item in a data set using logarithmic division.

If you wanted to find a number in the tree, you compare the toFind integer with the current node. If it is larger, go down the right side; if smaller, go down the left side.

You may find your integer at the very top, or at the next node, or a few steps down the tree. But the absolute maximum number of steps it takes to find your item is the height of the tree.

By going down 1 step, you’ve removed half of the data set by process of elimination. For example, if you search for a 2 and the node is 8, you go down the left side. You don’t need to worry about the right side of 8, because all of those numbers are guaranteed to be bigger.

With each step you take, you eliminate half of the data set until you finally get to your number.

The number of eliminations (or divisions) you make is basically the height of the tree.
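A C-style sketch of the height calculation (the TreeNode struct is assumed; an empty tree is taken to have height 0):

// Hypothetical binary tree node (a plain C struct is valid in an Objective-C file).
typedef struct TreeNode {
    int value;
    struct TreeNode *left;
    struct TreeNode *right;
} TreeNode;

// Height = 1 + the height of the taller subtree; an empty tree has height 0.
int treeHeight(TreeNode *node) {
    if (node == NULL) return 0;
    int leftHeight  = treeHeight(node->left);
    int rightHeight = treeHeight(node->right);
    return 1 + (leftHeight > rightHeight ? leftHeight : rightHeight);
}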

To print the tree level by level:

1) Get the height of the tree.
2) Go through each level, from 0 up to the height, and print the nodes at that level.

The level is how many steps to go down into the tree:
0 is the root.
1 means go down 1 level and display all nodes there.
2 means go down 2 levels and display all nodes there.
etc.
NOTICE LEVEL: we recursively traverse via LEVEL.
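A sketch of that level-by-level printing, reusing the TreeNode struct and treeHeight from the previous sketch:

#include <stdio.h>

// Print all nodes that sit `level` steps below `node` (level 0 = the node itself).
void printLevel(TreeNode *node, int level) {
    if (node == NULL) return;
    if (level == 0) {
        printf("%d ", node->value);
    } else {
        printLevel(node->left,  level - 1);
        printLevel(node->right, level - 1);
    }
}

// Level-order traversal: get the height, then print every level from 0 up to height - 1.
void printLevelOrder(TreeNode *root) {
    int height = treeHeight(root);
    for (int level = 0; level < height; level++) {
        printLevel(root, level);
        printf("\n");
    }
}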

QuickSort

http://www.algolist.net/Algorithms/Sorting/Quicksort

Sort in Place

In-place sorting is a form of sorting in which only a small amount of extra space is used to manipulate the input set. In other words, the output is placed in the correct position while the algorithm is still executing, which means that the input is overwritten by the desired output at run time.
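A sketch of an in-place quicksort (using the Lomuto partition scheme), where the swaps happen directly in the input array and no extra array is allocated:

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

// Partition around the last element; everything smaller ends up to its left, in place.
static int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return i + 1;   // final position of the pivot
}

// Recursively sort the two halves on either side of the pivot.
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);
        quickSort(arr, p + 1, high);
    }
}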

Stable Sorting

http://stackoverflow.com/questions/1517793/stability-in-sorting-algorithms

A sorting algorithm is said to be stable if two objects with equal keys appear in the
same order in sorted output as they appear in the input array to be sorted.

Suppose we have input:

peach
straw
apple
spork

Sorted would be:

apple
peach
straw
spork

and this would be stable because even though straw and spork both start with ‘s’, their relative ordering is kept the same.

In an unstable algorithm, straw and spork may be interchanged, but in a stable sort they stay in the same relative positions (that is, since ‘straw’ appears before ‘spork’ in the input, it also appears before ‘spork’ in the output).

Examples of stable sorts:

mergesort
radix sort
bubble sort
insertion sort
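Foundation lets you ask for a stable sort explicitly. A small sketch that sorts the strings above by their first letter only, so that “straw” and “spork” compare as equal and keep their input order:

#import <Foundation/Foundation.h>

NSArray *input = @[@"peach", @"straw", @"apple", @"spork"];

// NSSortStable guarantees that elements that compare as equal keep their relative order.
NSArray *sorted = [input sortedArrayWithOptions:NSSortStable
                                usingComparator:^NSComparisonResult(id a, id b) {
    unichar ca = [(NSString *)a characterAtIndex:0];   // compare by first character only
    unichar cb = [(NSString *)b characterAtIndex:0];
    if (ca < cb) return NSOrderedAscending;
    if (ca > cb) return NSOrderedDescending;
    return NSOrderedSame;
}];

NSLog(@"%@", sorted);   // apple, peach, straw, spork (straw still comes before spork)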

hash table

Xcode 7.3 demo

Hash Table

First, let’s define the hash entry. It is simply a key and a value: given a key, that key will map to that value.

Next, we define an array of HashEntry pointers.

The table must be a double pointer, because the initial pointer points to the address of the first HashEntry pointer.

When adding, the idea is:

1) hash = key % TABLE_SIZE

hash is just an index of where we store our entry. The key could be any number; we use a simple modulo to map that number into the range 0 – TABLE_SIZE-1. Then we just store our entry there.

2) If the slot is already filled with another entry, we step forward to the next slot to see if it’s available (linear probing).

Getting the Value

We hash to the slot indicated by the key.
However, the key/value may be stored in a farther slot, because different keys can hash to the same index.
Remember, when a second key hashes to a slot that is already occupied, it steps linearly through the array and tries to find an empty slot.

Hence, if the key at the current slot does not match ours, we have to step forward and look in the next slot for the key that matches.
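A C-style sketch of both the add and the get with linear probing (fixed TABLE_SIZE, int keys and values, no deletion or resizing handled; it also assumes the table never completely fills up):

#include <stdlib.h>

#define TABLE_SIZE 128

typedef struct HashEntry {
    int key;
    int value;
} HashEntry;

// table is a double pointer: it points to the first HashEntry pointer in the array.
HashEntry **table;

void initTable(void) {
    table = malloc(sizeof(HashEntry *) * TABLE_SIZE);
    for (int i = 0; i < TABLE_SIZE; i++) table[i] = NULL;
}

// Add: hash = key % TABLE_SIZE, then step forward (linear probing) until we find
// an empty slot or a slot already holding this key.
void put(int key, int value) {
    int hash = key % TABLE_SIZE;
    while (table[hash] != NULL && table[hash]->key != key) {
        hash = (hash + 1) % TABLE_SIZE;
    }
    if (table[hash] == NULL) {
        table[hash] = malloc(sizeof(HashEntry));
    }
    table[hash]->key = key;
    table[hash]->value = value;
}

// Get: hash to the starting slot, then step forward until the key matches;
// hitting a NULL slot means the key was never inserted.
int get(int key, int *found) {
    int hash = key % TABLE_SIZE;
    while (table[hash] != NULL && table[hash]->key != key) {
        hash = (hash + 1) % TABLE_SIZE;
    }
    if (table[hash] == NULL) { *found = 0; return 0; }
    *found = 1;
    return table[hash]->value;
}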

Time Complexities

Virtually all hash table implementations offer O(1) for the vast majority of inserts. This is the same as inserting into an array – it’s O(1) unless you need to resize, in which case it’s O(n), plus the uncertainty of collisions.

Hash tables suffer from O(n) worst-case time complexity for two reasons:

1) Usually a search for a key gets you the value in O(1). It’s simply one lookup, one instruction.
However, if too many elements were hashed into the same key, looking inside that bucket may take O(n) time.

For example, imagine the strings “it was the best of times it was the worst of times” and “Green Eggs and Ham” both resulted in a hash value of 123.

When the first string is inserted, it’s put in bucket 123. When the second string is inserted, it sees that a value already exists in bucket 123. It then compares the new value to the existing value and sees that they are not equal. In this case, an array or linked list is created for that bucket. At this point, retrieving a value becomes O(n), as the hash table needs to iterate through each value in that bucket to find the desired one.

For this reason, when using a hash table, it’s important to use a key with a really good hash function that’s both fast and doesn’t often result in duplicate values for different objects.

2) Once a hash table has passed its load factor, it has to rehash (create a new, bigger table, and re-insert each element into the new table).

Deleting in Azure

Given that a refresh pull fetches data filtered on an attribute, a Soft Delete simply sets that attribute to YES/NO. Clients will then no longer pull that data. Additionally, the data is kept safe in the Easy Table for future reference and undeletes.

For example, let’s say you create an attribute “complete”.
When pulling data, you may specify that you want to pull all data that has NO for attribute “complete”.

Once you assign YES to attribute “complete” on, say, row 88, client refresh pulls will no longer include row 88. They will include all rows with NO for attribute “complete”.

When fetching from your fetched results controller / Core Data, simply filter the data with complete == NO.
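For example (the entity and attribute names are assumptions), the local fetch would simply exclude soft-deleted rows:

#import <CoreData/CoreData.h>

// Core Data fetch that only returns rows not yet marked complete.
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"TodoItem"];
request.predicate = [NSPredicate predicateWithFormat:@"complete == NO"];
request.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"text" ascending:YES]];
// Hand this request to your NSFetchedResultsController as usual.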

HARD DELETE – Delete on Local and Server

If you want to remove local AND server data, all you have to do is call the delete method from your MSSyncTable.

It sends a request to your local data source to remove the given item, then queues a request to send the delete to the mobile service.

It first removes the data locally.
Then, when the queued request goes through to the mobile service, the delete is applied remotely, and you can log into your Azure account, look at the Easy Tables, and see that the item has been removed.
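A sketch of that call, assuming the quickstart’s service object has an MSSyncTable property named syncTable (the exact completion-block shape can vary between SDK versions):

// item is the NSDictionary for the row to remove (it must contain the id key).
[self.syncTable delete:item completion:^(NSError *error) {
    if (error) {
        NSLog(@"Error deleting item: %@", error);
        return;
    }
    // Removed from the local store right away; the delete is queued and sent
    // to the mobile service on the next push/sync.
    NSLog(@"Item deleted locally and queued for the server");
}];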

Notes

Do not remove data by hand directly on the backend. Currently, Microsoft has no way of re-syncing, and your client app will end up with many error messages on its request queue.

Using MSTable’s readWithCompletion to pull data from the Server

When there is a delete on the server side, it means we have less data on the server side than on the local side.

We need to make sure our local DB syncs up to that. One way to do that is to pull the data by using MSTable’s readWithCompletion to get the latest from the server.

Then compare it to the Core Data store.

In your QS******Service.m,

declare some properties to save data:

Then, we implement a method where we store the results from the server. We use MSTable’s
readWithCompletion method like so:

Then, when you get the latest results, compare them against what is stored locally.
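A hedged sketch of that read, assuming the MicrosoftAzureMobile SDK where the completion block hands back an MSQueryResult (older SDK versions use a different block signature), and a serverItems property we made up to hold the results:

// self.table is the MSTable for TodoItem; self.serverItems is an assumed property to store results.
[self.table readWithCompletion:^(MSQueryResult *result, NSError *error) {
    if (error) {
        NSLog(@"Error reading from server: %@", error);
        return;
    }
    self.serverItems = result.items;   // the rows the server actually has right now
    // Next step: compare serverItems against what Core Data holds locally.
}];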

How to deal with an erroneous server DB deletion

Xcode 7.3 demo

UPDATE 8/17/16

Let’s say you have added 4 entries:

[screenshot: the 4 entries in the app]

The database reflects this:

[screenshot: the same 4 rows in the Easy Table]

Then, let’s say there is a direct deletion in the DB. The item we deleted has id A24D9651-A252-4A70-81A8-61520BB5C0D1.

[screenshot: the Easy Table after the direct deletion]

Now, even though this is not the way Microsoft wants us to do deletion, we do need a way to sync our local DB with the server DB just in case this happens.

Open the project. Let’s implement a log method to display the data in our local db.

QSAppDelegate.h

The logging is to verify the correctness of the local DB. It is also used to get the id of the entries you want to delete.

For example, on the server side, if someone deletes Ivan and you forgot to save the ID, you can always use the log method to display the local results:

2 — text = Ivan, id = 46206A2F-26A7-4667-8A8E-391CA51EE731

QSAppDelegate.m
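A hedged sketch of what such a log method could look like, assuming the quickstart’s Core Data stack on the app delegate and a TodoItem entity with text and id attributes:

// QSAppDelegate.m (sketch)
- (void)logAllTodoItem {
    NSLog(@"%s - logAllTodoItem method called", __FUNCTION__);
    NSLog(@"------------ RESULTS FROM DATABASE ------------");

    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"TodoItem"];
    NSError *error = nil;
    NSArray *results = [self.managedObjectContext executeFetchRequest:request error:&error];

    NSLog(@"%lu", (unsigned long)results.count);
    [results enumerateObjectsUsingBlock:^(NSManagedObject *item, NSUInteger idx, BOOL *stop) {
        NSLog(@"%lu --- text = %@, id = %@",
              (unsigned long)idx, [item valueForKey:@"text"], [item valueForKey:@"id"]);
    }];
}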

Delete – server and local

Then, let’s write the delete method for the service, so that we can remove the server data, and the local data.

Notice that we use MSTable’s deleteWithId: to remove the item from the server.

We use MSSyncTable’s delete: to remove the item locally.

QSTodoService.h

QSTodoService.m
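A hedged sketch of such a delete method, assuming the service exposes both an MSTable (self.table) and an MSSyncTable (self.syncTable) for TodoItem; the block signatures may differ slightly between SDK versions:

// QSTodoService.m (sketch)
- (void)deleteItem:(NSDictionary *)item completion:(void (^)(void))completion
{
    // 1) Remove from the server directly, by id, using MSTable.
    [self.table deleteWithId:item[@"id"] completion:^(id itemId, NSError *error) {
        if (error) {
            NSLog(@"Error deleting on server: %@", error);
        }

        // 2) Remove from the local store using MSSyncTable.
        [self.syncTable delete:item completion:^(NSError *localError) {
            if (localError) {
                NSLog(@"Error deleting locally: %@", localError);
            } else {
                NSLog(@"DELETED LOCALLY!");
            }
            if (completion) {
                completion();
            }
        }];
    }];
}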

Now, given an ID, let’s check to see if the server has that ID. Insert the ID of the item you just deleted (A24D9651-A252-4A70-81A8-61520BB5C0D1) into rowId.

To match the local data with a direct delete in the server DB, we simply use a local delete.
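A hedged sketch of the check itself, using MSTable’s readWithId: with the rowId from above (the surrounding method and property names are assumptions based on the log output):

// The id that was deleted by hand on the server.
NSString *rowId = @"A24D9651-A252-4A70-81A8-61520BB5C0D1";

[self.table readWithId:rowId completion:^(NSDictionary *item, NSError *error) {
    if (error) {
        // e.g. Code=-1302 "The item does not exist" -> the row is gone on the server,
        // so remove the matching row from the local store as well (see the delete above).
        NSLog(@"%@", error);
    } else {
        NSLog(@"Server still has item: %@", item);
    }
}];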

For simplicity’s sake, we’ll just do the logging and checking in the refresh method of QSTodoListViewController:

Run the program… first you’ll see the logging that proves your local database is out of sync with the server database:

2016-08-16 16:19:42.297 EpamEvents[4441:456591] -[QSAppDelegate logAllTodoItem] – logAllTodoItem method called
2016-08-16 16:19:42.297 EpamEvents[4441:456591] ———— RESULTS FROM DATABASE ————
2016-08-16 16:19:42.298 EpamEvents[4441:456591] 4
2016-08-16 16:19:42.298 EpamEvents[4441:456591] 0 — text = Dean, id = 1D9AB54C-793A-4240-A8FB-3E968BB16D09
2016-08-16 16:19:42.298 EpamEvents[4441:456591] 1 — text = Ricky, id = A24D9651-A252-4A70-81A8-61520BB5C0D1
2016-08-16 16:19:42.298 EpamEvents[4441:456591] 2 — text = Ralph, id = 49E6A521-CB9D-4D37-9CEB-F81403707202
2016-08-16 16:19:42.299 EpamEvents[4441:456591] 3 — text = Ivan, id = 46206A2F-26A7-4667-8A8E-391CA51EE731

Then, when you run through the method, you will see the checkServerDeletionCompletion method call readWithId. First it checks for item A24D9651-A252-4A70-81A8-61520BB5C0D1. Remember, someone mistakenly deleted this item from the DB directly, hence you’ll get an error like so:

2016-08-16 16:20:06.084 EpamEvents[4441:456591] Error Domain=com.Microsoft.MicrosoftAzureMobile.ErrorDomain Code=-1302 “The item does not exist” UserInfo={NSLocalizedDescription=The item does not exist, com.Microsoft.MicrosoftAzureMobile.ErrorRequestKey= { URL: https://epamevents.azurewebsites.net/tables/TodoItem/A24D9651-A252-4A70-81A8-61520BB5C0D1 }, com.Microsoft.MicrosoftAzureMobile.ErrorResponseKey= { URL: https://epamevents.azurewebsites.net/tables/TodoItem/A24D9651-A252-4A70-81A8-61520BB5C0D1 } { status code: 404, headers {
“Cache-Control” = “no-cache”;
“Content-Length” = 35;
“Content-Type” = “application/json; charset=utf-8”;
Date = “Tue, 16 Aug 2016 08:19:58 GMT”;
Etag = “W/\”23-xvKyQMaUWUD9x7DI0LISBQ\””;
Expires = 0;
Pragma = “no-cache”;
Server = “Microsoft-IIS/8.0”;
“Set-Cookie” = “ARRAffinity=871fe01e072348697c5ee601ae1b8377c6473f1f3b3bb965170b664f5c32221d;Path=/;Domain=epamevents.azurewebsites.net”;
“X-Powered-By” = “Express, ASP.NET”;
} }}

Then, once the error is noticed, we delete the item from our local store via the delete method. As you can see, locally, Ricky has been deleted.

The result is:

2016-08-16 16:20:06.091 EpamEvents[4441:457812] DELETED LOCALLY!
2016-08-16 16:20:09.886 EpamEvents[4441:456591] -[QSAppDelegate logAllTodoItem] – logAllTodoItem method called
2016-08-16 16:20:09.887 EpamEvents[4441:456591] ———— RESULTS FROM DATABASE ————
2016-08-16 16:20:09.887 EpamEvents[4441:456591] 3
2016-08-16 16:20:09.887 EpamEvents[4441:456591] 0 — text = Dean, id = 1D9AB54C-793A-4240-A8FB-3E968BB16D09
2016-08-16 16:20:09.888 EpamEvents[4441:456591] 1 — text = Ralph, id = 49E6A521-CB9D-4D37-9CEB-F81403707202
2016-08-16 16:20:09.888 EpamEvents[4441:456591] 2 — text = Ivan, id = 46206A2F-26A7-4667-8A8E-391CA51EE731