1) Protocol definition

Given the protocol declaration in DataModel.h, we have:
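The snippet isn’t reproduced here, but based on the delegate and method names used later in this post, the protocol declaration would look something like this (the exact method signature is an assumption):

```objc
// DataModel.h — sketch of the protocol; only UpdateViewDelegate and
// showMessageBox: are named in this post, so the parameter is assumed.
@protocol UpdateViewDelegate <NSObject>
@optional
- (void)showMessageBox:(NSString *)message; // ask the view layer to show a message
@end
```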

2) Define the protocol delegate in the delegator (source) class

Our DataModel class is the delegator. It is the “source” of all of our delegations.

The delegate “delegates” the messages sent by DataModel (the source) to whatever view class/controller (the destination) the delegate points to. Hence, that view class/controller will have to implement the delegate method(s).
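A minimal sketch of the delegate property in the delegator, assuming standard Objective-C delegate conventions:

```objc
// DataModel.h — the delegator holds a weak reference to whoever listens to it,
// which avoids a retain cycle between delegator and delegatee.
@interface DataModel : NSObject
@property (nonatomic, weak) id<UpdateViewDelegate> delegate;
@end
```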

3) Implement the protocol method in the delegator (source) class

In DataModel.m, we have:

This means that whichever object responds to (i.e., has an implementation of) what our protocol declared is the object we let take care of it.
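As a sketch (the triggering method name here is hypothetical), firing a delegate message with a respondsToSelector: guard looks like:

```objc
// DataModel.m — somethingChanged is a hypothetical trigger; the guard is
// needed because the protocol's methods are @optional.
- (void)somethingChanged {
    if ([self.delegate respondsToSelector:@selector(showMessageBox:)]) {
        [self.delegate showMessageBox:@"Data model updated"];
    }
}
```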

4) Conform to the delegate in the view/controller (delegatee) class

Hence, let’s say we have RegistrationViewController. We make it conform to our delegate protocol UpdateViewDelegate like so:

Which means this class (RegistrationViewController) will have to implement the delegate methods.
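The conformance declaration itself is one line in the class header, roughly:

```objc
// RegistrationViewController.h — adopt the protocol declared in DataModel.h.
#import "DataModel.h"

@interface RegistrationViewController : UIViewController <UpdateViewDelegate>
@end
```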

And since all of our UpdateViewDelegate methods are optional, we can implement them selectively. In our example, let’s implement showMessageBox:
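One plausible implementation (the alert wording is made up; UIAlertView was the era-appropriate API for Xcode 5/6 and iOS 7):

```objc
// RegistrationViewController.m — show the delegated message in an alert.
- (void)showMessageBox:(NSString *)message {
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Info"
                                                    message:message
                                                   delegate:nil
                                          cancelButtonTitle:@"OK"
                                          otherButtonTitles:nil];
    [alert show];
}
```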

However, those implemented UpdateViewDelegate methods won’t be of any use if no delegate messages get passed here. Hence, we need a delegator class (one that declares an UpdateViewDelegate property) to send its delegate messages to us.

We do so by using such a class (the delegator) in our RegistrationViewController (the delegatee):

We use the DataModel object (whose class declares UpdateViewDelegate) in our class:

Then we assign our DataModel object’s delegate to self.
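The wiring is a single assignment; a sketch (the property names are assumed):

```objc
// RegistrationViewController.m — point the delegator's delegate at ourselves.
- (void)viewDidLoad {
    [super viewDidLoad];
    self.dataModel = [[DataModel alloc] init]; // dataModel property is assumed
    self.dataModel.delegate = self;            // delegate messages now reach us
}
```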

Now, whenever the DataModel object sends delegate messages to our RegistrationViewController, RegistrationViewController’s protocol methods will be able to take care of them.

Create and attach a PCH file in Xcode 6



  • Make new file: ⌘cmd+N
  • iOS/Mac > Other > PCH File > YourProject-Prefix.pch.
  • Project > Build Settings > Search: “Prefix Header”.
  • Under “Apple LLVM 6.0” you will find the Prefix Header key.
  • Make sure you create the .pch file in your app’s source folder (YonoApp in my case). If you decide to create it at the project root instead, then you’ll need to use $(SRCROOT)/YourProject-Prefix.pch.

  • Type in: “$(SRCROOT)/$(PROJECT_NAME)/YourProject-Prefix.pch”.
  • Clean project: ⌘cmd+⇧shift+K
  • Build project: ⌘cmd+B
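For reference, a typical prefix header looks like this (a config sketch; adjust the imports to your project):

```objc
// YourProject-Prefix.pch — typical contents for an iOS project.
#ifdef __OBJC__
    #import <UIKit/UIKit.h>
    #import <Foundation/Foundation.h>
#endif
```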


Holistic Element

Holistic Element for the iPad is a lifestyle app that connects its users to the power and joy of healthy living and a balanced lifestyle. The app strives to help users get started with basic traditional cooking, easy interval exercises that they will love to do daily, and articles that educate them about health.

Basically, there are 3 sections: education, cooking, and exercises.

Education consists of professionally written articles with references for further reading.


Cooking involves instructing the user how to cook via video and image tutorials.




The same goes for exercises.
In addition, for exercises, there is a timer-based workout program that the user can use in their living room.



Face Recognizer using OpenCV on the iPad


This project was built using

  • OpenCV 2.4.8
  • Xcode 5.0.2
  • for iOS 7
  • iPad Air

Set-up instructions here.
Download the full source here (26.9MB).

I created 2 sections: the recognition screen and the registration screen.

We first start off on the main page.

In the MainMenuViewController, I have 2 variables:

  • CameraWrapper * cameraWrapper;
  • DataModel * dataModel;

The cameraWrapper is basically an object that wraps all the functionality of the OpenCV camera. What it does is capture an image repeatedly. Each image is a matrix object, cv::Mat. It gets passed to you, and it is up to you to process it, see where the detected face is, and act on it.

The dataModel holds all the face data structures. Namely, we save all the user faces into this data structure. We also have a user-labels data structure that matches the faces. Finally, it has an UpdateViewDelegate that calls whatever UIViewController conforms to this delegate to update its front end whenever something gets updated in our data model.

For the register view controller, we add the faces we’ve detected to our carousel. That way, once the system has finished collecting the user’s faces, it can train on them and thus be able to recognize the user in the future.

In the case of recognition, our UpdateViewDelegate is all about updating user interface items.

When you push the registration button you open up a UIViewController to register your face.

When you push the recognition button you open up a UIViewController to recognize your face.

Both views repeatedly detect your face.

Face Detection and 2 ways of facial lighting

I’ve used 2 main methods of getting different lighting of the face. One way is to have the user move their face around by controlling the position of their nose with an axis and a yellow dot. Have their nose line up with the yellow dot and you’ll be able to position their face differently. Thus, you’ll get different facial lighting, and our training image set can be much more varied. This helps in the recognition phase.

A second way is to use the accelerometer. Have the user stand in one spot and snap images from 0–360 degrees. That way, you will also get varied lighting on the user’s facial structure.
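A sketch of that motion-driven idea using CoreMotion (the capture-trigger threshold is an assumption; the post’s actual trigger logic isn’t shown):

```objc
#import <CoreMotion/CoreMotion.h>

CMMotionManager *motionManager = [[CMMotionManager alloc] init];
[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                   withHandler:^(CMDeviceMotion *motion, NSError *error) {
    // Convert yaw to 0–360 degrees as the user turns in place.
    double degrees = fmod(motion.attitude.yaw * 180.0 / M_PI + 360.0, 360.0);
    // e.g., snap a face image every 30 degrees (threshold is illustrative).
}];
```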

Camera leak…fix and usage

In OpenCV’s highgui module folder, go to the file,
i.e., /Users/rickytsao/opencv-2.4.8/modules/highgui/src/
and replace the stop method with this:

Insert the lines where I have (++). This is where we need to release the captureSession one more time. Otherwise, you will leak memory when the camera is opened and closed one too many times.
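The patched method isn’t reproduced verbatim here; based on the description above, its shape is roughly the following, with the (++) lines being the extra releases:

```objc
// Paraphrased stop method from OpenCV's iOS camera wrapper — not the
// verbatim source; (++) marks the additions described above.
- (void)stop {
    running = NO;

    [self.captureSession stopRunning];

    self.captureSession = nil;            // (++) release the session
    self.captureVideoPreviewLayer = nil;  // (++) release the preview layer
    self.videoCaptureConnection = nil;    // (++) release the connection
}
```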

Step 1, process the face

Whether we are in registration or recognition, the program is constantly detecting a subject’s face via the method ProcessFace.

Every time the iPad’s camera captures an image, it gets passed into our ProcessFace method here. That image is a Mat object.

The method getPreprocessedFace returns the preprocessedFace, which tells us whether it has detected a valid face or not. Then we check what phase we are in and either recognize or collect. In our case, since it is registration, we are collecting faces. So whatever face we detect will be saved into a data structure to be trained on later.

As of now, we are collecting faces, so we are in the MODE_COLLECT_FACES phase. This is where, for each image we snap, we insert it into a data structure called preprocessedFaces.

Before the insertion, we do 3 things to the image: shrink it to 70×70, grayscale it, and equalize it. That way, the image is much smaller and easier to work with. Then we pass it into OpenCV’s detectMultiScale, and it will return cv::Rect objects detailing where the face(s) are on the screen. In our case, we are working with only one face: we detect the largest face in the current image.
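An Objective-C++ sketch of that detection step (names and the exact ordering are assumptions, since the post’s code isn’t shown):

```objc
// Grayscale + equalize the frame, detect faces, and keep the largest one.
cv::Rect detectLargestFace(const cv::Mat &frame, cv::CascadeClassifier &faceCascade)
{
    cv::Mat gray;
    cv::cvtColor(frame, gray, CV_BGR2GRAY); // grayscale
    cv::equalizeHist(gray, gray);           // equalize lighting

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces);

    cv::Rect largest;                       // empty (area 0) if nothing found
    for (size_t i = 0; i < faces.size(); i++)
        if (faces[i].area() > largest.area())
            largest = faces[i];
    return largest;
}
// The face crop is then resized to 70x70 before being pushed into preprocessedFaces.
```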

Then we draw a square around the face.

Once that’s done, we then see what phase we are in:


Since we are in MODE_COLLECT_FACES, we push our image matrix into a data structure.


I used iCarousel to display all the images I took on an image carousel. That way, we can scroll through and check out all the images we took. These are the images that were added to our preprocessedFaces data structure.

Also, in previous versions, I simply used the yellow circle to match up with the person’s nose, angling their face JUST a little so that I could collect different angles of their face. In this version, I use the accelerometer, where the user angles their face horizontally from 0 to 360 degrees.



After the faces have been collected, we automatically train on those images using the
method learnCollectedFacesWithFaces:withLabels:andStrAlgorithm: in DataModel.m.

We train it on a cv::Ptr object. Once it’s trained, we can recognize faces by changing the phase to recognition in DataModel.m. Once our phase changes to recognition, we run the method below, and use it to predict which identity each image matrix belongs to.
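A sketch of the train/predict pair with OpenCV 2.4’s contrib FaceRecognizer (the algorithm string and container names here are assumptions):

```objc
// preprocessedFaces is a std::vector<cv::Mat>, faceLabels a std::vector<int>.
cv::Ptr<cv::FaceRecognizer> model =
    cv::Algorithm::create<cv::FaceRecognizer>("FaceRecognizer.Eigenfaces");
model->train(preprocessedFaces, faceLabels);      // learn the collected faces

// Later, in the recognition phase:
int identity = model->predict(preprocessedFace);  // label of the matched user
```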