Implementing feedTheKraken

Get the project running

Download the source code onto your computer
download v1.0

cd into the directory, and install all packages
npm install

Then start the server:
npm start
You should see the server starting up.

In the directory, there is an index.html file. Double click it.

You’ll see the web page.

Go ahead and start using it. Click the browse button and select the images you want to kraken.

Then click the “Feed it” button. This sends all your images to the server.
The server will then save these images into an ‘unprocessed’ folder that is unique to your browser.

Once the images are in that folder, the server sends them to Kraken.io to be processed. You will see the images being processed in your terminal.

Once processed, Kraken.io returns URLs that contain the finished images. The server takes those URLs and downloads the finished images into a ‘processed’ folder.

Then it zips the ‘processed’ folder and logs the finishing steps.

Starting the project from scratch

Set up a basic project with gulp:

ref – http://shanghaiseagull.com/index.php/2015/06/30/starting-gulp/

You should now have a functioning server running, with nodemon handling your workflow.

Install body-parser

npm install express body-parser --save

edit your app.js

We flesh out the server to be a bit more detailed and standard.

Creating the index.html

In your project directory, touch index.html

First, we have a file input control that takes in multiple files.
Second, we have a button right underneath it. The button executes a JS function, which passes the file information to a URL on our server.

First, let’s see the skeleton. Notice we have included Bootstrap CSS so that we have some ready-made styles to use.
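Here is one possible skeleton, assuming Bootstrap 3 from a CDN; the element ids and the function name are placeholders:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Feed The Kraken</title>
  <!-- Bootstrap CSS for some ready-made styling -->
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
</head>
<body>
  <div class="container">
    <h1>Feed The Kraken</h1>
    <!-- file input control that accepts multiple images -->
    <input type="file" id="files" multiple accept="image/*">
    <!-- the button calls a JS function that posts the files to our server -->
    <button class="btn btn-primary" onclick="feedIt()">Feed it</button>
  </div>
  <script>
    function feedIt() { /* filled in below, in the axios section */ }
  </script>
</body>
</html>
```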

Fingerprint2

Make sure you include this in your script area

Multiple browsers will be hitting our server. Hence, all unprocessed and processed images are kept in folders reserved for one browser only. There will be many folders, each matched to a browser via a unique string id. As the browsers make their requests, images are kept in those folders; that is how we know which images belong to which browser. Fingerprint2 distinguishes browsers by returning a unique id for each browser.
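A sketch of grabbing the browser id, assuming the Fingerprint2 v1.x API and a CDN script include (exact version and URL are assumptions):

```javascript
// include the library in your script area first, e.g.
// <script src="https://cdnjs.cloudflare.com/ajax/libs/fingerprintjs2/1.5.1/fingerprint2.min.js"></script>

var browserId = null;

function getBrowserId() {
  new Fingerprint2().get(function (result) {
    // result is a hash string unique to this browser; we use it
    // to name this browser's unprocessed/processed folders on the server
    browserId = result;
  });
}
```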

Using axios to send files to node server

Make sure you include this in your script area

Keep in mind that in order to distinguish ourselves from the other browsers, we throw the browser id into the url parameter. That way, when we save our images into a folder on the server, the server keeps track of them via our browser id.

1) First, we get the array of images from the “file” control.
2) Then we create a FormData object that collects them.
3) We include the browser id as a url parameter and pass the FormData in the request body.
4) We receive the response from the server.
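The four steps above can be sketched like this — the route name /upload, the field name 'images', the port, and the browserId variable (set earlier by Fingerprint2) are assumptions:

```javascript
function feedIt() {
  // 1) grab the array of images from the "file" control
  var files = document.getElementById('files').files;

  // 2) create a FormData object that collects the files
  var formData = new FormData();
  for (var i = 0; i < files.length; i++) {
    formData.append('images', files[i]);
  }

  // 3) include the browser id as a url parameter, pass the FormData as the body
  axios.post('http://localhost:3000/upload?browserId=' + browserId, formData)
    .then(function (response) {
      // 4) receive the response from the server
      console.log(response.data);
    })
    .catch(function (err) {
      console.log(err);
    });
}
```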

Don’t forget to implement upload for POST requests on the server. The point is that we have different browsers uploading images. We keep track of each browser’s images by creating an “unprocessed-${browser id}” folder. It holds all uploaded images from that browser that have not yet been processed by Kraken.

You should then be able to see the response back in index.html with result: “good”.

Installing Multer

In your directory, install multer:

npm i multer

Create a function called processImagesFromClientPromise and implement it like so.

Make sure you implement createFolderForBrowser, because as the images come in, you’ll need a place to store them.

Zipping a folder

After krakening all the images, we place them in the “processed” folder.
In the future, we may want to send all these images via email, or a link so the user can download it.
It’s best if we can zip all these images together. We use the archiver package to do this.

First, we install archiver:


npm install archiver --save

This is how we implement it. However, in the future, we will want to place it inside a Promise when we refactor.

Downloading the Krakened image from the provided url

something like this:

https://dl.kraken.io/api/83/b0/37/75905b74440a18e6fe315cae5e/IMG_1261.jpg

We provide the link string via uri.
We give it a filename such as “toDownload”.
Then we provide a callback for when it is done.

used like so:

Function setup

However, the problem is that all of this deteriorates into the Pyramid of Doom. Each task does something asynchronous, and we wait until it is done. When it completes, inside the callback, we call the next task.

Hence our tasks list is something like this:

  1. processImagesFromClient
  2. readyKrakenPromises
  3. runAllKrakenPromises
  4. saveAsZip

Some of the functionality runs inside callbacks, and some runs at the end of the functions, so we end up with a complicated chain that no one wants to follow.
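The shape of that chain looks like this — the four task bodies are stubbed with setTimeout here, and the signatures are assumptions; the nesting is the point:

```javascript
// stub versions of the four tasks, each asynchronous via setTimeout
function processImagesFromClient(cb) { setTimeout(() => cb('browser-123'), 0); }
function readyKrakenPromises(browserId, cb) { setTimeout(() => cb(['task1', 'task2']), 0); }
function runAllKrakenPromises(tasks, cb) { setTimeout(() => cb('processed-browser-123'), 0); }
function saveAsZip(folder, cb) { setTimeout(() => cb(), 0); }

// each task only starts inside the previous task's callback,
// so the code drifts further right with every step — the Pyramid of Doom
processImagesFromClient(function (browserId) {
  readyKrakenPromises(browserId, function (krakenTasks) {
    runAllKrakenPromises(krakenTasks, function (processedFolder) {
      saveAsZip(processedFolder, function () {
        console.log('all done');
      });
    });
  });
});
```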

Hence, let’s use Promises to fix it.

Promises version

ref – http://shanghaiseagull.com/index.php/2017/10/03/promise-js/

…with promises, it looks much prettier:

full source promises version

Basically, we group code inside a new Promise and return that promise. Whatever value we want to hand forward, we pass to resolve. Calling resolve indicates that we move on to the next Promise.

Also, whatever parameter gets passed into resolve will appear as the parameter of .then. You may then pass that parameter on to the next function.
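The flat chain looks like this — the four bodies are stubbed with immediate resolves; in the real project each one does its async work before resolving:

```javascript
function processImagesFromClient() {
  return new Promise((resolve) => resolve('browser-123'));
}
function readyKrakenPromises(browserId) {
  return new Promise((resolve) => resolve(['task1', 'task2']));
}
function runAllKrakenPromises(tasks) {
  return new Promise((resolve) => resolve('processed-browser-123'));
}
function saveAsZip(folder) {
  return new Promise((resolve) => resolve(folder + '.zip'));
}

// whatever we pass to resolve arrives as the .then parameter,
// which we hand on to the next function — a flat, readable chain
processImagesFromClient()
  .then((browserId) => readyKrakenPromises(browserId))
  .then((tasks) => runAllKrakenPromises(tasks))
  .then((folder) => saveAsZip(folder))
  .then((zipName) => console.log('done:', zipName)) // prints "done: processed-browser-123.zip"
  .catch((err) => console.log(err));
```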

Encapsulation

However, make sure we encapsulate the functionality. We don’t want outside code to be able to call functions such as readyKrakenPromises, runAllKrakenPromises, and saveAsZip directly.

So we change these functions to be private, then create a public function that runs the Promise chain, like so:

used like so:
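One way to sketch the encapsulation is the revealing-module pattern — the helpers stay private and only one entry point is exposed (krakenImages is an assumed name for that public function, and the bodies are stubbed):

```javascript
const imageProcessor = (function () {
  // private — outside code cannot call these directly
  function processImagesFromClient() { return Promise.resolve('browser-123'); }
  function readyKrakenPromises(browserId) { return Promise.resolve(['task1', 'task2']); }
  function runAllKrakenPromises(tasks) { return Promise.resolve('processed-browser-123'); }
  function saveAsZip(folder) { return Promise.resolve(folder + '.zip'); }

  // public — the single entry point that chains the private Promises
  return {
    krakenImages: function () {
      return processImagesFromClient()
        .then(readyKrakenPromises)
        .then(runAllKrakenPromises)
        .then(saveAsZip);
    }
  };
})();

// used like so:
// imageProcessor.krakenImages().then((zip) => console.log('zipped:', zip));
```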

app.js full source

index.html full source