calling function on optional, optional initializer

Optionals are helpful because if you call a function on one that happens to be nil (via optional chaining), the call will not crash; it simply returns nil.

In our case, we created a class whose dictionary, when it has valid entries, can be queried by others who use the class.
If that dictionary has nothing in it, execution simply continues without a crash.

Quick note about dictionaries

https://www.weheartswift.com/dictionaries/

A dictionary is an unordered collection that stores multiple values of the same type.

Each value from the dictionary is associated with a unique key. All the keys have the same type.

The type of a dictionary is determined by the type of the keys and the type of the values. A dictionary of type [String: Int] has keys of type String and values of type Int.

Declare Dictionaries
To declare a dictionary you can use the square brackets syntax ([KeyType: ValueType]).

You can access specific elements from a dictionary using the subscript syntax. To do this pass the key of the value you want to retrieve within square brackets immediately after the name of the dictionary.

Because it’s possible that no value is associated with the provided key, the subscript returns an optional of the value type (i.e., it may be nil).

Hence, to unwrap the value returned by the subscript you can do one of two things: use optional binding, or force-unwrap the value if you know for sure it exists.
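A quick sketch of both approaches (the dictionary contents here are made up for illustration):

// A [String: Int] dictionary: String keys, Int values.
var ages: [String: Int] = ["Alice": 30, "Bob": 25]

// Subscripting returns an optional (Int?), because the key may not exist.
let missing = ages["Carol"]          // nil

// Option 1: optional binding.
if let age = ages["Alice"] {
    print("Alice is \(age)")         // Alice is 30
}

// Option 2: force-unwrap, only when you know for sure the key exists.
let bobsAge = ages["Bob"]!           // 25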

Example 1

If we have a valid property to initialize the dictionary with, we return a valid self. This lets others create and query our OutOfBoundsDictionary.

If our property name does not exist, then we do not want others to be able to query. Thus, in the initializer we put a ? (a failable initializer) to denote that what we return is an optional self. If the name does not exist, we return nil. When we return nil, any function call on that nil (made through optional chaining) is simply ignored.

Thus, if name is initialized, execution runs through normally and prints a valid last name.
If the name property was not initialized, then a comes back as nil and any function calls on it are ignored.
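A rough sketch of that idea — the class name OutOfBoundsDictionary comes from the text above, but the property, method, and value names here are assumptions:

class OutOfBoundsDictionary {
    let people: [String: String]          // firstName -> lastName

    // Failable initializer: the '?' means this init can return nil instead of self.
    init?(name: String?) {
        guard let name = name else { return nil }   // no name -> return nil, no object
        people = [name: "Smith"]                    // placeholder entry
    }

    func findLastName(for firstName: String) -> String? {
        return people[firstName]
    }
}

let a = OutOfBoundsDictionary(name: "Ricky")          // a valid (optional) instance
print(a?.findLastName(for: "Ricky") ?? "not found")   // prints the last name

let b = OutOfBoundsDictionary(name: nil)              // nil
_ = b?.findLastName(for: "Ricky")                     // b is nil, so the call is simply skipped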

Another way to do it

…is to have a standard initializer that always returns a valid self. However, because the dictionary has no entries, when other objects call findPerson with a firstName, it returns nil, and thus nothing is printed.
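A sketch of this second variant under the same assumptions (findPerson is named in the text; the rest is illustrative):

class OutOfBoundsDictionary {
    var people: [String: String] = [:]    // standard init: always returns a valid self,
                                          // but the dictionary starts out empty

    func findPerson(firstName: String) -> String? {
        return people[firstName]          // nil, since there are no entries
    }
}

let c = OutOfBoundsDictionary()
if let lastName = c.findPerson(firstName: "Ricky") {
    print(lastName)                       // never reached: findPerson returned nil
}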

Hash Table, prime numbers, and hash functions

basic hash table to Strings (xcode 8.3.3)
Hash table with a stack or queue (xcode 8.3.3)
HashTable with choice for data structure

http://www.partow.net/programming/hashfunctions/
https://www.quora.com/Why-are-prime-numbers-used-for-constructing-hash-functions
http://algs4.cs.princeton.edu/34hash/
https://computinglife.wordpress.com/2008/11/20/why-do-hash-functions-use-prime-numbers/
https://cs.stackexchange.com/questions/11029/why-is-it-best-to-use-a-prime-number-as-a-mod-in-a-hashing-function

Why do hash functions use prime numbers for the number of buckets?

Consider a hash function (or a set of numeric data) that gives you multiples of 10.

If we use a bucket size of say, 4 buckets, we get:

10 mod 4 = 2

20 mod 4 = 0

30 mod 4 = 2

40 mod 4 = 0

50 mod 4 = 2

So if we hash the set of results {10, 20, 30, 40, 50} into our buckets, all of them go into either bucket 0 or bucket 2: the odd multiples of 10 collide at bucket 2, and the even multiples collide at bucket 0. The distribution of data across the buckets is poor.

Let’s say we used 7 buckets instead. We take the generated hash keys, and do the mod to see how they are distributed throughout the hash table:

10 mod 7 = 3

20 mod 7 = 6

30 mod 7 = 2

40 mod 7 = 4

50 mod 7 = 1

Much better: the numbers are distributed more evenly.

Let’s say we used 5 buckets.

10 mod 5 = 0

20 mod 5 = 0

30 mod 5 = 0

40 mod 5 = 0

50 mod 5 = 0

Even though 5 is a prime number, all of our keys are multiples of 5, and thus the mod will always be 0. This will distribute all of our keys into bucket 0.

Therefore, we have to choose a prime number that does not divide our keys; choosing a large prime number is usually enough.

The reason prime numbers are used is to neutralize the effect of patterns in the keys on the distribution of collisions of a hash function.

In other words, say we have a function that generates a set (or just a simple data list) of data K = {0, 1, 2, 5, 88,…92847 }

and a hash table where the number of buckets is m = 12 (non-prime). Let’s call each number inside K a hash key.

We map a hash key onto a bucket either by masking (AND 0xff) or by taking the remainder (% array size m). In our example, we use % array size.

(hash-to-bucket diagram)

If K is uniformly distributed (i.e., every number in K is equally likely to occur), then the choice of bucket size ‘m’ is not so critical.

But, what happens if K is not uniformly distributed? Imagine that the keys that are most likely to occur are the multiples of 10 (like our example above), such as 10, 20, 30, 40, 50… and they keep appearing a lot.

In this case, the buckets get used very unevenly: with m = 12 and keys that are multiples of 10, only the even-numbered buckets ever get used, and all the odd-numbered buckets stay empty with high probability (which is really bad in terms of hash table performance).

In general:

Every key in K that shares a common factor with the (number of buckets) m will be hashed to a bucket that is a multiple of this factor.

For example, take the key 14 (factors 2 and 7) and m = 12 buckets (factors 2, 3, 4, and 6).

14 shares the common factor 2 with m (12 buckets).
Hence every multiple of 14 will be hashed to a bucket that is a multiple of 2:

14 % 12 = 2
28 % 12 = 4
42 % 12 = 6
56 % 12 = 8
70 % 12 = 10
84 % 12 = 0
98 % 12 = 2


Hence, every key in K (14, 28, 42, 56, …)
that shares a common factor with the number of buckets m = 12 (here the common factor is 2)
is hashed to a bucket that is a multiple of that common factor (0, 2, 4, 6, 8, 10).

Therefore, to minimize collisions, it is important to reduce the number of common factors between m and the elements of K. How can this be achieved?

By choosing m to be a number that has very few factors: a prime number.
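A quick way to see this in code (an illustrative sketch, not from the linked demos):

// Which buckets does a set of keys land in, for a given bucket count?
func bucketsUsed(_ keys: [Int], bucketCount: Int) -> Set<Int> {
    return Set(keys.map { $0 % bucketCount })
}

let keys = [10, 20, 30, 40, 50]
print(bucketsUsed(keys, bucketCount: 4).sorted())   // [0, 2]          -> poor distribution
print(bucketsUsed(keys, bucketCount: 7).sorted())   // [1, 2, 3, 4, 6] -> spread out
print(bucketsUsed(keys, bucketCount: 5).sorted())   // [0]             -> everything collides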

extending String

Initially, getting a character at an integer index from a String works like this:

We call String’s index(_:offsetBy:) method, passing startIndex (a String.Index) to indicate we start at the beginning of the String, then offset by i. That index is where we return the character in the String.

The second subscript simply uses the one above to initialize a String from that Character, then returns the String.

Finally, we extend Character and create an ascii var.
In it, we first create a String from the Character itself, then access unicodeScalars, which is the collection of Unicode scalar values for the character (its ASCII code, for ASCII characters).
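A sketch of those extensions, roughly as described above (the exact names and signatures are assumptions, not the demo’s source):

extension String {
    // Character at integer position i, by offsetting from startIndex.
    subscript(i: Int) -> Character {
        return self[index(startIndex, offsetBy: i)]
    }

    // String containing just the character at position i, built from the subscript above.
    subscript(i: Int) -> String {
        let c: Character = self[i]
        return String(c)
    }
}

extension Character {
    // Numeric value of the character's first Unicode scalar (its ASCII code for ASCII input).
    var ascii: UInt32 {
        return String(self).unicodeScalars.first!.value
    }
}

let word = "hash"
let c: Character = word[0]   // "h" (the type annotation picks the Character subscript)
print(c.ascii)               // 104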

Hash table – Separate Chaining, Open Addressing.

ref – https://en.wikipedia.org/wiki/Associative_array

The most frequently used general purpose implementation of an associative array is with a hash table: an array combined with a hash function that separates each key into a separate “bucket” of the array. The basic idea behind a hash table is that accessing an element of an array via its index is a simple, constant-time operation. Therefore, the average overhead of an operation for a hash table is only the computation of the key’s hash, combined with accessing the corresponding bucket within the array. As such, hash tables usually perform in O(1) time, and outperform alternatives in most situations.

Hash tables need to be able to handle collisions: when the hash function maps two different keys to the same bucket of the array. The two most widespread approaches to this problem are separate chaining and open addressing.

https://en.wikipedia.org/wiki/Hash_table#Separate_chaining

Algorithm   Average   Worst Case
Space       O(n)      O(n)
Search      O(1)      O(n)
Insert      O(1)      O(n)
Delete      O(1)      O(n)

The idea of hashing is to distribute the entries (key/value pairs) across an array of buckets. Given a key, the algorithm computes an index that suggests where the entry can be found:

index = f(key, array_size)

hash = hashfunc(key) // where hash is some number
index = hash % array_size // in order to fit the hash into the array size, it is reduced to an index using modulo operator

In this method, the hash is independent of the array size, and it is then reduced to an index (a number between 0 and array_size − 1) using the modulo operator (%).
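As an illustration, here is a minimal separate-chaining sketch in Swift, assuming String keys (this is not the linked demo’s code):

struct HashTable<Value> {
    private var buckets: [[(key: String, value: Value)]]

    init(capacity: Int) {
        buckets = Array(repeating: [], count: capacity)
    }

    // index = hash % array_size
    private func index(for key: String) -> Int {
        return abs(key.hashValue % buckets.count)
    }

    mutating func insert(_ value: Value, for key: String) {
        let i = index(for: key)
        if let j = buckets[i].firstIndex(where: { $0.key == key }) {
            buckets[i][j].value = value          // update an existing key
        } else {
            buckets[i].append((key, value))      // chain a new entry onto the bucket
        }
    }

    func value(for key: String) -> Value? {
        let i = index(for: key)
        return buckets[i].first(where: { $0.key == key })?.value
    }
}

var table = HashTable<Int>(capacity: 7)
table.insert(1, for: "one")
print(table.value(for: "one") ?? "missing")   // 1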

Choosing a hash function

http://eternallyconfuzzled.com/tuts/datastructures/jsw_tut_hashtable.aspx

Table size and range finding

The hash functions introduced in The Art of Hashing were designed to return a value in the full unsigned range of an integer. For a 32-bit integer, this means that the hash functions will return a value in the range [0..4,294,967,296). Because it is extremely likely that your table will be smaller than this, it is possible that the hash value may exceed the boundaries of the array.

The solution to this problem is to force the range down so that it fits the table size.

For example, if the table size is 888, and we get 8,403,958, how do we fit this value within the table?

A table size should not be chosen randomly because most of the collision resolution methods require that certain conditions be met for the table size or they will not work correctly. Most of the time, this required size is either a power of two, or a prime number.

Why a power of two? Because a table size that is a power of two may be desirable on implementations where bitwise operations offer performance benefits: forcing a value into the range of a power of two can be done quickly with a masking operation.

For example, to force the range of any value into eight bits, you simply use the bitwise AND operation on a mask of 0xff (hexadecimal for 255):

0x8a AND 0xff = 0x8a

from hex to digit:

Note that 8 bits = 1 byte, and 2^8 = 256 possible values.

0x8a -> 8 = 1000, a = 1010 -> 1000 1010 in binary

Binary to decimal: 1*128 + 0*64 + 0*32 + 0*16 + 1*8 + 0*4 + 1*2 + 0*1 = 128 + 8 + 2 = 138.

from digit to hex:

Thus, if we get the value 138, we can force it into the range of a 256-entry (8-bit) table by ANDing it with 0xff.

In code, you get the value 138 (1000 1010 in binary, 0x8a in hex) and apply the bitwise AND: 0x8a AND 0xff = 0x8a, so the index is still 138.

So 138 fits within 256. But what if you get a larger value, like 888? 888 in hex is 0x378.

0x378 AND 0x0ff = 0x78

We write the mask as 0x0ff because we are now dealing with three hex digits. Applying the AND keeps only the low byte, 0x78, which is 120 in decimal. Thus a hash value of 888 maps to index 120.
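The same arithmetic, sketched in Swift:

let hash = 888                       // 888 decimal == 0x378 hex
let masked = hash & 0xff             // keep only the low 8 bits
print(String(masked, radix: 16))     // "78" (0x78)
print(masked)                        // 120 -> bucket index 120 in a 256-entry table

// With a non-power-of-two table size, use the remainder instead.
print(888 % 256)                     // 120 (matches the mask, since 256 is a power of two)
print(888 % 7)                       // 6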

table[hash(key) & 0xff]
This is a fast operation, but it only works with powers of two. If the table size is not a power of two, the remainder of division can be used to force the value into a desired range with the remainder operator. Note that this is slightly different than masking because while the mask was the upper value that you will allow, the divisor must be one larger than the upper value to include it in the range. This operation is also slower in theory than masking (in practice, most compilers will optimize both into the same machine code):

table[hash(key) % 256]
When it comes to hash tables, the most recommended table size is any prime number.

This recommendation is made because hashing in general is misunderstood, and poor hash functions require an extra mixing step of division by a prime to resemble a uniform distribution. (https://cs.stackexchange.com/questions/11029/why-is-it-best-to-use-a-prime-number-as-a-mod-in-a-hashing-function)

Another reason that a prime table size is recommended is because several of the collision resolution methods require it to work. In reality, this is a generalization and is actually false (a power of two with odd step sizes will typically work just as well for most collision resolution strategies), but not many people consider the alternatives and in the world of hash tables, prime rules.

What is the equivalent of an Objective-C id in Swift?

https://stackoverflow.com/questions/24005678/what-is-the-equivalent-of-an-objective-c-id-in-swift

Swift 3

Any, if you know the sender is never nil.

@IBAction func buttonClicked(sender: Any) {
    print("Button was clicked", sender)
}
Any?, if the sender could be nil.

@IBAction func buttonClicked(sender: Any?) {
    print("Button was clicked", sender as Any)
}

Reader Writer #1 using Semaphore

Semaphore Reader Writer #1 demo

https://en.wikipedia.org/wiki/Readers%E2%80%93writers_problem

Starting off

We have a semaphore that lets 1 process in at a time. We name the reference to this semaphore semaphoreResource.
The whole point of this semaphore is for Readers and Writers to fight over it. If a Reader is holding it, a Writer cannot write.
If a Writer is holding it, Readers cannot read.

Writers fight with Readers for semaphoreResource but will never ever touch semaphoreReaderMutex.

That’s because semaphoreReaderMutex is used between Readers in order to update/change a variable called readCount.

readCount determines whether the first reader will hold the semaphoreResource and also whether the last reader will let go of semaphoreResource.
That’s the whole purpose of semaphoreReaderMutex.

We define two callbacks to simulate reading and writing. Reading takes 3 seconds and writing takes 5 seconds.

Writer

In the case of Writers, it’s very straightforward. A writer grabs the resource semaphoreResource. Once it succeeds, it does the writing by simply calling the runWriteCodeBlock callback. After the writing, it lets go of the semaphore (sketched below, together with the reader logic).

Readers

The idea is that we grab the reader semaphore (semaphoreReaderMutex) in order to make changes to the resource semaphore (semaphoreResource).
The reader semaphore allows the 1st reader to lock the resource semaphore, and naturally, the last reader to unlock the semaphore.

Any reader after the 1st one does not need to lock the resource semaphore anymore; it just goes ahead and does its reading.
However, every reader DOES NEED to grab the reader mutex, because it changes the readCount variable. This is how the number of readers is tracked.
Only when the last reader finishes reading (bringing readCount back to 0) does it unlock the resource semaphore so that the writers can write.
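A condensed sketch of this scheme using DispatchSemaphore (not the full demo source, which is linked below):

import Foundation

let semaphoreResource = DispatchSemaphore(value: 1)     // fought over by readers and writers
let semaphoreReaderMutex = DispatchSemaphore(value: 1)  // protects readCount between readers
var readCount = 0

func write(_ runWriteCodeBlock: () -> Void) {
    semaphoreResource.wait()          // a writer grabs the resource outright
    runWriteCodeBlock()
    semaphoreResource.signal()
}

func read(_ runReadCodeBlock: () -> Void) {
    semaphoreReaderMutex.wait()
    readCount += 1
    if readCount == 1 {               // first reader locks the resource
        semaphoreResource.wait()
    }
    semaphoreReaderMutex.signal()

    runReadCodeBlock()                // many readers can be here at once

    semaphoreReaderMutex.wait()
    readCount -= 1
    if readCount == 0 {               // last reader releases the resource
        semaphoreResource.signal()
    }
    semaphoreReaderMutex.signal()
}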

However, it may be that other readers will read also. In this solution, every writer must claim the resource individually. This means that a stream of readers can subsequently lock all potential writers out and starve them. As long as future readers keep coming in, the next waiting writer will NEVER be able to write.

This is so, because after the first reader locks the resource, no writer can lock it, before it gets released. All future writers MUST WAIT FOR ALL READERS TO FINISH (readCount back to 0) in order to grab hold of the resource semaphore in order to do its writing.

In other words, a few readers come along and increase readCount, then leave; then more readers come, and more, further into the future, such that readCount is always > 0. This is what starves the writer.

Therefore, this solution does not satisfy fairness.

full source

DispatchSemaphore

https://priteshrnandgaonkar.github.io/concurrency-with-swift-3/

Dispatch Groups

Dispatch group demo xCode 8.3.3

DispatchGroup uses enter() and leave() to group chunks of code together and track when they have all finished. The work itself can be dispatched onto a queue async or sync.

Note that dispatch groups are normally used with async, because sync would defeat the purpose. The whole point of a DispatchGroup is to have multiple tasks run and be notified when they are all done. With sync, tasks run one by one and in order, so each task must begin and end in order and there is nothing useful to notify about.

Source Code

First, take note that we loop through the indices, placing code chunks onto a global queue via async:
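Roughly, the structure of the demo looks like this (a sketch reconstructed from the output below, not the demo’s exact code):

import Foundation

let dispatchGroup = DispatchGroup()
let queue = DispatchQueue.global()

for index in 0..<4 {
    dispatchGroup.enter()                        // this code chunk joins the group
    queue.async {
        print("----- code execution entered dispatchGroup for index \(index) ------")
        print("code execution \(index) start")

        for i in 0...100 {
            Thread.sleep(forTimeInterval: 0.01 * Double(index))   // index 0 never sleeps
            print("loop index \(i)---- code chunk \(index)---")
        }

        print("code execution \(index) finish")
        print("------ code execution left dispatchGroup for index \(index) --------")
        dispatchGroup.leave()                    // this code chunk is done
    }

    // Fires once the group count drops back to 0; registered inside the loop,
    // so it ends up printing "Block 4" four times.
    dispatchGroup.notify(queue: .main) {
        print("Block 4")
    }
}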

Then these 4 code chunks get processed by the global queue asynchronously. Their initial lines get printed:

—– code execution entered dispatchGroup for index 0 ——
—– code execution entered dispatchGroup for index 2 ——
—– code execution entered dispatchGroup for index 3 ——
—– code execution entered dispatchGroup for index 1 ——
code execution 0 start
code execution 2 start
code execution 3 start
code execution 1 start

This means that each code chunk has placed a section of code into the dispatchGroup. In our case, each section of code prints a loop from 0 to 100.
The dispatchGroup now has 4 sections of code to process.

The dispatchGroup will now execute the 4 sections of code asynchronously.

Because code chunk 0 never sleeps, it simply does its printing first, and it does it fast. Code chunk 1 can’t keep up because every iteration of its loop sleeps for 0.01 seconds before printing. Thus, since code chunk 0’s loop does no sleeping, it prints everything first.

Then code chunk 1 starts, and sleeps for 0.01 between each print. Code chunk 2 will also start, and it sleeps for 0.02 between each print….

Due to

index 1’s loop having to sleep 0.01 s,
index 2’s loop having to sleep 0.02 s,
index 3’s loop having to sleep 0.03 s,

for every iteration, you can see that index 1’s loop prints just a tad faster than index 2’s and index 3’s loops.

index 1’s code chunk finishes and thus, it leaves the group.

Eventually index 2’s code also finishes and leaves the group. Then index 3 finally leaves the group after printing its loop.

Then, when the group is finally empty, it notifies us and runs the code block that prints “Block 4”. Since notify is registered inside the for loop, it prints 4 times.

Of course, you can also remove the sleep line. In that case, all of the code chunks on the global queue are processed asynchronously and finish in roughly the same time, since none of them sleep. It still holds that once ALL of them are finished, the group notifies, and thus the notify code blocks run.

output

—– code execution entered dispatchGroup for index 0 ——
—– code execution entered dispatchGroup for index 2 ——
—– code execution entered dispatchGroup for index 3 ——
—– code execution entered dispatchGroup for index 1 ——
code execution 0 start
code execution 2 start
code execution 3 start
code execution 1 start
loop index 0—- code chunk 0—
loop index 1—- code chunk 0—
….
loop index 100—- code chunk 0—
code execution 0 finish

—— code execution left dispatchGroup for index 0 ——–
loop index 0—- code chunk 1—
loop index 0—- code chunk 2—
loop index 1—- code chunk 1—
loop index 0—- code chunk 3—
….
loop index 100—- code chunk 1—
code execution 1 finish

—— code execution left dispatchGroup for index 1 ——–
loop index 51—- code chunk 2—
loop index 35—- code chunk 3—
loop index 52—- code chunk 2—

loop index 68—- code chunk 3—
loop index 100—- code chunk 2—
code execution 2 finish

—— code execution left dispatchGroup for index 2 ——–
loop index 69—- code chunk 3—
loop index 70—- code chunk 3—
loop index 71—- code chunk 3—

loop index 98—- code chunk 3—
loop index 99—- code chunk 3—
loop index 100—- code chunk 3—
code execution 3 finish

—— code execution left dispatchGroup for index 3 ——–

Block 4
Block 4
Block 4
Block 4

delegate vs callbacks

https://medium.cobeisfresh.com/why-you-shouldn-t-use-delegates-in-swift-7ef808a7f16b

The difference between delegates and callbacks is that

with delegates, the NetworkService is telling the delegate “something has changed.”

We declare a protocol that says: whatever object conforms to it must implement func didCompleteRequest(result: String).
This is so that we can pass the result String to that object.

Hence, we have a NetworkService object with a delegate reference to some object A that conforms to NetworkServiceDelegate. This means that object A implements
func didCompleteRequest(result: String).

That way, whenever something is fetched from a URL, we can call on the delegate (the reference to object A) and pass the result String via the protocol method
didCompleteRequest.

Hence, the delegate (the object that conforms to the protocol) is notified of the change
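Sketched out, the delegate version looks something like this (the names follow the article’s example; the details, including the plain viewDidLoad, are approximate):

protocol NetworkServiceDelegate: class {
    func didCompleteRequest(result: String)
}

class NetworkService {
    weak var delegate: NetworkServiceDelegate?

    func fetchDataFromUrl(url: String) {
        // ... perform the request, then hand the result to whoever is listening
        delegate?.didCompleteRequest(result: "response for \(url)")
    }
}

class ViewController: NetworkServiceDelegate {
    let networkService = NetworkService()

    func viewDidLoad() {
        networkService.delegate = self
        networkService.fetchDataFromUrl(url: "http://www.google.com")
    }

    func didCompleteRequest(result: String) {
        print("Got result: \(result)")   // the delegate is told "something changed"
    }
}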

With callbacks, the delegate is observing the NetworkService.

It calls networkService.fetchDataFromUrl(url: "http://www.google.com") somewhere, then waits for the data to come back from fetchDataFromUrl and flow into the definition of onComplete declared in viewDidLoad.

callbacks in swift

https://medium.cobeisfresh.com/why-you-shouldn-t-use-delegates-in-swift-7ef808a7f16b

CallBackSwift demo

Using callbacks for delegation

Callbacks are similar in function to the delegate pattern. They do the same thing: letting other objects know when something happened, and passing data around.

What differentiates them from the delegate pattern, is that instead of passing a reference to yourself, you are passing a function. Functions are first class citizens in Swift, so there’s no reason why you wouldn’t have a property that is a function!
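Something like this (MyClass and myFunction are the names used in the next sentence; the body is illustrative):

class MyClass {
    // A property whose value is a function taking a String and returning nothing.
    var myFunction: ((String) -> Void)?

    func somethingHappened() {
        myFunction?("something happened")   // call it, if anyone has set it
    }
}

let object = MyClass()
object.myFunction = { message in print(message) }
object.somethingHappened()                  // prints "something happened"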

MyClass now has a myFunction property that it can call and anyone can set (since properties are internal by default in Swift). This is the basic idea of using callbacks instead of delegation. Here’s the same example as before but with callbacks instead of a delegate:

1) we declare a class called NetworkService

2) We create a property of function type, called onComplete, that takes a String parameter and returns Void. We make it optional so that we can take advantage of optional chaining when calling it: if it holds a function, the call happens; if it is nil, the call simply returns nil and will not crash.

3) We create a function called fetchDataFromUrl that simulates getting data from a server by using sleep. After 2 seconds, granted something came back, we call our callback function property. However! Note that onComplete defaults to nil, so if we call onComplete in fetchDataFromUrl before anything has been assigned, the optional chain hits nil and nothing happens. For something to happen, we need to assign the onComplete function. You can assign onComplete in NetworkService’s initializer or externally, since onComplete is accessible from outside the class.

a) Hence, usually, we will instantiate an object of our class. We have a reference to that object called service.
b) We then declare the callback definition for onComplete.
c) Finally, we call fetchDataFromUrl. After it runs through the function implementation, it calls the onComplete function as defined in b)
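Putting steps 1–3 and a)–c) together, a sketch (the sleep and the strings are placeholders):

import Foundation

class NetworkService {
    // 2) a property of optional function type: (String) -> Void, defaulted to nil
    var onComplete: ((String) -> Void)? = nil

    // 3) simulate fetching from a server, then invoke the callback via optional chaining
    func fetchDataFromUrl(url: String) {
        sleep(2)                               // pretend we are waiting on the network
        onComplete?("response for \(url)")     // does nothing while onComplete is still nil
    }
}

// a) instantiate the service
let service = NetworkService()

// b) define the callback
service.onComplete = { result in
    print("Request completed with: \(result)")
}

// c) kick off the request; when it finishes, it calls the closure defined in b)
service.fetchDataFromUrl(url: "http://www.google.com")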

Another way to use callbacks – Data has changed!

1) As always, declare your callback. This time, the function type takes an array of String as its parameter and returns Void. Let’s call this property onUsernamesChanged.

2) Implement an init for our class and define the callback there. Since our callback’s function type takes an array of String, we use names as the identifier for that parameter.

3) Then, we simply use it in didSet.

Thus, another great way to use callbacks is when you want to be notified that data has changed.
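A sketch of this pattern (the surrounding class and values are made up for illustration):

class UserService {
    // 1) a callback that takes an array of String and returns Void
    var onUsernamesChanged: (([String]) -> Void)? = nil

    // 3) fire the callback whenever the data changes
    var usernames: [String] = [] {
        didSet {
            onUsernamesChanged?(usernames)
        }
    }

    // 2) define the callback in the initializer, using `names` for the parameter
    init() {
        onUsernamesChanged = { names in
            print("usernames are now: \(names)")
        }
    }
}

let userService = UserService()
userService.usernames = ["ricky", "alice"]   // prints: usernames are now: ["ricky", "alice"]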

Full Source Code

So why are callbacks better?

Decoupling

Delegates lead to pretty decoupled code. It doesn’t matter to the NetworkService who its delegate is, as long as they implement the protocol.

However, the delegate has to implement the protocol, and if you’re using Swift instead of @objc protocols, the delegate has to implement every method in the protocol. (since there’s no optional protocol conformance)

When using callbacks, on the other hand, the NetworkService doesn’t even need to have a delegate object to call methods on, nor does it know anything about who’s implementing those methods. All it cares about is when to call those methods. Also, not all of the methods need to be implemented.

Multiple delegation

What if you want to notify a ViewController when a request finishes, but maybe also some sort of logger class and some sort of analytics class?
With delegates, you would have to have an array of delegates, or three different delegate properties that might even have different protocols! (I’ll be the first to admit I’ve done this.)

With callbacks, however, you could define an array of functions (I love Swift) and call each of those when something’s done. There’s no need to have a bunch of different objects and protocols risking retain cycles and writing boilerplate code.

Clearer separation of concerns

The way I see the difference between delegates and callbacks is that with delegates, the NetworkService is telling the delegate “Hey, I’ve changed.” With callbacks, the delegate is observing the NetworkService.
In reality, the difference is minimal, but thinking in the latter way helps prevent anti-patterns often found with delegation, like making the NetworkService transform results for presentation, which should not be its job!

Easier testing!
Ever felt like your codebase is twice as big with unit tests, because you have to mock every protocol, including all of the delegates in your app?
With callbacks, not only do you not have to mock any delegates, but you can use whatever callback you want in each test!
In one test, you might test if the callback gets called, then in another you might test if it’s called with the right results. And none of those require a complicated mocked delegate with someFuncDidGetCalled booleans and similar properties.

Optional Types

http://lithium3141.com/blog/2014/06/19/learning-swift-optional-types/

“But wait,” you say, “Int is a value type, not an object! How can I use nil for a value?…”

Well, you’re right. NSInteger didn’t have a nil value (or, rather, using nil with the right coercion would get you an integer with a value of 0).

Instead, we defined a ton of marker values that meant “no value”: 0, -1, NSIntegerMin, NSIntegerMax, and NSNotFound all mean “nothing” in some API.

When you stop to think about it, this is really a limitation: by not having a consistent, defined way of saying no integer value, we’re layering a tiny bit of additional complexity around any use of such a value, then attempting to paper over it with documentation. Want to find an object in an array? Well, if that object doesn’t exist, you get NSNotFound – but if you try to find a nonexistent row in a table view, you get -1 instead.

Swift defines a new type called Optional that always has exactly one of two values: a defined “nothing” value called None, or a wrapped-up value of some other type T.

It’s as if Swift can take regular values and place them inside a box, which may or may not be empty:
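Reconstructing that example (the values are illustrative):

let first: Int = 5       // a plain Int
let second: Int? = 5     // an Optional<Int> with a value in the box
let third: Int? = nil    // an Optional<Int> whose box is empty (None)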

In this example, the first integer is a plain Int type.

The second and third, though, are both of type Optional<Int> – or, for short, Int?.

Notice that the third value here is actually an “empty box” (the None value), even though its type is Int?.

This ability, to pass around None anywhere an optional type can go, is how Swift can provide things like nil for value types like Int (or, for that matter, any type, whether value or reference). Since this value will have the same type as a “real” value wrapped up in Optional, they can both be represented in the same variable without trying to rely on special values to stand in for the concept of “no value.”

Given an optional like these, we need some way of getting at the value inside the box – and, for that matter, checking whether such a value exists at all! Thankfully, Swift has us covered with the ! operator, which extracts the value out of an optional:
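For example (illustrative):

let second: Int? = 5
print(second! + 1)       // 6 — the ! operator pulls the Int out of the box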

This works great for optionals that have a value. But what about those that don’t?

The ! operator only applies to optionals that have an actual value inside them. If your optional is nil (an alias for .None), it can’t be unwrapped, and force-unwrapping it triggers a runtime error.

Let’s make our code a bit smarter. Instead of unconditionally unwrapping our optional value, we can check whether the value is nil first – much like we might have done in Objective-C.
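Something like this (a sketch of the “check for nil, then force-unwrap” approach):

let maybeValue: Int? = nil

if maybeValue != nil {
    print(maybeValue!)   // only force-unwrap after we've checked for nil
}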

That is not a very good way of checking whether the value is nil, though.

Swift has us covered here too, with a syntax called optional binding. By combining an if and a let statement, we can write a concise one-line check for a newly-bound variable that is only conjured into existence if there’s a real value to go along with it:
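For instance (illustrative):

let maybeValue: Int? = 5

if let value = maybeValue {
    print("Got \(value)")    // only runs when maybeValue actually holds a value
}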

Chaining – calling methods on those variables

Your program might have some custom classes – most do, after all – and you could want to call a method on a variable that might be an instance, or might be nil.
Developers can make use of optional chaining to call methods on potentially-nil objects:
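For instance (SomeClass and someMethod are stand-ins for whatever custom class you have):

class SomeClass {
    func someMethod() -> Int { return 42 }
}

let y: SomeClass? = nil
let z = y?.someMethod()   // z is Int?, and here it is nil, because y is nil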

By sticking a ? between the variable name and method call, we can indicate that we want either a real answer back (in the event that y is a valid instance) or another nil (in the case that y is itself nil).

Even though someMethod() is declared to return an Int, z gets type Optional<Int> because we used optional chaining to call the method.

This might seem like a hassle, but can actually be helpful, especially when combined with optional binding from above. If we stick with the same class definition, we can try something like this:
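A sketch, reusing the same stand-in class:

class SomeClass {
    func someMethod() -> Int { return 42 }
}

let y: SomeClass? = nil

if let z = y?.someMethod() {
    print("Got \(z)")                     // not reached here: y is nil, so the chain yields nil
} else {
    print("y was nil, no z to work with")
}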

This remains concise while still dealing with all the various concerns we might have:

If y is nil (as it is here), the optional chaining will still allow us to write this code without a type error.

If y is nil or someMethod() returns nil, the optional binding will catch that case and avoid giving us a nil value for non-optional z.

In the event we do get a z, we’re not required to hand-unwrap it because it’s optionally bound.

All in all, this is a pretty clean system for passing around nil values for just about any type. We get some extra type safety out of the deal, avoid using specially defined values, and can still be just as concise as Objective-C – if not more!

Rough Edges

Unary ? operator – Not valid for Swift 3.

It used to be valid Swift to take an optional variable and throw a ? at the end (this is no longer allowed in Swift 3 and later). However, unlike the unwrapping operator !, appending ? doesn’t actually affect the variable in any way: it is still optional.

Surrounding if checks will still look to see if the variable is nil, rather than evaluating its contained truth value (if any).

This can cause extra trouble when combined with an optional Bool.

Since the Optional type is defined using generics (it can wrap any other type in the language) it’s possible to construct an optional boolean variable. In fact, it’s virtually mandatory the language allow this: to special-case Bool to disallow optionals would be an exceptional change, requiring serious modifications to the language or the Optional type.

That does, however, lead to a way developers can construct a kind of three-state variable: an Optional Bool can be true, false, or nil. (What the latter means is rather context-dependent.) This can be very misleading, though, when combined with an if check: