4.4. Application: Programming with Sets*#

On a computer, all pieces of data are represented, ultimately, as strings of zeros and ones. At times, computers need to work with sets. How can sets be represented as strings of zeros and ones? And once we have represented sets on a computer, how do we program with them?

Important

In this section we go into detail about representing and computing with sets. You won’t be examined on this in Reasoning & Logic. However, since we discuss different numeric systems, you will find that there is again some overlap with your study materials from Computer Organisation.

4.4.1. Representing sets#

A set is determined by its elements. Given a set \(A\) and an entity \(x\), the fundamental question is, does \(x\) belong to \(A\) or not? If we know the answer to this question for each possible \(x\), then we know the set. For a given \(x\), the answer to the question, ‘Is \(x\) a member of \(A\)?’, is either yes or no. The answer can be encoded by letting 1 stand for yes and 0 stand for no. The answer, then, is a single bit, that is, a value that can be either zero or one. To represent the set \(A\) as a string of zeros and ones, we could use one bit for each possible member of \(A\). If a possible member \(x\) is in the set, then the corresponding bit has the value one. If \(x\) is not in the set, then the corresponding bit has the value zero.

Now, in cases where the number of possible elements of the set is very large or infinite, it is not practical to represent the set in this way. It would require too many bits, perhaps an infinite number. In such cases, some other representation for the set can be used. However, suppose we are only interested in subsets of some specified small set. Since this set plays the role of a universal set, let’s call it \(U\). To represent a subset of \(U\), we just need one bit for each member of \(U\). If the number of members of \(U\) is \(n\), then a subset of \(U\) is represented by a string of \(n\) zeros and ones. Furthermore, every string of \(n\) zeros and ones determines a subset of \(U\), namely that subset that contains exactly the elements of \(U\) that correspond to ones in the string. You know by now from Computer Organisation that a string of \(n\) zeros and ones is called an \(n\)-bit binary number. So, we see that if \(U\) is a set with \(n\) elements, then the subsets of \(U\) correspond to \(n\)-bit binary numbers.

To make things more definite, let \(U\) be the set \(\{0,1,2,... ,31\}\). This set consists of the 32 integers between 0 and 31, inclusive. Then each subset of \(U\) can be represented by a 32-bit binary number. We use 32 bits because most computer languages can work directly with 32-bit numbers. For example, the programming language Java has a data type named int. A value of type int is a 32-bit binary number.[1] Before we get a definite correspondence between subsets of \(U\) and 32-bit numbers, we have to decide which bit in the number will correspond to each member of \(U\). Following tradition, we assume that the bits are numbered from right to left. That is, the rightmost bit corresponds to the element 0 in \(U\), the second bit from the right corresponds to 1, the third bit from the right to 2, and so on. For example, the 32-bit number

\[10000000000000000000001001110110\]

corresponds to the subset \(\{1,2,4,5,6,9,31\}\). Since the leftmost bit of the number is 1, the number 31 is in the set; since the next bit is 0, the number 30 is not in the set; and so on.
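This correspondence between set members and bit positions is easy to compute. The following sketch (the class and method names here are our own, purely for illustration) builds the 32-bit number for a subset of \(\{0,1,2,... ,31\}\) by turning on one bit per member:

```java
public class BitSetDemo {
    // Build the bitmask for a subset of {0,1,...,31}: set bit k for each member k.
    static int toMask(int[] elements) {
        int mask = 0;
        for (int k : elements) {
            mask |= 1 << k;   // turn on the bit in position k
        }
        return mask;
    }

    public static void main(String[] args) {
        int mask = toMask(new int[] {1, 2, 4, 5, 6, 9, 31});
        // Prints the 32-bit number from the text: 10000000000000000000001001110110
        System.out.println(Integer.toBinaryString(mask));
    }
}
```

Because bit 31 is set, all 32 digits are printed; for smaller subsets, `Integer.toBinaryString` omits the leading zeros.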

From now on, we will write binary numbers with a subscript of 2 to avoid confusion with ordinary numbers. Further, we will often leave out leading zeros. For example, 1101\(_2\) is the binary number that would be written out in full as

\[00000000000000000000000000001101\]

and which corresponds to the set \(\{0,2,3\}\). On the other hand 1101 represents the ordinary number one thousand one hundred and one.

Table 4.3 The 16 hexadecimal digits and the corresponding binary numbers. Each hexadecimal digit corresponds to a 4-bit binary number. Longer binary numbers can be written using two or more hexadecimal digits. For example, \(101000011111_2 = 0xA1F\).#

| Hex. | Binary | Hex. | Binary |
|------|--------|------|--------|
| 0 | \(0000_2\) | 8 | \(1000_2\) |
| 1 | \(0001_2\) | 9 | \(1001_2\) |
| 2 | \(0010_2\) | A | \(1010_2\) |
| 3 | \(0011_2\) | B | \(1011_2\) |
| 4 | \(0100_2\) | C | \(1100_2\) |
| 5 | \(0101_2\) | D | \(1101_2\) |
| 6 | \(0110_2\) | E | \(1110_2\) |
| 7 | \(0111_2\) | F | \(1111_2\) |

Even with this notation, it can be very annoying to write out long binary numbers—and almost impossible to read them. So binary numbers are never written out as sequences of zeros and ones in computer programs. An alternative is to use hexadecimal numbers. Hexadecimal numbers are written using the sixteen symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. These symbols are known as the hexadecimal digits. Each hexadecimal digit corresponds to a 4-bit binary number, as shown in Table 4.3. To represent a longer binary number, several hexadecimal digits can be strung together. For example, the hexadecimal number C7 represents the binary number 11000111\(_2\). In Java and many related languages, a hexadecimal number is written with the prefix 0x. Thus, the hexadecimal number C7 would appear in the program as 0xC7. We’ll follow the same convention here. Any 32-bit binary number can be written using eight hexadecimal digits (or fewer if leading zeros are omitted). Thus, subsets of \(\{0,1,2,... ,31\}\) correspond to 8-digit hexadecimal numbers. For example, the subset \(\{1,2,4,5,6,9,31\}\) corresponds to 0x80000276, which represents the binary number 10000000000000000000001001110110\(_2\). Similarly, 0xFF corresponds to \(\{0,1,2,3,4,5,6,7\}\) and 0x1101 corresponds to the binary number 0001000100000001\(_2\) and to the set \(\{0,8,12\}\).
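You can check these hexadecimal-binary correspondences with Java’s standard-library conversion methods:

```java
public class HexDemo {
    public static void main(String[] args) {
        // 0xC7 written out in binary:
        System.out.println(Integer.toBinaryString(0xC7));    // 11000111
        // The subset {1,2,4,5,6,9,31} as an 8-digit hexadecimal number:
        System.out.println(Integer.toHexString(0x80000276)); // 80000276
        // 0x1101 in binary (leading zeros are omitted in the output):
        System.out.println(Integer.toBinaryString(0x1101));  // 1000100000001
    }
}
```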

Now, if you have worked with binary numbers or with hexadecimal numbers, you know that they have another, more common interpretation. They represent ordinary integers. Just as 342 represents the integer \(3\cdot 10^2 + 4\cdot 10^1 +2\cdot 10^0\), the binary number 1101\(_2\) represents the integer \(1\cdot 2^3 +1\cdot 2^2 +0\cdot 2^1 +1\cdot 2^0\), or 13. When used in this way, binary numbers are known as base-2 numbers, just as ordinary numbers are base-10 numbers. Hexadecimal numbers can be interpreted as base-16 numbers. For example, 0x3C7 represents the integer \(3\cdot 16^2 + 12\cdot 16^1 + 7\cdot 16^0\), or 967. So, does 1101\(_2\) really represent the integer 13, or does it represent the set \(\{0,2,3\}\,\)? The answer is that to a person, 1101\(_2\) can represent either. Both are valid interpretations, and the only real question is which interpretation is useful in a given circumstance. On the other hand, to the computer, 1101\(_2\) doesn’t represent anything. It’s just a string of bits, and the computer manipulates the bits according to its program, without regard to their interpretation.
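The integer interpretation of these notations can be verified with `Integer.parseInt`, which takes the base (radix) as its second argument:

```java
public class BaseDemo {
    public static void main(String[] args) {
        // "1101" read in base 2: 1*8 + 1*4 + 0*2 + 1*1 = 13
        System.out.println(Integer.parseInt("1101", 2));  // 13
        // "3C7" read in base 16: 3*256 + 12*16 + 7 = 967
        System.out.println(Integer.parseInt("3C7", 16));  // 967
        // The same digits read in base 10 mean something else entirely:
        System.out.println(Integer.parseInt("1101", 10)); // 1101
    }
}
```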

Of course, we still have to answer the question of whether it is ever useful to interpret strings of bits in a computer as representing sets.

4.4.2. Computing with sets#

If all we could do with sets were to ‘represent’ them, it wouldn’t be very useful. We need to be able to compute with sets. That is, we need to be able to perform set operations such as union and complement.

Many programming languages provide operators that perform set operations. In Java and related languages, the operators that perform union, intersection, and complement are written as \(\,|\,\), \(\&\), and \(\sim\). For example, if \(x\) and \(y\) are 32-bit integers representing two subsets, \(X\) and \(Y\), of \(\{0,1,2,... ,31\}\), then \(x\,|\,y\) is a 32-bit integer that represents the set \(X\cup Y\). Similarly, \(x\,\&\,y\) represents the set \(X\cap Y\), and \(\sim\)\(x\) represents the complement, \(\overline{X}\).
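As a small Java sketch of these operators (the variable names are our own), take \(X=\{0,2,3\}\) and \(Y=\{1,2\}\):

```java
public class SetOpsDemo {
    public static void main(String[] args) {
        int x = 0b1101;  // represents the set X = {0, 2, 3}
        int y = 0b0110;  // represents the set Y = {1, 2}

        System.out.println(Integer.toBinaryString(x | y)); // 1111 = {0,1,2,3}, the union
        System.out.println(Integer.toBinaryString(x & y)); // 100  = {2}, the intersection
        System.out.println(Integer.toHexString(~x));       // fffffff2, the complement of X
    }
}
```

Note that the complement is taken over all 32 bits of the int, which is why \(\sim x\) prints as an 8-digit hexadecimal number.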

The operators \(\,|\,\), &, and \(\sim\) are called bitwise logical operators because of the way they operate on the individual bits of the numbers to which they are applied. If 0 and 1 are interpreted as the logical values false and true, then the bitwise logical operators perform the logical operations \(\vee\), \(\wedge\), and \(\lnot\) on individual bits. To see why this is true, let’s look at the computations that these operators have to perform.

Let \(k\) be one of the members of \(\{0,1,2,... ,31\}\). In the binary numbers \(x\), \(y\), \(x\,|\,y\), \(x\,\&\,y\), and \(\sim x\), the number \(k\) corresponds to the bit in position \(k\). That is, \(k\) is in the set represented by a binary number if and only if the bit in position \(k\) in that binary number is 1. Considered as sets, \(x\,\&\,y\) is the intersection of \(x\) and \(y\), so \(k\) is a member of the set represented by \(x\,\&\,y\) if and only if \(k\) is a member of both of the sets represented by \(x\) and \(y\). That is, bit \(k\) is 1 in the binary number \(x\,\&\,y\) if and only if bit \(k\) is 1 in \(x\) and bit \(k\) is 1 in \(y\). When we interpret 1 as true and 0 as false, we see that bit \(k\) of \(x\,\&\,y\) is computed by applying the logical ‘and’ operator, \(\wedge\), to bit \(k\) of \(x\) and bit \(k\) of \(y\). Similarly, bit \(k\) of \(x\,|\,y\) is computed by applying the logical ‘or’ operator, \(\vee\), to bit \(k\) of \(x\) and bit \(k\) of \(y\). And bit \(k\) of \(\sim x\) is computed by applying the logical ‘not’ operator, \(\lnot\), to bit \(k\) of \(x\). In each case, the logical operator is applied to each bit position separately. (Of course, this discussion is just a translation to the language of bits of the definitions of the set operations in terms of logical operators:
\(A\cap B=\{x| x\in A \wedge x\in B\}\), \(A\cup B=\{x| x\in A \vee x\in B\}\), and \(\overline{A}=\{x\in U| \lnot(x\in A)\}\).)

For example, consider the binary numbers 1011010\(_2\) and 10111\(_2\), which represent the sets \(\{1,3,4,6\}\) and \(\{0,1,2,4\}\). Then 1011010\(_2\) \(\&\) 10111\(_2\) is 10010\(_2\). This binary number represents the set \(\{1,4\}\), which is the intersection \(\{1,3,4,6\}\cap\{0,1,2,4\}\). It’s easier to see what’s going on if we write out the computation in columns, the way you probably first learned to do addition:

\[\begin{array}{rll}
   & 1\,0\,1\,1\,0\,1\,0 & \{\,6,\,4,\,3,\,1\,\} \\
\& & 0\,0\,1\,0\,1\,1\,1 & \{\,4,\,2,\,1,\,0\,\} \\
\hline
   & 0\,0\,1\,0\,0\,1\,0 & \{\,4,\,1\,\}
\end{array}\]

Note that in each column in the binary numbers, the bit in the bottom row is computed as the logical ‘and’ of the two bits that lie above it in the column. We have written out the sets that correspond to the binary numbers to show how the bits in the numbers correspond to the presence or absence of elements in the sets. Similarly, we can see how the union of two sets is computed as a bitwise ‘or’ of the corresponding binary numbers.

\[\begin{array}{rll}
  & 1\,0\,1\,1\,0\,1\,0 & \{\,6,\,4,\,3,\,1\,\} \\
| & 0\,0\,1\,0\,1\,1\,1 & \{\,4,\,2,\,1,\,0\,\} \\
\hline
  & 1\,0\,1\,1\,1\,1\,1 & \{\,6,\,4,\,3,\,2,\,1,\,0\,\}
\end{array}\]

The complement of a set is computed using a bitwise ‘not’ operation. Since we are working with 32-bit binary numbers, the complement is taken with respect to the universal set \(\{0,1,2,... ,31\}\). So, for example, \(\sim\)1011010\(_2\) = 11111111111111111111111110100101\(_2\). Of course, we can apply the operators \(\&\), \(\,|\,\), and \(\sim\) to numbers written in hexadecimal form, or even in ordinary, base-10 form. When doing such calculations by hand, it is probably best to translate the numbers into binary form. For example,

\[\begin{split}\begin{align*} 0xAB7\ \&\ 0x168E &= 1010\,1011\,0111_2\ \&\ 1\,0110\,1000\,1110_2\\ &= 0\,0010\,1000\,0110_2\\ &= 0x286 \end{align*}\end{split}\]
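Both the complement above and this by-hand hexadecimal computation can be checked directly in Java:

```java
public class HexCalcDemo {
    public static void main(String[] args) {
        // The worked example: 0xAB7 & 0x168E = 0x286
        System.out.println(Integer.toHexString(0xAB7 & 0x168E));
        // prints 286

        // The complement is taken with respect to all 32 bits:
        System.out.println(Integer.toBinaryString(~0b1011010));
        // prints 11111111111111111111111110100101
    }
}
```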

When computing with sets, it is sometimes necessary to work with individual elements. Typical operations include adding an element to a set, removing an element from a set, and testing whether an element is in a set. However, instead of working with an element itself, it’s convenient to work with the set that contains that element as its only member. For example, testing whether \(5\in A\) is the same as testing whether \(\{5\}\cap A\not=\emptyset\). The set \(\{5\}\) is represented by the binary number 100000\(_2\) or by the hexadecimal number 0x20. Suppose that the set \(A\) is represented by the number \(x\). Then, testing whether \(5\in A\) is equivalent to testing whether 0x20 \(\&\) \(x\not=0\). Similarly, the set \(A\cup\{5\}\), which is obtained by adding 5 to \(A\), can be computed as \(x\) \(|\) 0x20. The set \(A\smallsetminus \{5\}\), which is the set obtained by removing 5 from \(A\) if it occurs in \(A\), is represented by \(x\) \(\&\) \(\sim\)0x20. The sets \(\{0\}\), \(\{1\}\), \(\{2\}\), \(\{3\}\), \(\{4\}\), \(\{5\}\), \(\{6\}\), … , \(\{31\}\) are represented by the hexadecimal numbers 0x1, 0x2, 0x4, 0x8, 0x10, 0x20, … , 0x80000000. In typical computer applications, some of these numbers are given names, and these names are thought of as names for the possible elements of a set (although, properly speaking, they are names for sets containing those elements). Suppose, for example, that \(a\), \(b\), \(c\), and \(d\) are names for four of the numbers from the above list. Then \(a\,|\,c\) is the set that contains the two elements corresponding to the numbers \(a\) and \(c\). If \(x\) is a set, then \(x\,\&\,\sim d\) is the set obtained by removing \(d\) from \(x\). And we can test whether \(b\) is in \(x\) by testing if \(x\,\&\,b\not=0\).
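These element-level operations can be sketched in Java as follows (the variable names are ours; any bit position would work the same way):

```java
public class ElementDemo {
    public static void main(String[] args) {
        int a = 0b1101;     // the set A = {0, 2, 3}
        int five = 1 << 5;  // the singleton set {5}, i.e. 0x20

        boolean contains5 = (a & five) != 0;   // test whether 5 is in A
        int withFive      = a | five;          // A union {5} = {0, 2, 3, 5}
        int withoutFive   = withFive & ~five;  // remove 5 again

        System.out.println(contains5);                        // false
        System.out.println(Integer.toBinaryString(withFive)); // 101101
        System.out.println(withoutFive == a);                 // true
    }
}
```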

Here is an actual example, which is used in the Apple Mac operating systems (macOS). Characters can be printed or displayed on the screen in various sizes and styles. A font is a collection of pictures of characters in a particular size and style. On the Mac, a basic font can be modified by specifying any of the following style attributes: bold, italic, underline, outline, shadow, condense, and extend. The style of a font is a subset of this set of attributes. A style set can be specified by or-ing together individual attributes. For example, an underlined, bold, italic font has style set underline \(|\) bold \(|\) italic. For a plain font, with none of the style attributes set, the style set is the empty set, which is represented by the number zero.

The Java programming language uses a similar scheme to specify style attributes for fonts, but currently there are only two basic attributes, Font.BOLD and Font.ITALIC. A more interesting example in Java is provided by event types. An event in Java represents some kind of user action, such as pressing a key on the keyboard. Events are associated with ‘components’ such as windows, buttons and scroll bars. Components can be set to ignore a given type of event. We then say that that event type is disabled for that component. If a component is set to process events of a given type, then that event type is said to be enabled. Each component keeps track of the set of event types that are currently enabled. It will ignore any event whose type is not in that set. Each event type has an associated constant with a name such as AWTEvent.MOUSE_EVENT_MASK. These constants represent the possible elements of a set of event types. A set of event types can be specified by or-ing together a number of such constants. If \(c\) is a component and \(x\) is a number representing a set of event types, then the command ‘\(c\).enableEvents(\(x\))’ enables the events in the set \(x\) for the component \(c\). If \(y\) represents the set of event types that were already enabled for \(c\), then the effect of this command is to replace \(y\) with the union, \(y\,|\,x\). Another command, ‘\(c\).disableEvents(\(x\))’, will disable the event types in \(x\) for the component \(c\). It does this by replacing the current set, \(y\), with \(y\,\&\,\sim x\).
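The bookkeeping described above can be sketched in a few lines. Note that this is our own illustrative class, not the real java.awt.Component implementation, and the mask constants here are hypothetical; only the set arithmetic is the point:

```java
// A minimal sketch of enable/disable bookkeeping for event masks.
public class EventMaskSketch {
    static final long KEY_EVENT_MASK   = 1L << 0;  // hypothetical mask constants,
    static final long MOUSE_EVENT_MASK = 1L << 1;  // one bit position each

    private long enabledEvents = 0;  // the set of enabled event types (empty set)

    void enableEvents(long x)  { enabledEvents = enabledEvents | x; }   // y := y | x
    void disableEvents(long x) { enabledEvents = enabledEvents & ~x; }  // y := y & ~x

    boolean isEnabled(long mask) { return (enabledEvents & mask) != 0; }

    public static void main(String[] args) {
        EventMaskSketch c = new EventMaskSketch();
        c.enableEvents(KEY_EVENT_MASK | MOUSE_EVENT_MASK);
        c.disableEvents(MOUSE_EVENT_MASK);
        System.out.println(c.isEnabled(KEY_EVENT_MASK));   // true
        System.out.println(c.isEnabled(MOUSE_EVENT_MASK)); // false
    }
}
```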

4.4.3. Exercises#