Outline of Boolean Algebra

Dr. Robert K. Moniot
January 13, 2011

1 Introduction

Boolean algebra is suited to the design of digital circuits, which represent information using physical quantities that take on only two distinct values. Usually the physical quantity is the low or high voltage of an output of a circuit, but it is also possible to use other quantities such as the off or on state of a transistor, or the north or south orientation of a magnetic domain. The two physical states are assigned the binary values 0 and 1. We shall see that binary numbers and boolean algebra provide a complete information-processing framework:

• Representation: any information that can be represented in symbolic form can be represented using binary numbers, and

• Processing: any information-processing task that can be specified in terms of transformations of binary numbers can be implemented as a boolean logic function.

In what follows, we will always equate the binary value 0 to the logical value false, and the binary value 1 to the logical value true. When discussing the representation of information, the binary (numerical) significance will generally be more useful, whereas when discussing the processing of information, the boolean (logical) significance will be used.

When a circuit is to be implemented in hardware, an assignment must be made to map between physical and binary values. This assignment is arbitrary, and can be made in two equally valid ways. We refer to the two voltage levels in a circuit as Low and High, according to their relative values. In TTL circuits, for example, logic low is 0 volts while logic high is 5 volts. The comparison is algebraic, not based on magnitude: in ECL circuits, for example, logic low is about −2 volts, while logic high is about −1 volt. Whatever the actual voltage values may be, the two different assignment schemes are

• Positive Logic: Low = 0, High = 1.

• Negative Logic: High = 0, Low = 1.

Regardless of which of these assignments is used, a signal is said to be asserted if it is true (binary 1), and deasserted if it is false (binary 0).

2 Boolean Algebra

We now define boolean algebra as used in digital logic design, and describe methods for implementing a desired function using logic gates.

2.1 Laws and properties

Boolean variables are represented with letters as in normal algebra. They can take on the values 0 or 1. There are three fundamental boolean operations:

• AND, written A · B or AB,

• OR, written A + B, and

• NOT, written with an overbar (Ā) or a prime (A′); we use the prime form A′ in what follows.

(This notation follows common electrical engineering conventions. In mathematical logic the notation that is often used is AND: A ∧ B; OR: A ∨ B; NOT: ¬A.) These can be defined by truth tables, which list the result of the operation for each of the different possible combinations of inputs:

A B | A·B | A+B        A | A′
0 0 |  0  |  0         0 | 1
0 1 |  0  |  1         1 | 0
1 0 |  0  |  1
1 1 |  1  |  1

It is easy to verify that these operations obey the following laws:

Identities:    A · 1 = A                          A + 0 = A
Dominance:     A · 0 = 0                          A + 1 = 1
Inverses:      A · A′ = 0                         A + A′ = 1
Involution:    (A′)′ = A
Idempotent:    A · A = A                          A + A = A
Commutative:   A · B = B · A                      A + B = B + A
Associative:   A · (B · C) = (A · B) · C          A + (B + C) = (A + B) + C
Distributive:  A · (B + C) = (A · B) + (A · C)    A + (B · C) = (A + B) · (A + C)
DeMorgan’s:    (A · B)′ = A′ + B′                 (A + B)′ = A′ · B′

Observe that these laws obey the principle of duality: interchanging 0 and 1, and simultaneously interchanging + with ·, leaves the table of laws unchanged, except that the columns are swapped. Likewise the truth tables for + and · are simply interchanged. Thus any statement that is valid for a boolean expression is valid also for its dual. Example: x + x · y = x · 1 + x · y = x · (1 + y) = x, so duality says that x · (x + y) = x. (This example shows that when forming the dual of an expression you may need to insert parentheses to maintain the order of evaluation of sub-expressions.)

Truth tables can be used to prove these and other laws. The method is to write down the truth table for each side of the equation, and show that the columns corresponding to each side are identical. Example: Proving that x + x · y = x:

x y | x·y | x + x·y
0 0 |  0  |    0
0 1 |  0  |    0
1 0 |  0  |    1
1 1 |  1  |    1

Here the first column (x) and the last column (x + x · y) are identical, so the equation holds.
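The exhaustive check that a truth-table proof performs is easy to mechanize. The following short Python sketch (our own illustration; the outline itself contains no code, and the helper name is invented) verifies the law x + x·y = x and one of DeMorgan's laws by enumerating every row:

```python
from itertools import product

def law_holds(law, n_vars):
    """Check that law(*bits) returns an equal (lhs, rhs) pair on every
    row of the n-variable truth table."""
    return all(lhs == rhs
               for bits in product([0, 1], repeat=n_vars)
               for lhs, rhs in [law(*bits)])

# x + x·y = x
print(law_holds(lambda x, y: (x | (x & y), x), 2))                    # True

# DeMorgan: (x + y)' = x'·y'
print(law_holds(lambda x, y: (1 - (x | y), (1 - x) & (1 - y)), 2))    # True
```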

2.2 Minterms and maxterms

Each row of a truth table can be associated with a minterm, which is a product (AND) of all variables in the function, in direct or complemented form. A minterm has the property that it is equal to 1 on exactly one row of the truth table. Here is the three-variable truth table and the corresponding minterms:

A B C | minterm
0 0 0 | A′ · B′ · C′ = m0
0 0 1 | A′ · B′ · C  = m1
0 1 0 | A′ · B · C′  = m2
0 1 1 | A′ · B · C   = m3
1 0 0 | A · B′ · C′  = m4
1 0 1 | A · B′ · C   = m5
1 1 0 | A · B · C′   = m6
1 1 1 | A · B · C    = m7

The subscript on the minterm is the number of the row on which it equals 1. (The row numbers are obtained by reading the values of the variables on that row as a binary number.) Minterms provide a way to represent any boolean function algebraically, once its truth table is specified. The function is given by the sum (OR) of those minterms corresponding to rows where the function is 1. By the minterm property, the OR will contain a term equal to 1 (making the function 1) on exactly those rows where the function is supposed to be 1. Example: suppose a function F is defined by the following truth table:

A B C | F
0 0 0 | 0
0 0 1 | 1
0 1 0 | 1
0 1 1 | 0
1 0 0 | 1
1 0 1 | 0
1 1 0 | 0
1 1 1 | 1

Since F = 1 on rows 1, 2, 4, and 7, we obtain

F = m1 + m2 + m4 + m7 = A′·B′·C + A′·B·C′ + A·B′·C′ + A·B·C

A compact notation is to write only the numbers of the minterms included in F, using the Greek letter capital sigma to indicate a sum:

F = Σ(1, 2, 4, 7)

This form can be written down immediately by inspection of the truth table. The foregoing proves that once we have specified a boolean function by means of its truth table, we are (in principle) able to implement it by means of logic gates that perform the AND, OR, and NOT functions.
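The Σ list can also be generated mechanically from any way of evaluating the function. As an informal Python sketch (the helper name is our own), the example function F above yields Σ(1, 2, 4, 7):

```python
from itertools import product

def sigma_form(f, n_vars):
    """Row numbers on which the boolean function f equals 1 (its Sigma form)."""
    rows = product([0, 1], repeat=n_vars)        # rows 0, 1, ..., 2**n_vars - 1
    return [i for i, bits in enumerate(rows) if f(*bits)]

# F written directly from its sum of minterms: A'B'C + A'BC' + AB'C' + ABC
def F(A, B, C):
    return ((1-A) & (1-B) & C) | ((1-A) & B & (1-C)) | (A & (1-B) & (1-C)) | (A & B & C)

print(sigma_form(F, 3))   # [1, 2, 4, 7]
```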

Equivalence of two functions: two boolean expressions represent the same function if their truth tables are identical. In Σ form they will be the same.

Each row of a truth table is also associated with a maxterm, which is a sum (OR) of all the variables in the function, in direct or complemented form. A maxterm has the property that it is equal to 0 on exactly one row of the truth table. Here is the three-variable truth table and the corresponding maxterms:

A B C | maxterm
0 0 0 | A + B + C    = M0
0 0 1 | A + B + C′   = M1
0 1 0 | A + B′ + C   = M2
0 1 1 | A + B′ + C′  = M3
1 0 0 | A′ + B + C   = M4
1 0 1 | A′ + B + C′  = M5
1 1 0 | A′ + B′ + C  = M6
1 1 1 | A′ + B′ + C′ = M7

Like minterms, maxterms also provide a way to represent any boolean function algebraically once its truth table is specified. The function is given by the product (AND) of those maxterms corresponding to rows where the function is 0. By the maxterm property, the AND will contain a term equal to 0 (making the function 0) on exactly those rows where the function is supposed to be 0. Example: for the same function as previously, we observe that it is 0 on rows 0, 3, 5, and 6. So

F = M0 · M3 · M5 · M6 = (A + B + C) · (A + B′ + C′) · (A′ + B + C′) · (A′ + B′ + C)

This form also lends itself to a compact notation: using the Greek letter capital pi to denote a product, we write only the numbers of the maxterms included in F:

F = Π(0, 3, 5, 6)

Two boolean functions are equivalent if their Π forms are the same. The Σ and Π notational forms for a given function are related: each form contains all the row numbers omitted in the other form.
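Because the Π form lists exactly the rows omitted from the Σ form, converting between the two is just set complementation over the 2ⁿ row numbers. A small sketch of that relationship (a hypothetical helper, not from the outline):

```python
def sigma_to_pi(minterms, n_vars):
    """The maxterm (Pi) row numbers are the rows missing from the minterm (Sigma) list."""
    return sorted(set(range(2 ** n_vars)) - set(minterms))

print(sigma_to_pi([1, 2, 4, 7], 3))   # [0, 3, 5, 6], i.e. F = Pi(0, 3, 5, 6)
```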

2.3 Two-level forms

We now look at other ways to represent a boolean function by expressions. Two important ways are as a sum of products (SOP form), or as a product of sums (POS form). Both of these are called two-level forms because the corresponding logic circuits consist of two layers of gates: the first layer to combine the variables by AND or OR into products or sums respectively, and the second layer to combine those terms by OR or AND to produce the function. The sum-of-minterms and product-of-maxterms forms are special cases of two-level forms, in which each term contains all the variables of the function. For general SOP or POS forms, each term need not contain every variable. Example:

• F(A, B, C) = A·B·C + A·B′ + B′·C′ is in SOP form.

• G(x, y, z) = x · (x′ + z) · (x′ + y′ + z′) is in POS form.

• H(A, B, C, D) = A · (B + C·D′) is not a two-level form.

A function which is not in two-level form can be converted to two-level form by using the distributive law. Example: The function H in the previous example can be converted to SOP form using the distributive law to take A inside: H = A·B + A·C·D′. It can alternatively be converted to POS form by using the distributive law of + over · to expand the inner term into two sums: H = A · (B + C) · (B + D′).

A function which is in SOP form can be converted to sum-of-minterms form by using the law of inverses and the distributive law to expand each product term into a group of minterms. The idempotent law is then used to eliminate duplicate minterms that may be generated from different product terms. Example: the function F in the preceding example can be converted to sum-of-minterms form as follows:

F = A·B·C + A·B′ + B′·C′
  = A·B·C + A·B′·(C + C′) + (A + A′)·B′·C′
  = A·B·C + A·B′·C + A·B′·C′ + A·B′·C′ + A′·B′·C′
  = A·B·C + A·B′·C + A·B′·C′ + A′·B′·C′
  = Σ(0, 4, 5, 7)
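Instead of expanding algebraically, the same Σ form can be read off by evaluating the SOP expression on every row of the truth table, as in this informal Python sketch (the lambda simply encodes F = A·B·C + A·B′ + B′·C′):

```python
from itertools import product

F = lambda A, B, C: (A & B & C) | (A & (1 - B)) | ((1 - B) & (1 - C))

sigma = [i for i, (A, B, C) in enumerate(product([0, 1], repeat=3)) if F(A, B, C)]
print(sigma)   # [0, 4, 5, 7]
```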

Similarly, the function G can be converted to product-of-maxterms form:

G = x · (x′ + z) · (x′ + y′ + z′)
  = (x + y·y′ + z·z′) · (x′ + y·y′ + z) · (x′ + y′ + z′)
  = (x + y + z)(x + y + z′)(x + y′ + z)(x + y′ + z′)(x′ + y + z)(x′ + y′ + z)(x′ + y′ + z′)
  = Π(0, 1, 2, 3, 4, 6, 7)

It is worth noting that if a boolean expression is in POS form, its complement will be in SOP form (by DeMorgan’s law), and vice versa. Example: For the functions F and G above,

F  = A·B·C + A·B′ + B′·C′
F′ = (A·B·C)′ · (A·B′)′ · (B′·C′)′ = (A′ + B′ + C′) · (A′ + B) · (B + C)

G  = x · (x′ + z) · (x′ + y′ + z′)
G′ = x′ + (x′ + z)′ + (x′ + y′ + z′)′ = x′ + x·z′ + x·y·z

In general, the complement of a boolean expression is formed by taking the dual of the expression and complementing every variable.
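The complement computed above for F can be spot-checked exhaustively; this is again an informal sketch of ours, not part of the original outline:

```python
from itertools import product

F      = lambda A, B, C: (A & B & C) | (A & (1 - B)) | ((1 - B) & (1 - C))
F_comp = lambda A, B, C: ((1-A) | (1-B) | (1-C)) & ((1-A) | B) & (B | C)

# F_comp should equal NOT F on every row of the truth table.
print(all(F_comp(*bits) == 1 - F(*bits) for bits in product([0, 1], repeat=3)))  # True
```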

2.4 Other useful functions

Besides the familiar AND, OR, and NOT, there are some other boolean functions that are useful enough to be given their own names. The first pair of these are the exclusive-OR (XOR), symbolized by the operator ⊕, and its complement, equivalence (EQV), symbolized by ≡. These are defined by the following truth tables:

A B | A⊕B | A≡B
0 0 |  0  |  1
0 1 |  1  |  0
1 0 |  1  |  0
1 1 |  0  |  1

These can be easily remembered by the rules that XOR is true when one or the other but not both of the variables is true, while EQV is true if the two variables have the same truth values. These two functions turn out to be useful in building adder and subtractor units, and in parity generation and checking, as well as other applications.

Another pair of useful functions are the complement of AND, called NAND, and the complement of OR, called NOR. They are defined by the following truth tables:

A B | (A·B)′ | (A+B)′
0 0 |   1    |   1
0 1 |   1    |   0
1 0 |   1    |   0
1 1 |   0    |   0

These two functions are called universal, since either one alone is capable of implementing any function. The proof is simple, and involves showing that A′, A + B, and A·B can be generated by using only the NAND operation, or only the NOR. This proof is left as an exercise.
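As a partial illustration of that exercise (our own sketch, using 0/1 integers for truth values), NOT, AND, and OR can each be built from the two-input NAND alone and checked exhaustively:

```python
from itertools import product

def nand(a, b):
    return 1 - (a & b)

def not_(a):      return nand(a, a)                     # A' = (A·A)'
def and_(a, b):   return nand(nand(a, b), nand(a, b))   # A·B = ((A·B)')'
def or_(a, b):    return nand(nand(a, a), nand(b, b))   # A+B = (A'·B')'  (DeMorgan)

print(all(not_(a) == 1 - a for a in (0, 1)))                               # True
print(all(and_(a, b) == (a & b) for a, b in product((0, 1), repeat=2)))    # True
print(all(or_(a, b) == (a | b) for a, b in product((0, 1), repeat=2)))     # True
```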

3 Logic Gates

The purpose of putting a boolean function into algebraic form is so that it can be implemented by a circuit of logic gates. In this section we look at how logic circuits are drawn, and see some practical considerations that lie outside the domain of boolean algebra.

3.1 Gate symbols

The logic gates for the commonly used boolean functions are drawn on a schematic diagram using the following symbols:

[Figure: schematic symbols for the common gates — Buffer (output A), Inverter (output A′), AND (output A·B), NAND (output (A·B)′), OR (output A + B), NOR (output (A + B)′), XOR (output A ⊕ B), and EQV (output A ≡ B).]

The buffer is not necessary from the point of view of performing logic functions, but it is often required in actual circuits to strengthen a signal. It is important to realize that the circuit diagrams include only the logic paths; the actual chips must also include connections for supplying power and ground to operate the gates. A “bubble” (the small circle on the output of the right-hand gates above) signifies inversion (complementing) of a logic signal. Bubbles can also be placed on the inputs to a gate to avoid the need for drawing an explicit inverter gate. Thus the following two circuits are equivalent:

[Figure: two equivalent circuits for the same function of A and B — one drawn with explicit inverter gates on the inputs, the other drawn with bubbles on the gate inputs.]

Gates can have more than two inputs. Because AND, OR, and XOR are associative, there is no ambiguity as to what function is produced. NAND and NOR are not associative. Therefore the output of the multi-input NAND is defined as the complement of the corresponding multi-input AND, and similarly for NOR and EQV. Example:

[Figure: a 3-input NAND gate, whose output (A·B·C)′ is the complement of the output A·B·C of the corresponding 3-input AND gate.]
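The multi-input NAND convention can be made concrete. In this short sketch (our own, not from the outline), the n-input NAND is defined as the complement of the n-input AND, and one row shows why simply chaining two-input NANDs would give a different answer:

```python
def nand(*inputs):
    """n-input NAND: the complement of the n-input AND."""
    return 1 - int(all(inputs))

# Chaining two-input NANDs is not the same gate, because NAND is not associative.
chained = lambda a, b, c: nand(nand(a, b), c)

print(nand(0, 0, 1))      # 1  (complement of AND(0,0,1) = 0)
print(chained(0, 0, 1))   # 0  (different result on the same inputs)
```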

3.2 Integrated circuits

Nowadays, logic gates are constructed from transistors, which are devices based on semiconductor materials such as silicon or gallium arsenide. A transistor is essentially an amplifier, which receives an input signal and produces an amplified or strengthened version of that signal as output. For digital logic circuits, transistors are operated in a mode such that they produce either minimum output or maximum output, never anything in between. Thus they can be thought of as “switches” that are either “off” or “on,” depending on the value of an input voltage. In the early days of computer construction, individually packaged transistors were wired together to form the gates that provided the logic functions of the computer. Eventually, a way was found to place a number of transistors, together with their interconnections and other necessary components, on a single piece of silicon, called a chip. As the technology has developed, designers have been able to put more and more transistors onto one chip, until now transistor counts of many millions are the state of the art. The following abbreviations are used to describe the degree of integration on a given chip:

• SSI (Small-scale integration) ≈ 10 gates

• MSI (Medium-scale integration) ≈ 100 gates

• LSI (Large-scale integration) ≈ 1000 gates

• VLSI (Very-large-scale integration) > 10,000 gates

(These counts are very approximate, and it is really up to the manufacturer to decide into which category to place a given product.)

3.3 Logic families

Designers of integrated circuits have developed a number of different logic families, each based on a particular kind of transistor circuit and particular choices for the operating voltage, etc. Chips in one family are generally compatible with one another, but mostly not with chips in a different family. (The major exception is that TTL can often be combined with CMOS.) The most commonly used logic families have the following characteristics:

• TTL: sturdy, cheap. Low = 0 V, High = +5 V.

• ECL: fast, high power. Low = −1.8 V, High = −0.9 V.

• MOS, CMOS: low power. Low = 0 V, High = +3 to +10 V.

Within any logic family, higher speed can be obtained by using higher power. Power dissipation is an important consideration for LSI and VLSI circuits. The higher the power dissipation, the hotter the chip will operate. This factor makes MOS and CMOS the logic families best suited to VLSI circuits, because they have the lowest power dissipation. (Of the two, CMOS has the lower power dissipation.) ECL is used mainly in high-performance supercomputers, where expensive refrigeration systems can be justified for the sake of obtaining the greatest possible speed.

3.4 Positive and Negative Logic

As mentioned previously, there are two ways of assigning the boolean values 0 and 1 to the two voltage levels Low and High of a circuit. In the positive logic interpretation, Low = 0 and High = 1. In negative logic, Low = 1 and High = 0. In consequence, the identification of the logic function produced by a given circuit depends on which of these two interpretations is used. Example: The TTL chips with part numbers 7408 and 7432 generate outputs according to the function table below, shown together with its interpretations under the two logic conventions.

Function table (voltage levels):

A B | 7408 7432
L L |  L    L
L H |  L    H
H L |  L    H
H H |  H    H

Positive logic (Low = 0, High = 1):

A B | 7408 7432
0 0 |  0    0
0 1 |  0    1
1 0 |  0    1
1 1 |  1    1

Negative logic (Low = 1, High = 0), before and after sorting the rows:

A B | 7408 7432        A B | 7408 7432
1 1 |  1    1          0 0 |  0    0
1 0 |  1    0    →     0 1 |  1    0
0 1 |  1    0          1 0 |  1    0
0 0 |  0    0          1 1 |  1    1

Comparing these interpreted tables with the known truth tables for the basic boolean functions, we find that in positive logic the 7408 performs the AND function and the 7432 performs the OR. In negative logic, the 7408 is an OR gate and the 7432 is an AND. As a general rule, the boolean function produced by a chip, as interpreted in negative logic, is the dual of the positive logic function.
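Reinterpreting a voltage-level function table under the two conventions is mechanical, as in this small Python sketch (the names and dictionary encoding are our own; the 7408 behaviour is taken from the function table above):

```python
# Voltage-level behaviour of the 7408, from its function table.
table_7408 = {('L', 'L'): 'L', ('L', 'H'): 'L', ('H', 'L'): 'L', ('H', 'H'): 'H'}

def interpret(table, low, high):
    """Translate a voltage-level table into a 0/1 truth table under one convention."""
    code = {'L': low, 'H': high}
    return {(code[a], code[b]): code[y] for (a, b), y in table.items()}

print(sorted(interpret(table_7408, low=0, high=1).items()))  # the AND truth table
print(sorted(interpret(table_7408, low=1, high=0).items()))  # the OR truth table
```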

4 Simplification

In the previous sections, we saw how it is always possible to obtain an algebraic expression for any boolean function. For instance, from the truth table for a function, we can obtain the sum-of-minterms or product-of-maxterms form. However, these forms are not necessarily the most practical for actually implementing the function in a circuit. For this purpose it is desirable to simplify the function. The main goals of simplification are:

• Reduce the number of terms in the expression. Each term requires a gate, and it is desirable to use as few gates as possible.

• Reduce the number of variables in each term. Each variable requires an input to a gate, which means more transistors are required to construct the gate.

• Reduce the depth of the expression. Each level of gates increases the delay of the circuit, slowing down the operation of the computer.

These goals are not always compatible with each other. For example, decreasing the depth of a circuit generally increases the number of gates and the number of inputs per gate. In practice, one seeks a compromise solution that balances these trade-offs in a reasonable way. Methods exist that are guaranteed to obtain SOP and POS forms (which have the minimum possible depth) with the fewest terms and the fewest variables in each term. In an actual design, greater depth might be accepted (i.e., departing from SOP or POS form) in order to gain further decreases in the number and size of terms. Another document describes the SIS software program, which can perform transformations and simplifications of boolean expressions. We will not describe practical simplification methods here.


4.1 Don't-care conditions

Sometimes the designer of a circuit knows that the circuit will never receive certain input combinations in normal operation. This is because the inputs have a specific meaning, and for the given application certain patterns of inputs may not have a defined meaning. Assuming that the other parts of the system that provide these inputs are working properly, these patterns will never be presented to the circuit. So it is irrelevant what output the circuit would give for these input combinations. The output can be chosen to be whatever makes the circuit simplest. These input combinations are called “don’t-care” conditions, since we don’t care what the output would be.

Example: Suppose we represent the months of the year by a code in which January = 1, February = 2, ..., December = 12. Let us design a circuit which receives a month code as input and outputs a 1 if the month has 31 days, and a 0 if it is shorter. Representing the month number in binary requires 4 bits, since 12 = 1100 in binary. Now, 4 bits can represent a total of 16 different numbers, 0 through 15. Since only the numbers 1 through 12 are used for this application, the numbers 0 and 13 through 15 have no meaning and will never be provided to the circuit. They are the don’t-care conditions.

Let us call the bits of the binary month number m3 (most significant), m2, m1, and m0 (least significant). We call the function L (for long-month). The months having 31 days are January, March, May, July, August, October, and December, so we can write immediately L = Σ(1, 3, 5, 7, 8, 10, 12). We can use a similar notation to describe the don’t-cares: d = Σ(0, 13, 14, 15). We fill in the truth table for L with 1’s for the minterms as always, but with X’s for the don’t-cares. Here is the result:

m3 m2 m1 m0 | L
 0  0  0  0 | X
 0  0  0  1 | 1
 0  0  1  0 | 0
 0  0  1  1 | 1
 0  1  0  0 | 0
 0  1  0  1 | 1
 0  1  1  0 | 0
 0  1  1  1 | 1
 1  0  0  0 | 1
 1  0  0  1 | 0
 1  0  1  0 | 1
 1  0  1  1 | 0
 1  1  0  0 | 1
 1  1  0  1 | X
 1  1  1  0 | X
 1  1  1  1 | X

When performing the simplification of this function, each X can be taken to be a 0 or a 1, according to whichever choice makes the function simpler. Here is the result:

L = m3′·m0 + m3·m0′

Checking this function against the table, we see that it matches the function wherever a 0 or 1 is specified, which is all that is required. If we had set the X’s to 0’s arbitrarily in advance, the simplest function would have been

L = m3·m2′·m0′ + m3·m1′·m0′ + m3′·m0

So the use of don’t-care conditions has resulted in a significantly simpler function.
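The claim that the simplified expression agrees with the specification wherever a 0 or 1 is required can be checked directly. This informal sketch compares L = m3′·m0 + m3·m0′ against the Σ and don't-care lists given above:

```python
ones       = {1, 3, 5, 7, 8, 10, 12}   # month codes with 31 days
dont_cares = {0, 13, 14, 15}           # codes never presented to the circuit

L = lambda m3, m2, m1, m0: ((1 - m3) & m0) | (m3 & (1 - m0))

ok = all(row in dont_cares or
         L(*[(row >> k) & 1 for k in (3, 2, 1, 0)]) == (row in ones)
         for row in range(16))
print(ok)   # True
```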
