PLANT RECOGNITION USING HARDWARE-BASED NEURAL NETWORK

Paper No. 983040 An ASAE Meeting Presentation

PLANT RECOGNITION USING HARDWARE-BASED NEURAL NETWORK by Won Suk Lee Graduate Research Assistant

David C. Slaughter Associate Professor

Department of Biological and Agricultural Engineering, University of California, Davis, One Shields Ave, Davis, CA 95616, USA. E-mail: [email protected] and [email protected]

Written for Presentation at the 1998 ASAE Annual International Meeting Sponsored by ASAE

Disney's Coronado Springs Resort, Orlando, Florida, July 12-16, 1998

Summary: This study showed the feasibility of using a hardware-based neural network to increase processing speed and plant identification rate; however, it also indicated that new features need to be developed for better recognition of tomato plants. With the hardware-based neural network, 38.9% of tomato cotyledons, 37.5% of tomato true leaves, and 85.7% of weeds were correctly identified.

Keywords: Plant Identification, Neural Network, Real-time, Weed control, Machine Vision.

The author(s) is solely responsible for the content of this technical presentation. The technical presentation does not necessarily reflect the official position of ASAE, and its printing and distribution does not constitute an endorsement of views which may be expressed. Technical presentations are not subject to the formal peer review process by ASAE editorial committees; therefore, they are not to be presented as refereed publications. Quotation from this work should state that it is from a presentation made by (name of author) at the (listed) ASAE meeting. EXAMPLE - From Author's Last Name, Initials. "Title of Presentation." Presented at the Date and Title of meeting. Paper No. X. ASAE, 2950 Niles Rd., St. Joseph, MI 49085-9659 USA. For information about securing permission to reprint or reproduce a technical presentation, please address inquiries to ASAE. ASAE, 2950 Niles Rd., St. Joseph, MI 49085-9659 USA Voice: 616.429.0300 FAX: 616.429.3852 E-Mail:

PLANT RECOGNITION USING HARDWARE-BASED NEURAL NETWORK Won Suk Lee Graduate Research Assistant

David C. Slaughter Associate Professor

ABSTRACT

A real-time intelligent robotic weed control system for tomatoes has been developed and tested for selective spraying of in-row weeds using a machine vision system and precision chemical application. Bayesian classifiers are widely used for classifying agricultural products. However, for recognition of tomato plants and weeds, a Bayesian classifier based upon leaf shape features from a 2-D top view did not perform well enough for field use, especially when wind caused plants to lean over or when plants were partially occluded, which is typical of plants beyond the cotyledon stage. In order to increase processing speed and recognition accuracy, a hardware-based artificial neural network was studied. This study showed the feasibility of using a hardware-based neural network to increase processing speed and plant identification rate; however, it also indicated that new features need to be developed for better recognition of tomato plants. With the hardware-based neural network, 38.9% of tomato cotyledons, 37.5% of tomato true leaves, and 85.7% of weeds were correctly identified.

INTRODUCTION

Plant identification has been one of the goals of farm automation. By locating individual plants in the field, many farm operations such as weeding, thinning, and applying herbicides could be automated. Until recently, farming has depended largely on human labor. In order to automate farm operations, agriculture should take advantage of today's state-of-the-art technology. More and more computers are being used in farming to minimize costs and maximize yields. Even satellites are used for precision farming through the Global Positioning System (GPS) and Geographic Information Systems (GIS). Complete farm automation would be the ultimate goal, and costly, tedious, and labor-intensive manual operations need to be automated. In this research, as one of the primary goals of farm automation, we aim to automate the weeding operation for tomato production. Ever since humans started farming, weeds have been one of the major obstacles to maximizing production. Currently, hand hoeing is used to remove in-row weeds in commercial processing tomato fields in northern California. Hand hoeing is costly, time consuming, and labor intensive; for example, the cost of hand weeding was about $80 per 0.4 ha (1 acre) for processing tomato production in northern California in 1996. In order to automate this costly and tedious hand weeding operation, a real-time intelligent robotic weed control system for tomatoes has been developed and tested for selective spraying of in-row weeds using a machine vision system and precision chemical application (Lee, 1998).


Current machine vision technology based on morphology (shape extraction) has been successful in many situations, but not always. Bayesian classifiers are widely used for classifying agricultural products. However, for recognition of tomato plants and weeds, a Bayesian classifier based upon leaf shape features from a 2-D top view did not perform well enough for field use, especially when wind caused plants to lean over or when plants were partially occluded, which is typical of plants beyond the cotyledon stage. Artificial neural networks (ANNs) have been applied successfully in many agricultural settings, from classifying grain kernels and detecting defects to sorting agricultural products, typically in combination with machine vision. Most of these were software-based applications. However, a software-based neural network may not be a practical solution because of processor speed limitations, the overhead of inter-neuron associations, and the need to program complex recognition algorithms. A hardware-based neural network could be a feasible way to increase processing speed and recognition accuracy. The overall goal of this research is to increase the performance of the real-time prototype robotic weed control system. More specifically, the objective is to explore the feasibility of tomato plant recognition in real time using a hardware-based neural network and to compare its performance with that of a Bayesian classifier on images acquired in commercial processing tomato fields in northern California.

BACKGROUND

Artificial neural networks (ANNs) have been widely used in many agricultural settings, such as classifying grain kernels and detecting defects (Yie et al. (1993), Patel et al. (1995), and Miller and Throop (1997)), classifying food materials (Ding and Gunasekaran (1994)), sorting agricultural products (Ozer et al. (1995), Xu et al. (1995), Park et al. (1996), and Ghazanfari et al. (1996)), and detecting weeds (El-Faki et al. (1997) and Yang et al. (1997)). These were all software-based applications. In many situations, since real-time operation is required even at some loss of accuracy, a hardware-based neural network may be a practical way to achieve high processing speed and good performance. Since the early 1990s, there has been some research implementing hardware-based neural networks. Most of these implementations took advantage of parallel processing of the input data. Iñigo et al. (1990) presented a neural network with digitally stored self-adjusting weights. They described a circuit for future VLSI implementation, used three different training patterns to train the circuit, and reported that its weights were self-adjusted for the presented pattern. Mauduit et al. (1992) developed a neural network VLSI chip, called Lneuro 1.0, and tested it with Kohonen self-organizing maps. They reported that a scalable digital neurocomputer based on a parallel coprocessor seemed an efficient solution to the neural network simulation problem. Brown et al. (1992) utilized a neural network VLSI chip for classifying image-map pixels into separate features such as roads and rivers. A 32-neuron, fully interconnected neural network chip was used with a one-microsecond neuron time constant and 1,000 synapses (weights). They reported classification results of 81.5%, 91.2%, 91.9%, and 89.8% for the neural network hardware, a software neural network, K-nearest neighbors, and Bayesian-unimodal Gaussian algorithms, respectively.


Linares-Barranco et al. (1992) presented a modular transconductance-mode (T-mode) design approach for analog hardware implementation of neural networks. They showed that by changing the interconnection strategy, different neural network systems could be implemented, and reported that these networks were tested successfully. Botros and Abdul-Aziz (1994) presented a hardware implementation of a fully digital multi-layer perceptron artificial neural network using Xilinx Field Programmable Gate Arrays (FPGAs). They applied the same acoustic input patterns of spoken words to both the hardware neural network and the simulated software network and reported that the hardware performed correctly. Lansner (1995) developed an experimental hardware neural network built of cascadable, analogue CMOS test chips and trained it successfully with a host computer using hardware-in-the-loop backpropagation learning. He reported that the neural chipset showed good cascadability, learning, and generalization capability. Ienne et al. (1996) presented a survey of digital systems for implementing neural networks. They divided the systems into two categories: parallel systems with standard digital components and parallel systems with custom processors. These systems all tried to exploit the intrinsic parallelism of ANN algorithms. They reported that software support and system integration were just beginning to offer the versatility that a wider class of users may require, while the hardware was achieving some maturity. They noted that IBM's ZISC036 (which was used in this research) had the advantage of a more precise distance computation and a choice of norms, but lacked the versatility that an on-chip microcontroller gave to the Ni1000 (Nestor, Inc., Providence, RI 02906), which was particularly attractive thanks to its mix of hardwired computation for the highly parallel and time-consuming part of the process and programmable, highly versatile control of the learning process. Reilly (1997) described the RCE (Restricted Coulomb Energy) neural network as a radial basis function network and reported that the RCE training algorithm eliminated the need to know in advance how many cells to specify in the middle layer of the network. He described the Ni1000 Recognition Accelerator chip (Nestor, Inc., Providence, RI) as an example of applying the RCE neural network to vehicle detection on roadways and fingerprint classification. Yingwei et al. (1998) reported a detailed performance analysis of the minimal resource allocation network learning algorithm, a sequential learning radial basis function neural network.

MATERIALS AND METHODS

Machine Vision System

The prototype machine vision system is shown in Fig. 1 (Lee et al., 1997). All research was conducted with juvenile processing tomato plants grown in commercial tomato fields in northern California. The UC Davis Robotic Cultivator was utilized as a guidance system to find the center of a row.

Morphology-Based Bayesian Classifier

In order to select the best feature subset, field images were used which had been taken from 13 commercial processing tomato fields in northern California from late May to late June 1996 and from late March to mid-May 1997.


The tomato plants were in various stages of maturity, from just emerging to the second true leaf stage. For developing a Bayesian classifier, images were divided into two groups of good and bad image quality based on focus, camera aperture, wind, cotyledon opening, state of maturity, and occlusion. It was an especially windy spring in northern California in 1997, and most of the tomato plants in the commercial fields were lying down along the direction of wind travel. Tomato plants in the good image quality group were easier to recognize with an image processing algorithm since they retained their original shape. Tomato plants in bad images were harder to recognize since many of them lost their original shape due to occlusion and to being blown over by the wind. From each group, a training set and a validation set were created in order to estimate the plant recognition performance of the image processing algorithm. For the good image group, a total of 117 images were used in the training set and 157 images in the validation set. For the bad group, 129 and 133 images were used, respectively (Table 1). The images in the training sets were selected carefully to represent the entire group, while those in the validation sets were selected randomly from each group. There was no overlap between the training and validation sets.

Figure 1. The prototype robotic weed control system.


Table 1. Number of images used for feature selection in each group.

              Training set    Validation set    Total
  Good group       117             157           274
  Bad group        129             133           262
  Total            242             290           536

Plant objects were divided into four classes: tomato cotyledon, tomato true leaf, third group, and weed (Table 2). The plant objects in the third group consisted of tomato cotyledons and true leaves that were curled, occluded, eaten by bugs, or partially hidden by the edge of the image. From now on, the class numbers (1, 2, 3, and 4) will be used instead of the class descriptions (tomato cotyledon, tomato true leaf, third group, and weed).

Table 2. Class assignment for plant leaves in an image.

  Class    Description
    1      Tomato cotyledon
    2      Tomato true leaf
    3      Tomato third group
    4      Weed

Table 3 shows the number of plant leaves used for the feature selection procedure in each class in each group. Since class 3 was not separable from class 4 using only a single 2-D top view, the objects in class 3 were set aside temporarily during the feature selection process and only classes 1, 2, and 4 were used for the feature selection process (Lee, 1998).

Table 3. Number of plant leaves used for feature selection for each class in each group.

              Good group                 Bad group
  Class    Training   Validation     Training   Validation
    1         78         150            90         123
    2        127         179           114         111
    4        198         202           138         157
  Total      403         531           342         391

Hardware-Based Neural Network

A real-time neural network board, the ZISC (Zero Instruction Set Computer, IBM Inc.), was used to recognize tomato plants and weeds. The ZISC is a real-time pattern recognition board with adaptive learning capability, developed by IBM. The ZISC implements a feed-forward, three-layer network (an input layer, a hidden layer, and an output layer). The ZISC036 chip was used as the processing element in the hidden layer. One ZISC036 hosts 36 neurons, and each neuron has a register for prototype storage and a distance evaluation unit to ensure a high level of parallelism. The ZISC036 is capable of identifying a vector within a one- to 64-dimensional space; each component of the vector is coded as an 8-bit number.


The ZISC036 can produce up to 16,383 (= 2^14 - 1) different categories in the output layer. With a ZISC operating at 20 MHz, it takes 3.2 µs to process 64 inputs, and an evaluation can be completed within 0.5 µs after the last component has been fed. This capability allows more than 250,000 evaluations per second (2.2 giga instructions/s). A daisy chain connects all neurons of the chip. All neurons communicate via the inter-ZISC communication bus, which is internally re-driven to allow connection of several ZISC modules without decreasing performance. The ZISC utilizes an RBF-like (Radial Basis Function) approach, which consists of mapping an N-dimensional space with prototypes. Each prototype is associated with a category and an influence field representing a part of the N-dimensional space around the prototype. A prototype is a vector defining the coordinates of a point within the N-dimensional space. Within the network, several prototypes may be associated with one given category, and the influence fields can partially overlap each other.
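As a rough, illustrative check of these timing figures, the short calculation below reproduces the stated throughput from the 20 MHz clock, the 64-component vector, and the 0.5 µs evaluation latency quoted above; the assumption that one component is loaded per clock cycle is ours, not taken from the ZISC documentation.

```python
# Back-of-the-envelope check of the ZISC036 throughput figures quoted above.
# Assumption (ours): one 8-bit component is loaded per clock cycle.
clock_hz = 20e6                          # 20 MHz ZISC clock
components = 64                          # maximum input vector length
load_time = components / clock_hz        # 64 / 20e6 = 3.2e-6 s (3.2 microseconds)
eval_time = 0.5e-6                       # evaluation latency after the last component
total_per_vector = load_time + eval_time # ~3.7e-6 s per full 64-component evaluation
print(f"load {load_time*1e6:.1f} us, total {total_per_vector*1e6:.1f} us, "
      f"~{1/total_per_vector:,.0f} evaluations/s")   # roughly 270,000 evaluations/s
```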

Figure 2. An example of radial basis function mapping, where (P1, P2) are stored prototype coordinates and (V1, V2) are input vector coordinates.

Figure 2 shows how an RBF-like approach can map a two-dimensional space. The classification task consists of evaluating whether an N-dimensional input vector lies within the influence field of any prototype stored in the network. This is done by calculating the distance between the input vector and all stored prototypes and comparing each distance to the influence field associated with the corresponding prototype. The RBF-like neural network topology is a three-layer network in which each input node, corresponding to a component (Vi) of a feature vector, is connected to every node of the second (hidden) layer, and each node of the hidden layer is connected to one output node, which corresponds to a category (Figure 3).


Figure 3. RBF-like network topology.
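To make the decision rule of Figures 2 and 3 concrete, the sketch below classifies an input vector against a small set of stored prototypes, each carrying a category and an influence field. The prototype values, the field radii, and the choice of the L1 (city-block) norm are illustrative assumptions; the ZISC036 stores its prototypes on-chip and offers a choice of norms.

```python
import numpy as np

# Minimal sketch of the RBF-like classification described above: an input
# vector fires a prototype when its distance falls inside that prototype's
# influence field. Prototype values, radii, and the L1 norm are illustrative
# assumptions, not values stored in the actual ZISC036.
prototypes = [
    {"vector": np.array([120, 130, 60]), "radius": 40, "category": "tomato"},
    {"vector": np.array([80, 140, 50]),  "radius": 35, "category": "weed"},
    {"vector": np.array([90, 100, 45]),  "radius": 30, "category": "weed"},
]

def classify(x):
    """Return the category of the closest firing prototype, or None."""
    best_cat, best_dist = None, None
    for p in prototypes:
        dist = int(np.abs(x - p["vector"]).sum())   # L1 (city-block) distance
        if dist <= p["radius"] and (best_dist is None or dist < best_dist):
            best_cat, best_dist = p["category"], dist
    return best_cat

print(classify(np.array([118, 128, 58])))   # inside the first field -> "tomato"
print(classify(np.array([10, 10, 10])))     # outside every field -> None
```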

Figure 4 shows the structure of a ZISC036 neuron. An input vector arrives from the data bus at the distance evaluator, which calculates the distance between the incoming vector and the stored prototype (weight storage). The radius comparator compares this distance with the influence field stored in the radius storage. If the distance is inside the influence field, the radius comparator fires the corresponding output category, and the category register stores it. If the distance is not inside the radius (influence field), the radius processing unit stores an adjusted influence field. The radius processing unit is used only during the training process.

Figure 4. Structure of a ZISC036 neuron.
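A minimal sketch of the training role of the radius processing unit, in the spirit of RCE-style learning: a wrong-category neuron that fires on a training example has its influence field shrunk, and a new prototype is committed when no correct-category neuron fires. This is a simplified illustration of the general idea, not the chip's exact learning procedure; the initial radius and the shrink rule are assumptions.

```python
import numpy as np

# Simplified RCE-style learning loop illustrating the role of the radius
# processing unit described above. The maximum initial radius and the
# shrink-to-the-offending-distance rule are assumptions for illustration.
MAX_RADIUS = 50
prototypes = []   # each entry: {"vector", "radius", "category"}

def l1(a, b):
    return int(np.abs(a - b).sum())

def train_one(x, category):
    fired_correct = False
    for p in prototypes:
        d = l1(x, p["vector"])
        if d <= p["radius"]:
            if p["category"] == category:
                fired_correct = True
            else:
                # A wrong-category neuron fired: shrink its influence field
                # so it no longer covers this training example.
                p["radius"] = max(d - 1, 0)
    if not fired_correct:
        # No correct-category neuron fired: commit a new prototype.
        prototypes.append({"vector": x.copy(), "radius": MAX_RADIUS,
                           "category": category})

# Hypothetical 3-component training vectors (the study's real inputs were
# 48 pixel values, as described in the next section).
train_one(np.array([120, 130, 60]), "tomato")
train_one(np.array([80, 140, 50]), "weed")
print(len(prototypes), "prototypes committed")
```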


Recognition by the Hardware-Based Neural Network

As a preliminary stage of this research, 6 different illumination conditions were selected from among the images taken from the 13 processing tomato fields, since the illumination conditions varied considerably from field to field. It would be very difficult to recognize all possible conditions because there was a great deal of variation in the color of the plants and background. A total of 252 neurons (7 ZISC036 chips) were used to distinguish tomato plants from weeds. A user interface was developed using Microsoft Visual Basic 5.0 to collect the feature vectors (pixel values), to train the board, and to recognize tomato plants and weeds. Feature vectors were obtained as 8-bit pixel values from 24-bit true color images using a 10 pixel by 10 pixel region of interest (ROI), as shown in Figure 5. Forty-eight pixels from the three RGB channels (16 pixels from each channel) were fed as an input feature vector. There were two output categories: tomato plants and weeds.

Figure 5. Input feature pattern in a 10 pixel by 10 pixel region of interest, where gray squares are the pixels sampled inside the ROI.

As a preliminary step, a total of 10 training images were selected carefully to represent all possible situations in the 6 illumination conditions. Fifteen validation images were selected randomly from the same 6 illumination conditions, with no overlap between the training images and the validation images. The input feature vectors were obtained manually using a computer mouse. The validation process used the same size region of interest as the training process. The 10 pixel by 10 pixel ROI moves from the top left corner to the bottom right corner of the image in steps of 2 pixels in both the horizontal and vertical directions. If the incoming vector is inside the influence field of a weed prototype, a red dot is drawn at the center of the ROI.
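The scanning procedure just described can be sketched as follows. The evenly spaced 4 x 4 subsampling grid stands in for the exact pixel pattern of Figure 5, and classify_roi() is a placeholder for the trained ZISC036 board; both are assumptions for illustration.

```python
import numpy as np

# Sketch of the validation scan described above: slide a 10 x 10 pixel ROI
# across an RGB image in steps of 2 pixels, build a 48-component feature
# vector (16 pixels per channel), and record the ROI center when the vector
# is classified as weed. The 4 x 4 sample grid approximates Figure 5.
ROI, STEP = 10, 2
ROWS = COLS = np.array([1, 3, 6, 8])             # assumed 4 x 4 sample positions

def feature_vector(patch):
    """48 components: 16 sampled pixels from each of the R, G, B channels."""
    samples = patch[np.ix_(ROWS, COLS)]           # shape (4, 4, 3)
    return samples.transpose(2, 0, 1).reshape(-1).astype(np.uint8)

def scan(image, classify_roi):
    """Return ROI-center coordinates classified as weed (drawn in red)."""
    h, w, _ = image.shape
    hits = []
    for r in range(0, h - ROI + 1, STEP):
        for c in range(0, w - ROI + 1, STEP):
            vec = feature_vector(image[r:r + ROI, c:c + ROI])
            if classify_roi(vec) == "weed":
                hits.append((r + ROI // 2, c + ROI // 2))
    return hits

# Example with a synthetic image and a dummy stand-in classifier.
dummy = lambda v: "weed" if v.mean() > 128 else "tomato"
print(len(scan(np.random.randint(0, 256, (60, 80, 3), dtype=np.uint8), dummy)))
```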

RESULTS AND DISCUSSION

For the Bayesian classifier, the feature subset of AREA (number of pixels in an object), LTP (ratio of length to perimeter), and OCCR (ratio of area to projected area) was selected as the overall best subset for identifying tomato plants and weeds (Lee, 1998). With this final best subset, a discriminant analysis (SAS PROC DISCRIM) was conducted with all 4 classes (classes 1, 2, 3, and 4) by introducing the third group (class 3) after feature selection, to show the classification results for all classes. In this analysis, the classifier was trained with classes 1, 2, and 4 and was validated with all 4 classes, since the classifier was designed to classify classes 1, 2, and 4. The following a priori probabilities were used: class 1 = 0.1, class 2 = 0.1, and class 4 = 0.8. The results for the good group and the bad group are given in Table 4.
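Before turning to those results, a minimal sketch of a discriminant rule of this kind is given below: class-conditional Gaussians are fitted to the three features (AREA, LTP, OCCR) for classes 1, 2, and 4, and a leaf is assigned to the class with the highest prior-weighted likelihood using the stated priors of 0.1, 0.1, and 0.8. The per-class covariance (quadratic) form and the placeholder training numbers are assumptions; the original analysis used SAS PROC DISCRIM, whose exact settings are not specified here.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Sketch of a Bayesian (Gaussian) discriminant over the three selected
# features AREA, LTP, and OCCR, with the priors reported above. The
# per-class covariance form and the toy training data are assumptions.
PRIORS = {1: 0.1, 2: 0.1, 4: 0.8}

def fit(training):
    """training: {class_label: (n_samples, 3) array of [AREA, LTP, OCCR]}."""
    return {c: (X.mean(axis=0), np.cov(X, rowvar=False)) for c, X in training.items()}

def classify(x, model):
    scores = {c: PRIORS[c] * multivariate_normal.pdf(x, mean=m, cov=S)
              for c, (m, S) in model.items()}
    return max(scores, key=scores.get)

# Placeholder training data (not the study's measurements).
rng = np.random.default_rng(0)
training = {
    1: rng.normal([300, 0.20, 0.85], [50, 0.02, 0.05], (40, 3)),   # tomato cotyledon
    2: rng.normal([600, 0.15, 0.70], [80, 0.02, 0.05], (40, 3)),   # tomato true leaf
    4: rng.normal([450, 0.10, 0.55], [90, 0.02, 0.05], (40, 3)),   # weed
}
model = fit(training)
print(classify(np.array([320, 0.19, 0.83]), model))   # likely class 1
```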


Table 4. Classification results for 4 classes in the good and bad groups using the best three features (AREA, LTP, and OCCR).

Good group, validation set: AREA, LTP, OCCR
Number of observations and percent classified into CLASS:

  From CLASS        1          2          4       Total
  1               121          0         29         150
                80.67       0.00      19.33      100.00
  2                14         24        141         179
                 7.82      13.41      78.77      100.00
  3                45         37        556         638
                 7.05       5.80      87.15      100.00
  4                 9          0        193         202
                 4.46       0.00      95.54      100.00
  Total           189         61        919        1169
  Percent       16.17       5.22      78.61      100.00
  Priors       0.1000     0.1000     0.8000

  Error count estimates for CLASS:
                  1          2          3          4       Total
  Rate         0.1933     0.8659        .       0.0446     0.6287

Bad group, validation set: AREA, LTP, OCCR
Number of observations and percent classified into CLASS:

  From CLASS        1          2          4       Total
  1                72          0         51         123
                58.54       0.00      41.46      100.00
  2                 2          8        101         111
                 1.80       7.21      90.99      100.00
  3                34         25        604         663
                 5.13       3.77      91.10      100.00
  4                10          1        146         157
                 6.37       0.64      92.99      100.00
  Total           118         34        902        1054
  Percent       11.20       3.23      85.58      100.00
  Priors       0.1000     0.1000     0.8000

  Error count estimates for CLASS:
                  1          2          3          4       Total
  Rate         0.4146     0.9279        .       0.0701     0.7277

For the good group, 80.7% of class 1 (tomato cotyledons) and 95.5% of class 4 (weeds) were correctly classified; however, the recognition rates for class 2 (21.2% of tomato true leaves were correctly classified as tomato plants) and class 3 (12.9% of the third class were correctly classified as tomato plants) were very low, indicating that recognition of those classes would be very difficult. For the bad group, the results showed a similar trend, with reasonably high classification rates for classes 1 (62.6%) and 4 (87.9%) and extremely low rates for classes 2 and 3. The classification rates were only 7.2% for class 2 and 14.2% for class 3. The high error rates for classes 2 and 3 came mainly from their shapes being very similar to those of weeds. Sometimes true leaves did not have distinct notches on their boundaries and showed the same convex shape as weeds. Even when true leaves had clear notches, their shape was frequently similar to that of overlapped weeds or concave weeds. Most of the third group was composed of curled, laid-down, or upright tomato leaves, and thus they could not satisfy the feature criteria for tomato leaves. Although the classification rates for true leaves and the third group were very low in both groups, the results were still useful, since most of the weeds were correctly recognized for cultivation and tomatoes are usually over-planted.


The recognition results from the hardware-based neural network (ZISC036 board) are shown in Figure 6. Panels (a), (c), (e), and (g) in Figure 6 are color images taken in commercial fields, and panels (b), (d), (f), and (h) are the recognition results from the ZISC036 board. A red dot indicates that the corresponding ROI was classified as a weed. When there were only weeds in an image, the neural network system identified the weeds correctly (Figure 6 (b) and (d)). However, when there were tomato plants and weeds together in an image, parts of the tomato plants were also recognized as weeds (Figure 6 (f) and (h)). This was because those portions of the tomato plants had not been included in the training images. The ZISC036 board worked very well if it had already seen the incoming pattern; conversely, if the board had not seen the incoming pattern in advance, it could not produce a correct output. In this regard, a much more comprehensive training process with a sufficient number of neurons would be needed to completely train the ZISC036 board. Table 5 shows the number of correctly and incorrectly recognized tomato plants and weeds.

Table 5. Recognition rates for tomato plants and weeds by the ZISC036.

  Recognition                Tomato cotyledon   Tomato true leaf   Weed
  Correct                            7                  9           24
  Incorrect                         11                 15            4
  Total                             18                 24           28
  Correctly recognized (%)        38.9               37.5         85.7

The recognition rates for weeds were reasonably high, but those for tomato cotyledons and tomato true leaves were very low because their patterns had not been included in the training process. Because the two classifiers were based upon different feature sets, it would not be appropriate to draw final conclusions from comparing these results with those of the Bayesian classifier; however, the recognition rates of the Bayesian classifier looked better than those of the ZISC036 board for tomato cotyledons and weeds, while the ZISC036 showed a higher recognition rate for tomato true leaves than the Bayesian classifier. This indicated that new features other than color and texture, such as ratios of colors and shape features, need to be developed, or that the process of human recognition should be converted into some other form of features.


Figure 6. Recognition results by the hardware-based neural network, panels (a) through (h).


CONCLUSIONS

This study showed the feasibility of using a hardware-based neural network to increase processing speed and plant identification rate; however, it also indicated that new features need to be developed for improved recognition of tomato plants. With the hardware-based neural network, 38.9% of tomato cotyledons, 37.5% of tomato true leaves, and 85.7% of weeds were correctly identified. With the Bayesian classifier, in validation tests with 290 field images from 13 different commercial processing tomato fields, the image processing algorithm for distinguishing tomato plants from weeds correctly identified 58.5% - 80.7% of tomato cotyledons, 9.0% - 21.2% of tomato true leaves, 8.9% - 12.9% of tomato leaves that were curled, occluded, bug-eaten, or partially hidden by the edge of the image, and 93.0% - 95.5% of weeds, using plant area, length-to-perimeter ratio, and occupational ratio.

FUTURE WORK

New features other than color and texture need to be developed, modeled on the process that humans use to recognize plants, since even a person with limited experience in weed recognition can usually distinguish crop plants from weeds. Different input patterns and different ROIs need to be tried, utilizing the full 64-dimension input capability of the ZISC036 board.

ACKNOWLEDGMENT

The authors would like to thank Mr. Guy Paillet and Mr. Damien Chastrette (Silicon Recognition, Inc., Sunnyvale, CA 94086) for their technical support.

REFERENCES

Botros, N. M. and M. Abdul-Aziz. 1994. Hardware implementation of an artificial neural network using field programmable gate arrays (FPGAs). IEEE Transactions on Industrial Electronics 41(6): 665-667.

Brown, T. X., M. D. Tran, T. Duong, T. Daud, and A. P. Thakoor. 1992. Cascaded VLSI neural network chips: hardware learning for pattern recognition and classification. Simulation 58(5): 340-347.

Ding, K. and S. Gunasekaran. 1994. Shape feature extraction and classification of food material using computer vision. Transactions of the ASAE 37(5): 1537-1545.


El-Faki, M. S., N. Zhang, and D. E. Peterson. 1997. Weed detection using color machine vision. ASAE Paper No. 97-3134. ASAE, 2950 Niles Rd., St. Joseph, MI 49085-9659 USA.

Ghazanfari, A., J. Irudayaraj, and A. Kusalik. 1996. Grading pistachio nuts using a neural network approach. Transactions of the ASAE 39(6): 2319-2324.

IBM, Component Development Lab, Microelectronics Division. 1997. ZISC036 neurons user's manual. 91105 Corbeil-Essonnes, France.

Ienne, P., T. Cornu, and G. Kuhn. 1996. Special-purpose digital hardware for neural networks: an architectural survey. Journal of VLSI Signal Processing Systems 13: 5-25.

Iñigo, R. M., A. Bonde, and B. Holcombe. 1990. Self adjusting weights for hardware neural networks. Electronics Letters 26(19): 1630-1632.

Lansner, J. A. 1995. An experimental hardware neural network using a cascadable, analogue chipset. International Journal of Electronics 78(4): 679-690.

Lee, Won Suk. 1998. Robotic weed control system for tomatoes. Ph.D. dissertation. Department of Biological and Agricultural Engineering, University of California, Davis.

Lee, W. S., D. C. Slaughter, and D. K. Giles. 1997. Robotic weed control system for tomatoes using machine vision system and precision chemical application. ASAE Paper No. 97-3093. ASAE, 2950 Niles Rd., St. Joseph, MI 49085-9659 USA.

Linares-Barranco, B., E. Sanchez-Sinencio, and A. Rodriguez-Vazquez. 1992. A modular T-mode design approach for analog neural network hardware implementations. IEEE Journal of Solid-State Circuits 27(5): 701-713.

Lippmann, R. P. 1987. An introduction to computing with neural nets. IEEE Acoustics, Speech and Signal Processing Magazine 4(2): 4-22.

Mauduit, N., M. Duranton, J. Gobert, and J.-A. Sirat. 1992. Lneuro 1.0: a piece of hardware LEGO for building neural network systems. IEEE Transactions on Neural Networks 3(3): 414-422.

Miller, W. M. and J. A. Throop. 1997. Pattern recognition models for spectral reflectance evaluation of apple blemishes. ASAE Paper No. 973080. ASAE, 2950 Niles Rd., St. Joseph, MI 49085-9659 USA.

Ozer, N., B. A. Engel, and J. E. Simon. 1995. Fusion classification techniques for fruit quality. Transactions of the ASAE 38(6): 1927-1934.


Park, B., Y. R. Chen, and M. Nguyen. 1996. Multispectral image analysis using neural network algorithm. ASAE Paper No. 963034. ASAE, 2950 Niles Rd., St. Joseph, MI 49085-9659 USA.

Patel, V. C., R. W. McClendon, and J. W. Goodrum. 1995. Detection of cracks in eggs using color computer vision and artificial neural networks. ASAE Paper No. 953258. ASAE, 2950 Niles Rd., St. Joseph, MI 49085-9659 USA.

Reilly, D. L. 1997. The RCE neural network. In The Industrial Electronics Handbook, editor-in-chief J. David Irwin. CRC Press: 1025-1037.

Xu, J. Z., Z. Y. Ruan, and J. X. Zhang. 1995. Classification of barley malts using machine vision and neural networks. ASAE Paper No. 953550. ASAE, 2950 Niles Rd., St. Joseph, MI 49085-9659 USA.

Yang, C., S. O. Prasher, and J. Landry. 1997. Application of machine vision and artificial neural networks in precision farming. ASAE Paper No. 973107. ASAE, 2950 Niles Rd., St. Joseph, MI 49085-9659 USA.

Yie, T. J., K. Liao, M. R. Paulsen, J. F. Reid, and E. B. Maghirang. 1993. Corn kernel stress crack detection by machine vision. ASAE Paper No. 93-3526. ASAE, 2950 Niles Rd., St. Joseph, MI 49085-9659 USA.

Yingwei, L., N. Sundararajan, and P. Saratchandran. 1998. Performance evaluation of a sequential minimal radial basis function (RBF) neural network learning algorithm. IEEE Transactions on Neural Networks 9(2): 308-318.

