Computers in Industry (Elsevier)
A visible-range computer-vision system for automated, non-intrusive assessment of the pH value in Thomson oranges
Sajad Sabzi; Juan Ignacio Arribas;
Abstract: Fruit may be classified for the purposes of usage, packaging and marketing based on the pH (potential of hydrogen) value, a numeric scale used to specify the acidity or basicity of an aqueous solution, measured in moles of hydrogen ions per liter. In this study, a new approach for the automated and non-intrusive estimation of the pH value of the Thomson navel orange (CRC 969, Citrus sinensis) fruit is presented, based on visible-range image processing, image feature extraction and hybrid imperialist competitive algorithm (ICA)-artificial neural network (ANN) regression. Image features studied include length, width, area, eccentricity, perimeter, blue-value, green-value, red-value, contrast, texture, roughness and several ratios thereof. Principal component analysis (PCA) is applied to reduce the number of dimensions without loss of important information, and a cubic polynomial function of the mean square error (MSE) versus several factors is computed using the response surface methodology (RSM) approach. Results for pH prediction are given and compared with true measured pH values over the entire dataset of 100 Thomson oranges, including estimated-pH scatter regression plots and boxplots. Cross validation is performed over 1000 repeated random trials with uniform random train and test sample sets (80% training and 20% disjoint test samples). In addition, we provide numerical results based on the levels achieved by RSM, evaluated over several error coefficients: the sum square error (SSE), the mean absolute error (MAE), the coefficient of determination (R2), the root mean square error (RMSE), and the MSE, resulting in R2 = 0.843 ± 0.043, MSE = 0.046 ± 0.022, MAE = 0.166 ± 0.039, SSE = 0.915 ± 0.425, and RMSE = 0.214 ± 0.146 over the test set.
The results demonstrate that such an automated pH-based sorting system, using machine vision and the hybrid ICA-ANN algorithm, can accurately estimate the pH value of Thomson oranges without any contact with the fruit, and has clear potential applications in the food industry.
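For readers re-implementing the evaluation, the five error coefficients reported above can be computed from measured and estimated pH values as follows; this is a plain-Python sketch, and the pH values in the demo are invented for illustration, not taken from the paper's dataset.

```python
import math

def regression_metrics(y_true, y_pred):
    """The five error coefficients used in the abstract's evaluation."""
    n = len(y_true)
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    sse = sum(r * r for r in residuals)              # sum square error
    mse = sse / n                                    # mean square error
    rmse = math.sqrt(mse)                            # root mean square error
    mae = sum(abs(r) for r in residuals) / n         # mean absolute error
    mean_true = sum(y_true) / n
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    r2 = 1.0 - sse / ss_tot                          # coefficient of determination
    return {"SSE": sse, "MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2}

# Hypothetical measured vs. estimated pH for five test oranges
m = regression_metrics([3.2, 3.5, 3.8, 4.0, 3.6],
                       [3.3, 3.4, 3.9, 3.9, 3.7])
```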
Grapevine buds detection and localization in 3D space based on Structure from Motion and 2D image classification
Carlos Ariel Díaz; Diego Sebastián Pérez; Humberto Miatello; Facundo Bromberg;
Abstract: In viticulture, there are several applications where 3D bud detection and localization in vineyards is a necessary task amenable to automation: measurement of sunlight exposure, autonomous pruning, bud counting, type-of-bud classification, bud geometric characterization, internode length measurement, and bud development staging. This paper presents a workflow to achieve quality 3D localizations of grapevine buds based on well-known computer vision and machine learning algorithms, given images captured in natural field conditions (i.e., natural sunlight and no added artificial elements) during the winter season with a mobile-phone RGB camera. Our pipeline combines Oriented FAST and Rotated BRIEF (ORB) for keypoint detection, the Fast Local Descriptor for Dense Matching (DAISY) for describing keypoints, and the Fast Approximate Nearest Neighbor (FLANN) technique for matching keypoints, with the Structure from Motion multi-view scheme for generating consistent 3D point clouds. Next, it uses a 2D scanning-window classifier based on Bag of Features and Support Vector Machines to classify the 3D points in the cloud. Finally, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is applied for 3D bud localization. Our approach resulted in a maximum precision of 1.0 (i.e., no false detections), a maximum recall of 0.45 (i.e., 45% of the buds detected), and a localization error within the range of 259–554 pixels (corresponding to approximately 3 bud diameters, or 1.5 cm) when evaluated over the whole range of user-given parameters of the workflow components.
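The last stage of the pipeline, DBSCAN over the classified 3D points, can be sketched in plain Python. This is a minimal textbook DBSCAN, not the authors' implementation; the toy coordinates below stand in for a classified point cloud with two tight bud clusters and one stray point.

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise)."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # noise (may be claimed later as border)
            continue
        cluster += 1                       # i is a core point: start a cluster
        labels[i] = cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster        # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:         # j is core too: expand the cluster
                seeds.extend(jn)
    return labels

# Two tight 3D clusters (candidate buds) plus one outlier
pts = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0),
       (5, 5, 5), (5.1, 5, 5), (5, 5.1, 5),
       (10, 10, 10)]
labels = dbscan(pts, eps=0.5, min_pts=3)
```

Each cluster's centroid would then serve as one estimated bud location.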
Automated multi-feature human interaction recognition in complex environment
Shafina Bibi; Nadeem Anjum; Muhammad Sher;
Abstract: Recognizing interactions between people is crucial for enabling surveillance applications to detect unusual events in complex environments. Generally, multiple cameras are installed to capture videos from different views, but these environments pose challenging issues such as occlusions between persons and variations in lighting and pose. We present a computer vision system to recognize person-to-person interactions in public areas by considering individual actions and trajectory information under multiple camera views. We achieve this in two steps: individual action recognition and interaction recognition. Many techniques have achieved very good accuracy for individual action recognition, but they cannot handle the intricate settings of crowded areas. We propose the Median Compound Local Binary Pattern (MDCLBP) and combine it with the Histogram of Oriented Gradients (HOG). MDCLBP captures information about the spatial organization of intensities, while HOG describes an image via a histogram of oriented gradients. MDCLBP is a variant of the Compound Local Binary Pattern (CLBP), which extracts texture information using sign and magnitude information; MDCLBP keeps the sign information but, instead of the magnitude, uses the difference from the median value of each 3 × 3 window, making the descriptor robust to occlusions and lighting variations. We combine the individual actions of two persons with trajectory information to recognize person-to-person interactions. Experiments on the well-known, publicly available IXMAS and OIXMAS datasets demonstrate the effectiveness of the proposed technique for individual human action recognition, and the person-to-person interaction recognition method is evaluated on the HALLWAY dataset. Experiments over varying views demonstrate that the proposed system achieves better accuracy and can meet the requirements of surveillance applications.
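To make the descriptor concrete, here is a sketch of an MDCLBP-style code for one 3 × 3 window: a sign bit per neighbour (versus the centre pixel) plus a bit comparing the neighbour to the window median. The abstract does not specify the bit layout, so the per-neighbour interleaving below is our own assumption, made only for illustration.

```python
import statistics

def mdclbp_code(window):
    """MDCLBP-style code for one 3x3 window. The abstract specifies sign
    bits plus a comparison against the window median; the layout here
    (two interleaved bits per neighbour) is an assumed encoding."""
    center = window[1][1]
    neighbors = [window[0][0], window[0][1], window[0][2],
                 window[1][2], window[2][2], window[2][1],
                 window[2][0], window[1][0]]        # clockwise from top-left
    med = statistics.median([v for row in window for v in row])
    code = 0
    for v in neighbors:
        sign_bit = 1 if v >= center else 0          # CLBP sign component
        median_bit = 1 if v >= med else 0           # median-difference component
        code = (code << 2) | (sign_bit << 1) | median_bit
    return code

# Toy window: bright top row over a dark background
window = [[9, 9, 9],
          [1, 5, 1],
          [1, 1, 1]]
code = mdclbp_code(window)
```

Using the median of the window rather than a magnitude threshold is what makes the code insensitive to outlier intensities from occlusion edges and lighting changes.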
ABC algorithm based optimization of 1-D hidden Markov model for hand gesture recognition applications
K. Martin Sagayam; D. Jude Hemanth;
Abstract: Hand gestures are extensively used for non-verbal interaction with computers. This mode of communication is made possible by machine learning algorithms for pattern recognition. A stochastic mathematical approach is used to interpret hand gesture patterns for classification. In this work, a 1-D hidden Markov model (1-D HMM) is used to classify the patterns and its performance is measured. During the training phase, the 1-D HMM predicts the next state sequence of hand gestures using dynamic programming methods such as the Baum-Welch algorithm and the Viterbi algorithm. However, dynamic-programming-based prediction is computationally complex. To enhance the performance of the 1-D HMM, its parameters and observation state sequence are optimized using bio-inspired heuristic approaches; here, the Artificial Bee Colony (ABC) algorithm is used for optimization. The proposed hybrid 1-D HMM model with ABC optimization yields better performance metrics, such as recognition rate and error rate, on the Cambridge hand gesture dataset.
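The Viterbi decoding step named above can be illustrated with a minimal discrete 1-D HMM. The gesture states, observation symbols and probabilities below are invented toy values, not the paper's trained model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence
    under a discrete 1-D HMM (Viterbi dynamic programming)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    state = max(states, key=lambda s: V[-1][s][0])   # best final state
    path = [state]
    for t in range(len(obs) - 1, 0, -1):             # backtrack
        state = V[t][state][1]
        path.append(state)
    return list(reversed(path))

# Toy gesture HMM: hidden poses "hold"/"swipe", observed motion symbols
states = ("hold", "swipe")
start_p = {"hold": 0.6, "swipe": 0.4}
trans_p = {"hold": {"hold": 0.7, "swipe": 0.3},
           "swipe": {"hold": 0.4, "swipe": 0.6}}
emit_p = {"hold": {"still": 0.9, "moving": 0.1},
          "swipe": {"still": 0.2, "moving": 0.8}}
path = viterbi(["still", "moving", "moving"], states, start_p, trans_p, emit_p)
```

In the hybrid scheme the ABC algorithm would search over the entries of `start_p`, `trans_p` and `emit_p` rather than fitting them with Baum-Welch alone.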
Industry 4.0 as an enabler of proximity for construction supply chains: A systematic literature review
Patrick Dallasega; Erwin Rauch; Christian Linder;
Abstract: The fourth industrial revolution (Industry 4.0) is changing not only the manufacturing industry but also the construction industry and its connected supply chains. Construction supply chains (CSCs) have specific characteristics, such as being temporary organizations that require high coordination efforts to align the processes of supply chain actors. The concept of proximity is used to analyze synchronization between suppliers and the construction site. This article presents a framework explaining which Industry 4.0 concepts increase or reduce proximity. We find that Industry 4.0 technologies mainly influence the technological, organizational, geographical and cognitive proximity dimensions, which presents both benefits and challenges for CSCs. The framework is based on the results of a systematic literature review of scientific papers and an analysis of applicability through practical publications and examples from industrial case studies.
Iterative individual plant clustering in maize with assembled 2D LiDAR data
David Reiser; Manuel Vázquez-Arellano; Dimitris S. Paraforos; Miguel Garrido-Izard; Hans W. Griepentrog;
Abstract: A two-dimensional (2D) laser scanner was mounted at the front of a small four-wheel autonomous robot with differential steering, at an angle of 30° pointing downwards. The machine was able to drive between maize rows and collect concurrent time-stamped data. A robotic total station tracked the position of a prism mounted on the vehicle. The total station and laser scanner data were fused to generate a three-dimensional (3D) point cloud. This 3D representation was used to detect individual plant positions, which are of particular interest for applications such as phenotyping, individual plant treatment and precision weeding. Two different methodologies were applied to the 3D point cloud to estimate the positions of the individual plants. The first applied Euclidean clustering to the entire point cloud. The second used the position of an initial plant and the fixed plant spacing to search iteratively for the best clusters. The two algorithms were applied at three different plant growth stages. The first method achieved a detection rate of up to 73.7% with a root mean square error of 3.6 cm. The second method detected all plants (100% detection rate) with an accuracy of 2.7–3.0 cm, taking the plant spacing of 13 cm into account.
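The second, spacing-aware methodology can be sketched as a 1-D search along the row: step from a known first plant by the nominal spacing and snap each expected position to the nearest detected stem candidate. The tolerance and coordinates below are illustrative; in the actual system the candidates would come from clusters in the 3D point cloud, not scalar positions.

```python
def iterative_plant_positions(stem_points, first_plant, spacing, tol):
    """Step along the row from a known first plant by the nominal plant
    spacing, snapping each expected position to the nearest detected
    stem point within a tolerance; if none is found (occluded or missing
    plant), keep the nominal position and carry on."""
    positions = [first_plant]
    expected = first_plant + spacing
    while expected <= max(stem_points) + tol:
        candidates = [p for p in stem_points if abs(p - expected) <= tol]
        if candidates:
            best = min(candidates, key=lambda p: abs(p - expected))
            positions.append(best)
            expected = best + spacing        # re-anchor on the detection
        else:
            positions.append(expected)       # missing plant: keep nominal
            expected += spacing
    return positions

# Demo: nominal 13 cm spacing; the plant expected near 39.5 cm is missing
positions = iterative_plant_positions([12.5, 26.5, 52.2],
                                      first_plant=0.0, spacing=13.0, tol=2.0)
```

Re-anchoring each step on the snapped detection, rather than on the nominal grid, keeps small spacing errors from accumulating along the row.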
Camouflage assessment: Machine and human
Timothy N. Volonakis; Olivia E. Matthews; Eric Liggins; Roland J. Baddeley; Nicholas E. Scott-Samuel; Innes C. Cuthill;
Abstract: A vision model is designed using low-level vision principles so that it can serve as a human observer model for camouflage assessment. In a camouflaged-object assessment task using military patterns in an outdoor environment, human performance at detection and recognition is compared with that of the model. This involved field data acquisition and subsequent image calibration, a human experiment, and the design of the vision model. Human and machine performance at recognition and detection of military patterns in two environments were found to correlate highly. Our model offers an inexpensive, automated, and objective method for the assessment of camouflage where it is impractical, or too expensive, to use human observers to evaluate the conspicuity of a large number of candidate patterns. Furthermore, the method should generalize to the assessment of visual conspicuity in non-military contexts.
Fault tolerance in cloud computing environment: A systematic survey
Moin Hasan; Major Singh Goraya;
Abstract: Fault tolerance is among the most imperative issues in the cloud for delivering reliable services. It is difficult to implement due to the dynamic service infrastructure, complex configurations and various interdependencies existing in the cloud, and extensive research efforts are consistently being made to achieve it. Implementing a fault tolerance policy in the cloud requires not only specific knowledge of the application domain but also a comprehensive analysis of the background and the various prevalent techniques. Some recent surveys try to assimilate the fault tolerance architectures and approaches proposed for the cloud environment but appear limited in some respects. This paper gives a systematic and comprehensive elucidation of different fault types, their causes, and the various fault tolerance approaches used in the cloud. It presents a broad survey of fault tolerance frameworks in the context of their basic approaches, fault applicability, and other key features, together with a comparative analysis of the surveyed frameworks. For the first time, on the basis of an analysis of the fault tolerance frameworks cited in the present paper as well as those included in recently published prime surveys, a quantified view of their applicability is presented. It is observed that checkpoint-restart and replication-oriented fault tolerance techniques are primarily used to target crash faults in the cloud.
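As a concrete illustration of the checkpoint-restart technique the survey highlights, here is a minimal sketch, not tied to any particular cloud framework: the task persists its progress after each step so that a crashed run resumes from the last checkpoint instead of recomputing from scratch.

```python
import os, pickle, tempfile

def run_with_checkpoints(steps, state, ckpt_path):
    """Checkpoint-restart in miniature: persist (next step index, state)
    after every step; on restart, load the checkpoint and resume."""
    start = 0
    if os.path.exists(ckpt_path):
        with open(ckpt_path, "rb") as f:
            start, state = pickle.load(f)        # resume after a crash
    for i in range(start, len(steps)):
        state = steps[i](state)
        with open(ckpt_path, "wb") as f:
            pickle.dump((i + 1, state), f)       # record progress
    return state

# Demo: the first call computes both steps; the second call finds the
# checkpoint and returns without recomputing anything.
path = os.path.join(tempfile.mkdtemp(), "task.ckpt")
steps = [lambda s: s + 1, lambda s: s * 2]
first = run_with_checkpoints(steps, 0, path)
again = run_with_checkpoints(steps, 0, path)
```

Replication, the other dominant approach, would instead run redundant copies of the task and take the first (or majority) result.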
A vision methodology for harvesting robot to detect cutting points on peduncles of double overlapping grape clusters in a vineyard
Lufeng Luo; Yunchao Tang; Qinghua Lu; Xiong Chen; Po Zhang; Xiangjun Zou;
Abstract: Reliable and robust vision algorithms to detect the cutting points on peduncles of overlapping grape clusters in an unstructured vineyard are essential for the efficient use of a harvesting robot. In this study, we designed an approach to detect these cutting points in three main steps. First, the pixel regions representing grape clusters in vineyard images were obtained using a segmentation algorithm based on k-means clustering and an effective color component. Next, the edge images of the grape clusters were extracted, and a geometric model was used to obtain the contour intersection points of double overlapping grape clusters; profile analysis then separated the pixel regions of the two clusters by a line connecting the two intersection points. Finally, the region of interest of the peduncle for each grape cluster was determined based on the geometric information of each pixel region, and a geometric constraint method was used to determine the appropriate cutting point on the peduncle of each cluster. Thirty vineyard images captured from different perspectives were tested to validate the performance of the presented approach in a complex environment. The average recognition accuracy was 88.33%, and the success rate of visual detection of the cutting point on the peduncle of double overlapping grape clusters was 81.66%. This performance indicates that the method could be used by harvesting robots.
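The first step, k-means segmentation on an effective color component, can be illustrated with a minimal two-cluster 1-D k-means. A real image would supply one component value per pixel; the toy list below stands in for that, and the initialization from the extreme values is our own simplification.

```python
def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means on a single color component. Assumes the
    values genuinely contain two groups (otherwise a cluster could end
    up empty)."""
    lo, hi = min(values), max(values)              # initial cluster centres
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        new_lo, new_hi = sum(a) / len(a), sum(b) / len(b)
        if (new_lo, new_hi) == (lo, hi):           # converged
            break
        lo, hi = new_lo, new_hi
    labels = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in values]
    return labels, (lo, hi)

# Toy component values: dark background pixels vs. bright grape pixels
labels, centres = kmeans_1d([10, 12, 11, 200, 210, 205])
```

Thresholding the image at the midpoint between the two converged centres then yields the grape-cluster pixel regions that the later geometric steps operate on.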
Road surface temperature prediction based on gradient extreme learning machine boosting
Bo Liu; Shuo Yan; Huanling You; Yan Dong; Yong Li; Jianlei Lang; Rentao Gu;
Abstract: The expressway is extremely important to transportation, but high road surface temperatures (RST) can cause many traffic accidents. Most hourly RST prediction models are based on numerical methods, whose parameters are difficult to determine, while statistical methods cannot achieve the desired accuracy. To address these problems, this paper proposes a machine learning algorithm that uses gradient boosting to assemble ReLU (rectified linear unit)/softplus Extreme Learning Machines (ELMs). Using historical data from the airport and Badaling expressways collected between November 2012 and September 2014, sigmoid ELM, ReLU ELM, softplus ELM, ReLU gradient-boosted ELM (GBELM) and softplus GBELM were applied to RST forecasting, and the RMSE (root mean squared error), PCC (Pearson correlation coefficient), and accuracy of these methods were analyzed. The experimental results show that ReLU/softplus activations improve the performance of the traditional ELM, and gradient boosting improves it further. We thus obtain a more accurate model that uses GBELM with ReLU/softplus to forecast RST. For the airport expressway, the proposed model achieves an RMSE within 3 °C, an accuracy of 81.8% and a PCC of 0.954; for the Badaling expressway, it achieves an RMSE within 2 °C, an accuracy of 87.4% and a PCC of 0.949.
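The gradient-boosting assembly of weak regressors can be sketched as follows. A depth-1 regression stump stands in for the ELM base learner, which is not reproduced here; each boosting round fits the stump to the current residuals and adds a shrunken copy of it to the ensemble, exactly the squared-loss boosting scheme the paper builds on.

```python
def fit_stump(xs, residuals):
    """Depth-1 regression stump on one feature: a stand-in for the
    ELM base learner."""
    best = None
    for split in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lmean if x <= split else rmean)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def gradient_boost(xs, ys, rounds=50, lr=0.1):
    """Gradient boosting with squared loss: each round fits a weak
    learner to the current residuals and adds a shrunken copy of it."""
    base = sum(ys) / len(ys)
    pred = [base] * len(ys)
    learners = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        h = fit_stump(xs, residuals)
        learners.append(h)
        pred = [p + lr * h(x) for p, x in zip(pred, xs)]
    return lambda x: base + lr * sum(h(x) for h in learners)

# Toy 1-D regression: a step function learned from six samples
model = gradient_boost([0, 1, 2, 3, 4, 5], [1, 1, 1, 5, 5, 5],
                       rounds=100, lr=0.1)
```

The learning rate `lr` shrinks each learner's contribution, which is what lets many weak ELM-like learners combine into an accurate RST forecaster.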