High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown suitable for such super-resolution tasks. However, a large number of high-resolution examples is needed, which may not be available in many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data both in time and space from a limited set of noisy measurements without having any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers' equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capabilities of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
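The core idea in the abstract — fitting sparse, noisy data while penalizing violations of the governing equations — can be sketched as a composite loss. The snippet below is a minimal illustration for Burgers' equation, not the authors' implementation: it evaluates the PDE residual with finite differences on a grid, whereas a PINN obtains the derivatives by automatic differentiation of the network and can therefore be evaluated at any continuous point. All function and variable names here are illustrative.

```python
import numpy as np

def burgers_residual(u, dx, dt, nu):
    """PDE residual u_t + u*u_x - nu*u_xx on interior grid points,
    via central differences (a stand-in for automatic differentiation)."""
    u_t  = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dt)
    u_x  = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    return u_t + u[1:-1, 1:-1] * u_x - nu * u_xx

def pinn_style_loss(u_pred, u_obs, obs_mask, dx, dt, nu, lam=1.0):
    """Composite loss: data misfit at the sparse/noisy sensor locations
    plus the physics residual everywhere; lam weights the physics term."""
    data_mse = np.mean((u_pred[obs_mask] - u_obs[obs_mask]) ** 2)
    phys_mse = np.mean(burgers_residual(u_pred, dx, dt, nu) ** 2)
    return data_mse + lam * phys_mse
```

In a full PINN, `u_pred` would be the network output evaluated on collocation points, and both terms would be minimized jointly by gradient descent.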
ISSN: 1361-6501
Launched in 1923 Measurement Science and Technology was the world's first scientific instrumentation and measurement journal and the first research journal produced by the Institute of Physics. It covers all aspects of the theory, practice and application of measurement, instrumentation and sensing across science and engineering.
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
Simon Laflamme et al 2023 Meas. Sci. Technol. 34 093001
Structural health monitoring (SHM) is the automation of the condition assessment process of an engineered system. When applied to geometrically large components or structures, such as those found in civil and aerospace infrastructure and systems, a critical challenge is in designing the sensing solution that could yield actionable information. This is a difficult task to conduct cost-effectively, because of the large surfaces under consideration and the localized nature of typical defects and damages. There have been significant research efforts in empowering conventional measurement technologies for applications to SHM in order to improve performance of the condition assessment process. Yet, the field implementation of these SHM solutions is still in its infancy, attributable to various economic and technical challenges. The objective of this Roadmap publication is to discuss modern measurement technologies that were developed for SHM purposes, along with their associated challenges and opportunities, and to provide a path to research and development efforts that could yield impactful field applications. The Roadmap is organized into four sections: distributed embedded sensing systems, distributed surface sensing systems, multifunctional materials, and remote sensing. Recognizing that many measurement technologies may overlap between sections, we define distributed sensing solutions as those that involve or imply the utilization of numbers of sensors geometrically organized within (embedded) or over (surface) the monitored component or system. Multi-functional materials are sensing solutions that combine multiple capabilities, for example those also serving structural functions. Remote sensing solutions are contactless, for example cell phones, drones, and satellites; this category also includes remotely controlled robots.
Liisa M Hirvonen and Klaus Suhling 2017 Meas. Sci. Technol. 28 012003
Time-correlated single photon counting (TCSPC) is a widely used, robust and mature technique to measure the photon arrival time in applications such as fluorescence spectroscopy and microscopy, LIDAR and optical tomography. In the past few years there have been significant developments with wide-field TCSPC detectors, which can record the position as well as the arrival time of the photon simultaneously. In this review, we summarise different approaches used in wide-field TCSPC detection, and discuss their merits for different applications, with emphasis on fluorescence lifetime imaging.
Martin Kögler and Bryan Heilala 2020 Meas. Sci. Technol. 32 012002
Time-gated (TG) Raman spectroscopy (RS) has been shown to be an effective technical solution for the major problem whereby sample-induced fluorescence masks the Raman signal during spectral detection. Technical methods of fluorescence rejection have come a long way since the early implementations of large and expensive laboratory equipment, such as the optical Kerr gate. Today, more affordable small sized options are available. These improvements are largely due to advances in the production of spectroscopic and electronic components, leading to the reduction of device complexity and costs. An integral part of TG Raman spectroscopy is the temporally precise synchronization (picosecond range) between the pulsed laser excitation source and the sensitive and fast detector. The detector is able to collect the Raman signal during the short laser pulses, while fluorescence emission, which has a longer delay, is rejected during the detector dead-time. TG Raman is also resistant against ambient light as well as thermal emissions, due to its short measurement duty cycle.
In recent years, the focus in the study of ultra-sensitive and fast detectors has been on gated and intensified charge coupled devices (ICCDs), or on CMOS single-photon avalanche diode (SPAD) arrays, which are also suitable for performing TG RS. SPAD arrays have the advantage of being even more sensitive, with better temporal resolution compared to gated CCDs, and without the requirement for excessive detector cooling. This review aims to provide an overview of TG Raman from early to recent developments, its applications and extensions.
A Sciacchitano 2019 Meas. Sci. Technol. 30 092001
Particle image velocimetry (PIV) has become the chief experimental technique for velocity field measurements in fluid flows. The technique yields quantitative visualizations of the instantaneous flow patterns, which are typically used to support the development of phenomenological models for complex flows or for validation of numerical simulations. However, due to the complex relationship between measurement errors and experimental parameters, the quantification of the PIV uncertainty is far from being a trivial task and has often relied upon subjective considerations. Recognizing the importance of methodologies for the objective and reliable uncertainty quantification (UQ) of experimental data, several PIV-UQ approaches have been proposed in recent years that aim at the determination of objective uncertainty bounds in PIV measurements.
This topical review on PIV uncertainty quantification aims to provide the reader with an overview of error sources in PIV measurements and to inform them of the most up-to-date approaches for PIV uncertainty quantification and propagation. The paper first introduces the general definitions and classifications of measurement errors and uncertainties, following the guidelines of the International Organization for Standards (ISO) and of renowned books on the topic. Details on the main PIV error sources are given, considering the entire measurement chain from timing and synchronization of the data acquisition system, to illumination, mechanical properties of the tracer particles, imaging of those, analysis of the particle motion, data validation and reduction. The focus is on planar PIV experiments for the measurement of two- or three-component velocity fields.
Approaches for the quantification of the uncertainty of PIV data are discussed. Those are divided into a-priori UQ approaches, which provide a general figure for the uncertainty of PIV measurements, and a-posteriori UQ approaches, which are data-based and aim at quantifying the uncertainty of specific sets of data. The findings of a-priori PIV-UQ based on theoretical modelling of the measurement chain as well as on numerical or experimental assessments are discussed. The most up-to-date approaches for a-posteriori PIV-UQ are introduced, highlighting their capabilities and limitations.
As many PIV experiments aim at determining flow properties derived from the velocity fields (e.g. vorticity, time-average velocity, Reynolds stresses, pressure), the topic of PIV uncertainty propagation is tackled considering the recent investigations based on Taylor series and Monte Carlo methods. Finally, the uncertainty quantification of 3D velocity measurements by volumetric approaches (tomographic PIV and Lagrangian particle tracking) is discussed.
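The Monte Carlo route to uncertainty propagation mentioned above can be illustrated with a small sketch (function names and the Gaussian noise model are assumptions, not the reviewed methods): each velocity vector is perturbed within its standard uncertainty, the derived quantity (here vorticity) is recomputed, and the spread of the results gives its propagated uncertainty.

```python
import numpy as np

def vorticity(u, v, dx):
    """Planar vorticity dv/dx - du/dy by central differences."""
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dx, axis=0)
    return dvdx - dudy

def mc_vorticity_uncertainty(u, v, sigma_u, dx, n_samples=500, seed=0):
    """Monte Carlo propagation: perturb each velocity vector with its
    standard uncertainty and take the std of the derived vorticity."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        du = rng.normal(0.0, sigma_u, u.shape)
        dv = rng.normal(0.0, sigma_u, v.shape)
        samples.append(vorticity(u + du, v + dv, dx))
    return np.std(samples, axis=0)
```

The Taylor-series alternative mentioned in the abstract would instead linearize the derivative operator and combine the per-vector uncertainties analytically.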
Guanglin Chen et al 2024 Meas. Sci. Technol. 35 086202
Multi-rotor unmanned aerial vehicles (UAVs) are extensively utilized across various domains, and the motor constitutes a pivotal element in the UAV power system. The majority of UAV failures and crashes stem from motor malfunctions, underscoring the imperative need for comprehensive research on fault diagnosis in UAV motors to ensure the stable and reliable execution of flight tasks. This study focuses on quadrotor UAVs as the research subject and devises targeted fault simulation experiments based on the structural features and operational characteristics of the DC brushless motor used in quadrotor UAVs, specifically examining the stator, rotor, and bearings. To address challenges related to the UAV's own loads, limited space for redundant parts, and the high cost and difficulty associated with installing sensors for traditional fault diagnostic signals such as vibration and temperature, this study opts to use current signals as a substitute. This approach resolves the issue of challenging data collection for UAVs and underpins a current-signal-based fault diagnosis method for UAV motors. Lastly, because the UAV's flight stability is highly sensitive to the health status of its components, fault data are scarce; with so few training samples, traditional machine learning and deep learning methods struggle to identify representative features, risking overfitting and reduced diagnostic accuracy. To overcome this challenge, we propose a hybrid neural network fault diagnosis model that incorporates a width learning system and a convolutional neural network (CNN). The width learning system eliminates temporal characteristics from the original current signal, capturing more comprehensive and representative sample features in the width feature space. Subsequently, the CNN is employed for feature extraction and classification tasks.
In empirical small sample fault diagnosis experiments using current signal data for UAV motors, our proposed model outperforms other models used for comparison.
Adam Thompson et al 2021 Meas. Sci. Technol. 32 105013
Maximum permissible errors (MPEs) are an important measurement system specification and form the basis of periodic verification of a measurement system's performance. However, there is no standard methodology for determining MPEs, so when they are not provided, or not suitable for the measurement procedure performed, it is unclear how to generate an appropriate value with which to verify the system. Whilst a simple approach might be to take many measurements of a calibrated artefact and then use the maximum observed error as the MPE, this method requires a large number of repeat measurements for high confidence in the calculated MPE. Here, we present a statistical method of MPE determination, capable of providing MPEs with high confidence and minimum data collection. The method is presented with 1000 synthetic experiments and is shown to determine an overestimated MPE within 10% of an analytically true value in 99.2% of experiments, while underestimating the MPE with respect to the analytically true value in 0.8% of experiments (overestimating the value, on average, by 1.24%). The method is then applied to a real test case (probing form error for a commercial fringe projection system), where the efficiently determined MPE is overestimated by 0.3% with respect to an MPE determined using an arbitrarily chosen large number of measurements.
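The contrast drawn above — the maximum observed error versus a statistical bound — can be sketched as follows. This is a hedged illustration, not the paper's method: it assumes a roughly normal error population and uses a simple k-sigma coverage bound in place of the paper's statistical procedure.

```python
import statistics

def mpe_naive(errors):
    """Naive MPE: the largest error magnitude observed so far. Needs
    many repeat measurements before it approaches the true maximum."""
    return max(abs(e) for e in errors)

def mpe_statistical(errors, k=3.0):
    """Statistical MPE sketch: model the error population from a small
    sample (here, normal with the sample mean and std) and take a
    k-sigma bound, trading a small, quantifiable overestimate for far
    fewer repeat measurements."""
    mu = statistics.fmean(errors)
    sd = statistics.stdev(errors)
    return abs(mu) + k * sd
```

With only a handful of repeats, the statistical bound typically exceeds the naive maximum, which is the desired behaviour: a slight overestimate is safer for verification than an underestimate.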
Louise Wright and Stuart Davidson 2024 Meas. Sci. Technol. 35 051001
Digital twinning is a rapidly growing area of research. Digital twins combine models and data to provide up-to-date information about the state of a system. They support reliable decision-making in fields such as structural monitoring and advanced manufacturing. The use of metrology data to update models in this way offers benefits in many areas, including metrology itself. The recent activities in digitalisation of metrology offer a great opportunity to make metrology data 'twin-friendly' and to incorporate digital twins into metrological processes. This paper discusses key features of digital twins that will inform their use in metrology and measurement, highlights the links between digital twins and virtual metrology, outlines what use metrology can make of digital twins and how metrology and measured data can support the use of digital twins, and suggests potential future developments that will maximise the benefits achieved.
W Hortschitz et al 2024 Meas. Sci. Technol. 35 052001
Due to the necessary transition to renewable energy, the transport of electricity over long distances will become increasingly important, since the sites of sustainable electricity generation, such as wind or solar power parks, and the place of consumption can be very far apart. Currently, electricity is mainly transported via overhead AC lines. However, studies have shown that for long distances, transport via DC offers decisive advantages. To make optimal use of the existing route infrastructure, simultaneous AC and DC, or hybrid transmission, should be employed. The resulting electric field strengths must not exceed legally prescribed thresholds to avoid potentially harmful effects on humans and the environment. However, accurate quantification of the resulting electric fields is a major challenge in this context, as they can be easily distorted (e.g. by the measurement equipment itself). Nonetheless, knowledge of the undisturbed field strengths from DC up to several multiples of the fundamental frequency of the power-grid (up to 1 kHz) is required to ensure compliance with the thresholds. Both AC and DC electric fields can result in the generation of corona ions in the vicinity of the line. In the case of pure AC fields, the corona ions generated typically recombine in the immediate vicinity of the line and, therefore, have no influence on the field measurement further away. Unfortunately, this assumption does not hold for DC fields and hybrid fields, where corona ions can be transported far away from the line (e.g. by wind), and potentially interact with the measurement equipment yielding incorrect measurement results. This review will provide a comprehensive overview of the current state-of-the-art technologies and methods which have been developed to address the problems of measuring the electric field near hybrid power lines.
Jaqueline Stauffenberg et al 2024 Meas. Sci. Technol. 35 085011
This paper explores large area application of tip-based nanofabrication by field emission scanning probe lithography and showcases the simultaneous possibility of atomic force microscopy on macroscopic scales. This is made possible by the combination of tip-based technology and a planar nanopositioning and nanomeasuring machine. Using long range atomic force microscopy measurement of regular grating structures, the performance of the machine is thoroughly characterized over the full 100 mm range of motion of the positioning machine, which was confirmed by repeated measurements. After initially focussing on achieving the minimum line width of 40 nm in microscopic areas, a grating with a pitch of 1 μm is additionally fabricated over a total length of 10 mm, whereby the dimensions and deviations are also considered.
Jingjing Zhang et al 2024 Meas. Sci. Technol. 35 086013
An effective binocular stereo distance measurement method is proposed to address challenges posed by low brightness and weak texture of images captured in underground coal mines for the machine vision method. This approach is based on illumination map estimation and the MobileNetV3 attention hourglass stereo matching network (MAHNet) model. First, a binocular stereo vision system is established in which infrared LEDs are uniformly distributed on both sides of the belt conveyor bracket as visual feature points. Second, images are preprocessed using illumination map estimation, and the optimization of inhomogeneous brightness image enhancement is achieved by adopting adaptive Gamma correction. Third, the YOLOv5 target detection network and Gaussian fitting fusion algorithm are utilized to detect infrared LED feature points. Fourth, the MAHNet model is employed to generate the cost volume and perform disparity regression, resulting in the acquisition of accurate disparity images. Finally, triangulation is applied to determine the depth of feature points. The experimental results of distance measurement demonstrate that an average relative ranging accuracy of 1.52% within the range of 50.0 cm to 250.0 cm can be achieved by the optimized method, thereby validating the effectiveness of this binocular distance measurement method in underground coal mines.
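For a rectified stereo pair, the final triangulation step described above reduces to the standard relation Z = f·B/d. A minimal sketch (the parameter values in the test are illustrative, not the paper's calibration):

```python
def stereo_depth(disparity_px, focal_px, baseline_mm):
    """Triangulated depth of a matched feature point: Z = f * B / d,
    with focal length f in pixels, baseline B in mm and disparity d
    in pixels; Z is returned in mm."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px
```

The inverse relationship is why disparity accuracy dominates ranging accuracy at long range: a fixed disparity error produces a depth error that grows quadratically with distance.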
Sven Schulze et al 2024 Meas. Sci. Technol. 35 085020
The 2019 redefinition of the kilogram not only changes the way mass is defined but also broadens the horizon for the direct realization of other standards. The true becquerel project at the National Institute of Standards and Technology is creating a new paradigm for the realization and dissemination of radionuclide activity. Standard reference materials for radioactivity are supplied as aqueous solutions of specific radionuclides, characterized by massic activity in units of becquerel per gram of solution, Bq/g. The new method requires measuring the mass of a few milligrams of dispensed radionuclide solution. An electrostatic force balance is used, owing to its suitability for the milligram mass range. The goal is to measure the mass of 1 mg–5 mg of dispensed fluid with a relative uncertainty of less than 0.05%. A description of the balance operation is presented. Results of preliminary measurements with a reference mass indicate relative standard deviations of less than 0.5% over tens of tests and differ by 0.54% or less from an independent measurement of the reference mass.
Berkay Bahadur et al 2024 Meas. Sci. Technol. 35 086317
This study presents the capability of the single-frequency (SF) variometric approach (VA) technique with low-cost GNSS observations to detect short-term dynamic behaviors. Harmonic oscillations with amplitudes between 5 and 20 mm and frequencies between 0.3 and 5.0 Hz were generated using a single-axis shake table to investigate the performance of the SF-VA technique in a structural health monitoring (SHM) system. In addition, a simulation of the 1995 Mw 6.9 Kobe earthquake was generated using the shake table to analyze SF-VA performance for an earthquake early warning (EEW) system. A low-cost u-blox ZED-F9P GNSS receiver and ANN-MB-00 patch antenna were used to collect GNSS observations at a 20 Hz sampling rate during the experiments. The observations were processed using the MATLAB-based open-source PPPH-VA software in real-time (RT) mode, considering eight different satellite combinations. The capability of the SF-VA technique to detect horizontal dynamic behaviors in RT mode was investigated in the frequency and time domains, accepting the displacements from the linear variable differential transformer sensor as a reference. The results in the frequency domain demonstrate that the SF-VA technique with low-cost GNSS observations can successfully detect the peak frequency value of short-term harmonic oscillations up to 5 Hz. Moreover, time domain findings emphasize that the short-time dynamic oscillations can be determined with the SF-VA technique with an accuracy ranging from 0.8 to 6.4 mm. Earthquake simulation experiment results demonstrate that the strong ground motions caused by mega earthquakes can be determined at mm-level by the SF-VA method. The results of both experiments show that multi-GNSS observations contribute to the SF-VA technique considerably.
Overall, the findings reveal that the SHM and EEW systems can be operated with low-cost GNSS receivers, and the natural frequency of the man-made structures and accurate displacement values of seismic waveforms can be determined in RT with the SF-VA technique.
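The frequency-domain comparison described above amounts to locating the dominant peak of the displacement amplitude spectrum. A minimal sketch (assuming a uniformly sampled record; this is not the PPPH-VA implementation):

```python
import numpy as np

def peak_frequency(displacement, fs):
    """Dominant oscillation frequency (Hz) of a displacement record via
    the FFT amplitude spectrum, with the mean removed and the DC bin
    excluded — the quantity compared against the reference sensor."""
    spec = np.abs(np.fft.rfft(displacement - np.mean(displacement)))
    freqs = np.fft.rfftfreq(len(displacement), d=1.0 / fs)
    return freqs[1:][np.argmax(spec[1:])]
```

At a 20 Hz sampling rate the Nyquist limit is 10 Hz, so the 0.3–5.0 Hz oscillations in the experiments are comfortably resolvable.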
Fang-Jun Qin et al 2024 Meas. Sci. Technol. 35 086316
Vertical information (vertical velocity and altitude) is important in three-dimensional navigation. In order to improve the vertical accuracy of strapdown inertial navigation system/global navigation satellite system integrated navigation (SINS/GNSS) while reducing costs, firstly, the propagation laws of vertical error sources in two types of SINS/GNSS (vertical velocity set to 0 and not set to 0) are systematically analyzed. Furthermore, a vertical accuracy improvement method considering the gravitational anomaly is proposed. In this method, the gravitational anomaly is considered as one of the vertical error sources. Then, the processing method of error sources in integrated navigation is referenced, and two processing modes of gravitational anomaly are designed. The first method is to represent the gravitational anomaly as one of the system states of SINS/GNSS. The number of dimensions for the system is expanded from 15 to 16. The corresponding mathematical model is derived to 'absorb' the vertical errors caused by the gravitational anomaly. The second is to represent the gravitational anomaly as one of the vertical system noises. Thereby, the Kalman filter is adjusted in real time using the above adaptive method to improve the accuracy of the estimated state. The corresponding errors are then suppressed. Field experiments show that both modes of the proposed method can effectively improve the vertical accuracy.
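The first processing mode, appending the gravitational anomaly to the state vector so the filter can 'absorb' its effect, can be illustrated with a toy vertical-channel Kalman filter. This 3-state sketch (altitude, vertical velocity, anomaly) stands in for the paper's 15-to-16-state augmentation; all names and noise values are assumptions.

```python
import numpy as np

def make_vertical_kf(dt):
    """State [altitude, vertical velocity, gravity anomaly]: the anomaly
    is modelled as a random constant feeding the velocity channel, so
    the filter can estimate and cancel the vertical error it causes."""
    F = np.array([[1.0, dt, 0.0],
                  [0.0, 1.0, dt],   # anomaly acts as an acceleration
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])  # GNSS observes altitude only
    return F, H

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

The second mode in the paper instead leaves the state dimension at 15 and inflates the vertical process noise adaptively; in this sketch that would correspond to enlarging `Q` rather than adding the third state.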
Zhongxi Yin et al 2024 Meas. Sci. Technol. 35 086132
Under high noise conditions and random impacts, which constitute strong interference, models often exhibit limited capability in capturing long-term dependencies, leading to lower accuracy in predicting the remaining useful life (RUL) of bearings. To address this issue, a spatiotemporal fusion network capable of ultra-long-term feature analysis is proposed to enhance the accuracy of bearing RUL prediction under substantial interference. This network utilizes a dilated convolution-based lightweight vision transformer encoder to extract spatial features reflecting the short-term degradation state of the bearing. Then, these features are sequentially fed into an adaptive tiered memory unit, based on the multiple attention mechanism and the neuron layering mechanism, to analyze temporal features indicative of long-term degradation. Subsequently, short-term spatial and long-term temporal features are fused for RUL prediction. To validate the robustness and predictive accuracy of the proposed approach under strong interference, a gearbox-rolling bearing accelerated platform is constructed, simulating high noise and random impact conditions. Experiments confirm the high robustness and predictive accuracy of the proposed method under strong interference conditions.
Hanlin Guan et al 2024 Meas. Sci. Technol. 35 082001
Hydraulic component faults are characterized by nonlinear time-varying signals, strong concealment, and difficult feature extraction. Timely and accurate fault diagnosis of hydraulic components helps curb economic losses and accidents, so extensive research has been carried out on them. Information fusion technology can combine multi-source data from multiple dimensions to mine fault-data features, which effectively improves the accuracy and reliability of fault diagnosis results. However, a comprehensive and systematic review in this domain has been lacking. Therefore, in this paper, information-fusion fault diagnosis technologies for hydraulic components are summarized and analyzed, encompassing the main process of information-fusion fault diagnosis and the research status of information-fusion fault diagnosis of hydraulic systems. The methods and techniques involved in the fusion process, along with the data sources and fusion methods used in information-fusion fault diagnosis of hydraulic components, are elaborated and summarized. The problems of information fusion in fault diagnosis of hydraulic components are analyzed, solutions are discussed, and research ideas for improving information-fusion fault diagnosis are put forward. Finally, digital twin (DT) technology is introduced, and the advantages and research status of DT-based intelligent fault diagnosis are summarized. On this basis, information-fusion-based intelligent fault diagnosis of hydraulic components is summarized, and the challenges and future research directions of applying information fusion and DT to intelligent fault diagnosis of hydraulic components are comprehensively analyzed.
Xin Li et al 2024 Meas. Sci. Technol. 35 072002
The health condition of rolling bearings has a direct impact on the safe operation of rotating machinery, yet their harsh working environments and complex operating conditions pose challenges for fault diagnosis. With the development of computer technology, deep learning has been applied in the field of fault diagnosis and has developed rapidly. Among deep-learning approaches, the convolutional neural network (CNN) has received great attention from researchers due to its powerful data-mining ability and adaptive feature-learning ability. Based on recent research hotspots, the development history and trends of CNNs are summarized and analyzed. Firstly, the basic structure of the CNN is introduced, the important recent progress of classical CNN models for rolling-bearing fault diagnosis is reviewed, and the problems with classical CNN algorithms are pointed out. Secondly, to solve these problems and drawing on recent research achievements, various methods and principles for optimizing CNNs are introduced and compared from the perspectives of deep feature extraction, hyperparameter optimization, and network structure optimization. Although significant progress has been made in CNN-based fault diagnosis of rolling bearings, there is still room for improvement in addressing issues such as low accuracy on imbalanced data, weak model generalization, and poor network interpretability. Finally, future development trends of CNNs are discussed: transfer learning models are introduced to improve the generalization ability of CNNs, and interpretable CNNs are used to increase network interpretability.
Victor H R Cardoso et al 2024 Meas. Sci. Technol. 35 072001
This work addresses the historical development of techniques and methodologies for measuring the internal diameter of transparent tubes, from the original contributions of Anderson and Barr published in 1923 in the first issue of Measurement Science and Technology to the present. Progress in this field is summarized, highlighting the emergence and significance of measurement approaches based on optical fibers.
Weiqing Liao et al 2024 Meas. Sci. Technol. 35 062002
Mechanical fault diagnosis is crucial for ensuring the normal operation of mechanical equipment. With the rapid development of deep learning technology, big-data-driven methods provide a new perspective for the fault diagnosis of machinery. However, mechanical equipment operates in the normal condition most of the time, so the collected data are imbalanced, which affects the performance of mechanical fault diagnosis. As a new approach for generating data, the generative adversarial network (GAN) can effectively address the issues of limited and imbalanced data in practical engineering applications. This paper provides a comprehensive review of GANs for mechanical fault diagnosis. Firstly, the development of GAN-based mechanical fault diagnosis, the basic theory of the GAN, and various GAN variants (GANs) are briefly introduced. Subsequently, GANs are summarized and categorized from the perspective of labels and models, and the corresponding applications are outlined. Lastly, the limitations of current research, future challenges and trends, and the selection of a GAN for practical applications are discussed.
Jianghong Zhou et al 2024 Meas. Sci. Technol. 35 062001
Predictive maintenance (PdM) is currently the most cost-effective maintenance method for industrial equipment, offering improved safety and availability of mechanical assets. A crucial component of PdM is the remaining useful life (RUL) prediction for machines, which has garnered increasing attention. With the rapid advancements in industrial internet of things and artificial intelligence technologies, RUL prediction methods, particularly those based on pattern recognition (PR) technology, have made significant progress. However, a comprehensive review that systematically analyzes and summarizes these state-of-the-art PR-based prognostic methods is currently lacking. To address this gap, this paper presents a comprehensive review of PR-based RUL prediction methods. Firstly, it summarizes commonly used evaluation indicators based on accuracy metrics, prediction confidence metrics, and prediction stability metrics. Secondly, it provides a comprehensive analysis of typical machine learning methods and deep learning networks employed in RUL prediction. Furthermore, it delves into cutting-edge techniques, including advanced network models and frontier learning theories in RUL prediction. Finally, the paper concludes by discussing the current main challenges and prospects in the field. The intended audience of this article includes practitioners and researchers involved in machinery PdM, aiming to provide them with essential foundational knowledge and a technical overview of the subject matter.
Yang et al
In vision-measurement applications, black light-absorbing objects reflect little of the structured light emitted by the infrared projector of an RGB-D camera. Therefore, an image recognition algorithm based on reference environment information is proposed to acquire the spatial positioning information of black volutes in a depalletizing system. The hardware of the depalletizing system mainly consists of an upper computer, a six-axis industrial robot, an RGB-D camera and an end adsorption device. Firstly, the horizontal position of each volute placed on the cardboard is obtained from the depth differences between the cardboard and the volute. The depth of the volute is then obtained from the depth of the upper cardboard by recording the position of the end vacuum suction cup when it is triggered by the feedback signal from the vacuum generator. Secondly, a regional planar hand-eye calibration method is developed to improve calibration accuracy in two-dimensional coordinates. The regional calibration method divides the robot working area into four regions: upper left, lower left, upper right, and lower right; the transformation matrix of each region is calculated separately. Finally, depalletizing experiments are conducted on three types of volutes. The average positioning error of the grasping center point of each volute obtained by our method is 3.795 mm, with a standard deviation of 1.769 mm. The average regional planar hand-eye calibration error is 4.044 mm, with a standard deviation of 1.501 mm. For a stack of materials with dimensions of 1350 mm × 1350 mm × 1500 mm, the maximum error is kept within 15 mm. Additionally, when combined with the end feedback compensation mechanism, the success rate for grasping all three volutes reaches 100%.
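The regional calibration idea — one planar transform per quadrant of the workspace — can be sketched as follows. An affine model and a least-squares fit are assumed here for illustration; the paper's transformation model may differ.

```python
import numpy as np

def fit_affine_2d(cam_pts, robot_pts):
    """Least-squares 2D affine map camera -> robot: [x y 1] @ M = [X Y]."""
    A = np.hstack([cam_pts, np.ones((len(cam_pts), 1))])
    M, *_ = np.linalg.lstsq(A, robot_pts, rcond=None)
    return M

def regional_calibration(cam_pts, robot_pts, centre):
    """Fit one affine transform per quadrant (left/right x lower/upper
    of `centre`), mirroring the four-region planar calibration."""
    qx = cam_pts[:, 0] >= centre[0]
    qy = cam_pts[:, 1] >= centre[1]
    quads = {}
    for i, mask in enumerate([~qx & ~qy, ~qx & qy, qx & ~qy, qx & qy]):
        if mask.sum() >= 3:  # need at least 3 points for an affine fit
            quads[i] = fit_affine_2d(cam_pts[mask], robot_pts[mask])
    return quads
```

Fitting each quadrant separately lets the calibration absorb locally varying distortion that a single global transform would average away.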
Zhang et al
The dynamic balancing of a flexible rotor conventionally requires frequent starts and stops to add trial weights, which is time-consuming and labor-intensive, and the balancing accuracy is difficult to guarantee. To address this, a dynamic balance optimization method for flexible rotors based on the Grey Wolf Optimizer (GWO) is proposed. A virtual prototype model is established based on the power turbine rotor of a turboshaft engine, and a rotor test platform is built. Transfer functions are used to relate the unbalance to the vibration response, and balance equations are established. Because these equations are generally overdetermined and mutually contradictory, the GWO is used to solve them and obtain the optimal counterweight scheme over the full working speed range of the rotor. The results show that the proposed method eliminates the tedious trial-weight process of traditional dynamic balancing and achieves better vibration reduction than conventional on-site balancing. This work can improve the efficiency and accuracy of flexible-rotor dynamic balancing and provides a technical reference for the vibration control of aero-engines.
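To illustrate the optimization step: when there are more measured vibration responses than correction planes, the balance equations A·w ≈ −v are overdetermined ("contradictory") and can be solved in a least-squares sense by minimizing the residual norm with a GWO. A minimal sketch with illustrative influence coefficients and vibration readings (not the paper's rotor data):

```python
import numpy as np

def gwo_minimize(f, dim, n_wolves=20, n_iter=200, lb=-1.0, ub=1.0, seed=0):
    """Minimal Grey Wolf Optimizer: minimizes f over the box [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    best_x, best_f = None, np.inf
    for it in range(n_iter):
        fitness = np.array([f(x) for x in X])
        order = np.argsort(fitness)
        if fitness[order[0]] < best_f:          # track the best wolf ever seen
            best_f, best_x = fitness[order[0]], X[order[0]].copy()
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1 - it / n_iter)             # exploration factor, 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta): # pull toward the three leaders
                A_vec = a * (2 * rng.random(dim) - 1)
                C_vec = 2 * rng.random(dim)
                new += leader - A_vec * np.abs(C_vec * leader - X[i])
            X[i] = np.clip(new / 3.0, lb, ub)
    return best_x

# Overdetermined balance equations A w ~= -v: three measured responses,
# two correction planes, so no exact solution exists (values illustrative).
A = np.array([[1.0, 0.5], [0.3, 1.0], [0.8, 0.8]])  # influence coefficients
v = np.array([0.9, 0.7, 1.2])                        # measured vibration
w = gwo_minimize(lambda x: np.linalg.norm(A @ x + v), dim=2, lb=-2.0, ub=2.0)
```

The returned counterweights reduce the predicted residual vibration well below the uncorrected level.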
Li et al
In modern industrial systems, bearing failures account for 30–40% of machinery faults. Traditional convolutional neural networks suffer from gradient vanishing and overfitting, resulting in poor diagnostic accuracy. To address these issues, a new bearing fault diagnosis approach was proposed based on an improved AlexNet combined with transfer learning. After decomposition and noise reduction, the reconstructed vibration signals were transformed into 2D images and input into the improved AlexNet for training and subsequent transfer learning. Program auto-tuning and image-enhancement techniques were employed to increase the diagnostic accuracy in this study. The approach was verified with datasets from Case Western Reserve University (CWRU), Jiangnan University (JNU), and the Society for Machinery Failure Prevention Technology (MFPT). The results showed that the diagnostic accuracies under normal learning were above 97% for the CWRU and JNU datasets and 100% for the MFPT dataset; after transfer learning, all accuracies exceeded 99.5%. The proposed approach was thus demonstrated to diagnose bearing faults effectively.
Wang et al
Cross-domain fault diagnosis is crucial for industrial applications with varied and unknown operating conditions. However, significant differences in feature distributions across multiple source domains can cause mutual interference between domain features and reduce diagnostic accuracy, a problem not considered by most current research. In addition, most existing methods focus only on extracting low-frequency global information and cannot adequately handle high-frequency local information. Consequently, this paper proposes a dual-weight attention-based multi-source multi-stage aligned domain adaptation (DAMMADA) method with integrated multi-stage processing. Global fault features shared by the various subdomains are extracted by three domain-specific feature extractors. In the local feature extractor, a dual-weight attention module not only uses shared weights to aggregate local information but also uses contextual weights to enhance local features. For loss handling, after the extraction of high- and low-frequency information is improved, multiple pseudo-labels are used to reduce the local maximum mean discrepancy (LMMD) loss in order to learn domain-invariant characteristics, and the mean square errors (MSEs) of the pseudo-labels are combined to refine the classification boundaries. Comprehensive experiments on two platforms, for fault diagnosis of SCARA robots and of bearings, demonstrate that DAMMADA outperforms other methods in accuracy and in suppressing negative transfer in cross-domain tasks.
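For context, the LMMD used here is a class-conditional refinement of the maximum mean discrepancy (MMD), reweighting the discrepancy by pseudo-label class probabilities. A minimal sketch of the plain MMD term it builds on (biased estimator; the Gaussian kernel width is an illustrative choice):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix between the rows of x and the rows of y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(xs, xt, sigma=1.0):
    """Squared maximum mean discrepancy between source samples xs and
    target samples xt (biased V-statistic estimator)."""
    return (gaussian_kernel(xs, xs, sigma).mean()
            - 2 * gaussian_kernel(xs, xt, sigma).mean()
            + gaussian_kernel(xt, xt, sigma).mean())
```

Identical distributions give an MMD near zero; a domain shift drives it up, which is what a domain-adaptation loss penalizes.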
Jiang et al
Big-data-driven intelligent diagnosis of rotating machinery has gained widespread application. In practice, however, fault data are limited and inconsistencies in fault categories among different domains are widespread, which makes developing robust intelligent diagnostic models challenging. To this end, this paper develops an enhanced meta-learning network with a sensitivity penalization mechanism (EMLN-SP) for few-shot fault diagnosis under severe domain bias. First, lightweight channel attention is introduced to establish an enhanced feature encoder within a meta-learning framework, which elevates key feature expression and facilitates the extraction of generalized diagnostic knowledge from limited samples. Second, a boundary-enhanced loss is designed, which boosts the focus on decision-boundary information to keep the model from overfitting in the few-shot setting. Finally, a sensitivity penalty mechanism is constructed to adjust the optimization direction, preventing the model from falling into a local optimum and boosting the generalization of its performance. The effectiveness of EMLN-SP is validated on three cross-domain diagnostic cases with diverse domain offsets.
Sven Schulze et al 2024 Meas. Sci. Technol. 35 085020
The 2019 redefinition of the kilogram not only changes the way mass is defined but also broadens the horizon for the direct realization of other standards. The True Becquerel project at the National Institute of Standards and Technology is creating a new paradigm for the realization and dissemination of radionuclide activity. Standard reference materials for radioactivity are supplied as aqueous solutions of specific radionuclides, characterized by massic activity in units of becquerel per gram of solution (Bq/g). The new method requires measuring the mass of a few milligrams of dispensed radionuclide solution. An electrostatic force balance is used because of its suitability for the milligram mass range. The goal is to measure the mass of 1 mg–5 mg of dispensed fluid with a relative uncertainty of less than 0.05%. A description of the balance operation is presented. Preliminary measurements with a reference mass show relative standard deviations of less than 0.5% over tens of tests and differ by 0.54% or less from an independent measurement of the reference mass.
Jing Yang et al 2024 Meas. Sci. Technol. 35 086130
Owing to the merits of automatic feature extraction and deep structure, intelligent fault diagnosis based on deep neural networks has attracted great attention. However, while monitoring data for the non-fault state of actual industrial machinery are abundant, fault-state data are scarce and weak, and achieving multiple mixed-fault diagnoses from such skewed data distributions is extremely difficult. A diagnosis method for multiple mixed faults based on feature reconstruction and a sparse auto-encoder (AE) is proposed in this study to bridge these gaps. The feature reconstruction algorithm is designed to address two issues: (1) the expensive computation resulting from the long sequential features of vibration monitoring data, and (2) the extraction problem caused by the submersion of scarce fault features. Furthermore, an adaptive loss function is formulated and a deep AE network is constructed to identify the health status and determine the fault level. Diagnoses of artificial and real faults verify the availability and superiority of the proposed scheme and demonstrate the adaptability and robustness of its hyperparameters.
Ahmad Satya Wicaksana et al 2024 Meas. Sci. Technol.
The ALICE experiment is one of the four experiments at the Large Hadron Collider (LHC) designed to investigate the state of matter under the very high energy densities produced in heavy-ion collisions. The ALICE Inner Tracking System (ITS) consists of seven concentric cylindrical layers of monolithic silicon pixel sensors known as ALICE Pixel Detectors (ALPIDE), used to reconstruct the paths of charged particles generated in the collisions. The sensor alignment of the detector must be adjusted to a high-precision standard so that the detector can undertake high-resolution measurements. This paper introduces a method for measuring the reference markers used to determine the sensor alignment. Markers engraved at the chip corners are detected using the Hough transform, Canny edge detection, and template matching, and the distances between pairs of markers are measured to determine the accuracy of the pixel-sensor alignment before and after assembly. The proposed methods exhibit an accuracy exceeding 99% and demonstrate high-speed analysis: the average processing times for detecting the circle and cross markers are 105.9 ms/image and 113.8 ms/image, respectively. Recent studies have shown deviations of up to 5 μm above the desired value in the measured sensor positions. Such deviations are not a major issue; nevertheless, it is important to measure them in order to speed up, and improve the accuracy of, the recursive track-based alignment procedure used to reconstruct the position of each pixel sensor in the tracking detector. The proposed method offers a promising solution for delivering precise and rapid measurements of a large number of examined objects.
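The template-matching step of such a marker-detection pipeline can be sketched without any imaging library as a normalized cross-correlation search; the synthetic cross marker below is illustrative, not the actual ALPIDE marker geometry:

```python
import numpy as np

def match_template(image, template):
    """Normalized cross-correlation template matching; returns the
    (row, col) of the best-matching top-left corner in `image`."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic test image with a 5x5 cross marker embedded at (5, 7).
img = np.zeros((20, 20))
templ = np.zeros((5, 5))
templ[2, :] = 1.0
templ[:, 2] = 1.0
img[5:10, 7:12] = templ
```

Once two markers are located, the pixel distance between them (times the pixel pitch) gives the alignment measurement. Production code would use an optimized library routine rather than this explicit loop.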
Robin Erik Aschan et al 2024 Meas. Sci. Technol.
We delve into theoretical and experimental considerations for determining the spectral bidirectional transmittance distribution function (BTDF) of thick samples across a broad viewing zenith angle range. Nominally, BTDF is defined as the ratio of transmitted radiance to incident irradiance measured from the same plane. However, when employing thick samples for BTDF measurements, the viewing plane of the transmitted beam may shift from the front to the rear surface of the sample, altering the measurement geometry compared to using the sample front surface as the reference plane. Consequently, the viewing zenith angle from the sample rear surface increases relative to the sample front surface, and the sample-to-detector-aperture distance decreases by an amount corresponding to the sample thickness. We introduce a method for determining the BTDF of thick samples, considering the transformation of practical measurement results to a scenario where the measurements are conducted at a very large distance from the sample. To validate the method, we utilize a BTDF facility equipped with two instruments that significantly differ in their sample-to-detector-aperture distances. We evaluate the impact of a 2 mm sample thickness on the BTDF by assessing the ratio of transmitted and incident radiant fluxes as a function of viewing zenith angle relative to the sample rear surface. The evaluation is conducted in the wavelength range from 550 nm to 1450 nm in 300 nm steps, and in the viewing zenith angle range from -70° to 70° in 5° steps. Measurements are performed in-plane at an incident zenith angle of 0°. It is concluded that consistent determination of the BTDF of a thick sample is possible by converting the experimental parameters of the real measurements at relatively short distances from the sample to correspond to those that would be obtained from measurements at very large distances from the sample.
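The geometric shift described above can be sketched as follows, assuming the detector aperture position is specified by a distance and zenith angle relative to the front surface and the rear surface lies one sample thickness closer to the detector along the normal (the coordinate and sign conventions here are our assumption, not taken from the paper):

```python
import math

def rear_surface_geometry(r_front, theta_front_deg, thickness):
    """Convert a viewing geometry referenced to the sample front surface
    into the geometry seen from the rear surface, which sits `thickness`
    closer to the detector along the sample normal.
    Returns (distance, zenith angle in degrees) from the rear surface."""
    th = math.radians(theta_front_deg)
    x = r_front * math.sin(th)               # lateral offset of the aperture
    z = r_front * math.cos(th) - thickness   # axial offset, reduced by thickness
    return math.hypot(x, z), math.degrees(math.atan2(x, z))
```

As the abstract notes, the rear-surface zenith angle comes out larger than the front-surface angle and the aperture distance smaller, with both effects vanishing as the distance grows large relative to the thickness.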
Marcus Soter et al 2024 Meas. Sci. Technol. 35 085605
Platelets are activated immediately upon contact with non-physiological surfaces. Minimizing surface-induced platelet activation is important not only for platelet storage but also for other blood-contacting devices and implants. Chemical surface modification tunes the response of cells to contacting surfaces, but transferring it into a marketable product requires a long process involving many regulatory challenges. Biophysical modification overcomes these limitations by modifying only the surface topography of already-approved materials. The large, random structures available on platelet storage bags have no significant impact on platelets because of the platelets' small size (only 1–3 μm) compared with other cells. We have recently demonstrated the feasibility of the mask-free nanoprinting fluid force microscopy (FluidFM) technology for writing dot-grid and hexagonal structures. Here, we demonstrate that the technique allows the fabrication of nanostructures with varying features, including grid, circle, triangle, and Pacman-like structures. Characteristics of the nanostructures, including height, width, and cross-line, were analyzed and compared using atomic force microscopy imaging. Based on the results, we identified several technical issues, such as the printing direction and the shape of the structures, that directly altered nanofeatures during printing. Importantly, both geometry and interspacing governed the degree of platelet adhesion; in particular, structures with triangular shapes and small interspacing prevented platelet adhesion better than the others. We confirm that FluidFM is a powerful technique for precisely fabricating a variety of desired nanostructures for the development of platelet/blood-contacting devices, provided the technical issues during printing are well controlled.
Tianyi Yu et al 2024 Meas. Sci. Technol.
In predicting the remaining useful life (RUL) of bearings, the identification and feature extraction of early bearing faults are very important. To improve the accuracy of early-fault RUL prediction, a bearing RUL prediction model based on weighted variable-loss degradation features is proposed. The model comprises a stacked denoising autoencoder (SDAE) module guided by a variable loss, an adaptive signal-to-noise feature-weighting module, and a long short-term memory (LSTM) module for degradation-feature extraction and regression output. First, the model improves the ability of the SDAE to extract weak fault features through dimension-raising learning and a variable loss function. Then, an adaptive weighting matrix is generated from the test signal to modulate the weight vector of the SDAE. Finally, the hidden-layer features of the SDAE are input into the LSTM model to extract the bearing-state degradation features and realize RUL prediction. The experimental results show that the proposed model can accurately predict the RUL of the test data in both the early-fault and fault-development stages and can give early warning of the bearing's fault state.
Jiawei Liu et al 2024 Meas. Sci. Technol. 35 086314
Augmentation of the Global Navigation Satellite System by low Earth orbit (LEO) satellites is a promising approach that benefits from the advantages of LEO satellites. It requires, however, that errors and biases in the satellite downlink navigation signals be calibrated, modeled, or eliminated. This contribution introduces an approach for the in-orbit calibration of the phase center offsets (PCOs) and code hardware delays of the LEO downlink navigation signal transmitter/antenna. Using the satellite geometries of Sentinel-3B and Sentinel-6A as examples, the study analyzed the formal precision and bias influences of potential downlink antenna PCOs and hardware delays of LEO satellites under different ground-network distributions and processing periods. It was found that increasing the number of tracking stations and the processing period improves the formal precision of the PCOs and hardware delays: formal precisions below 3.5 mm and 3 cm, respectively, can be achieved with 10 stations and 6 processing days. The bias projections of the real-time LEO satellite orbital and clock errors can reach below 3 mm in such a case. For near-polar LEO satellites, stations in polar areas are essential for strengthening the observation model.
Jan Krüger et al 2024 Meas. Sci. Technol. 35 085014
Accurate measurements of micro- and nanoscale features in optical microscopy demand comprehensive modelling approaches. In this study, we introduce an enhanced evaluation method utilizing rigorous simulations based on a finite element method algorithm within an advanced Bayesian optimization framework. We provide an in-depth explanation of the measurement process, including the dimension-reduction techniques applied to the acquired measurement data. Additionally, we employ Hopkins' approximation, also referred to as the local Hopkins method, for efficient microscopic image simulation, resulting in a significant reduction of the computing time. We applied this method to measure the linewidths of six different chrome lines, nominally 300 nm–1000 nm wide, on a glass substrate. Our results show excellent agreement with previous investigations conducted using various measurement systems, including atomic force microscopy, scanning electron microscopy, and optical microscopy combined with different measurement evaluation techniques.
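A Bayesian optimization framework of the kind mentioned here fits a cheap surrogate model to expensive rigorous simulations and chooses each next evaluation point via an acquisition function. A toy, self-contained sketch of that idea (1-D objective, Gaussian-process surrogate, lower-confidence-bound acquisition, fixed kernel length scale — all illustrative choices, not the paper's actual framework):

```python
import numpy as np

def rbf(a, b, length_scale=0.3):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def bayes_opt_1d(f, lo=0.0, hi=1.0, n_init=4, n_iter=12, seed=0):
    """Minimal 1-D Bayesian optimization: a Gaussian-process surrogate
    with a lower-confidence-bound acquisition, minimizing f on [lo, hi]."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 201)
    for _ in range(n_iter):
        K_inv = np.linalg.inv(rbf(X, X) + 1e-6 * np.eye(len(X)))
        k_star = rbf(grid, X)
        mu = k_star @ K_inv @ y                   # posterior mean on the grid
        var = np.clip(1.0 - np.sum((k_star @ K_inv) * k_star, axis=1), 0.0, None)
        acq = mu - 2.0 * np.sqrt(var)             # lower confidence bound
        x_next = grid[np.argmin(acq)]             # explore/exploit trade-off
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

Each loop iteration costs one objective evaluation, which is the point: the surrogate stands in for the expensive FEM run between evaluations.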
Aleksi Ahti Mattila et al 2024 Meas. Sci. Technol.
Spectral scatterometry is a technique that allows rapid measurement of the diffraction efficiencies of diffractive optical elements (DOEs). The analysis of such diffraction efficiencies has traditionally been laborious and time-consuming. However, machine learning can be employed to aid the analysis of the measured diffraction efficiencies. In this paper, we describe a novel system for providing measurements of multiple measurands rapidly and concurrently using a spectral scatterometer and an artificial neural network (ANN) trained via transfer learning. The ANN provides values for the pitch, height, and line widths of the DOEs. In addition, an uncertainty evaluation was performed. In the majority of the studied cases, the discrepancies between the grating-parameter values obtained using a scanning electron microscope (SEM) and the artificial-neural-network-assisted spectral scatterometer (ANNASS) were below 5 nm. Furthermore, independent reference samples were used to perform a metrological validation. An expanded uncertainty ($k = 2$) of 5.3 nm was obtained from the uncertainty evaluation for the height measurand. The height values measured with ANNASS and SEM are demonstrated to agree within this uncertainty.
Celina Bozena Hellmich et al 2024 Meas. Sci. Technol.
A scalable wafer-based fabrication process for a new generation of 3D standards enabling the 3D calibration of optical microscopes is presented and validated. The 3D standards are based on step pyramids with several layers in the µm range for height calibration and a system of cylindrical knobs distributed across the layers as marks for lateral calibration. This enables calibration of the three coordinate axes and of the orthogonality errors between them in a single measurement step. The requirements for such a calibration, namely optical non-transparency, reproducible flatness of the pyramid step heights and the lowest possible deviations of the lateral mark coordinates, are met by optimizing the manufacturing process: the deviation of the step heights distributed over the wafer is ±3.6 nm and is caused primarily by the layer-deposition processes. The lateral manufacturing accuracy was determined using a calibrated SEM and shows a mean deviation of 20 nm or 60 nm, depending on the lateral size of the structures. The electron-beam lithography (EBL) process and the inaccuracy of the SEM standard influence the lateral scaling accuracy. Based on the tactilely measured height values and the mark coordinates determined by the calibrated SEM, an example calibration of a confocal laser scanning microscope (CLSM) was successfully performed and showed good agreement with conventional calibration techniques.