Physikalisch-Technische Bundesanstalt (PTB)
Recent innovations in Magnetic Resonance Imaging (MRI) hardware and software have reignited interest in low-field (< 1 T) and ultra-low-field (< 0.1 T) MRI. These technologies offer advantages such as lower power consumption, reduced specific absorption rate, reduced field inhomogeneities, and cost-effectiveness, presenting a promising alternative for resource-limited and point-of-care settings. However, low-field MRI faces inherent challenges, such as a reduced signal-to-noise ratio and, consequently, potentially lower spatial resolution or longer scan times. This chapter examines the challenges and opportunities of low-field and ultra-low-field MRI, with a focus on the role of machine learning (ML) in overcoming these limitations. We provide an overview of deep neural networks and their application in enhancing low-field and ultra-low-field MRI performance. Specific ML-based solutions, including advanced image reconstruction, denoising, and super-resolution algorithms, are discussed. The chapter concludes by exploring how integrating ML with low-field MRI could expand its clinical applications and improve accessibility, potentially revolutionizing its use in diverse healthcare settings.
We propose a hybrid quantum-classical algorithm for solving QUBO problems using an Imaginary Time Evolution-Mimicking Circuit (ITEMC). The circuit parameters are optimized to closely mimic imaginary time evolution, using only single- and two-qubit expectation values. This significantly reduces the measurement overhead by avoiding full energy evaluation. By iteratively updating the initial state based on the results of the previous step, the algorithm quickly converges to low-energy solutions. A pre-sorting step that optimizes quantum gate ordering based on the QUBO coefficients further improves convergence. Our classical simulations achieve approximation ratios above 0.99 for up to 150 qubits. Furthermore, the linear scaling of entanglement entropy with system size suggests that the circuit is challenging to simulate classically using tensor networks. We also demonstrate hardware runs on an IBM device for 40, 60, and 80 qubits, obtaining solutions consistent with those from simulated annealing.
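To make the classical side of the loop concrete, below is a minimal Python sketch of two ingredients named above: evaluating the QUBO energy of a candidate bitstring and pre-sorting variable pairs by the magnitude of their QUBO coefficients. The sorting rule shown is an assumption about how such a gate-ordering step might look; the abstract does not specify the exact heuristic.

```python
import numpy as np

def qubo_energy(x: np.ndarray, Q: np.ndarray) -> float:
    """QUBO energy E(x) = x^T Q x for a bitstring x in {0,1}^n."""
    return float(x @ Q @ x)

def presort_pairs(Q: np.ndarray):
    """Order variable pairs by |Q_ij|, largest first -- a plausible
    pre-sorting rule for scheduling the corresponding two-qubit gates."""
    n = Q.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sorted(pairs, key=lambda p: abs(Q[p]), reverse=True)

# Toy 4-variable instance: the strongest couplings come first.
rng = np.random.default_rng(seed=0)
Q = rng.normal(size=(4, 4))
Q = (Q + Q.T) / 2  # symmetrize
x = rng.integers(0, 2, size=4)
print(qubo_energy(x, Q), presort_pairs(Q)[:3])
```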
Recently, interest in quantum computing has significantly increased, driven by its potential advantages over classical techniques. Quantum machine learning (QML) exemplifies one of the important quantum computing applications that are expected to surpass classical machine learning in a wide range of instances. This paper addresses the performance of QML in the context of high-energy physics (HEP). As an example, we focus on top-quark tagging, for which classical convolutional neural networks (CNNs) have been effective but fall short in accuracy when dealing with highly energetic jet images. In this paper, we use a quantum convolutional neural network (QCNN) for this task and compare its performance with a CNN using a classical noiseless simulator. We compare various setups for the QCNN, varying the convolutional circuit, type of encoding, loss function, and batch size. For every quantum setup, we design a similar setup for the corresponding classical model to allow a fair comparison. Our results indicate that QCNNs with proper setups tend to perform better than their CNN counterparts, particularly when the convolution block has a lower number of parameters. In the higher-parameter regime, the QCNN circuit was adjusted according to dimensional expressivity analysis (DEA) to lower the parameter count while preserving its optimal structure. The DEA circuit demonstrated improved results over the comparable classical CNN model.
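As a rough illustration of the circuit family involved (a generic sketch, not the authors' specific ansatz or encoding), the following PennyLane snippet angle-encodes four pixel values and slides a shared two-parameter "filter" over neighbouring qubit pairs, mirroring the weight sharing of a classical convolution:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

def conv_block(params, wires):
    """Parameterized two-qubit 'filter', analogous to a conv kernel."""
    qml.RY(params[0], wires=wires[0])
    qml.RY(params[1], wires=wires[1])
    qml.CNOT(wires=wires)

@qml.qnode(dev)
def qcnn(pixels, params):
    # Encode 4 pixel values as single-qubit rotation angles.
    qml.AngleEmbedding(pixels, wires=range(n_qubits))
    # Slide the same filter over neighbouring qubit pairs (weight sharing).
    for pair in [(0, 1), (1, 2), (2, 3)]:
        conv_block(params, wires=pair)
    return qml.expval(qml.PauliZ(0))

pixels = np.array([0.1, 0.5, 0.9, 0.3])
params = np.array([0.2, 0.7], requires_grad=True)
print(qcnn(pixels, params))
```

Varying the contents of the convolution block, the embedding, and the loss corresponds to the setup comparisons the abstract describes.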
Clinical guidelines recommend performing left ventricular (LV) linear measurements in B-mode echocardiographic images at the basal level, typically at the mitral valve leaflet tips, and aligned perpendicular to the LV long axis along a virtual scanline (SL). However, most automated methods estimate landmarks directly from B-mode images, where even small shifts in predicted points along the LV walls can lead to significant measurement errors, reducing clinical reliability. A recent semi-automatic method, EnLVAM, addresses this limitation by constraining landmark prediction to a clinician-defined SL and training on generated Anatomical Motion Mode (AMM) images to predict LV landmarks along it. To enable full automation, this work proposes a contour-aware SL placement approach in which the LV contour is estimated using a weakly supervised B-mode landmark detector; the SL is then placed by inferring the LV long axis and the basal level, mimicking clinical guidelines. Building on this foundation, we introduce WiseLVAM, a novel, fully automated yet manually adaptable framework that automatically places the SL and then performs the LV linear measurements in AMM mode. WiseLVAM utilizes structure awareness from B-mode images and motion awareness from AMM mode to enhance robustness and accuracy, with the potential to provide a practical solution for routine clinical application. The source code is publicly available at this https URL.
We extend a recently introduced deep unrolling framework for learning spatially varying regularisation parameters in inverse imaging problems to the case of Total Generalised Variation (TGV). The framework combines a deep convolutional neural network (CNN) inferring the two spatially varying TGV parameters with an unrolled algorithmic scheme that solves the corresponding variational problem. The two subnetworks are jointly trained end-to-end in a supervised fashion, and as such the CNN learns to compute those parameters that drive the reconstructed images as close to the ground truth as possible. Numerical results in image denoising and MRI reconstruction show a significant qualitative and quantitative improvement compared to the best TGV scalar parameter case, as well as to other approaches employing spatially varying parameters computed by unsupervised methods. We also observe that the inferred spatially varying parameter maps have a consistent structure near the image edges, inviting further theoretical investigation. In particular, the parameter that weighs the first-order TGV term has a triple-edge structure with alternating high-low-high values, whereas the one that weighs the second-order term attains small values in a large neighbourhood around the edges.
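For reference, a standard form of the second-order TGV functional with the two spatially varying weights that the CNN infers is (the paper's exact discretization may differ):

\[
\mathrm{TGV}^2_{\alpha_0,\alpha_1}(u) = \min_{w} \int_\Omega \alpha_1(x)\,\lvert \nabla u - w \rvert \,\mathrm{d}x + \int_\Omega \alpha_0(x)\,\lvert \mathcal{E} w \rvert \,\mathrm{d}x,
\]

where \(\mathcal{E}w\) denotes the symmetrized gradient of \(w\); \(\alpha_1\) is the weight of the first-order term and \(\alpha_0\) that of the second-order term referred to above.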
Morphometric measures derived from spinal cord segmentations can serve as diagnostic and prognostic biomarkers in neurological diseases and injuries affecting the spinal cord. While automatic segmentation methods robust to a wide variety of contrasts and pathologies have been developed over the past few years, whether their predictions remain stable as the model is updated using new datasets has not been assessed. This is particularly important for deriving normative values from healthy participants. In this study, we present a spinal cord segmentation model trained on a multisite (n = 75) dataset, including 9 different MRI contrasts and several spinal cord pathologies. We also introduce a lifelong learning framework to automatically monitor the morphometric drift as the model is updated using additional datasets. The framework is triggered by an automatic GitHub Actions workflow every time a new model is created, recording the morphometric values derived from the model's predictions over time. As a real-world application of the proposed framework, we employed the spinal cord segmentation model to update a recently introduced normative database of healthy participants containing commonly used measures of spinal cord morphometry. Results showed that: (i) our model outperforms previous versions and pathology-specific models on challenging lumbar spinal cord cases, achieving an average Dice score of 0.95 ± 0.03; (ii) the automatic workflow for monitoring morphometric drift provides a quick feedback loop for developing future segmentation models; and (iii) the scaling factor required to update the database of morphometric measures is nearly constant among slices across the given vertebral levels, showing minimal drift between the current and previous versions of the model monitored by the framework. The model is freely available in Spinal Cord Toolbox v7.0.
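A minimal sketch of how such drift monitoring could be implemented is shown below. The CSV layout, file names, and the per-slice scaling-factor computation are hypothetical illustrations, not the actual Spinal Cord Toolbox workflow:

```python
import csv

def load_morphometrics(path):
    """Read per-slice cross-sectional areas keyed by (vertebral level, slice)."""
    with open(path, newline="") as f:
        return {(r["level"], r["slice"]): float(r["csa_mm2"])
                for r in csv.DictReader(f)}

def scaling_factors(prev, curr):
    """Per-slice ratio new/old; near-constant ratios across slices
    indicate minimal morphometric drift between model versions."""
    return {k: curr[k] / prev[k] for k in prev.keys() & curr.keys()}

# Hypothetical file names for two model versions' predictions.
prev = load_morphometrics("morphometrics_v2.csv")
curr = load_morphometrics("morphometrics_v3.csv")
ratios = scaling_factors(prev, curr)
print(f"mean scaling factor: {sum(ratios.values()) / len(ratios):.3f}")
```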
A compact cold atom gravimeter developed by LNE-SYRTE achieved a short-term sensitivity of 1.4 × 10⁻⁸ g at 1 s through a comprehensive phase noise analysis and a robust seismometer-based vibration compensation technique. This performance is comparable to that of larger atomic fountain gravimeters, despite the system's compact design and relatively short interrogation time.
Dark-matter-dominated dwarf galaxies provide an excellent laboratory for testing dark matter models on small scales and, in particular, the ultralight dark matter (ULDM) class of models. Within the framework of self-interacting bosonic dark matter, we use the observed velocity-dispersion profiles of seven dwarf spheroidal galaxies to constrain the parameters of ULDM. In our modeling, we account for the impact of the baryonic component on the velocity dispersion and the ULDM halo structure. We find that in the regime of repulsively interacting ULDM, the self-interaction that fits the observations is almost negligible, consistent with non-interacting ULDM with a boson mass of approximately 1.6 × 10⁻²² eV. In contrast, for attractively interacting ULDM, the best fit corresponds to a smaller boson mass of about 1.3 × 10⁻²² eV, with self-interaction playing a significant role in shaping the dark-matter halo and thereby influencing the interpretation of observations.
Spin-Exchange Relaxation-Free Optically Pumped Magnetometers (SERF-OPMs) are increasingly used in multichannel biomagnetic sensing, yet their timing performance remains poorly characterized. This contribution presents the first cross-platform study of time delay, group delay, intra-channel variability, and settling time across four commercial SERF-OPM systems: Neuro-1 and QZFM Gen.2 (QuSpin Inc.), and HEDscan and FLv2 (FieldLine Inc.). Measurements were performed inside a magnetically shielded room (BMSR-2.1, PTB, Berlin) using a previously introduced test bench. Results show frequency-dependent delays of 1-10 ms, intra-channel spreads up to ±1 ms, group delays of 1-15 ms, and settling times of 2-55 ms. Clear differences in manufacturer strategies were observed: QuSpin minimizes intra-channel variability through digital delay equalization, whereas FieldLine employs per-sensor calibration to optimize bandwidth and phase matching. In all systems, the time delay deviation between channels is in the sub-millisecond range in the 20-140 Hz band, which is sufficient for magnetoencephalography source localization. However, longer settling times in some platforms limit performance for rapid stimulation protocols. These findings provide the first vendor-comprehensive benchmarks for timing parameters in commercial SERF-OPM systems. They highlight the trade-offs between bandwidth, delay, and calibration strategies, and underscore the need for rigorous timing characterization to ensure waveform fidelity. The results are directly relevant to applications such as stimulation-evoked responses, brain-computer interfaces, and closed-loop neuromodulation.
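For clarity, the group delay reported above is the standard frequency derivative of the sensor's phase response \(\phi(f)\),

\[
\tau_g(f) = -\frac{1}{2\pi}\,\frac{\mathrm{d}\phi(f)}{\mathrm{d}f},
\]

so any frequency dependence of \(\tau_g\) distorts broadband waveforms even when the amplitude response is flat, which is why phase matching across channels matters for source localization.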
Parameter reconstructions are indispensable in metrology. Here, the objective is to explain K experimental measurements by fitting to them a parameterized model of the measurement process. The model parameters are typically determined by least-squares methods, i.e., by minimizing the sum of the squared residuals between the K model predictions and the K experimental observations, χ². The model functions often involve computationally demanding numerical simulations. Bayesian optimization methods are specifically suited for minimizing expensive model functions. However, in contrast to least-squares methods such as the Levenberg-Marquardt algorithm, they only take the value of χ² into account and neglect the K individual model outputs. We present a Bayesian target-vector optimization scheme with improved performance over previous developments that considers all K contributions of the model function and that is specifically suited for parameter reconstruction problems, which are often based on hundreds of observations. Its performance is compared to established methods on an optical metrology reconstruction problem and two synthetic least-squares problems. The proposed method outperforms established optimization methods. It also enables accurate uncertainty estimates to be determined with very few observations of the actual model function by using Markov chain Monte Carlo sampling on a trained surrogate model.
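In the notation of the abstract, the least-squares objective is

\[
\chi^2(\mathbf{p}) = \sum_{k=1}^{K} \left( \frac{y_k - f_k(\mathbf{p})}{\sigma_k} \right)^2,
\]

where \(y_k\) are the K experimental observations, \(f_k(\mathbf{p})\) the K model predictions for parameters \(\mathbf{p}\), and \(\sigma_k\) the measurement uncertainties (set \(\sigma_k = 1\) if unweighted). Standard Bayesian optimization builds a surrogate of the scalar χ² only; the target-vector scheme instead models each of the K outputs \(f_k\) individually.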
Superconducting quantum interference devices (SQUIDs) are exceptionally sensitive magnetometers capable of detecting weak magnetic fields. Miniaturizing these devices and integrating them onto scanning probes enables high-resolution imaging at low temperatures. Here, we fabricate nanometer-scale niobium SQUIDs with inner-loop sizes down to 10 nm at the apex of individual planar silicon cantilevers via a combination of wafer-scale optical lithography and focused-ion-beam (FIB) milling. These robust SQUID-on-lever probes overcome many of the limitations of existing devices, achieving spatial resolution better than 100 nm, magnetic flux sensitivity of 0.3 μΦ₀/√Hz, and operation in magnetic fields up to about 0.5 T at 4.2 K. Nanopatterning via Ne- or He-FIB allows for the incorporation of a modulation line for coupling magnetic flux into the SQUID or a third Josephson junction for shifting its phase. Such advanced functionality, combined with high spatial resolution, a large magnetic field range, and the ease of use of a cantilever-based scanning probe, extends the applicability of scanning SQUID microscopy to a wide range of magnetic, normal conducting, superconducting, and quantum Hall systems. We demonstrate magnetic imaging of skyrmions at the surface of bulk Cu₂OSeO₃. Analysis of the point spread function determined from imaging a single skyrmion yields a full width at half maximum of 87 nm. Moreover, we image modulated magnetization patterns with a period of 65 nm.
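If the point spread function is modeled as an isotropic Gaussian (our assumption for illustration; the abstract does not state the fit model), the full width at half maximum relates to the Gaussian width σ via

\[
\mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma \approx 2.355\,\sigma,
\]

so the quoted 87 nm would correspond to σ ≈ 37 nm.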
The AMoRE (Advanced Mo-based Rare process Experiment) project is a series of experiments that use advanced cryogenic techniques to search for the neutrinoless double-beta (0νββ) decay of ¹⁰⁰Mo. The work is being carried out by an international collaboration of researchers from eight countries. These searches involve high-precision measurements of radiation-induced temperature changes and scintillation light produced in ultra-pure ¹⁰⁰Mo-enriched and ⁴⁸Ca-depleted calcium molybdate (⁴⁸ᵈᵉᵖˡCa¹⁰⁰MoO₄) crystals that are located in a deep underground laboratory in Korea. The ¹⁰⁰Mo nuclide was chosen for this 0νββ decay search because of its high Q-value and favorable nuclear matrix element. Tests have demonstrated that CaMoO₄ crystals produce the brightest scintillation light among all of the molybdate crystals, both at room and at cryogenic temperatures. The ⁴⁸ᵈᵉᵖˡCa¹⁰⁰MoO₄ crystals are operated at millikelvin temperatures and read out via specially developed metallic magnetic calorimeter (MMC) temperature sensors that have excellent energy resolution and relatively fast response times. The excellent energy resolution provides good discrimination of signal from backgrounds, and the fast response time is important for minimizing the irreducible background caused by random coincidences of two-neutrino double-beta decay events of ¹⁰⁰Mo nuclei. Comparisons of the scintillation-light and phonon yields and pulse-shape discrimination of the phonon signals will be used to provide redundant rejection of alpha-induced backgrounds. An effective Majorana neutrino mass sensitivity that reaches the expected range of the inverted neutrino mass hierarchy, i.e., 20-50 meV, could be achieved with a 200 kg array of ⁴⁸ᵈᵉᵖˡCa¹⁰⁰MoO₄ crystals operating for three years.
We present a laser system based on a 48 cm long optical glass resonator. The large size requires sophisticated thermal control and an optimized mounting design. A self-balancing mounting was essential to reliably reach acceleration sensitivities of Δν/ν < 2 × 10⁻¹⁰ per g in all directions. Furthermore, fiber noise cancellation is implemented from a common reference point near the laser diode to the cavity mirror and to additional user points (Sr clock and frequency comb). Through comparison with other cavity-stabilized lasers and with a strontium lattice clock, an instability below 1 × 10⁻¹⁶ at averaging times from 1 s to 1000 s is revealed.
We propose an unrolled algorithm approach for learning spatially adaptive parameter maps in the framework of convolutional synthesis-based ℓ₁ regularization. More precisely, we consider a family of pre-trained convolutional filters and estimate deeply parametrized spatially varying parameters applied to the sparse feature maps by unrolling a FISTA algorithm that solves the underlying sparse estimation problem. The proposed approach is evaluated for image reconstruction of low-field MRI and compared to spatially adaptive and non-adaptive analysis-type procedures relying on Total Variation regularization, as well as to a well-established model-based deep learning approach. We show that the proposed approach produces visually and quantitatively comparable results to those of the latter approaches while remaining highly interpretable. In particular, the inferred parameter maps quantify the local contribution of each filter to the reconstruction, which provides valuable insight into the algorithm mechanism and could potentially be used to discard unsuited filters.
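For intuition, a minimal numpy sketch of the underlying FISTA iteration with spatially varying soft-thresholding is given below, here for denoising with an identity forward operator; in the paper the parameter maps are inferred by a network and the iteration is unrolled and trained end-to-end rather than run to convergence. All function and variable names are ours:

```python
import numpy as np
from scipy.signal import fftconvolve

def soft(z, tau):
    """Component-wise soft-thresholding with spatially varying threshold tau."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def fista_synthesis(y, filters, lam_maps, n_iter=50, L=None):
    """Minimize 0.5 * ||sum_k h_k * c_k - y||^2 + sum_k ||lam_k . c_k||_1
    over coefficient maps c_k, with per-pixel weights lam_k."""
    c = [np.zeros_like(y) for _ in filters]
    z = [ci.copy() for ci in c]
    t = 1.0
    if L is None:  # crude Lipschitz bound for the synthesis operator
        L = sum(np.sum(np.abs(h)) ** 2 for h in filters)
    for _ in range(n_iter):
        resid = sum(fftconvolve(zi, h, mode="same")
                    for zi, h in zip(z, filters)) - y
        # Gradient step per map (adjoint = convolution with flipped kernel),
        # then proximal step with the spatially varying threshold.
        c_new = [soft(zi - fftconvolve(resid, h[::-1, ::-1], mode="same") / L,
                      lam / L)
                 for zi, h, lam in zip(z, filters, lam_maps)]
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        z = [cn + ((t - 1) / t_new) * (cn - co) for cn, co in zip(c_new, c)]
        c, t = c_new, t_new
    return c

# Toy usage: two 3x3 filters, constant parameter maps, noisy image.
rng = np.random.default_rng(0)
y = rng.normal(size=(32, 32))
filters = [rng.normal(size=(3, 3)), rng.normal(size=(3, 3))]
lam_maps = [0.1 * np.ones_like(y)] * 2
coeffs = fista_synthesis(y, filters, lam_maps)
```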
BEC-based quantum sensors offer huge, yet not fully explored, potential in gravimetry and accelerometry. In this paper, we study a possible setup for such a device: a weakly interacting Bose gas trapped in a double-well potential. In such a trap, the gas is known to exhibit Josephson oscillations, which rely on the coherence between the potential wells. Applying the density matrix approach, we consider transitions between the coherent, partially incoherent, and fully incoherent states of the Bose gas. We show how, due to the presence of weak interactions, collisional decoherence causes the Josephson oscillations to decay with time. We further study the interplay of collisional interaction and external acceleration, which leads to shifts of the oscillation frequency. Finally, we investigate how this effect can be used to build a BEC double-well accelerometer and give analytical expressions for its expected sensitivity.
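To illustrate the oscillations in question, the following sketch integrates the standard two-mode (mean-field) Josephson equations for the population imbalance z and relative phase φ; this is a textbook model, not the paper's density-matrix treatment, and the tilt delta standing in for an external acceleration is our labeling:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-mode Josephson equations for a BEC in a double well, in units of the
# tunneling rate. z: population imbalance, phi: relative phase.
# Lambda: interaction parameter; delta: energy tilt between the wells,
# e.g. induced by an acceleration along the well-separation axis.
def bjj_rhs(t, y, Lambda, delta):
    z, phi = y
    dz = -np.sqrt(1.0 - z ** 2) * np.sin(phi)
    dphi = Lambda * z + delta + z * np.cos(phi) / np.sqrt(1.0 - z ** 2)
    return [dz, dphi]

sol = solve_ivp(bjj_rhs, (0.0, 50.0), [0.2, 0.0], args=(1.0, 0.1),
                dense_output=True, rtol=1e-8)
t = np.linspace(0, 50, 1000)
z = sol.sol(t)[0]  # oscillation whose frequency shifts with Lambda and delta
```

Increasing delta shifts the oscillation frequency, which is the effect the proposed accelerometer reads out; capturing the collisional decay additionally requires the density-matrix approach described in the abstract.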
In this work, we propose an iterative reconstruction scheme (ALONE - Adaptive Learning Of NEtworks) for 2D radial cine MRI based on ground truth-free unsupervised learning of shallow convolutional neural networks. The network is trained to approximate patches of the current estimate of the solution during the reconstruction. By imposing a shallow network topology and constraining the L₂-norm of the learned filters, the network's representation power is limited so that it cannot recover noise. The network can therefore be interpreted as performing a low-dimensional approximation of the patches that stabilizes the inversion process. We compare the proposed reconstruction scheme to two ground truth-free reconstruction methods, namely well-known Total Variation (TV) minimization and an unsupervised adaptive Dictionary Learning (DIC) method. The proposed method outperforms both with respect to all reported quantitative measures. Further, in contrast to DIC, where the sparse approximation of the patches involves the solution of a complex optimization problem, ALONE only requires a forward pass of all patches through the shallow network, which significantly accelerates the reconstruction.
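A minimal PyTorch sketch of the core idea, a shallow patch-approximation network with an explicit L2 bound on the learned filters, is given below; layer sizes, the norm bound, and the training loop are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class ShallowNet(nn.Module):
    """Shallow encoder/decoder that approximates image patches."""
    def __init__(self, n_filters=16, kernel=5):
        super().__init__()
        self.enc = nn.Conv2d(1, n_filters, kernel, padding=kernel // 2)
        self.dec = nn.Conv2d(n_filters, 1, kernel, padding=kernel // 2)

    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))

def clip_filter_norm(conv, max_norm=1.0):
    """Project each filter onto the L2 ball of radius max_norm, limiting
    the network's representation power so it cannot fit noise."""
    with torch.no_grad():
        flat = conv.weight.view(conv.weight.shape[0], -1)
        norms = flat.norm(dim=1, keepdim=True).clamp(min=max_norm)
        flat.div_(norms / max_norm)

net = ShallowNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
patches = torch.randn(64, 1, 32, 32)  # stand-in for patches of the estimate
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(patches), patches)
    loss.backward()
    opt.step()
    for conv in (net.enc, net.dec):
        clip_filter_norm(conv)
```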
In non-destructive evaluation with X-rays, light elements embedded in dense, heavy (high-Z) matrices show little contrast, and their structural details can hardly be revealed. Neutron radiography, on the other hand, provides a solution for such cases, in particular for hydrogenous materials, owing to the large neutron scattering cross section of hydrogen and the fact that neutron cross sections are uncorrelated with atomic number. The majority of neutron imaging experiments at present are conducted with static objects, mainly due to the limited flux intensity of neutron beamline facilities and sometimes due to the limitations of the detectors. However, some applications require the study of dynamic phenomena and can now be conducted at several high-intensity beamlines, such as the recently rebuilt ANTARES beamline at the FRM-II reactor. In this paper we demonstrate the capabilities of time-resolved imaging for repetitive processes, where different phases of the process can be imaged simultaneously and integrated over multiple cycles. A fast MCP/Timepix neutron counting detector was used to image the water distribution within a model steam engine operating at a frequency of 10 Hz. With less than 10 minutes of integration, the amount of water was measured as a function of cycle time with sub-mm spatial resolution, demonstrating the capabilities of time-resolved neutron radiography for future applications. The neutron spectrum of the ANTARES beamline as well as transmission spectra of an Fe sample were also measured with the time-of-flight (TOF) technique in combination with a high-resolution beam chopper. The energy resolution of our setup was found to be ~0.8% at 5 meV and ~1.7% at 25 meV.
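For reference, the TOF technique converts the measured flight time t over the chopper-to-detector distance L into neutron energy through the non-relativistic relation

\[
E = \frac{1}{2} m_n \left( \frac{L}{t} \right)^2, \qquad \frac{\Delta E}{E} = 2\,\frac{\Delta t}{t} \quad (\text{fixed } L),
\]

with \(m_n\) the neutron mass, which is why the quoted energy resolution varies with neutron energy and with the chopper's timing resolution.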
Theoretical analysis of the interaction between superfluid dark matter and rotating supermassive black holes offers a promising framework for probing quantum effects in ultralight dark matter and its role in galactic structure. We study how black hole rotation influences the state of ultralight bosonic dark matter, focusing on the stability and dynamics of vortex lines. The gravitational effects of both dark matter and the black hole on the physical properties of these vortex lines, including their precession around the black hole, are analyzed.
This work reviews the concepts of an event used in micro- and nanodosimetry and analyzes how single-event distributions could theoretically be derived from probability distributions related to interactions of the primary particle that produce secondary electrons. It is shown that the corresponding mathematical expressions for conditional ionization cluster size distributions resemble those for the single-event frequency distribution of energy imparted, particularly when all tracks are considered that intersect the volume in which interactions of the primary particle can result in energy deposits in the site. Track structure simulations of protons with energies between 1 MeV and 100 MeV are used to study how the occurrence of events depends on site size, beam radius, and proton energy. The range of impact parameters of particle tracks that contribute to energy imparted in a site appears not to depend on whether any energy deposits or only energy deposits by ionizations are considered. Since there is no longer a one-to-one correspondence between tracks passing a site and the occurrence of an event, it is proposed to use the fluence for which on average one event occurs as a substitute for single events. For protons, the product of this fluence and the site cross section, or equivalently the average number of tracks necessary for an event, shows an interesting dependence on site size and particle energy, with asymptotic values close to unity for large sites and proton energies below 10 MeV. For a proton energy of 1 MeV, a minimum in the number of tracks is observed for sites between 5 nm and 10 nm in diameter. The relative differences between the average numbers of tracks per event obtained with different options of Geant4-DNA are on the order of 10% and illustrate the need for further investigations into cross-section datasets and their uncertainties.
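To make the proposed substitute concrete: in a simple picture (our simplifying assumption) where each of the Φσ tracks intersecting the relevant cross section σ independently produces an event with probability p, the expected number of events at fluence Φ is Φσp. The single-event fluence Φ₁, defined by one event on average, then satisfies

\[
\Phi_1 \sigma p = 1 \quad\Longrightarrow\quad \Phi_1 \sigma = \frac{1}{p} = \bar{n}_{\text{tracks per event}},
\]

which is why the product of fluence and site cross section and the average number of tracks per event quoted above are the same quantity, approaching unity when nearly every track produces an event.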