Access to the proper infrastructure is critical when performing medical image segmentation with deep learning. This requirement makes it difficult to run state-of-the-art segmentation models in resource-constrained scenarios like primary care facilities in rural areas and during crises. The recently emerging field of Neural Cellular Automata (NCA) has shown that locally interacting one-cell models can achieve competitive results in tasks such as image generation or segmentation of low-resolution inputs. However, they are constrained by high VRAM requirements and the difficulty of reaching convergence for high-resolution images. To counteract these limitations, we propose Med-NCA, an end-to-end NCA training pipeline for high-resolution image segmentation. Our method follows a two-step process. Global knowledge is first communicated between cells across the downscaled image. Following that, patch-based segmentation is performed. Our proposed Med-NCA outperforms the classic UNet by 2% and 3% Dice for hippocampus and prostate segmentation, respectively, while also being 500 times smaller. We also show that Med-NCA is by design invariant with respect to image scale, shape and translation, experiencing only slight performance degradation even with strong shifts; and is robust against MRI acquisition artefacts. Med-NCA enables high-resolution medical image segmentation even on a Raspberry Pi B+, arguably the smallest device that can run PyTorch and be powered by a standard power bank.
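To make the one-cell model concrete, below is a minimal PyTorch sketch of a single NCA update rule of the kind Med-NCA iterates, first over the downscaled image and then over full-resolution patches. Channel and hidden sizes, and the use of a residual update, are illustrative assumptions, not the published Med-NCA configuration.

```python
import torch
import torch.nn as nn

class NCAStep(nn.Module):
    """One NCA update rule shared by every cell: a 3x3 perception
    convolution (each cell sees only its immediate neighbors) followed
    by a 1x1 update applied residually. A hedged sketch only."""
    def __init__(self, channels=16, hidden=64):
        super().__init__()
        self.perceive = nn.Conv2d(channels, hidden, 3, padding=1)
        self.update = nn.Conv2d(hidden, channels, 1)

    def forward(self, state, steps=10):
        # Repeated local updates let information propagate across the grid.
        for _ in range(steps):
            state = state + self.update(torch.relu(self.perceive(state)))
        return state
```

Running many such steps on the downscaled image spreads global context cheaply; the same rule is then applied to patches at full resolution, which keeps VRAM usage bounded.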
Despite considerable success, large Denoising Diffusion Models (DDMs) with a UNet backbone pose practical challenges, particularly on limited hardware and in processing gigapixel images. To address these limitations, we introduce two Neural Cellular Automata (NCA)-based DDMs: Diff-NCA and FourierDiff-NCA. Capitalizing on the local communication capabilities of NCA, Diff-NCA significantly reduces the parameter counts of NCA-based DDMs. Integrating Fourier-based diffusion enables global communication early in the diffusion process. This feature is particularly valuable in synthesizing complex images with important global features, such as the CelebA dataset. We demonstrate that even a 331k parameter Diff-NCA can generate 512x512 pathology slices, while FourierDiff-NCA (1.1m parameters) reaches an FID of 43.86, roughly three times lower than the 128.2 scored by a UNet almost four times its size (3.94m parameters). Additionally, FourierDiff-NCA can perform diverse tasks such as super-resolution, out-of-distribution image synthesis, and inpainting without explicit training.
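To illustrate the Fourier-based global communication in isolation, here is a hedged sketch: a crude low-pass mix in the frequency domain that spreads information across the whole image in one step. The cutoff k and the masking scheme are assumptions for illustration, not the actual FourierDiff-NCA design.

```python
import torch

def fourier_global_mix(x, k=8):
    """Keep only the lowest k x k block of frequencies and transform
    back, so every output pixel depends on the whole input at once.
    Illustrative of the idea only; the paper's layer may differ."""
    X = torch.fft.rfft2(x, norm="ortho")      # x: (B, C, H, W)
    mask = torch.zeros_like(X)
    mask[..., :k, :k] = 1                     # crude low-pass selection
    return torch.fft.irfft2(X * mask, s=x.shape[-2:], norm="ortho")
```

The appeal is that a purely local NCA needs many steps for information to cross the image, whereas one Fourier round trip provides global coupling immediately, which matters early in the reverse diffusion process.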
Data deduplication has emerged as a powerful solution for reducing storage and bandwidth costs in cloud settings by eliminating redundancies at the level of chunks. This has spurred the development of numerous Content-Defined Chunking (CDC) algorithms over the past two decades. Despite these advancements, the current state of the art remains obscure, as a thorough and impartial analysis and comparison is lacking. We conduct a rigorous theoretical analysis and impartial experimental comparison of several leading CDC algorithms. Using four realistic datasets, we evaluate these algorithms against four key metrics: throughput, deduplication ratio, average chunk size, and chunk-size variance. Our analyses, in many instances, extend the findings of their original publications by reporting new results and putting existing ones into context. Moreover, we highlight limitations that have previously gone unnoticed. Our findings provide valuable insights that inform the selection and optimization of CDC algorithms for practical applications in data deduplication.
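For readers unfamiliar with CDC, here is a minimal Gear-hash chunker illustrating the principle the compared algorithms refine: boundaries are chosen from the content itself (when the low bits of a rolling hash are zero), so an insertion shifts only nearby chunks instead of all subsequent ones. The constants and the hash-reset policy are illustrative, not any specific published algorithm.

```python
import random

random.seed(0)                        # fixed table so chunking is repeatable
GEAR = [random.getrandbits(64) for _ in range(256)]
MASK = (1 << 13) - 1                  # expected chunk size around 8 KiB

def gear_chunks(data: bytes, min_size=2048, max_size=65536):
    """Declare a chunk boundary whenever the low 13 bits of the rolling
    Gear hash are zero, subject to minimum and maximum chunk sizes."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + GEAR[byte]) & 0xFFFFFFFFFFFFFFFF
        size = i - start + 1
        if (size >= min_size and (h & MASK) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0       # restart the hash for the next chunk
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

The four metrics in the study map directly onto this loop: throughput is the cost per byte of the hash update, while the mask width and the min/max cutoffs govern average chunk size, chunk-size variance, and ultimately the deduplication ratio.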
Medical image segmentation relies heavily on large-scale deep learning models, such as UNet-based architectures. However, the real-world utility of such models is limited by their high computational requirements, which makes them impractical for resource-constrained environments such as primary care facilities and conflict zones. Furthermore, shifts in the imaging domain can render these models ineffective and even compromise patient safety if such errors go undetected. To address these challenges, we propose M3D-NCA, a novel methodology that leverages Neural Cellular Automata (NCA) segmentation for 3D medical images using n-level patchification. Moreover, we exploit the variance in M3D-NCA to develop a novel quality metric which can automatically detect errors in the segmentation process of NCAs. M3D-NCA outperforms UNet models two orders of magnitude larger in hippocampus and prostate segmentation by 2% Dice and can be run on a Raspberry Pi 4 Model B (2GB RAM). This highlights the potential of M3D-NCA as an effective and efficient alternative for medical image segmentation in resource-constrained environments.
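A minimal sketch of the variance idea, assuming the NCA's stochastic cell updates make repeated inferences differ: high disagreement between runs flags voxels, and by aggregation whole cases, whose segmentation should not be trusted. The aggregation (mean variance) is an assumption; the exact M3D-NCA metric may differ.

```python
import torch

def nca_quality_score(model, volume, n_runs=5):
    """Run the stochastic NCA several times on the same volume and use
    the per-voxel variance of the predictions as an error indicator.
    Higher scores suggest a less trustworthy segmentation."""
    with torch.no_grad():
        preds = torch.stack([model(volume) for _ in range(n_runs)])
    return preds.var(dim=0).mean().item()
```

Because the variance comes for free from the model's own stochasticity, this check needs no extra network and no ground truth at deployment time.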
The interest in post-quantum cryptography - classical systems that remain secure in the presence of a quantum adversary - has generated elegant proposals for new cryptosystems. Some of these systems are set in the random oracle model and are proven secure relative to adversaries that have classical access to the random oracle. We argue that to prove post-quantum security one needs to prove security in the quantum-accessible random oracle model where the adversary can query the random oracle with quantum states. We begin by separating the classical and quantum-accessible random oracle models by presenting a scheme that is secure when the adversary is given classical access to the random oracle, but is insecure when the adversary can make quantum oracle queries. We then set out to develop generic conditions under which a classical random oracle proof implies security in the quantum-accessible random oracle model. We introduce the concept of a history-free reduction which is a category of classical random oracle reductions that basically determine oracle answers independently of the history of previous queries, and we prove that such reductions imply security in the quantum model. We then show that certain post-quantum proposals, including ones based on lattices, can be proven secure using history-free reductions and are therefore post-quantum secure. We conclude with a rich set of open problems in this area.
A general class of stochastic Runge-Kutta methods for the weak approximation of Itô and Stratonovich stochastic differential equations with a multi-dimensional Wiener process is introduced. Colored rooted trees are used to derive an expansion of the solution process and of the approximation process calculated with the stochastic Runge-Kutta method. A theorem on general order conditions for the coefficients and the random variables of the stochastic Runge-Kutta method is proved by rooted tree analysis. This theorem can be applied for the derivation of stochastic Runge-Kutta methods converging with an arbitrarily high order.
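As a concrete entry point to this family, here is the simplest weak scheme: the Euler method with two-point distributed increments (each step uses +sqrt(h) or -sqrt(h) with probability 1/2), which attains weak order 1 for scalar Itô SDEs dX = a(X) dt + b(X) dW. The higher-order SRK methods derived in the paper add further stages and random variables; this sketch only fixes the setting.

```python
import numpy as np

def weak_euler(a, b, x0, T, N, rng=None):
    """Weak Euler scheme with two-point increments for dX = a dt + b dW.
    For weak convergence the increments need only match the moments of
    Gaussian increments up to the required order."""
    rng = rng or np.random.default_rng(0)
    h = T / N
    x = x0
    for _ in range(N):
        dW = rng.choice((-1.0, 1.0)) * np.sqrt(h)   # two-point variable
        x = x + a(x) * h + b(x) * dW
    return x

# Example: geometric Brownian motion dX = 0.5 X dt + 0.2 X dW on [0, 1].
print(weak_euler(lambda x: 0.5 * x, lambda x: 0.2 * x, 1.0, 1.0, 1000))
```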
A sparse recovery approach for direction finding in partly calibrated arrays composed of subarrays with unknown displacements is introduced. The proposed method is based on mixed nuclear-norm and $\ell_1$-norm minimization and exploits block-sparsity and low-rank structure in the signal model. For efficient implementation, a compact equivalent problem reformulation is presented. The new technique is applicable to subarrays of arbitrary topologies and grid-based sampling of the subarray manifolds. In the special case of subarrays with a common baseline, our new technique admits extension to a gridless implementation. As shown by simulations, our new block- and rank-sparse direction finding technique for partly calibrated arrays outperforms the state-of-the-art method RARE in difficult scenarios of low sample numbers, low signal-to-noise ratio or correlated signals.
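A generic sketch of the mixed-norm template in cvxpy, assuming a dictionary A sampled on a direction grid and row groups corresponding to subarrays. The specific grouping, weights, and the compact reformulation are the paper's contribution; this only shows the optimization pattern of combining a nuclear norm (low rank) with group norms (block-sparsity).

```python
import cvxpy as cp

def block_sparse_lowrank_recovery(A, Y, groups, lam=1.0, mu=1.0):
    """Fit the measurements Y = A X + noise while promoting a signal
    matrix X that is low-rank (nuclear norm) and block-sparse across
    the given row groups (sum of Frobenius norms of the blocks)."""
    X = cp.Variable((A.shape[1], Y.shape[1]))
    fit = cp.norm(A @ X - Y, "fro") ** 2
    block = sum(cp.norm(X[g, :], "fro") for g in groups)
    prob = cp.Problem(cp.Minimize(fit + lam * cp.normNuc(X) + mu * block))
    prob.solve()
    return X.value
```

Directions are then read off from the groups whose blocks survive with non-negligible energy.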
We study diffusion-driven pattern formation in networks of networks, a class of multilayer systems where different layers have the same topology but different internal dynamics. Agents are assumed to disperse within a layer by undergoing random walks, while they can be created or destroyed by reactions between or within layers. We show that the stability of homogeneous steady states can be analyzed with a master stability function approach that reveals a deep analogy between pattern formation in networks and pattern formation in continuous space. For illustration, we consider a generalized model of ecological meta-foodwebs. This fairly complex model describes the dispersal of many different species across a region consisting of a network of individual habitats while subject to realistic, nonlinear predator-prey interactions. In this example the method reveals the intricate dependence of the dynamics on the spatial structure. The ability of the proposed approach to deal with this fairly complex system highlights it as a promising tool for ecology and other applications.
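The stability test underlying the master stability function approach can be written very compactly for diffusive coupling: the network enters only through the Laplacian spectrum, decoupling topology from local dynamics. A sketch with the standard ingredients (J: within-layer Jacobian at the steady state, C: coupling/dispersal matrix).

```python
import numpy as np

def homogeneous_state_stable(J, C, laplacian_eigenvalues):
    """Master stability function check: the homogeneous steady state is
    stable iff, for every Laplacian eigenvalue kappa, all eigenvalues
    of J - kappa * C have negative real part."""
    return all(np.linalg.eigvals(J - k * C).real.max() < 0
               for k in laplacian_eigenvalues)
```

Scanning kappa as a continuous parameter recovers exactly the analogy with continuous space, where kappa plays the role of the squared wavenumber in a classical Turing analysis.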
We analyze topographic scanning force microscopy images together with Kelvin probe images obtained on Pb islands and on the wetting layer on Si(111) for variable annealing times. Within the wetting layer we observe negatively charged Si-rich areas. We show evidence that these Si-rich areas result from islands that have disappeared by coarsening. We argue that the islands are located on Si-rich areas inside the wetting layer such that the Pb/Si interface of the islands is in line with the top of the wetting layer rather than with its interface to the substrate. We propose that the Pb island heights are one atomic layer smaller than previously believed. For the quantum size effect bilayer oscillations of the work function observed in this system, we conclude that for film thicknesses below 9 atomic layers large values of the work function correspond to even numbers of monolayers instead of odd ones. The atomically precise island height is important to understand ultrafast "explosive" island growth in this system.
In this paper, we prove that the generator of any bounded analytic semigroup in the $(\theta,1)$-type real interpolation space of its domain and the underlying Banach space has maximal $L^1$-regularity, using a duality argument combined with the result of maximal continuous regularity. As an application, we consider maximal $L^1$-regularity of the Dirichlet-Laplacian and the Stokes operator in inhomogeneous $B^s_{q,1}$-type Besov spaces on domains of $\mathbb{R}^n$, $n\geq 2$.
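For context, maximal $L^1$-regularity is the standard a priori estimate below, stated here for the Cauchy problem with zero initial data in the space $X$ where the generator is considered (the interpolation space of the abstract).

```latex
% Maximal L^1-regularity of A in X: for
%   u'(t) = A u(t) + f(t), \qquad u(0) = 0,
% there exists C > 0, independent of f, with
\| u' \|_{L^1(0,\infty;X)} + \| A u \|_{L^1(0,\infty;X)}
  \;\le\; C \, \| f \|_{L^1(0,\infty;X)} .
```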
Consider the problem of minimizing the expected value of a (possibly nonconvex) cost function parameterized by a random (vector) variable, when the expectation cannot be computed accurately (e.g., because the statistics of the random variables are unknown and/or the computational complexity is prohibitive). Classical sample-based stochastic gradient methods for solving this problem may empirically suffer from slow convergence. In this paper, we propose for the first time a stochastic parallel Successive Convex Approximation-based (best-response) algorithmic framework for general nonconvex stochastic sum-utility optimization problems, which arise naturally in the design of multi-agent systems. The proposed novel decomposition enables all users to update their optimization variables in parallel by solving a sequence of strongly convex subproblems, one for each user. Almost-sure convergence to stationary points is proved. We then customize our algorithmic framework to solve the stochastic sum-rate maximization problem over single-input single-output (SISO) frequency-selective interference channels, multiple-input multiple-output (MIMO) interference channels, and MIMO multiple-access channels. Numerical results show that our algorithms are much faster than state-of-the-art stochastic gradient schemes while achieving the same (or better) sum-rates.
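A skeleton of the framework's main loop, with sample(), best_response() and the diminishing step-size rule as illustrative assumptions rather than the paper's exact updates: each iteration draws one realization, lets every user solve its strongly convex surrogate in parallel, and smooths the iterates.

```python
import numpy as np

def stochastic_parallel_sca(x0, sample, best_response, n_iter=200):
    """Stochastic parallel SCA skeleton. best_response(x, xi) returns
    the stacked minimizers of all users' strongly convex surrogates
    built from the current iterate x and sample xi."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, n_iter + 1):
        xi = sample()                  # one random realization per step
        gamma = 2.0 / (t + 2)          # diminishing, non-summable steps
        x_hat = best_response(x, xi)   # all users update in parallel
        x = x + gamma * (x_hat - x)    # smoothing toward best responses
    return x
```

The smoothing step is what tames the sampling noise: the surrogates change with each realization, but the averaged iterate converges almost surely under the paper's assumptions.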
In this paper, we propose a successive pseudo-convex approximation algorithm to efficiently compute stationary points for a large class of possibly nonconvex optimization problems. The stationary points are obtained by solving a sequence of successively refined approximate problems, each of which is much easier to solve than the original problem. To achieve convergence, the approximate problem only needs to exhibit a weak form of convexity, namely, pseudo-convexity. We show that the proposed framework not only includes as special cases a number of existing methods, for example, the gradient method and the Jacobi algorithm, but also leads to new algorithms which enjoy easier implementation and faster convergence speed. We also propose a novel line search method for nondifferentiable optimization problems, which is carried out over a properly constructed differentiable function with the benefit of a simplified implementation as compared to state-of-the-art line search techniques that directly operate on the original nondifferentiable objective function. The advantages of the proposed algorithm are shown, both theoretically and numerically, by several example applications, namely, MIMO broadcast channel capacity computation, energy efficiency maximization in massive MIMO systems and LASSO in sparse signal recovery.
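A simplified sketch of the successive approximation loop with a backtracking search along the best-response direction. The paper's line search is carried out over a specially constructed differentiable function; as a stand-in, this sketch backtracks on a smooth objective directly, which is an assumption made for brevity.

```python
import numpy as np

def sca_with_line_search(x0, obj_smooth, solve_approx, n_iter=100, beta=0.5):
    """Successive approximation: solve a pseudo-convex surrogate at the
    current point (solve_approx), then step along the resulting
    direction with a backtracking search until the objective improves."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_hat = solve_approx(x)        # minimizer of the surrogate
        d = x_hat - x                  # descent direction
        gamma = 1.0
        while obj_smooth(x + gamma * d) > obj_smooth(x) and gamma > 1e-8:
            gamma *= beta              # shrink step until improvement
        x = x + gamma * d
    return x
```

Choosing the surrogate recovers the special cases named in the abstract: a linearization plus proximal term gives the gradient method, and per-block exact minimization gives the Jacobi algorithm.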
An analytical derivation of the buoyancy-induced initial acceleration of a spherical gas bubble in a host liquid is presented. The theory makes no assumptions beyond the two-phase incompressible Navier-Stokes equations, showing that neither the classical approach using potential theory nor other simplifying assumptions is needed. The classical result for the initial bubble acceleration as a function of the gas and liquid densities, originally built on potential theory, is recovered. The result is reproduced by detailed numerical simulations. The accelerated, though still stagnant, state of the bubble induces a pressure distribution on the bubble surface that differs from the result associated with the Archimedean principle, emphasizing the importance of the non-equilibrium state for the force acting on the bubble.
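For reference, the classical potential-theory result that the paper recovers follows from Newton's second law with the added mass of a sphere, $m_a = \tfrac{1}{2}\rho_l V$:

```latex
% Force balance at t = 0 for a stagnant spherical bubble of volume V:
\left( \rho_g + \tfrac{1}{2}\rho_l \right) V \, a_0
  = \left( \rho_l - \rho_g \right) V g
\quad\Longrightarrow\quad
a_0 = \frac{\rho_l - \rho_g}{\rho_g + \tfrac{1}{2}\rho_l}\, g
  \;\longrightarrow\; 2g \qquad (\rho_g \ll \rho_l).
```

For a gas bubble the density ratio is tiny, so the initial acceleration approaches twice the gravitational acceleration.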
Using $e^+e^-$ annihilation data corresponding to a total integrated luminosity of 7.33 $\rm fb^{-1}$ collected at center-of-mass energies between 4.128 and 4.226 GeV with the BESIII detector, we provide the first amplitude analysis and absolute branching fraction measurement of the hadronic decay $D_{s}^{+} \to K_{S}^{0}K_{L}^{0}\pi^{+}$. The branching fraction of $D_{s}^{+} \to K_{S}^{0}K_{L}^{0}\pi^{+}$ is determined to be $(1.86\pm0.06_{\rm stat}\pm0.03_{\rm syst})\%$. Combining the $\mathcal{B}(D_{s}^{+} \to \phi(\to K_{S}^0K_{L}^0)\pi^+)$ obtained in this work with the world average of $\mathcal{B}(D_{s}^{+} \to \phi(\to K^+K^-)\pi^+)$, we measure the relative branching fraction $\mathcal{B}(\phi \to K_S^0K_L^0)/\mathcal{B}(\phi \to K^+K^-) = 0.597 \pm 0.023_{\rm stat} \pm 0.018_{\rm syst} \pm 0.016_{\rm PDG}$, which deviates from the PDG value by more than $3\sigma$. Furthermore, the asymmetry of the branching fractions of $D^+_s\to K_{S}^0K^{*}(892)^{+}$ and $D^+_s\to K_{L}^0K^{*}(892)^{+}$, $\frac{\mathcal{B}(D_{s}^{+} \to K_{S}^0K^{*}(892)^{+})-\mathcal{B}(D_{s}^{+} \to K_{L}^0K^{*}(892)^{+})}{\mathcal{B}(D_{s}^{+} \to K_{S}^0K^{*}(892)^{+})+\mathcal{B}(D_{s}^{+} \to K_{L}^0K^{*}(892)^{+})}$, is determined to be $(-13.4\pm5.0_{\rm stat}\pm3.4_{\rm syst})\%$.
We investigate linear theories of incompatible micromorphic elasticity, incompatible microstretch elasticity, incompatible micropolar elasticity and the incompatible dilatation theory of elasticity (elasticity with voids). The incompatibility conditions and Bianchi identities are derived and discussed. The Eshelby stress tensor (static energy momentum) is calculated for such inhomogeneous media with microstructure. Its divergence gives the driving forces for dislocations, disclinations, point defects and inhomogeneities which are called configurational forces.
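In the small-strain setting, one common form of the Eshelby (energy-momentum) tensor and of the configurational force density obtained from its divergence is the following; this is the standard compatible-elasticity expression, which the paper generalizes to incompatible media with microstructure by adding the analogous micro-deformation terms.

```latex
% Eshelby tensor (W: stored energy density, \sigma: stress,
% u: displacement) and configurational force density:
P_{ij} = W\,\delta_{ij} - \sigma_{kj}\, u_{k,i},
\qquad
f_i^{\mathrm{conf}} = -\,\partial_j P_{ij}.
```

For a homogeneous, defect-free medium the divergence vanishes; dislocations, disclinations, point defects and material inhomogeneities are precisely the sources that make it nonzero.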
We sketch the architecture of O'Mega, a new optimizing compiler for tree amplitudes in quantum field theory, and briefly describe its usage. O'Mega generates the most efficient code currently available for scattering amplitudes for many polarized particles in the Standard Model and its extensions.
We report an experimental study of proximity effect-induced superconductivity in crystalline Cu and Co nanowires and a nanogranular Co nanowire structure in contact with a superconducting W floating electrode which we call the inducer. The nanowires were grown by electrochemical deposition in heavy-ion-track etched polycarbonate templates. The nanogranular Co structure was fabricated by focused electron beam induced deposition (FEBID), while the amorphous W inducer was obtained by focused ion beam induced deposition (FIBID). For electrical resistance measurements, up to three pairs of Pt voltage leads were deposited by FIBID at different distances beside the inner inducer electrode, thus allowing us to probe the proximity effect over a length of 2-12 $\mu$m. Relative $R(T)$ drops of the same order of magnitude have been observed for the Co and Cu nanowires when sweeping the temperature below 5.2 K ($T_c$ of the FIBID-deposited W inducer). By contrast, relative $R(T)$ drops were found to be an order of magnitude smaller for the nanogranular Co nanowire structure. Our analysis of the resistance data shows that the superconducting proximity length in crystalline Cu and Co is about 1 $\mu$m at low temperatures, attesting to a long-range proximity effect in the case of ferromagnetic Co. Moreover, this long-range proximity effect has been revealed to be insusceptible to magnetic fields up to 11 T, which is indicative of spin-triplet pairing. At the same time, in the nanogranular Co structure proximity-induced superconductivity is strongly suppressed due to the dominating Cooper pair scattering caused by the intrinsic microstructure of the FEBID deposit.
One of the most remarkable results to emerge from heavy-ion collisions over the past two decades is the striking regularity shown by particle yields at all energies. This has led to several very successful proposals describing particle yields over a very wide range of beam energies, reaching from 1 A GeV up to 200 A GeV, using only one or two parameters. A systematic comparison of these proposals is presented here. The conditions of fixed energy per particle, fixed baryon-plus-antibaryon density, and fixed normalized entropy density, as well as the percolation model, are investigated. The results are compared with the most recent chemical freeze-out parameters obtained in the thermal-statistical analysis of particle yields. The sensitivity and dependence of the results on parameters is analyzed and discussed. It is shown that in the energy range above the top AGS energy, within present accuracies, all chemical freeze-out criteria give a fairly good description of the particle yields. However, the low-energy heavy-ion data favor constant energy per particle as a unified condition of chemical particle freeze-out. This condition also shows the weakest sensitivity to model assumptions and parameters.
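For reference, the constant energy per particle condition singled out here is commonly quoted in the form below (the Cleymans-Redlich criterion), evaluated with thermal averages at chemical freeze-out:

```latex
\frac{\langle E \rangle}{\langle N \rangle} \;\simeq\; 1~\mathrm{GeV}.
```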
In the present paper, the classical point symmetry analysis is extended from partial differential equations to functional differential equations with functional derivatives. In order to perform the group analysis and deal with the functional derivatives, we extend quantities such as infinitesimal transformations, prolongations and invariant solutions. As an example, the procedure is applied to the continuum limit of the heat equation. The method can further lead to significant applications in statistical physics and fluid dynamics.
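For orientation, the classical point-symmetry setup being extended reads as follows; in the functional setting, the derivative with respect to the dependent variable is replaced by a functional derivative $\delta/\delta u(x)$, and the prolongation formulas are adapted accordingly.

```latex
% Classical infinitesimal generator of a point symmetry:
X = \xi^{i}(x,u)\,\frac{\partial}{\partial x^{i}}
  + \eta(x,u)\,\frac{\partial}{\partial u},
% whose suitably prolonged action must vanish on solutions
% of the differential equation under study.
```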
The Boolean satisfiability problem (SAT) can be solved efficiently with variants of the DPLL algorithm. For industrial SAT problems, DPLL with dynamic decision heuristics based on conflict analysis has proved particularly efficient, e.g. in Chaff. In this work, algorithms that initialize the variable activity values in the solver MiniSAT v1.14 by analyzing the CNF are evolved using genetic programming (GP), with the goal of reducing the total number of conflicts of the search and the solving time. The effect of using initial activities other than zero is examined by initializing with random numbers. The possibility of countering the detrimental effects of reordering the CNF with improved initialization is investigated. The best result found (with validation testing on further problems) was used in the solver Actin, which was submitted to SAT-Race 2006.
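An illustrative (hypothetical, not the GP-evolved) activity initializer of the kind the search space contains: score each variable from static CNF features before solving starts, here by occurrence counts weighted toward short clauses.

```python
def init_activities(clauses, num_vars):
    """Static CNF-based activity initialization. clauses is a list of
    clauses in DIMACS style (lists of signed integer literals); returns
    an activity value per variable, index 1..num_vars."""
    act = [0.0] * (num_vars + 1)
    for clause in clauses:
        weight = 2.0 ** -len(clause)   # short clauses weigh more
        for lit in clause:
            act[abs(lit)] += weight
    return act

# Example: ((x1 v ~x2) & (x2 v x3 v ~x1)) over 3 variables.
print(init_activities([[1, -2], [2, 3, -1]], 3))
```

Because such scores depend only on clause structure, they are invariant under clause reordering, which is exactly the property needed to counter the reordering effects studied in the paper.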