Institute of Applied Computing and Community Code (IAC3)
Erupting flux ropes play a crucial role in powering a wide range of solar transients, including flares, jets, and coronal mass ejections. These events are driven by the release of stored magnetic energy, facilitated by shear in complex magnetic topologies. However, the mechanisms governing the formation and eruption of flux ropes, particularly the role of the magnetic shear distribution in coronal arcades, are not fully understood. We employ magnetohydrodynamic simulations incorporating the nonadiabatic effects of optically thin radiative losses, magnetic field-aligned thermal conduction, and spatially varying (steady) background heating to realistically model the coronal environment. A stratified solar atmosphere under gravity is initialized with a non-force-free field comprising sheared arcades. We study two cases with different initial shear to analyze the resulting dynamics and the possibility of flux rope formation and eruption. Our results show that strong initial magnetic shear leads to spontaneous flux rope formation and eruption via magnetic reconnection, driven by the Lorentz force. The shear distribution reflects the non-potentiality distributed along the arcades and demonstrates its relevance in identifying sites prone to eruptive activity. We explore the evolution of the mean shear and the relative strength of the guide field to the reconnection field during the pre- and post-eruption phases, with implications of bulk heating for the ``hot onset'' phenomenon in flares and for particle acceleration. In contrast, the weaker-shear case does not lead to the formation of any flux ropes. Our findings highlight the limitations of relying solely on footpoint shear and underscore the need for coronal-scale diagnostics. These results are relevant for understanding eruptive onset conditions and can promote a better interpretation of coronal observations from current and future missions.
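The nonadiabatic terms listed in the abstract (optically thin losses, field-aligned conduction, steady background heating) typically enter the MHD energy equation in a form like the following; this is the standard textbook form under common notation, not necessarily the exact formulation used in the paper:

```latex
\frac{\partial p}{\partial t} + \nabla\cdot\left(p\,\mathbf{v}\right)
= -(\gamma-1)\,p\,\nabla\cdot\mathbf{v}
+ (\gamma-1)\left[
  \nabla\cdot\!\left(\kappa_\parallel\,\hat{\mathbf{b}}\,(\hat{\mathbf{b}}\cdot\nabla T)\right)
  - n_e n_H\,\Lambda(T) + H(\mathbf{r})
\right]
```

Here $\kappa_\parallel$ is the field-aligned (Spitzer-type) conductivity, $\hat{\mathbf{b}}$ the unit vector along the magnetic field, $\Lambda(T)$ the optically thin radiative loss function, $n_e$ and $n_H$ electron and hydrogen number densities, and $H(\mathbf{r})$ the spatially varying steady background heating.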
Understanding the dynamics of large-scale brain models remains a central challenge due to the inherent complexity of these systems. In this work, we explore the emergence of complex spatiotemporal patterns in a large-scale brain model composed of 90 interconnected brain regions coupled through empirically derived anatomical connectivity. An important aspect of our formulation is that the local dynamics of each brain region are described by a next-generation neural mass model, which explicitly captures the macroscopic gamma activity of coupled excitatory and inhibitory neural populations (PING mechanism). We first identify the system's homogeneous states, both resting and oscillatory, and analyze their stability under uniform perturbations. We then determine their stability against non-uniform perturbations by obtaining dispersion relations for the perturbation growth rate. This analysis enables us to link unstable directions of the homogeneous solutions to the emergence of rich spatiotemporal patterns, which we characterize by means of Lyapunov exponents and frequency spectrum analysis. Our results show that, compared to previous studies with classical neural mass models, next-generation neural mass models provide a broader dynamical repertoire, both within homogeneous states and in the heterogeneous regime. Additionally, we identify a key role for anatomical connectivity in cross-frequency coupling, allowing for the emergence of gamma oscillations whose amplitude is modulated by slower rhythms. These findings suggest that such models are not only more biophysically grounded but also particularly well suited to capture the full complexity of large-scale brain dynamics. Overall, our study advances the analytical understanding of emergent spatiotemporal patterns in whole-brain models.
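The dispersion-relation analysis described above can be sketched numerically: around a homogeneous fixed point, a perturbation along each eigenmode of the connectivity matrix grows according to the eigenvalues of the local Jacobian plus the connectivity eigenvalue times the coupling Jacobian. The sketch below uses a generic two-variable local model and a toy symmetric network as stand-ins for the paper's next-generation neural mass model and 90-region connectome; the specific matrices are illustrative assumptions.

```python
import numpy as np

def dispersion_relation(J_local, J_coupling, W):
    """Growth rate of non-uniform perturbations around a homogeneous
    fixed point, computed per eigenmode of the connectivity matrix W.

    For each connectivity eigenvalue lam_k, the perturbation along
    that mode evolves under J_local + lam_k * J_coupling; the mode is
    linearly unstable if any eigenvalue of that matrix has a positive
    real part.
    """
    lam = np.linalg.eigvals(W)
    growth = np.array([
        np.max(np.linalg.eigvals(J_local + lk * J_coupling).real)
        for lk in lam
    ])
    return lam, growth

# Toy stand-in for the local dynamics: stable on its own
# (eigenvalues -1 +/- 0.5i).
J_local = np.array([[-1.0, -0.5],
                    [ 0.5, -1.0]])
# Coupling acts only on the first variable of each region.
J_coupling = np.array([[2.0, 0.0],
                       [0.0, 0.0]])
# Small symmetric toy "connectome".
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

lam, growth = dispersion_relation(J_local, J_coupling, W)
print(growth.max() > 0)  # some non-uniform mode is unstable -> True
```

The same machinery applies unchanged with the 90-region anatomical connectivity and the Jacobians of the next-generation neural mass model evaluated at the homogeneous resting or oscillatory state (for oscillatory states one would use Floquet analysis instead of a fixed-point Jacobian).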
Denoising is one of the fundamental steps of the processing pipeline that converts data captured by a camera sensor into a display-ready image or video. It is generally performed early in the pipeline, usually before demosaicking, although studies swapping their order or even performing them jointly have been proposed. With the advent of deep learning, the quality of denoising algorithms has steadily increased. Even so, modern neural networks still have a hard time adapting to new noise levels and scenes, which is indispensable for real-world applications. With this in mind, we propose a self-similarity-based denoising scheme that weights both a pre- and a post-demosaicking denoiser for Bayer-patterned color filter array (CFA) video data. We show that a balance between the two leads to better image quality, and we empirically find that higher noise levels benefit from a stronger pre-demosaicking influence. We also integrate temporal trajectory prefiltering steps before each denoiser, which further improve texture reconstruction. The proposed method only requires an estimate of the noise model at the sensor, accurately adapts to any noise level, and is competitive with the state of the art, making it suitable for real-world videography.
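The weighting idea above (more pre-demosaicking influence at higher noise) can be sketched as a noise-dependent blend of the two denoised estimates. The linear ramp and its endpoints `sigma_lo`/`sigma_hi` are illustrative assumptions, not the paper's actual weighting rule.

```python
import numpy as np

def fuse_denoisers(pre_denoised_rgb, post_denoised_rgb, sigma,
                   sigma_lo=2.0, sigma_hi=20.0):
    """Blend a pre-demosaicking and a post-demosaicking denoising
    result. The weight on the pre-demosaicking branch grows with the
    estimated sensor noise level `sigma`, reflecting the observation
    that higher noise benefits from denoising before demosaicking.
    The linear ramp between sigma_lo and sigma_hi is a hypothetical
    choice for illustration.
    """
    w_pre = np.clip((sigma - sigma_lo) / (sigma_hi - sigma_lo), 0.0, 1.0)
    return w_pre * pre_denoised_rgb + (1.0 - w_pre) * post_denoised_rgb

# Toy usage with random stand-ins for the two denoised estimates.
rng = np.random.default_rng(0)
pre = rng.random((8, 8, 3))
post = rng.random((8, 8, 3))
low = fuse_denoisers(pre, post, sigma=2.0)    # all weight on the post branch
high = fuse_denoisers(pre, post, sigma=25.0)  # all weight on the pre branch
print(np.allclose(low, post), np.allclose(high, pre))  # True True
```

In practice `pre_denoised_rgb` would come from demosaicking a denoised CFA frame and `post_denoised_rgb` from denoising a demosaicked frame, with `sigma` taken from the sensor noise model the method requires.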