Current electroencephalogram (EEG) decoding models are typically trained on small numbers of subjects performing a single task. Here, we introduce a large-scale, code-submission-based competition comprising two challenges. First, the Transfer Challenge asks participants to build and test a model that can zero-shot decode new tasks and new subjects from their EEG data. Second, the Psychopathology Factor Prediction Challenge asks participants to infer subject measures of mental health from EEG data. For this, we use an unprecedented, multi-terabyte dataset of high-density EEG signals (128 channels) recorded from over 3,000 subjects, ranging from children to young adults, engaged in multiple active and passive tasks. We provide several tunable baselines for each of the two challenges, including a simple neural network and demographic-based regression models. Developing models that generalise across tasks and individuals will pave the way for ML network architectures capable of adapting to EEG data collected from diverse tasks and individuals. Similarly, predicting mental health-relevant personality trait values from EEG might identify objective biomarkers useful for clinical diagnosis and the design of personalised treatment for psychological conditions. Ultimately, the advances spurred by this challenge could contribute to the development of computational psychiatry and useful neurotechnology, and to breakthroughs in both fundamental neuroscience and applied clinical research.
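As a flavor of what a demographic-only baseline for the second challenge might look like, here is a minimal sketch; the file and column names ("subjects.csv", "age", "sex", "p_factor") are hypothetical placeholders, not the competition's actual schema.

```python
# Minimal sketch of a demographic-only regression baseline for the
# psychopathology challenge; file and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

df = pd.read_csv("subjects.csv")
X = pd.get_dummies(df[["age", "sex"]])      # encode demographic covariates
y = df["p_factor"]                          # target psychopathology factor

scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print(f"5-fold CV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```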
Objective gait assessment in Parkinson's Disease (PD) is limited by the absence of large, diverse, and clinically annotated motion datasets. We introduce CARE-PD, the largest publicly available archive of 3D mesh gait data for PD, and the first multi-site collection, spanning 9 cohorts from 8 clinical centers. All recordings (RGB video or motion capture) are converted into anonymized SMPL meshes via a harmonized preprocessing pipeline. CARE-PD supports two key benchmarks: supervised clinical score prediction (estimating Unified Parkinson's Disease Rating Scale (UPDRS) gait scores) and unsupervised motion pretext tasks (2D-to-3D keypoint lifting and full-body 3D reconstruction). Clinical prediction is evaluated under four generalization protocols: within-dataset, cross-dataset, leave-one-dataset-out, and multi-dataset in-domain adaptation. To assess clinical relevance, we compare state-of-the-art motion encoders with a traditional gait-feature baseline, finding that encoders consistently outperform handcrafted features. Pretraining on CARE-PD reduces MPJPE (from 60.8mm to 7.5mm) and boosts PD severity macro-F1 by 17 percentage points, underscoring the value of clinically curated, diverse training data. CARE-PD and all benchmark code are released for non-commercial research at this https URL.
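For reference, MPJPE (mean per-joint position error), the reconstruction metric quoted above, is simply the average Euclidean distance between predicted and ground-truth 3D joints; a minimal sketch:

```python
# Mean per-joint position error (MPJPE), in the units of the input.
import numpy as np

def mpjpe(pred, gt):
    """pred, gt: arrays of shape (frames, joints, 3)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy usage with 24 SMPL-style joints and synthetic per-coordinate noise:
pred = np.random.rand(100, 24, 3) * 1000          # hypothetical poses, mm
gt = pred + np.random.randn(100, 24, 3) * 8       # perturbed "ground truth"
print(f"MPJPE: {mpjpe(pred, gt):.1f} mm")
```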
Many promising approaches to symbolic regression have been presented in recent years, yet progress in the field continues to suffer from a lack of uniform, robust, and transparent benchmarking standards. In this paper, we address this shortcoming by introducing an open-source, reproducible benchmarking platform for symbolic regression. We assess 14 symbolic regression methods and 7 machine learning methods on a set of 252 diverse regression problems. Our assessment includes both real-world datasets with no known model form and ground-truth benchmark problems, including physics equations and systems of ordinary differential equations. For the real-world datasets, we benchmark the ability of each method to learn models with low error and low complexity relative to state-of-the-art machine learning methods. For the synthetic problems, we assess each method's ability to find exact solutions in the presence of varying levels of noise. Under these controlled experiments, we conclude that the best-performing methods for real-world regression combine genetic algorithms with parameter estimation and/or semantic search drivers. When tasked with recovering exact equations in the presence of noise, we find that deep learning and genetic algorithm-based approaches perform similarly. We provide a detailed guide to reproducing this experiment and contributing new methods, and encourage other researchers to collaborate with us on a common and living symbolic regression benchmark.
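To give a sense of how such a harness is organized, here is a minimal sketch of a per-method evaluation loop; the "complexity_" attribute is a hypothetical stand-in for each method's expression size, not the benchmark's actual API.

```python
# Sketch of a per-method evaluation step: fit one symbolic regression
# method on a dataset, then record held-out error and model complexity.
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def evaluate(method, X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    method.fit(X_tr, y_tr)
    return {
        "test_r2": r2_score(y_te, method.predict(X_te)),
        # Expression size; attribute name is hypothetical.
        "complexity": getattr(method, "complexity_", None),
    }
```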
Níckolas de Aguiar Alves and André G. S. Landulfo introduce Asymptotic (Conformal) Killing Horizons (A(C)KHs) as a unified, theory-agnostic geometric framework for defining asymptotic symmetries on null hypersurfaces. This approach naturally incorporates superdilations for asymptotically flat spacetimes, clarifies the relationship between BMS and DMP symmetry groups, and delineates the geometric conditions underlying these symmetries.
The interplay between thermodynamics, general relativity and quantum mechanics has long intrigued researchers. Recently, important advances have been obtained in thermodynamics, mainly regarding its application to the quantum domain through fluctuation theorems. In this letter, we apply Fermi normal coordinates to report a fully general relativistic detailed quantum fluctuation theorem based on the two-point measurement scheme. We demonstrate how spacetime curvature can produce entropy in a localized quantum system moving in a general spacetime. As an example, we present a quantum harmonic oscillator living in an expanding universe. This result implies that entropy production is strongly observer dependent and deeply connects the arrow of time with the causal structure of spacetime.
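For orientation, the flat-spacetime statement being generalized is the standard detailed fluctuation theorem of the two-point measurement scheme, with $\sigma$ the stochastic entropy production and $\tilde{p}$ the distribution for the time-reversed process:

```latex
\frac{p(\sigma)}{\tilde{p}(-\sigma)} \;=\; e^{\sigma}
\qquad\Longrightarrow\qquad
\langle e^{-\sigma} \rangle \;=\; 1 .
```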
These are the extended lecture notes for a minicourse presented at the I São Paulo School on Gravitational Physics discussing the Bondi--Metzner--Sachs (BMS) group, the group of symmetries at null infinity on asymptotically flat spacetimes. The BMS group has found many applications in classical gravity, quantum field theory in flat and curved spacetimes, and quantum gravity. These notes build the BMS group from its most basic prerequisites (such as group theory, symmetries in differential geometry, and asymptotic flatness) up to modern developments. These include its connections to the Weinberg soft graviton theorem, the memory effect, its use to construct Hadamard states in quantum field theory in curved spacetimes, and other ideas. Advanced sections briefly discuss the main concepts behind the infrared triangle in electrodynamics, superrotations, and the Dappiaggi--Moretti--Pinamonti group in expanding universes with cosmological horizons. New contributions by the author concerning asymptotic (conformal) Killing horizons are discussed at the end.
Researchers introduce Alljoined1, a large-scale, multi-participant EEG dataset comprising responses to 10,000 unique natural images per participant, designed to enable more robust EEG-to-image decoding. The dataset demonstrates consistent neural activity and high signal-to-noise ratio in visual processing regions, confirming its quality for advanced decoding tasks.
Symbolic Regression (SR) is a powerful technique for discovering interpretable mathematical expressions. However, benchmarking SR methods remains challenging due to the diversity of algorithms, datasets, and evaluation criteria. In this work, we present an updated version of SRBench. Our benchmark expands the previous one by nearly doubling the number of evaluated methods, refining the evaluation metrics, and improving the visualization of results to better understand performance. Additionally, we analyze trade-offs between model complexity, accuracy, and energy consumption. Our results show that no single algorithm dominates across all datasets. We issue a call to action to the SR community to maintain and evolve SRBench as a living benchmark that reflects the state of the art in symbolic regression, by standardizing hyperparameter tuning, execution constraints, and computational resource allocation. We also propose deprecation criteria to maintain the benchmark's relevance and discuss best practices for improving SR algorithms, such as adaptive hyperparameter tuning and energy-efficient implementations.
Light-flavor baryon resonances in the $J^P=3/2^+$, $J^P=5/2^+$, and $J^P=5/2^-$ families are investigated in a soft-wall AdS/QCD model at finite temperature, including the zero-temperature limit. Regge-like trajectories relating the configurational entropy underlying these resonances to both the radial quantum number and the baryon mass spectra are constructed, allowing for the extrapolation of the higher-spin light-flavor baryonic mass spectra to higher values of the radial quantum number. The configurational entropy is shown to increase drastically with temperature in the range beyond $T \sim 38$ MeV. The mass spectra of the baryon families are analyzed, supporting a phase transition nearly above the Hagedorn temperature.
Achieving high-fidelity and temporally smooth 3D human motion generation remains a challenge, particularly within resource-constrained environments. We introduce FlowMotion, a novel method leveraging Conditional Flow Matching (CFM). FlowMotion incorporates a training objective within CFM that focuses on more accurately predicting target motion in 3D human motion generation, resulting in enhanced generation fidelity and temporal smoothness while maintaining the fast synthesis times characteristic of flow-matching-based methods. FlowMotion achieves state-of-the-art jitter performance, with the best jitter on the KIT dataset and the second-best on the HumanML3D dataset, together with competitive FID values on both. This combination yields robust and natural motion sequences, offering a promising equilibrium between generation quality and temporal naturalness.
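FlowMotion's modified objective is not spelled out here, so the sketch below shows only the generic CFM training step it builds on (rectified-flow form); the model signature and tensor shapes are assumptions.

```python
# Generic conditional flow matching (CFM) training step; FlowMotion
# modifies this objective, so this is only the vanilla baseline.
import torch

def cfm_loss(model, x1, cond):
    """x1: real motion batch (B, T, D); cond: conditioning embedding."""
    x0 = torch.randn_like(x1)              # noise endpoint of the path
    t = torch.rand(x1.shape[0], 1, 1)      # per-sample time in [0, 1]
    xt = (1 - t) * x0 + t * x1             # linear interpolation path
    target_v = x1 - x0                     # constant target velocity
    pred_v = model(xt, t.view(-1), cond)   # hypothetical model signature
    return torch.mean((pred_v - target_v) ** 2)
```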
We obtain the Green's function $G$ for any flat rhombic torus $T$, with numerical values accurate to at least the fourth decimal place (noting that $G$ is unique once $|T|=1$ and $\int_T G\,dA=0$). This precision is guaranteed by the strategies we adopt, which include theorems such as the Legendre relation, properties of the Weierstraß $\wp$-function, and the algorithmic control of numerical errors. Our code uses complex integration routines developed by H. Karcher, who also introduced the symmetric Weierstraß $\wp$-function, and these resources simplify the computation of elliptic functions considerably.
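One widely used closed form on the torus $T=\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})$, against which such numerics can be checked, expresses $G$ through the Jacobi theta function; theta conventions vary, so treat the normalization below as an assumption rather than the paper's:

```latex
G(z) \;=\; -\frac{1}{2\pi}\,\log\bigl|\vartheta_1(\pi z \mid \tau)\bigr|
\;+\; \frac{(\operatorname{Im} z)^2}{2\operatorname{Im}\tau}
\;+\; c(\tau),
```

where $\vartheta_1(\,\cdot\mid\tau)$ vanishes at the lattice points and the constant $c(\tau)$ is fixed by the zero-mean condition $\int_T G\,dA = 0$.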
We introduce Quantum Register Algebra (QRA) as an efficient tool for quantum computing. We show the direct link between QRA and the Dirac formalism. We present a GAALOP (Geometric Algebra Algorithms Optimizer) implementation of our approach. Using the QRA basis vector definitions given in Section 4 and the framework based on the de Witt basis presented in Section 5, we are able to fully describe and compute with QRA in GAALOP using the geometric product. We illustrate the intuitiveness of this computation by presenting the QRA form of the well-known SWAP operation on a two-qubit register.
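For readers less familiar with the gate in question, here is a minimal numpy check of the defining property $\mathrm{SWAP}\,|a\rangle\otimes|b\rangle = |b\rangle\otimes|a\rangle$; this is the standard matrix form, not GAALOP/QRA code.

```python
# Matrix form of the SWAP gate in the computational basis
# |00>, |01>, |10>, |11>; QRA expresses this same operator
# algebraically via the geometric product.
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

a = np.array([1.0, 0.0])            # |0>
b = np.array([0.6, 0.8])            # a normalized superposition
assert np.allclose(SWAP @ np.kron(a, b), np.kron(b, a))  # SWAP|ab> = |ba>
```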
I analyze the decoherence of a $\pi$-junction qubit encoded in two co-located Majorana modes. Although not topologically protected, the qubit leverages distinct spatial profiles to couple to two independent environmental baths, realizing the phenomenon of quantum frustration. This mechanism is tested against the threat of quasiparticle poisoning (QP). I show that frustration is effective against Ohmic noise ($s=1$) and provides some protection for noise exponents $s \gtrsim 0.76$.
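Here $s$ refers to the standard power-law parametrization of the bath spectral density ($s=1$ Ohmic, $s<1$ sub-Ohmic, $s>1$ super-Ohmic), a convention this summary assumes the paper follows:

```latex
J(\omega) \;\propto\; \omega^{s}\, e^{-\omega/\omega_{c}} .
```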
The paper by Níckolas de Aguiar Alves and Bruno Arderucio Costa explores how the concept of mass transforms from a simple scalar in Newtonian physics to a complex, non-local property deeply intertwined with spacetime geometry in general relativity. It details various definitions like Komar, ADM, and Bondi mass, addressing their implications, especially concerning gravitational radiation and the challenges posed by quantum effects.
When combined with In-Context Learning, a technique that enables models to adapt to new tasks by incorporating task-specific examples or demonstrations directly within the input prompt, autoregressive language models have achieved strong performance across a wide range of tasks and applications. However, this combination has not been properly explored in the context of named entity recognition, where the structure of the task poses unique challenges. We propose RENER (Retrieval-Enhanced Named Entity Recognition), a technique for named entity recognition using autoregressive language models based on In-Context Learning and information retrieval techniques. When presented with an input text, RENER fetches similar examples from a dataset of training examples, which are used to help the language model recognize named entities in the input text. RENER is modular and independent of the underlying language model and information retrieval algorithms. Experimental results show that on the CrossNER collection we achieve state-of-the-art performance with the proposed technique and that information retrieval can increase the F-score by up to 11 percentage points.
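A minimal sketch of the retrieve-then-prompt loop described above, assuming a sentence-embedding retriever; the encoder choice and prompt template are illustrative stand-ins, not RENER's exact configuration.

```python
# Sketch: retrieve similar training sentences, then assemble an
# In-Context Learning prompt for named entity recognition.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative choice

def retrieve(query, train_texts, k=5):
    """Indices of the k training sentences most similar to the query."""
    emb = encoder.encode([query] + train_texts)
    q, t = emb[0], emb[1:]
    sims = t @ q / (np.linalg.norm(t, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)[:k].tolist()

def build_prompt(query, train_texts, train_entities, k=5):
    """Prepend retrieved demonstrations to the input text."""
    demos = [f"Text: {train_texts[i]}\nEntities: {train_entities[i]}"
             for i in retrieve(query, train_texts, k)]
    return "\n\n".join(demos) + f"\n\nText: {query}\nEntities:"
```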
High energy collisions at the High-Luminosity Large Hadron Collider (LHC) produce a large number of particles along the beam collision axis, outside of the acceptance of existing LHC experiments. The proposed Forward Physics Facility (FPF), to be located several hundred meters from the ATLAS interaction point and shielded by concrete and rock, will host a suite of experiments to probe Standard Model (SM) processes and search for physics beyond the Standard Model (BSM). In this report, we review the status of the civil engineering plans and the experiments to explore the diverse physics signals that can be uniquely probed in the forward region. FPF experiments will be sensitive to a broad range of BSM physics through searches for new particle scattering or decay signatures and deviations from SM expectations in high statistics analyses with TeV neutrinos in this low-background environment. High statistics neutrino detection will also provide valuable data for fundamental topics in perturbative and non-perturbative QCD and in weak interactions. Experiments at the FPF will enable synergies between forward particle production at the LHC and astroparticle physics to be exploited. We report here on these physics topics, on infrastructure, detector, and simulation studies, and on future directions to realize the FPF's physics potential.
Quantum entanglement is a foundational resource in quantum information science, underpinning applications across physics. However, detecting and quantifying entanglement remains a significant challenge. Here, we introduce a variational quantum algorithm inspired by Uhlmann's theorem to quantify the Bures entanglement of general quantum states, a method that naturally extends to other quantum resources, including genuine multipartite entanglement, quantum discord, quantum coherence, and total correlations, while also enabling reconstruction of the closest free states. The algorithm requires a polynomial number of ancillary qubits and circuit depth relative to the system size, dimensionality, and free state cardinality, making it scalable for practical implementations. Thus, it provides a versatile and efficient framework for quantifying quantum resources, demonstrated through several applications.
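For context, the Bures quantities the algorithm targets are standard; up to normalization conventions, the Uhlmann fidelity, the Bures distance, and the Bures entanglement (minimal distance to the separable set) read:

```latex
F(\rho,\sigma) \;=\; \Bigl(\operatorname{Tr}\sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}}\Bigr)^{2}
\;=\; \max_{|\psi\rangle,\,|\varphi\rangle} \bigl|\langle\psi|\varphi\rangle\bigr|^{2},
\qquad
D_{B}^{2}(\rho,\sigma) \;=\; 2\Bigl(1-\sqrt{F(\rho,\sigma)}\Bigr),
\qquad
E_{B}(\rho) \;=\; \min_{\sigma\in\mathrm{SEP}} D_{B}^{2}(\rho,\sigma),
```

where the maximization in Uhlmann's theorem runs over purifications of $\rho$ and $\sigma$.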
We compute the four-photon ($F^4$) operators generated by loops of charged particles of spin $0$, $\frac{1}{2}$, and $1$ in the presence of gravity and in any spacetime dimension $d$. To this end, we expand the one-loop effective action via the heat kernel coefficients, which capture both the gravity-induced renormalization of the $F^4$ operators and the low-energy Einstein-Maxwell effective field theory (EFT) produced by massive charged particles. We set positivity bounds on the $F^4$ operators using standard arguments from extremal black holes (for $d\geq 4$) and from infrared (IR) consistency of four-photon scattering (for $d\geq 3$). We find that both approaches yield nearly equivalent results, even though in the amplitudes we discard the graviton $t$-channel pole and use the vanishing of the Gauss-Bonnet term at quadratic order for any $d$. The positivity bounds constrain the charge-to-mass ratio of the heavy particles. If the Planckian $F^4$ operators are sufficiently small or negative, such bounds produce a version of the $d$-dimensional Weak Gravity Conjecture (WGC) in most, but not all, dimensions. In the special case of $d=6$, the gravity-induced beta functions of $F^4$ operators from charged particles of any spin are positive, leading to WGC-like bounds with a logarithmic enhancement. In $d=9,10$, the WGC fails to guarantee extremal black hole decay in the infrared EFT, thereby requiring the existence of sufficiently large Planckian $F^4$ operators.
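Schematically, the two independent four-photon operators in general $d$ enter the EFT Lagrangian as follows (the coefficient names $\alpha_{1,2}$ are ours, not the paper's); the positivity and IR-consistency arguments above constrain combinations of these coefficients:

```latex
\mathcal{L}_{\mathrm{EFT}} \;\supset\;
\alpha_{1}\,\bigl(F_{\mu\nu}F^{\mu\nu}\bigr)^{2}
\;+\;
\alpha_{2}\,F_{\mu\nu}F^{\nu\rho}F_{\rho\sigma}F^{\sigma\mu} .
```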
We describe and analyze algorithms for shape-constrained symbolic regression, which allows the inclusion of prior knowledge about the shape of the regression function. This is relevant in many areas of engineering -- in particular whenever a data-driven model obtained from measurements must have certain properties (e.g. positivity, monotonicity or convexity/concavity). We implement shape constraints using a soft-penalty approach which uses multi-objective algorithms to minimize constraint violations and training error. We use the non-dominated sorting genetic algorithm (NSGA-II) as well as the multi-objective evolutionary algorithm based on decomposition (MOEA/D). We use a set of models from physics textbooks to test the algorithms and compare against earlier results with single-objective algorithms. The results show that all algorithms are able to find models which conform to all shape constraints. Using shape constraints helps to improve extrapolation behavior of the models.
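A minimal sketch of the soft-penalty idea, assuming a monotonicity constraint is checked by sampling the candidate model on a grid (the paper's implementations may bound shapes differently):

```python
# Soft-penalty shape constraint: measure how badly a candidate model
# violates "f is non-decreasing" on an interval, then pair that with
# training error as two objectives for NSGA-II / MOEA/D to minimize.
import numpy as np

def monotonicity_violation(model, lo, hi, n=256):
    """Total violation of non-decreasingness on [lo, hi]; 0 if satisfied."""
    x = np.linspace(lo, hi, n)
    f = model(x)
    return float(np.clip(-np.diff(f), 0.0, None).sum())

def objectives(model, x, y):
    """(training error, constraint violation) for multi-objective search."""
    mse = float(np.mean((model(x) - y) ** 2))
    return mse, monotonicity_violation(model, x.min(), x.max())

# Toy usage: sin is not monotone on [0, 4], so the penalty is nonzero.
x = np.linspace(0.0, 4.0, 100)
print(objectives(np.sin, x, np.sin(x)))
```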
As the statistical power of galaxy weak lensing reaches percent level precision, large, realistic and robust simulations are required to calibrate observational systematics, especially given the increased importance of object blending as survey depths increase. To capture the coupled effects of blending in both shear and photometric redshift calibration, we define the effective redshift distribution for lensing, $n_{\gamma}(z)$, and describe how to estimate it using image simulations. We use an extensive suite of tailored image simulations to characterize the performance of the shear estimation pipeline applied to the Dark Energy Survey (DES) Year 3 dataset. We describe the multi-band, multi-epoch simulations, and demonstrate their high level of realism through comparisons to the real DES data. We isolate the effects that generate shear calibration biases by running variations on our fiducial simulation, and find that blending-related effects are the dominant contribution to the mean multiplicative bias of approximately $-2\%$. By generating simulations with input shear signals that vary with redshift, we calibrate biases in our estimation of the effective redshift distribution, and demonstrate the importance of this approach when blending is present. We provide corrected effective redshift distributions that incorporate statistical and systematic uncertainties, ready for use in DES Year 3 weak lensing analyses.
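For reference, the multiplicative bias quoted above refers to the standard linear model for shear estimation, in which the approximately $-2\%$ blending contribution enters through $m$:

```latex
\gamma^{\mathrm{obs}} \;=\; (1+m)\,\gamma^{\mathrm{true}} \;+\; c .
```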