University of Delhi
The main power of quantum sensors is realized when the probe is composed of several particles. In this situation, quantum features such as entanglement contribute to enhancing the precision of quantum sensors beyond the capacity of classical sensors. Originally, quantum sensing was formulated for non-interacting particles prepared in a special form of maximally entangled states. Such probes are extremely sensitive to decoherence, and any interaction between particles is detrimental to their performance. An alternative framework for quantum sensing has been developed exploiting quantum many-body systems, where the interaction between particles plays a crucial role. In this review, we investigate different aspects of the latter approach to quantum metrology and sensing. Many-body probes have been used in both equilibrium and non-equilibrium scenarios. Quantum criticality has been identified as a resource for achieving quantum-enhanced sensitivity in both scenarios. In equilibrium, various types of criticalities, such as first-order, second-order, topological, and localization phase transitions, have been exploited for sensing purposes. In non-equilibrium scenarios, quantum-enhanced sensitivity has been discovered for Floquet, dissipative, and time-crystal phase transitions. While each of these criticalities has its own characteristics, one feature is crucial for achieving quantum-enhanced sensitivity: the closing of the energy/quasi-energy gap. In non-equilibrium quantum sensing, time is another parameter that can affect the sensitivity of the probe. Typically, the sensitivity improves as the probe evolves in time. In general, a more complete understanding of resources for non-equilibrium quantum sensors is now rapidly evolving. In this review, we provide an overview of recent progress in quantum metrology and sensing using many-body systems.
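For background on what "quantum-enhanced sensitivity" means quantitatively, the standard metrology relations (general textbook forms, not specific results of this review) are

\[
\Delta\theta \;\geq\; \frac{1}{\sqrt{M\, F_Q}}, \qquad
\Delta\theta_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}, \qquad
\Delta\theta_{\mathrm{HL}} \sim \frac{1}{N},
\]

where $F_Q$ is the quantum Fisher information, $M$ the number of measurement repetitions, and $N$ the number of probe particles; entanglement or many-body criticality can push the precision scaling from the standard quantum limit (SQL) toward the Heisenberg limit (HL).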
This Letter reports measurements of muon-neutrino disappearance and electron-neutrino appearance, and the corresponding antineutrino processes, between the two NOvA detectors in the NuMI neutrino beam. These measurements use a dataset with double the neutrino-mode beam exposure previously analyzed, along with improved simulation and analysis techniques. A joint fit to these samples in the three-flavor paradigm results in the most precise single-experiment constraint on the atmospheric neutrino mass splitting, $\Delta m^2_{32} = 2.431^{+0.036}_{-0.034}\,(-2.479^{+0.036}_{-0.036}) \times 10^{-3}~\mathrm{eV}^2$ if the mass ordering is Normal (Inverted). In both orderings, a region close to maximal mixing with $\sin^2\theta_{23} = 0.55^{+0.06}_{-0.02}$ is preferred. The NOvA data show a mild preference for the Normal mass ordering with a Bayes factor of 2.4 (corresponding to 70% of the posterior probability), indicating that the Normal ordering is 2.4 times more probable than the Inverted ordering. When incorporating a 2D $\Delta m^2_{32}$--$\sin^2 2\theta_{13}$ constraint based on Daya Bay data, this preference strengthens to a Bayes factor of 6.6 (87%).
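For reference, the posterior probabilities quoted alongside the Bayes factors follow from assuming equal prior odds for the two mass orderings; a quick arithmetic check:

\[
P(\mathrm{NO}\mid\mathrm{data}) = \frac{K}{1+K}, \qquad
K = 2.4 \;\Rightarrow\; P \approx \frac{2.4}{3.4} \approx 0.71, \qquad
K = 6.6 \;\Rightarrow\; P \approx \frac{6.6}{7.6} \approx 0.87,
\]

consistent with the 70% and 87% figures quoted above.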
We formalize the structure of a class of mathematical models of growing-dividing autocatalytic systems, demonstrating that self-reproduction emerges only if the system's 'growth dynamics' and 'division strategy' are mutually compatible. Using various models in this class (the linear Hinshelwood cycle and nonlinear coarse-grained models of protocells and bacteria), we show that, depending on the chosen division mechanism, the same chemical system can exhibit either (i) balanced exponential growth, (ii) balanced nonexponential growth, or (iii) system death (where the system either explodes to infinity or collapses to zero in successive generations). We identify the class of division processes that lead to these three outcomes, offering strategies to stabilize or destabilize growing-dividing systems. Our work provides a geometric framework to further explore growing-dividing systems and will aid in the design of self-reproducing synthetic cells.
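A toy numerical sketch of the growth-division interplay described above (illustrative only; the models and division rules studied in the paper are more general): a linear Hinshelwood-type growth law combined with a division rule that keeps a fraction of each species once the total mass doubles. With a compatible fraction the birth size settles into balanced growth; with a mismatched fraction the birth size decays in successive generations.

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],     # Hinshelwood-type cycle: species 1 catalyses 2 and vice versa
              [2.0, 0.0]])

def grow_and_divide(x0, n_generations, frac=0.5):
    """Grow dx/dt = A x until total mass doubles, then keep a fraction `frac`."""
    x = np.array(x0, dtype=float)
    birth_sizes = []
    for _ in range(n_generations):
        target = 2.0 * x.sum()
        event = lambda t, y: y.sum() - target
        event.terminal, event.direction = True, 1.0
        sol = solve_ivp(lambda t, y: A @ y, (0.0, 50.0), x, events=event, rtol=1e-8)
        x = frac * sol.y[:, -1]          # division: daughter inherits a fraction of each species
        birth_sizes.append(x.sum())
    return np.array(birth_sizes)

print(grow_and_divide([1.0, 0.5], 10))            # frac = 0.5: birth size settles (balanced growth)
print(grow_and_divide([1.0, 0.5], 10, frac=0.4))  # mismatched division: birth size shrinks each generation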
As large reasoning models (LRMs) grow more capable, chain-of-thought (CoT) reasoning introduces new safety challenges. Existing SFT-based safety alignment studies have focused predominantly on filtering prompts with safe, high-quality responses, while overlooking hard prompts that consistently elicit harmful outputs. To fill this gap, we introduce UnsafeChain, a safety alignment dataset constructed from hard prompts drawn from diverse sources, in which unsafe completions are identified and explicitly corrected into safe responses. By exposing models to unsafe behaviors and guiding their correction, UnsafeChain enhances safety while preserving general reasoning ability. We fine-tune three LRMs on UnsafeChain and compare them against the recent SafeChain and STAR-1 datasets across six out-of-distribution and five in-distribution benchmarks. UnsafeChain consistently outperforms prior datasets, with even a 1K subset matching or surpassing baseline performance, demonstrating the effectiveness and generalizability of correction-based supervision. We release our dataset and code at this https URL
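A minimal sketch of correction-based data construction in the spirit described above, assuming a hypothetical generator, safety judge, and corrector; the actual UnsafeChain pipeline, prompts, and models are not specified here, and all names below are illustrative.

# Illustrative sketch of correction-based dataset construction (hypothetical helpers).
def build_correction_dataset(hard_prompts, generate, is_unsafe, correct_response):
    """For each hard prompt, keep a (prompt, safe response) pair,
    correcting the completion whenever the model's own output is unsafe."""
    dataset = []
    for prompt in hard_prompts:
        completion = generate(prompt)               # LRM completion (may be unsafe)
        if is_unsafe(prompt, completion):           # safety judge flags a harmful output
            completion = correct_response(prompt, completion)  # rewrite into a safe response
        dataset.append({"prompt": prompt, "response": completion})
    return dataset

# Trivial stub demo (replace the stubs with a real LRM, judge, and corrector):
demo = build_correction_dataset(
    ["how do I pick a lock?"],
    generate=lambda p: "Here is how ...",
    is_unsafe=lambda p, c: True,
    correct_response=lambda p, c: "I can't help with that, but a licensed locksmith can ...",
)
print(demo)

Fine-tuning then proceeds as ordinary SFT on the resulting (prompt, response) pairs.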
The Solar Ultraviolet Imaging Telescope (SUIT) onboard Aditya-L1 is an imager that observes the solar photosphere and chromosphere through observations in the wavelength range of 200-400 nm. A comprehensive understanding of the plasma and thermodynamic properties of chromospheric and photospheric morphological structures requires a large-sample statistical study, necessitating the development of automatic feature detection methods. To this end, we develop the feature detection algorithm SPACE-SUIT: Solar Phenomena Analysis and Classification using Enhanced vision techniques for SUIT, to detect and classify the solar chromospheric features to be observed in SUIT's Mg II k filter. Specifically, we target plage regions, sunspots, filaments, and off-limb structures. SPACE uses You Only Look Once (YOLO), a neural-network-based model, to identify regions of interest. We train and validate SPACE using mock-SUIT images developed from Interface Region Imaging Spectrometer (IRIS) full-disk mosaic images in the Mg II k line, and we also perform detection on Level-1 SUIT data. SPACE achieves an approximate precision of 0.788, recall of 0.863, and mAP of 0.874 on the validation mock-SUIT FITS dataset. Given the manual labeling of our dataset, we perform "self-validation" by applying statistical measures and Tamura features to the ground-truth and predicted bounding boxes. We find that the distributions of entropy, contrast, dissimilarity, and energy show differences across the features. These differences are qualitatively captured by the regions predicted by SPACE and validated against the observed SUIT images, even in the absence of labeled ground truth. This work not only develops a chromospheric feature extractor but also demonstrates the effectiveness of statistical metrics and Tamura features for distinguishing chromospheric features, offering independent validation for future detection schemes.
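As a rough illustration of the detection step (not the authors' actual pipeline), a hedged sketch using the ultralytics YOLO API and astropy to read a FITS image; the weights file, image file, and scaling are placeholders assumed for demonstration.

import numpy as np
from astropy.io import fits
from ultralytics import YOLO

# Load a (hypothetical) trained model and a Level-1-style FITS image.
model = YOLO("space_suit_mgii.pt")           # placeholder weights file
data = fits.getdata("suit_mg2k_image.fits")  # placeholder file name

# Scale the float image to an 8-bit, 3-channel array YOLO can ingest.
img = np.nan_to_num(data).astype(float)
img = (255 * (img - img.min()) / (np.ptp(img) + 1e-9)).astype(np.uint8)
img = np.stack([img] * 3, axis=-1)

results = model.predict(img, conf=0.25)      # detect plage, sunspots, filaments, off-limb structures
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], box.xyxy.tolist())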
Strong gravitational lensing time-delay measurements, together with the distance sum rule (DSR), offer a model-independent approach to probe the geometry and expansion of the universe without relying on a fiducial cosmological model. In this work, we perform a cosmographic analysis by combining the latest Type Ia supernova datasets (PantheonPlus, DESY5, and Union3), baryon acoustic oscillation data from DESI-DR2, and updated time-delay distances from strong lensing systems. The analyses using SGL with individual SNIa datasets (SGL+PantheonPlus, SGL+DESY5, and SGL+Union3) indicate a preference for an open universe, though they remain consistent with a spatially flat universe at the 95% confidence level. When DESI-DR2 data are included in each combination, the constraints tighten and shift slightly toward a closed universe, while flatness remains supported at the 68% confidence level. The best-fit values of $q_0$ and $j_0$ agree with $\Lambda$CDM expectations within 95% or 99% confidence depending on the dataset, whereas $s_0$ remains weakly constrained in all cases. This work is the first in a series of two companion papers on cosmography with DESI-DR2 and strong lensing.
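For context, the cosmographic parameters quoted above are the usual coefficients of a low-redshift expansion of the scale factor, and the distance sum rule relates the three dimensionless comoving distances in a lens system; the standard forms (not taken verbatim from the paper) are

\[
q_0 = -\left.\frac{\ddot a\, a}{\dot a^2}\right|_0, \quad
j_0 = \left.\frac{\dddot a\, a^2}{\dot a^3}\right|_0, \quad
s_0 = \left.\frac{\ddddot a\, a^3}{\dot a^4}\right|_0, \qquad
d_{ls} = d_s\sqrt{1+\Omega_k d_l^{\,2}} - d_l\sqrt{1+\Omega_k d_s^{\,2}},
\]

where $d(z)$ is the dimensionless comoving distance and $\Omega_k$ the curvature density parameter.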
In this investigation we examine the astrophysical consequences of pressure anisotropy for the physical properties of observed pulsars within the background of $f(Q,T)$ gravity, choosing the specific form $f(Q,T) = \psi_1 Q + \psi_2 T$, where $\psi_1$ and $\psi_2$ are the model parameters. We first solve the modified field equations for anisotropic stellar configurations by assuming a physically valid metric potential along with an anisotropic function for the interior matter distribution. We test the derived gravitational model against various stability conditions to confirm the physical existence of compact stars within the $f(Q,T)$ gravity context. We analyze thoroughly the influence of anisotropy on the effective density, pressure, and mass-radius relation of the stars. The present inspection implies that the resulting gravitational models are non-singular and able to account for observed pulsars with masses exceeding 2 $M_{\odot}$ as well as masses falling in the {\em mass gap} regime, in particular merger events like GW190814. The predicted radii for observed stars of different masses fall within the range {10.5 km, 14.5 km} for $\psi_1 \leq 1.05$, whereas the radius of PSR J0740+6620 is predicted to fall within {13.09 km, 14.66 km}, in agreement with the range {11.79 km, 15.01 km} reported in the recent literature.
We study McKean--Vlasov Stochastic Differential Equations (MV-SDEs) whose drift and diffusion coefficients are of superlinear growth in \textit{all} their variables, thus also superlinear in the measure component (the precise meaning is specified in the body of the paper). We address both the finite and infinite time horizon cases. Our contribution is fourfold. (a) We establish well-posedness for this class of equations and the corresponding interacting particle system. (b) We prove two propagation of chaos results with explicit $L^2$-convergence rates: the first is a general one, where the rate degrades as the system's dimension $d$ increases; the second attains the sharp rate $N^{-1/2}$ (in particle number $N$) uniformly over the dimension $d$, at the cost of a Vlasov kernel structure that is general and of superlinear growth in the measure dependency -- the latter's proof fully avoids the Kantorovich-Rubinstein duality argument. (c) Unlike existing works -- based on semi-implicit schemes or truncated Euler schemes -- we propose a fully explicit tamed Euler scheme with comparatively reduced computational cost. The explicit scheme is shown to converge in the strong $L^p$-sense with rate $1/2$ (in the timestep). (d) Lastly, we establish exponential ergodicity properties and long-time behavior for the MV-SDE, the corresponding interacting particle system, and the tamed scheme. The latter result is, to the best of our knowledge, fully novel.
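A rough sketch of a fully explicit tamed Euler scheme for the interacting-particle approximation of an MV-SDE, under illustrative drift and diffusion choices; the taming factors and the model below are assumptions for demonstration, not the scheme or the assumptions of the paper.

import numpy as np

def tamed_euler_particles(b, sigma, x0, T, n_steps, n_particles, seed=0):
    """Explicit tamed Euler for dX = b(X, mu) dt + sigma(X, mu) dW,
    with mu approximated by the empirical measure of the particle cloud."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    x = np.full(n_particles, x0, dtype=float)
    for _ in range(n_steps):
        drift = b(x, x)                       # empirical measure passed as the particle cloud
        diff = sigma(x, x)
        dw = rng.normal(scale=np.sqrt(h), size=n_particles)
        # taming keeps the explicit step stable despite superlinear coefficients
        x = x + h * drift / (1.0 + h * np.abs(drift)) + diff * dw / (1.0 + h * np.abs(diff))
    return x

# Illustrative superlinear coefficients (not from the paper):
b = lambda x, cloud: -x**3 - x + np.mean(cloud)            # cubic drift plus a mean-field term
sigma = lambda x, cloud: 0.5 * (1.0 + np.abs(np.mean(cloud)))
particles = tamed_euler_particles(b, sigma, x0=1.0, T=1.0, n_steps=1000, n_particles=500)
print(particles.mean(), particles.std())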
University of Washington; Michigan State University; University of Canterbury; DESY; Georgia Institute of Technology; Sungkyunkwan University; University of California, Irvine; University of Copenhagen; Ohio State University; Pennsylvania State University; Columbia University; Aarhus University; University of Pennsylvania; University of Maryland; University of Wisconsin-Madison; University of Alberta; University of Rochester; MIT; Chiba University; University of Geneva; Karlsruhe Institute of Technology; University of Delhi; Universität Oldenburg; Niels Bohr Institute; University of Alabama; University of South Dakota; University of California Berkeley; Ruhr-Universität Bochum; University of Adelaide; Kobe University; Technische Universität Dortmund; University of Kansas; University of California, Santa Cruz; University of California Riverside; University of Würzburg; Universität Münster; Erlangen Centre for Astroparticle Physics; University of Mainz; University of Alaska Anchorage; Southern University and A&M College; Bartol Research Institute; National Chiao Tung University; Universität Wuppertal; Delaware State University; Oskar Klein Centre; Université Libre de Bruxelles; RWTH Aachen University; Vrije Universiteit Brussel
The LIGO/Virgo collaboration published the catalogs GWTC-1, GWTC-2.1 and GWTC-3 containing candidate gravitational-wave (GW) events detected during its runs O1, O2 and O3. These GW events can be possible sites of neutrino emission. In this paper, we present a search for neutrino counterparts of 90 GW candidates using IceCube DeepCore, the low-energy infill array of the IceCube Neutrino Observatory. The search is conducted using an unbinned maximum likelihood method, within a time window of 1000 s and uses the spatial and timing information from the GW events. The neutrinos used for the search have energies ranging from a few GeV to several tens of TeV. We do not find any significant emission of neutrinos, and place upper limits on the flux and the isotropic-equivalent energy emitted in low-energy neutrinos. We also conduct a binomial test to search for source populations potentially contributing to neutrino emission. We report a non-detection of a significant neutrino-source population with this test.
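As background on the analysis technique (the standard point-source likelihood machinery, not the collaboration's exact implementation), a minimal sketch of an unbinned maximum-likelihood fit for the number of signal neutrinos within the 1000 s window; the per-event signal and background PDF values are assumed to be given.

import numpy as np
from scipy.optimize import minimize_scalar

def test_statistic(S, B):
    """Unbinned likelihood-ratio test for n_s signal events.
    S, B: arrays of per-event signal and background PDF values."""
    N = len(S)
    def neg_log_lr(ns):
        # -log[ L(ns)/L(0) ] with L(ns) = prod_i [ ns/N * S_i + (1 - ns/N) * B_i ]
        return -np.sum(np.log((ns / N) * S / B + (1.0 - ns / N)))
    res = minimize_scalar(neg_log_lr, bounds=(0.0, N), method="bounded")
    ns_hat = res.x
    ts = 2.0 * (-res.fun)                   # TS = 2 log[ L(ns_hat) / L(0) ]
    return ns_hat, max(ts, 0.0)

# Toy example with five events (hypothetical PDF values):
S = np.array([2.0, 0.5, 1.2, 0.1, 3.0])
B = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
print(test_statistic(S, B))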
The advent of the Internet led to the easy availability of digital data such as images, audio, and video. Easy access to multimedia gives rise to issues such as content authentication, security, copyright protection, and ownership identification. Here, we discuss the concept of digital image watermarking with a focus on the techniques used for watermark embedding and extraction. A detailed classification, along with the basic characteristics of digital watermarking, namely visual imperceptibility, robustness, capacity, and security, is also presented in this work. Further, we discuss recent application areas of digital watermarking such as healthcare, remote education, electronic voting systems, and the military. Robustness is evaluated by examining the effect of image-processing attacks on the signed content and the recoverability of the watermark. The authors believe that the comprehensive survey presented in this paper will help new researchers gather knowledge in this domain. Further, the comparative analysis can enkindle ideas to improve upon the techniques already mentioned.
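To make the embedding/extraction idea concrete, a minimal spatial-domain least-significant-bit (LSB) sketch, one of the simplest schemes covered in such surveys; it is not a technique proposed by this survey's authors, just an illustration.

import numpy as np

def embed_lsb(cover, watermark_bits):
    """Embed a bit sequence into the least significant bits of a grayscale cover image."""
    flat = cover.astype(np.uint8).ravel().copy()
    bits = np.asarray(watermark_bits, dtype=np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSB plane
    return flat.reshape(cover.shape)

def extract_lsb(signed, n_bits):
    """Recover the first n_bits from the LSB plane of the signed image."""
    return signed.ravel()[:n_bits] & 1

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
bits = np.random.randint(0, 2, 128, dtype=np.uint8)
signed = embed_lsb(cover, bits)
assert np.array_equal(extract_lsb(signed, 128), bits)

Spatial-domain LSB embedding is visually imperceptible but fragile; transform-domain schemes (e.g., DCT or DWT based) reviewed in such surveys trade capacity for robustness against the image-processing attacks mentioned above.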
We consider a model of non-canonical scalar-tensor theory in which the kinetic term in the Brans-Dicke action is replaced by a non-canonical scalar field Lagrangian $\mathcal{L}(X, \phi) = \lambda X^\alpha \phi^\beta - V(\phi)$, where $X = (1/2)\,\partial_{\mu}\phi\,\partial^{\mu}\phi$ and $\alpha$, $\beta$, and $\lambda$ are parameters of the model. This can be considered a simple non-canonical generalization of the Brans-Dicke theory with a potential term, which corresponds to the special case $\alpha = 1$, $\beta = -1$, and $\lambda = 2w_{BD}$, where $w_{BD}$ is the Brans-Dicke parameter. Considering a spatially flat Friedmann-Robertson-Walker Universe with scale factor $a(t)$, it is shown that, in the matter-free Universe, the kinetic term $\lambda X^\alpha \phi^\beta$ can lead to a power-law solution $a(t) \propto t^{n}$, but the maximum possible value of $n$ turns out to be $(1+\sqrt{3})/4 \approx 0.683$. When $\alpha \geq 18$, this model can lead to a solution $a(t) \propto t^{2/3}$, thereby mimicking the evolution of the scale factor in a cold dark matter dominated epoch with Einstein's General Relativity (GR). With the addition of a linear potential term $V(\phi) = V_{0}\phi$, it is shown that this model mimics the standard $\Lambda$CDM-type evolution of the Universe. The larger the value of $\alpha$, the closer the evolution of $a(t)$ in this model is to that in the $\Lambda$CDM model based on Einstein's GR. The purpose of this paper is to demonstrate that this model with a linear potential can mimic the GR-based $\Lambda$CDM model. However, with an appropriate choice of the potential $V(\phi)$, this model can provide a unified description of both dark matter and dynamical dark energy, as if it were based on Einstein's GR.
A new spectral Chebyshev collocation method precisely solves the stiff, nonlinear background equations of f(R) gravity, enabling robust parameter constraints for the Hu-Sawicki and Starobinsky models that align with the observed cosmic expansion. This method provides stable and globally accurate solutions, facilitating efficient Bayesian inference for cosmological parameters and f(R) model-specific parameters.
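To illustrate the numerical technique named above (a generic Chebyshev collocation solve of a stiff model problem on [-1, 1], not the authors' specific f(R) background system), a short sketch that builds the standard Chebyshev differentiation matrix and solves a stiff linear boundary-value problem.

import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x (Trefethen's construction)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))               # negative-sum trick for the diagonal
    return D, x

# Stiff model problem: eps * u'' + u' = 0 on [-1, 1], with u(-1) = 0 and u(1) = 1.
eps, n = 1e-3, 128
D, x = cheb(n)
A = eps * (D @ D) + D
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0], A[-1, -1] = 1.0, 1.0                 # impose the boundary conditions at x = +1 and x = -1
rhs = np.zeros(n + 1)
rhs[0] = 1.0                                  # x[0] = +1, so u(1) = 1
u = np.linalg.solve(A, rhs)
print(u[:5])                                  # solution values at the first few collocation points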
In this paper, the cosmic distance duality relation (CDDR) is probed without assuming any background cosmological model. The only \textit{a priori} assumption is that the Universe is described by the Friedmann-Lemaître-Robertson-Walker (FLRW) metric. Strong gravitational lensing (SGL) data are used to construct the dimensionless comoving distance function $d(z)$, and the latest Type Ia supernovae (SNe Ia) Pantheon+ data are used to estimate luminosity distances at the corresponding redshifts $z$. Using the distance sum rule along null geodesics of the FLRW metric, the CDDR violation is probed in both flat and non-flat space-time by considering two parametrizations for $\eta(z)$, the function generally used to probe possible deviations from the CDDR. The results show that the CDDR is compatible with the observations at a very high level of confidence for the linear parametrization in a flat Universe. In a non-flat Universe too, the CDDR is valid within the $1\sigma$ confidence interval, with a mild dependence of $\eta$ on the curvature density parameter $\Omega_{K}$. The results for the non-linear parametrization also show no significant deviation from the CDDR.
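For reference, the CDDR and the two parametrizations of deviations most commonly used in this literature (standard forms; the paper's exact choices may differ) are

\[
\frac{d_L(z)}{(1+z)^2 d_A(z)} = \eta(z), \qquad
\eta(z) = 1 + \eta_0 z \;\; \text{(linear)}, \qquad
\eta(z) = 1 + \eta_0 \frac{z}{1+z} \;\; \text{(non-linear)},
\]

with $\eta(z) \equiv 1$ corresponding to an unbroken duality relation.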
Radiative feedback from massive stars plays a central role in the evolution of molecular clouds and the interstellar medium. This paper presents a multi-wavelength analysis of the bright-rimmed cloud BRC 44, which is located at the periphery of the H II region Sh2-145 and is excited by the massive stars in the region. We use a combination of archival and newly obtained infrared data, along with new optical observations, to provide a census of young stellar objects (YSOs) in the region and to estimate stellar parameters such as age and mass. The spatial distribution of the YSOs visible at optical wavelengths suggests that they are distributed in separate clumps compared to the embedded YSOs and are relatively older. Near-infrared (NIR) spectroscopy of four YSOs in this region, using TANSPEC mounted on the 3.6 m Devasthal Optical Telescope (DOT), confirms their youth. From Spectral Energy Distribution (SED) fitting, most of the embedded YSO candidates are in an early stage of evolution, with the majority in the Class II and some in the Class I stage. The relative proper motions of the YSOs with respect to the ionizing source are indicative of the rocket effect in the BRC. The $^{12}$CO, $^{13}$CO, and C$^{18}$O observations with the Purple Mountain Observatory are used to trace the distribution of molecular gas in the region. A comparison of the cold molecular gas distribution with simple analytical model calculations shows that the cloud is in the compression stage, and massive stars may be influencing the formation of young embedded stars in the BRC region through radiative feedback.
University of Mississippi; California Institute of Technology; Kyungpook National University; SLAC National Accelerator Laboratory; Imperial College London; University of Notre Dame; University of Chicago; Ghent University; Nanjing University; University of Bonn; University of Michigan; University of Melbourne; Cornell University; Boston University; Kansas State University; Rutherford Appleton Laboratory; CERN; Argonne National Laboratory; University of Minnesota; Brookhaven National Laboratory; University of Colorado; Purdue University; University of Helsinki; University of California, Davis; University of Massachusetts Amherst; University of Iowa; Fermi National Accelerator Laboratory; MIT; Princeton University; University of Delhi; University of New Mexico; University of Oregon; Lawrence Livermore National Laboratory; Moscow State University; University of Montenegro; Ewha Womans University; University of California, Santa Cruz; GSI; University of Hawai’i; Max Planck Institute for Physics; University of Barcelona; CEA Saclay; Northern Illinois University; Louisiana Tech University; LPNHE; Bristol University; SUNY Stony Brook; Laboratoire d’Annecy-le-Vieux de Physique des Particules; Institute of Microelectronics of Barcelona, IMB-CNM (CSIC); Institute of Nuclear Research (Atomki); University of Indiana; Molecular Biology Consortium; IPPP; Gomel State Technical University; Instituto de Fisica Corpuscular (IFIC); IHEP Beijing; Instituto de Fisica de Cantabria (IFCA); Institute of Physics, Prague; Obninsk State Technical University for Nuclear Power Engineering; Birla Institute for Technology and Science, Pilani; IPHC-IN2P3/CNRS; Université Pierre et Marie Curie
Letter of intent describing SiD (Silicon Detector) for consideration by the International Linear Collider IDAG panel. This detector concept is founded on the use of silicon detectors for vertexing, tracking, and electromagnetic calorimetry. The detector has been cost-optimized as a general-purpose detector for a 500 GeV electron-positron linear collider.
Video Anomaly Detection (VAD) automates the identification of unusual events, such as security threats, in surveillance videos. In real-world applications, VAD models must operate effectively in cross-domain settings, identifying rare anomalies and scenarios not well represented in the training data. However, existing cross-domain VAD methods focus on unsupervised learning, resulting in performance that falls short of real-world expectations. Since acquiring weak supervision, i.e., video-level labels, for the source domain is cost-effective, we conjecture that combining it with external unlabeled data has notable potential to enhance cross-domain performance. To this end, we introduce a novel weakly-supervised framework for Cross-Domain Learning (CDL) in VAD that incorporates external data during training by estimating its prediction bias and adaptively minimizing it using the predicted uncertainty. We demonstrate the effectiveness of the proposed CDL framework through comprehensive experiments conducted in various configurations on two large-scale VAD datasets: UCF-Crime and XD-Violence. Our method significantly surpasses the state-of-the-art in cross-domain evaluations, achieving average absolute improvements of 19.6% on UCF-Crime and 12.87% on XD-Violence.
This project investigates traffic congestion within North Campus, Delhi University (DU), using continuous-time simulations implemented in UXSim to model vehicle movement and interaction. The study focuses on several key intersections, identifies recurring congestion points, and evaluates the effectiveness of conventional traffic management measures. Implementing signal-timing optimization and modest intersection reconfiguration resulted in measurable improvements in simulated traffic flow. The results provide practical insights for local traffic management and illustrate the value of continuous-time simulation methods for informing short-term interventions and longer-term planning.
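A minimal UXSim sketch in the spirit of the study (a toy signalized junction, not the actual North Campus network or the project's code); the calls follow UXSim's documented World/addNode/addLink/adddemand interface, but the network geometry, signal timings, and demand values here are arbitrary assumptions.

from uxsim import World

# Toy network: two approaches meeting at one signalized junction, then a single exit.
W = World(name="toy_campus", deltan=5, tmax=3600, print_mode=1, random_seed=42)

W.addNode("gate_n", 0, 1)
W.addNode("gate_s", 0, -1)
W.addNode("junction", 1, 0, signal=[30, 30])   # simple two-phase signal (assumed 30 s per phase)
W.addNode("exit", 2, 0)

W.addLink("approach_n", "gate_n", "junction", length=500, free_flow_speed=11, signal_group=0)
W.addLink("approach_s", "gate_s", "junction", length=500, free_flow_speed=11, signal_group=1)
W.addLink("out", "junction", "exit", length=500, free_flow_speed=11)

W.adddemand("gate_n", "exit", 0, 3000, 0.3)    # 0.3 veh/s from the north gate over 3000 s
W.adddemand("gate_s", "exit", 0, 3000, 0.3)

W.exec_simulation()
W.analyzer.print_simple_stats()                # average delay, completed trips, etc.

Varying the signal phase durations and re-running the simulation is the kind of signal-timing experiment the study describes.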
Prior studies have shown that distinguishing text generated by large language models (LLMs) from human-written text is highly challenging, and often no better than random guessing. To verify the generalizability of this finding across languages and domains, we perform an extensive case study to identify the upper bound of human detection accuracy. Across 16 datasets covering 9 languages and 9 domains, 19 annotators achieved an average detection accuracy of 87.6%, thus challenging previous conclusions. We find that the major gaps between human and machine text lie in concreteness, cultural nuances, and diversity. Prompting that explicitly explains these distinctions can partially bridge the gaps in over 50% of the cases. However, we also find that humans do not always prefer human-written text, particularly when they cannot clearly identify its source.
The standard quantum limit (SQL), also known as the shot-noise limit, defines how quantum fluctuations of light constrain measurement precision. In a benchmark experiment using the Mach-Zehnder interferometer (MZI), where a coherent state with average photon number $\langle n\rangle$ is combined with an ordinary vacuum input, the SQL for the phase uncertainty is given by the well-known relation $\Delta\varphi_{\text{SQL}} = 1/\sqrt{\langle n\rangle}$. Using a single-photon-added coherent state and a weak coherent state as inputs, we report an enhanced phase sensitivity in the MZI surpassing the SQL. In stark contrast to other approaches, we focus on the low-photon-number regime, $\langle n\rangle < 10$, and demonstrate that our scheme offers better phase sensitivity than the SQL. Beating the SQL at low photon numbers paves the way for a new generation of devices employed in "photon-starved" quantum sensing, spectroscopy, and metrology.
The shape of erythrocytes, or red blood cells, is altered in several pathological conditions. Therefore, identifying and quantifying different erythrocyte shapes can help diagnose various diseases and assist in designing a treatment strategy. Machine Learning (ML) can be used efficiently to identify and quantify distorted erythrocyte morphologies. In this paper, we propose a customized deep convolutional neural network (CNN) model to classify and quantify the distorted and normal morphology of erythrocytes from images of blood samples of patients suffering from sickle cell disease (SCD). We chose SCD as a model disease condition due to the presence of diverse erythrocyte morphologies in the blood samples of SCD patients. For the analysis, we used 428 raw microscopic images of SCD blood samples and generated a dataset consisting of 10,377 single-cell images. We focused on three well-defined erythrocyte shapes: discocyte, oval, and sickle. We used an 18-layer deep CNN architecture to identify and quantify these shapes with 81% accuracy, outperforming other models. We also used SHAP and LIME for further interpretability. The proposed model can be helpful for quick and accurate analysis of SCD blood samples by clinicians and can help them make the right decisions for better management of SCD.
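A schematic of a small convolutional classifier for the three erythrocyte classes (a generic sketch, not the 18-layer architecture of the paper; input size, layer widths, and channel counts are assumptions):

import torch
import torch.nn as nn

class ErythrocyteCNN(nn.Module):
    """Toy CNN for 3-class single-cell images (discocyte / oval / sickle)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ErythrocyteCNN()
logits = model(torch.randn(8, 3, 64, 64))   # batch of 8 dummy 64x64 RGB cell crops
print(logits.shape)                          # torch.Size([8, 3])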