IISER Thiruvananthapuram
Instance-level recognition (ILR) focuses on identifying individual objects rather than broad categories, offering the highest granularity in image classification. However, this fine-grained nature makes creating large-scale annotated datasets challenging, limiting ILR's real-world applicability across domains. To overcome this, we introduce a novel approach that synthetically generates diverse object instances from multiple domains under varied conditions and backgrounds, forming a large-scale training set. Unlike prior work on automatic data synthesis, our method is the first to address ILR-specific challenges without relying on any real images. Fine-tuning foundation vision models on the generated data significantly improves retrieval performance across seven ILR benchmarks spanning multiple domains. Our approach offers a new, efficient, and effective alternative to extensive data collection and curation, introducing a new ILR paradigm where the only input is the names of the target domains, unlocking a wide range of real-world applications.
Graph-based learning is a cornerstone for analyzing structured data, with node classification as a central task. However, in many real-world graphs, nodes lack informative feature vectors, leaving only neighborhood connectivity and class labels as available signals. In such cases, effective classification hinges on learning node embeddings that capture structural roles and topological context. We introduce a fast semi-supervised embedding framework that jointly optimizes three complementary objectives: (i) unsupervised structure preservation via scalable modularity approximation, (ii) supervised regularization to minimize intra-class variance among labeled nodes, and (iii) semi-supervised propagation that refines unlabeled nodes through random-walk-based label spreading with attention-weighted similarity. These components are unified into a single iterative optimization scheme, yielding high-quality node embeddings. On standard benchmarks, our method consistently achieves classification accuracy on par with or superior to state-of-the-art approaches, at significantly lower computational cost.
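In schematic form (our notation, not the paper's exact formulation), the three objectives can be read as a single joint loss over the embedding matrix $Z = (z_1, \dots, z_n)$:
\[
\min_{Z}\ \mathcal{L}(Z) = -\widetilde{Q}(Z) + \lambda_1 \sum_{i \in \mathcal{L}} \lVert z_i - \mu_{y_i} \rVert^2 + \lambda_2 \sum_{i \notin \mathcal{L}} \Big\lVert z_i - \sum_j P_{ij} z_j \Big\rVert^2,
\]
where $\widetilde{Q}$ is a scalable approximation of modularity, $\mu_{y_i}$ is the centroid of class $y_i$ over the labeled set $\mathcal{L}$, $P$ is a random-walk transition matrix reweighted by attention-based similarities, and $\lambda_1, \lambda_2$ trade off the three terms.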
We explore the relationship between quantum Fisher information (QFI) and the second derivative of concurrence with respect to the coupling between two qubits, referred to as the curvature of entanglement (CoE). For a two-qubit quantum probe used to estimate the coupling constant appearing in a simple interaction Hamiltonian, we show that at certain times CoE = -QFI; these times correspond to the concurrence, viewed as a function of the coupling parameter, attaining a maximum. We examine the time evolution of the concurrence of the eigenstates of the symmetric logarithmic derivative and show that, for both initially separable and initially entangled states, simple product measurements suffice to saturate the quantum Cramér-Rao bound when CoE = -QFI, while otherwise, in general, entangled measurements are required, giving operational significance to the points in time when CoE = -QFI.
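For reference, the standard ingredients behind these claims, with $g$ the coupling parameter, are the symmetric logarithmic derivative (SLD) $L_g$, the QFI, and the quantum Cramér-Rao bound:
\[
\partial_g \rho_g = \tfrac{1}{2}\!\left(L_g \rho_g + \rho_g L_g\right), \qquad F_Q(g) = \mathrm{Tr}\!\left[\rho_g L_g^2\right], \qquad \mathrm{Var}(\hat g) \ge \frac{1}{\nu\, F_Q(g)},
\]
with $\nu$ the number of independent repetitions. Writing the concurrence as $C(g)$, the curvature of entanglement is $\mathrm{CoE} = \partial^2 C/\partial g^2$, so the identity above reads $\partial^2 C/\partial g^2 = -F_Q$ at the times where $C(g)$ attains a maximum.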
Hardy's argument constitutes an elegantly logical test for identifying nonlocal features of multipartite correlations. In this paper, we investigate Hardy's nonlocal behavior within a broad class of operational theories, including the qubit state space as a specific case. Specifically, we begin by examining a wider range of operational models whose state spaces are regular polygons. First, we present a systematic method to characterize the possible forms of entangled states within bipartite compositions of these models. Then, through explicit examples, we identify the classes of entangled states that exhibit Hardy-type nonlocality. Remarkably, our findings show that odd-sided polygon models are closer analogues of the qubit state space, in terms of bipartite Hardy nonlocal behavior, than even-sided ones. Furthermore, we demonstrate that the emergence of mixed-state Hardy nonlocality in any operational model is determined by a specific symmetry inherent in its dynamical description. Finally, our results uncover an unexplored class of almost-quantum correlations that can be associated with an explicit operational model.
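For concreteness, one standard form of Hardy's conditions for two parties with dichotomic measurements $A_1, A_2$ and $B_1, B_2$ (outcomes $\pm$) is
\[
P(+,+\,|\,A_1,B_1) = q > 0, \quad P(+,-\,|\,A_1,B_2) = 0, \quad P(-,+\,|\,A_2,B_1) = 0, \quad P(+,+\,|\,A_2,B_2) = 0.
\]
No local-realistic model satisfies all four: in the event of the first condition, the second forces $B_2 = +$ and the third forces $A_2 = +$, so $P(+,+\,|\,A_2,B_2) \ge q > 0$, a contradiction.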
This study introduces a computational approach leveraging Physics-Informed Neural Networks (PINNs) for the efficient computation of arterial blood flow, focusing on solving the incompressible Navier-Stokes equations with a domain decomposition technique. Unlike conventional computational fluid dynamics methods, PINNs eliminate the need for discretized meshes and solve partial differential equations (PDEs) directly. In this paper, we propose weighted Extended Physics-Informed Neural Networks (WXPINNs) and weighted Conservative Physics-Informed Neural Networks (WCPINNs), tailored for detailed hemodynamic simulations based on generalized space-time domain decomposition techniques. The inclusion of multiple neural networks enhances the representation capacity of the weighted PINN methods, and because each sub-domain is handled by a separate neural network, the weighted PINNs can be efficiently trained in parallel computing frameworks. We show that PINN simulations circumvent backflow instabilities, addressing them naturally within their formulation, a notable advantage over traditional numerical methods for such complex blood flow models. The presented numerical results demonstrate that the proposed weighted PINNs outperform traditional PINN settings in which sub-PINNs are applied to each subdomain separately. This study contributes to the integration of deep learning methodologies with fluid mechanics, paving the way for accurate and efficient high-fidelity simulations in biomedical applications, particularly in modeling arterial blood flow.
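The core weighted domain-decomposition idea can be illustrated with a deliberately simplified sketch: a 1D Poisson problem (not the Navier-Stokes system of the paper) solved by two sub-networks blended through a smooth interface weight. Everything below, including the network sizes, the gate steepness, and the interface location, is an illustrative assumption rather than the paper's WXPINN/WCPINN formulation.

```python
# Toy weighted two-subdomain PINN for u'' = f on (0, 1), u(0) = u(1) = 0,
# with f = -pi^2 sin(pi x) so that the exact solution is u = sin(pi x).
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_mlp():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                         nn.Linear(32, 32), nn.Tanh(),
                         nn.Linear(32, 1))

net_left, net_right = make_mlp(), make_mlp()  # one sub-network per subdomain

def w_left(x):
    # smooth partition-of-unity weight, ~1 on [0, 0.5) and ~0 on (0.5, 1]
    return torch.sigmoid(20.0 * (0.5 - x))

def u(x):
    # globally blended ansatz: weighted sum of the two sub-network outputs
    w = w_left(x)
    return w * net_left(x) + (1.0 - w) * net_right(x)

def pde_residual(x):
    x = x.requires_grad_(True)
    ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    f = -(torch.pi ** 2) * torch.sin(torch.pi * x)
    return uxx - f

params = list(net_left.parameters()) + list(net_right.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
x_bc = torch.tensor([[0.0], [1.0]])            # Dirichlet boundary points
for step in range(5000):
    x_col = torch.rand(128, 1)                 # random collocation points
    loss = pde_residual(x_col).pow(2).mean() + u(x_bc).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Because each sub-network only has to represent the solution on its own region, the blended ansatz keeps the per-network task simple, and the sub-networks could in principle be trained on separate devices, which is the parallelism argument the abstract makes.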
GraphVL is a framework for Generalized Category Discovery (GCD) that integrates Vision-Language Models (CLIP) with Graph Neural Networks (GNNs) and multi-faceted metric learning. It creates an unbiased and discriminative semantic space, achieving state-of-the-art performance across diverse benchmarks, particularly for discovering novel categories.
The rapid dissemination of misinformation through online social networks poses a growing threat to public understanding and societal stability. Prebunking, a proactive strategy based on inoculation theory, has recently emerged as an effective intervention to build cognitive resilience against misinformation before exposure. In this work, we investigate the impact of prebunking on misinformation dynamics using a compartmental modeling framework. We first analyze the classical Ignorant-Spreader-Stifler (ISR) model, whose parameters are determined using empirical rumor data from Twitter. We then propose an extended model, the Ignorant-Prebunked-Spreader-Stifler (IPSR) model, which incorporates prebunking as a preventive state and includes a forgetting mechanism to account for the decay of cognitive immunity over time. Using mean-field approximations, we derive steady-state solutions and examine the effect of prebunking on the spreading of misinformation. We further investigate the robustness of the IPSR model by varying network size and average degree. In addition, we analyze the model's behavior on Watts-Strogatz and Barabási-Albert networks to assess the role of small-world and scale-free structures in shaping intervention outcomes. Our results show that the inclusion of prebunking significantly reduces the scale of misinformation outbreaks across different network structures. These findings highlight the efficacy of prebunking as a scalable intervention strategy and underscore the utility of compartmental models in understanding and mitigating information-based contagion in complex networks.
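In the homogeneous mean-field form standard for ISR-type rumor models (taken as the baseline here; the paper's exact rate conventions may differ), with densities $\rho_i, \rho_s, \rho_r$, spreading rate $\lambda$, stifling rate $\alpha$, and average degree $\langle k \rangle$:
\[
\dot{\rho}_i = -\lambda \langle k \rangle \rho_i \rho_s, \qquad \dot{\rho}_s = \lambda \langle k \rangle \rho_i \rho_s - \alpha \langle k \rangle \rho_s (\rho_s + \rho_r), \qquad \dot{\rho}_r = \alpha \langle k \rangle \rho_s (\rho_s + \rho_r),
\]
with $\rho_i + \rho_s + \rho_r = 1$. The IPSR extension adds a prebunked density $\rho_p$, fed from $\rho_i$ at an inoculation rate and drained back to $\rho_i$ by the forgetting mechanism; the specific rate terms are given in the paper.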
Although there is no complete theory of high temperature superconductivity, the importance of CuO$_2$ planes in cuprate superconductors is confirmed by both theory and experiment. Strong Coulomb repulsion between electrons on the CuO$_2$ plane makes the resultant electron system highly correlated and a difficult problem to solve, since exact solutions of many-body Hamiltonians in two dimensions do not exist. If, however, superconductivity can arise in structures having chains rather than planes, with a high critical temperature, then the high temperature superconductivity problem could become more tractable, since exact solutions in one dimension do exist. In this paper, we report the observation of bulk superconductivity in single crystals of the cuprate Sr$_x$Ca$_{1-x}$CuO$_2$ at a very high critical temperature, $T_c$, of $\sim$90 K, whose structure reveals the presence of infinite double chains of Cu-O-Cu-O instead of CuO$_2$ planes, thus ensuring quasi-one-dimensional superconductivity. Bulk superconducting behaviour was observed in \textit{dc} magnetisation, \textit{ac} susceptibility as well as resistance measurements. The observation of bulk superconductivity in Sr$_x$Ca$_{1-x}$CuO$_2$, having chains of Cu-O-Cu-O rather than planes of CuO$_2$, at a high $T_c$ of 90 K is expected to profoundly impact our understanding of high temperature superconductivity.
We analyze the utility of communication channels in the absence of any sort of quantum or classical correlation shared between the sender and the receiver. To this aim, we propose a class of two-party communication games, and show that the games cannot be won given a noiseless 1-bit classical channel from the sender to the receiver. Interestingly, the goal can be perfectly achieved if the channel is assisted with classical shared randomness. This resembles the quantum superdense coding phenomenon, where pre-shared entanglement enhances the communication utility of a perfect quantum communication line. Quite surprisingly, we show that qubit communication without any assistance of classical shared randomness can achieve the goal, and hence establishes a novel quantum advantage in the simplest communication scenario. In pursuit of a deeper origin of this advantage, we show that an advantageous quantum strategy must invoke quantum interference both at the encoding step by the sender and at the decoding step by the receiver. We also study the communication utility of a class of non-classical toy systems described by symmetric polygonal state spaces. We come up with communication tasks that can be achieved neither with 1 bit of classical communication nor by communicating a polygon system, whereas 1-qubit communication yields a perfect strategy, establishing quantum advantage over them. Finally, we show that the quantum advantages are robust against imperfect encodings and decodings, making the protocols implementable with presently available quantum technologies.
Going beyond the standard quantum limit in noisy quantum metrology is an important and challenging task. Here we show how Dicke states can be used to surpass the standard quantum limit and achieve the Heisenberg limit in open quantum systems. The system we study has qubits symmetrically coupled to a resonator, and our objective is to estimate the coupling between the qubits and the resonator. The time-dependent quantum Fisher information with respect to the coupling is studied for this open quantum system, where the same decay rate is assumed on all qubits. We show that when the system is initialized to a Dicke state with an optimal excitation number, one can go beyond the standard quantum limit and achieve the Heisenberg limit even for finite decay rates of the qubits and the resonator, particularly when the qubits and resonator are strongly coupled. We compare our results against the highly entangled GHZ state and a completely separable state, and show that the GHZ state performs quite poorly, whereas under certain noise conditions the separable state is able to go beyond the standard quantum limit due to its subsequent interactions with the resonator.
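For $N$ probe qubits, the two benchmark scalings and the bound they derive from are
\[
\Delta\theta_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}, \qquad \Delta\theta_{\mathrm{HL}} \sim \frac{1}{N}, \qquad \Delta\theta \ge \frac{1}{\sqrt{\nu\, F_Q}},
\]
so surpassing the standard quantum limit means the quantum Fisher information $F_Q$ grows faster than linearly in $N$.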
We delve into Open Domain Generalization (ODG), marked by domain and category shifts between the labeled source domains used in training and the unlabeled target domains seen at test time. Existing solutions to ODG face limitations due to the constrained generalization of traditional CNN backbones and errors in detecting target open samples in the absence of prior knowledge. Addressing these pitfalls, we introduce ODG-CLIP, harnessing the semantic prowess of the vision-language model CLIP. Our framework brings forth three primary innovations: Firstly, distinct from prevailing paradigms, we conceptualize ODG as a multi-class classification challenge encompassing both known and novel categories. Central to our approach is modeling a unique prompt tailored for detecting unknown class samples; to train it, we employ a readily accessible stable diffusion model, elegantly generating proxy images for the open class. Secondly, aiming for domain-tailored classification (prompt) weights while ensuring a balance of precision and simplicity, we devise a novel visual style-centric prompt learning mechanism. Finally, we infuse images with class-discriminative knowledge derived from the prompt space to augment the fidelity of CLIP's visual embeddings, and introduce a novel objective to safeguard the continuity of this infused semantic information across domains, especially for the shared classes. Through rigorous testing on diverse datasets covering closed and open-set DG contexts, ODG-CLIP demonstrates clear supremacy, consistently outpacing peers with performance gains of 8%-16%. Code will be available at this https URL.
Herein, we report a case study in which we observed the spontaneous conversion of commercial bulk graphite into LaB$_6$-decorated carbon nanotubes (CNTs) under normal atmospheric conditions. The feedstock graphite was used as a hollow cylindrical anode filled with LaB$_6$ powder and partially eroded in a DC electric-arc plasma reactor in a pure nitrogen atmosphere. An unusual and spontaneous deformation of the plasma-treated residual anode into a fluffy powder continued for months when left in ambient atmospheric conditions. The existence of LaB$_6$-decorated multi-walled CNTs in large quantities in the as-generated powder was confirmed using electron microscopy, Raman spectroscopy and X-ray diffraction. The as-synthesized CNT-based large-area field emitter showed promising field-emission properties, with a low turn-on electric field of ~1.5 V/μm and a current density of ~1.17 mA/cm² at an applied electric field of 3.24 V/μm.
We simulate the dynamics of systems with $N$ = 1-20 qubits coupled to a cavity in order to assess their potential for quantum metrology of a parameter in the open-systems limit. The qubits and the cavity are both allowed to have losses, and the system is studied under various coupling strength regimes. The focus is primarily on estimating the qubit-cavity coupling, with the quantum Fisher information with respect to this parameter as the figure of merit. Some results on estimating the qubit-cavity detuning parameter are also presented. We investigate the scaling of the uncertainty in the estimate of the qubit-cavity coupling with the number of qubits and for different initial states of the qubits that act as the quantum probe. As initial probe states, we consider Dicke states with varying excitation numbers, the GHZ state, and separable $X$-polarized states. It is shown that in the strong coupling regime, i.e., when the coupling between the qubits and the cavity is greater than the decay parameters of both the qubits and the cavity, Dicke states with a large excitation number can achieve the Heisenberg limit, with the precision scaling improving as the excitation number increases. A particularly intriguing finding of our study is that in the weak coupling regime, as well as in situations where either the qubit or cavity decay parameters exceed the coupling, the separable $X$-polarized state is the best in terms of scaling and is even able to achieve the Heisenberg limit in these lossy regimes for the range of $N$ considered.
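For reference, the Dicke state of $N$ qubits with excitation number $k$ is the equal-weight symmetric superposition
\[
|D_N^k\rangle = \binom{N}{k}^{-1/2} \sum_{\pi} \pi\!\left(|1\rangle^{\otimes k} \otimes |0\rangle^{\otimes (N-k)}\right),
\]
where the sum runs over all distinct permutations $\pi$ of the qubits. The GHZ probe is $(|0\rangle^{\otimes N} + |1\rangle^{\otimes N})/\sqrt{2}$, and the separable $X$-polarized probe is naturally read as $|+\rangle^{\otimes N}$ (our reading; the abstract does not spell it out).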
We implement a simulation of a quantum field theory in 1+1 space-time dimensions on a gate-based quantum computer using the light front formulation of the theory. The nonperturbative simulation of the Yukawa model field theory is verified on IBM's simulator and is also demonstrated on a small-scale IBM circuit-based quantum processor, on the cloud, using IBM Qiskit. The light front formulation allows for controlling the resource requirement and complexity of the computation, with commensurate trade-offs in accuracy and detail, by modulating a single parameter, namely the harmonic resolution. Qubit operators for the bosonic excitations were also constructed and used, along with the fermionic ones already available, to simulate the theory involving all of these particles. With the restriction on the number of logical qubits available on existing gate-based Noisy Intermediate-Scale Quantum (NISQ) devices, the Trotterization approximation is also used. We show that experimentally relevant quantities like cross-sections for various processes, survival probabilities of various states, etc., can be computed. We also explore the inaccuracies introduced by the bounds on achievable harmonic resolution and Trotter steps placed by the limited number of qubits and circuit depth supported by present-day NISQ devices.
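As a quick reminder of what the Trotterization approximation does, here is a self-contained numerical check on a generic two-qubit toy Hamiltonian (not the light-front Yukawa Hamiltonian of the paper): the first-order product-formula error shrinks roughly as $1/n$ in the number of Trotter steps.

```python
# First-order Trotterization check:
# exp(-i(A+B)t)  vs  (exp(-iAt/n) exp(-iBt/n))^n
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A = np.kron(X, X)                              # two non-commuting toy terms
B = np.kron(Z, Z) + np.kron(Z, np.eye(2))
t = 1.0

exact = expm(-1j * (A + B) * t)
for n in (1, 4, 16, 64):
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    trotter = np.linalg.matrix_power(step, n)
    print(n, np.linalg.norm(exact - trotter))  # error decays as O(t^2 / n)
```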
Missing feature values are a significant hurdle for downstream machine-learning tasks such as classification. However, imputation methods for classification might be time-consuming for high-dimensional data, and offer few theoretical guarantees on the preservation of the data distribution and imputation quality, especially for not-missing-at-random mechanisms. First, we propose an imputation approach named F3I based on the iterative improvement of a K-nearest neighbor imputation, where neighbor-specific weights are learned through the optimization of a novel concave, differentiable objective function related to the preservation of the data distribution on non-missing values. F3I can then be chained to and jointly trained with any classifier architecture. Second, we provide a theoretical analysis of imputation quality and data distribution preservation by F3I for several types of missing mechanisms. Finally, we demonstrate the superior performance of F3I on several imputation and classification tasks, with applications to drug repurposing and handwritten-digit recognition data.
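To make the starting point concrete, here is a minimal sketch of the primitive that F3I iteratively improves: one pass of weighted K-nearest-neighbor imputation. The neighbor weights are uniform here; learning them by optimizing the paper's concave, differentiable objective is precisely the contribution of F3I and is not reproduced.

```python
import numpy as np

def knn_impute_once(X, observed, k=5, weights=None):
    """One pass of weighted KNN imputation.
    X: (n, d) array whose missing entries hold a current guess (e.g. column means).
    observed: (n, d) boolean mask, True where the value was actually observed.
    weights: (k,) neighbor weights summing to 1; uniform if None."""
    n = X.shape[0]
    w = np.full(k, 1.0 / k) if weights is None else weights
    X_new = X.copy()
    for i in range(n):
        dist2 = np.sum((X - X[i]) ** 2, axis=1)  # distances under current guesses
        dist2[i] = np.inf                        # exclude the point itself
        neighbors = np.argsort(dist2)[:k]
        estimate = w @ X[neighbors]              # weighted neighbor average, shape (d,)
        X_new[i, ~observed[i]] = estimate[~observed[i]]  # refresh missing entries only
    return X_new

# Usage: start from column-mean filling, then iterate the refinement a few times.
rng = np.random.default_rng(0)
X_true = rng.normal(size=(100, 8))
observed = rng.random((100, 8)) > 0.2
col_means = np.nanmean(np.where(observed, X_true, np.nan), axis=0)
X = np.where(observed, X_true, col_means)
for _ in range(3):
    X = knn_impute_once(X, observed)
```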
Positive spectral lags are commonly observed in the gamma-ray burst (GRB) prompt phase, where soft photons lag behind hard ones. Opposite to this pattern, a fraction of GRBs show a negative spectral lag, where hard photons arrive later than soft photons. Similarly, recent Fermi-LAT observations show a late onset of high-energy photons in most GRB observations. A fraction of GRBs show a transition from positive to negative lags. Such negative lags and the spectral lag transition have no convincing explanation. We show that a structured GRB jet with velocity shear naturally produces both positive and negative spectral lags. Photons gain energy from repeated scatterings across the shearing layers and subsequently escape from higher altitudes. Hence, these photons are delayed compared to soft photons, producing a negative spectral lag. The inner jet has no shear, so a positive lag appears there, providing a unified picture of spectral lags in GRBs. The theory predicts a flip in spectral lag from positive to negative within the evolution of the prompt phase. Comparison of the observed lags with the prediction of the theory limits the possible range of GRB jet Lorentz factors to a few tens.
We present the science case for the proposed Daksha high energy transients mission. Daksha will comprise two satellites covering the entire sky from 1 keV to >1 MeV. The primary objectives of the mission are to discover and characterize electromagnetic counterparts to gravitational wave sources, and to study Gamma Ray Bursts (GRBs). Daksha is a versatile all-sky monitor that can address a wide variety of science cases. With its broadband spectral response, high sensitivity, and continuous all-sky coverage, it will discover fainter and rarer sources than any other existing or proposed mission. Daksha can make key strides in GRB research with polarization studies, prompt soft spectroscopy, and fine time-resolved spectral studies. Daksha will provide continuous monitoring of X-ray pulsars. It will detect magnetar outbursts and high energy counterparts to Fast Radio Bursts. Using Earth occultation to measure source fluxes, the two satellites together will obtain daily flux measurements of bright hard X-ray sources including active galactic nuclei, X-ray binaries, and slow transients like novae. Correlation studies between the two satellites can be used to probe primordial black holes through lensing. Daksha will have a set of detectors continuously pointing towards the Sun, providing excellent hard X-ray monitoring data. Closer to home, the high sensitivity and time resolution of Daksha can be leveraged for the characterization of Terrestrial Gamma-ray Flashes.
We explore the flow of quantum correlations in cluster states defined on ladder-type graphs as measurements are made on qubits located at the nodes of the cluster. We focus on three qubits at the end of the ladder and compute the non-classical correlations between two of the three qubits as measurements are made on the remaining qubits. We compute both the entanglement between the two qubits and the quantum discord between them after the measurements. We see that after all but three qubits are measured, the non-classical correlations developed between two of them tend to be stronger with the length of the ladder. It is also seen that measurements onto the eigenstates of operators belonging to the Clifford group do not produce such correlations or entanglement. The non-classical correlations produced depend only on the number, location and nature of preceding non-Clifford measurements. Our results not only throw light on the dynamics of quantum correlations as an algorithm proceeds step-by-step in the Measurement-Based Quantum Computing (MBQC) model, but also reveal how the last two qubits, treated as an open quantum system, can develop increasing entanglement or other non-classical correlations as their immediate environment is interrogated through random measurements.
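For reference, the quantum discord evaluated here is the standard gap between total and classical correlations,
\[
\mathcal{D}(B|A) = \mathcal{I}(\rho_{AB}) - \max_{\{\Pi_a\}} \Big[S(\rho_B) - \sum_a p_a S(\rho_{B|a})\Big], \qquad \mathcal{I}(\rho_{AB}) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}),
\]
where the maximization runs over measurements $\{\Pi_a\}$ on qubit $A$, $p_a$ is the probability of outcome $a$, and $\rho_{B|a}$ is the corresponding post-measurement state of $B$.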
A real symmetric matrix $A$ is said to be completely positive if it can be written as $BB^t$ for some (not necessarily square) nonnegative matrix $B$. A simple graph $G$ is called a completely positive graph if every doubly nonnegative matrix realization of $G$ is a completely positive matrix. Our aim in this manuscript is to compute the determinant and inverse (when it exists) of the distance matrix of a class of completely positive graphs. Similar to trees, we obtain a relation for the inverse of the distance matrix of a class of completely positive graphs involving the Laplacian matrix, a rank one matrix and a matrix $\mathcal{R}$. We also determine the eigenvalues of some principal submatrices of the matrix $\mathcal{R}$.
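The shape of the sought relation parallels the classical Graham-Lovász formula for trees, where the distance matrix $D$ of a tree on $n$ vertices satisfies
\[
D^{-1} = -\frac{1}{2} L + \frac{1}{2(n-1)} \tau \tau^t, \qquad \tau_i = 2 - d_i,
\]
with $L$ the Laplacian and $d_i$ the degree of vertex $i$; the relation obtained here likewise involves the Laplacian and a rank-one matrix, together with the additional matrix $\mathcal{R}$.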