Indian Institute of Technology Goa
Kartik Anand at IIT Goa constructs a family of local fermionic Hamiltonians whose ground-state energy cannot be approximated by Gaussian states to within a constant precision. The work establishes that, for these Hamiltonians, any Gaussian state has an energy expectation at least a constant $\epsilon$ higher than the true ground-state energy.
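Stated formally (with notation assumed for this summary rather than taken from the paper), the separation reads:

```latex
% E_0(H) denotes the true ground-state energy of a Hamiltonian H in the family,
% and \epsilon > 0 is a constant independent of system size (assumed notation).
\operatorname{Tr}(H\rho) \;\ge\; E_0(H) + \epsilon
\quad \text{for every fermionic Gaussian state } \rho .
```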
We investigate the structure of quantum proof systems by establishing collapse results that reveal simplifications in their complexity landscape. By extending classical theorems such as the Karp-Lipton theorem to quantum settings and analyzing uniqueness in quantum-classical PCPs, we clarify how various constraints influence computational power. Our main contributions are: (1) We show that restricting quantum-classical PCPs to unique proofs does not reduce their power: $\mathsf{UniqueQCPCP} = \mathsf{QCPCP}$ under $\mathsf{BQ}$-operator and randomized reductions. This parallels the known $\mathsf{UniqueQCMA} = \mathsf{QCMA}$ result, indicating robustness of uniqueness even in quantum PCP-type systems. (2) We prove a non-uniform quantum analogue of the Karp-Lipton theorem: if $\mathsf{QMA} \subseteq \mathsf{BQP}/\mathsf{qpoly}$, then $\mathsf{QPH} \subseteq \mathsf{Q\Sigma}_2/\mathsf{qpoly}$. This conditional collapse suggests limits on quantum advice for $\mathsf{QMA}$-complete problems. (3) We define a bounded-entanglement version of the quantum polynomial hierarchy, $\mathsf{BEQPH}$, and prove that it collapses above the fourth level. We also introduce the separable hierarchy $\mathsf{SepQPH}$ (zero entanglement), for which the same collapse result holds. These collapses stem not from entanglement, as in prior work, but from the convex structure of the protocols, which renders higher levels tractable. Collectively, these results offer new insights into the structure of quantum proof systems and the role of entanglement, uniqueness, and advice in defining their complexity.
We study the viability of light thermal dark matter (DM) in the sub-GeV mass range in view of the stringent new DAMIC-M limits on DM-electron scattering. Considering a Dirac fermion singlet DM charged under a new Abelian gauge symmetry $U(1)$, we outline two possibilities: (i) a family non-universal $U(1)$ gauge coupling with resonantly enhanced DM annihilation into standard model (SM) fermions and (ii) a family universal dark $U(1)$ gauge symmetry where the relic is set by DM annihilation into light gauge bosons. As an illustrative example of the first class of models, we consider a gauged $L_\mu-L_\tau$ extension of the SM with interesting detection prospects at several experiments. While both classes of models lead to the observed DM relic and consistency with DAMIC-M together with other experimental limits, the second class also leads to strong DM self-interactions, potentially solving the small-scale structure issues of cold dark matter. Although a vast part of the parameter space in both models is already ruled out, the currently allowed region can be probed further at ongoing or future experiments, keeping the models testable.
Deep learning models trained on finite data lack a complete understanding of the physical world. On the other hand, physics-informed neural networks (PINNs) are infused with such knowledge through the incorporation of mathematically expressible laws of nature into their training loss function. By complying with physical laws, PINNs provide advantages over purely data-driven models in limited-data regimes and represent a promising route towards Physical AI. This feature has propelled them to the forefront of scientific machine learning, a domain characterized by scarce and costly data. However, the vision of accurate physics-informed learning comes with significant challenges. This work examines PINNs in terms of model optimization and generalization, shedding light on the need for new algorithmic advances to overcome issues pertaining to the training speed, precision, and generalizability of today's PINN models. Of particular interest are gradient-free evolutionary algorithms (EAs) for optimizing the uniquely complex loss landscapes arising in PINN training. Methods synergizing gradient descent and EAs for discovering bespoke neural architectures and balancing multiple terms in physics-informed learning objectives are positioned as important avenues for future research. Another exciting track is to cast EAs as meta-learners of generalizable PINN models. To substantiate these proposed avenues, we highlight results from recent literature that showcase the early success of such approaches in addressing the aforementioned challenges in PINN optimization and generalization.
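To make the idea of "physics in the loss function" concrete, below is a minimal PyTorch-style sketch of a PINN loss for a toy problem (du/dx = -u with u(0) = 1); the equation, network size, and optimizer settings are illustrative assumptions, not the models or benchmarks discussed in this work.

```python
# Minimal PINN sketch for the toy ODE du/dx = -u with u(0) = 1 (illustrative only).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pinn_loss():
    # Collocation points where the governing equation is enforced.
    x = torch.rand(64, 1, requires_grad=True)
    u = net(x)
    du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = du_dx + u                      # physics residual of du/dx = -u
    bc = net(torch.zeros(1, 1)) - 1.0         # initial condition u(0) = 1
    return residual.pow(2).mean() + bc.pow(2).mean()

for step in range(2000):
    opt.zero_grad()
    loss = pinn_loss()
    loss.backward()
    opt.step()
```

The physics term penalizes violation of the governing equation at randomly drawn collocation points, which is what lets the network learn from the law itself rather than from labeled data alone.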
YOLORe-IDNet, an efficient multi-camera system, identifies and tracks individuals across different camera feeds in real-time without needing pre-existing suspect images. It achieves 100% precision and 91% IOU on a new multi-camera dataset while operating at 18 frames per second, demonstrating robust performance even with occlusions.
This work introduces UnityAI-Guard, a framework for binary toxicity classification targeting low-resource Indian languages. While existing systems predominantly cater to high-resource languages, UnityAI-Guard addresses this critical gap by developing state-of-the-art models for identifying toxic content across diverse Brahmic/Indic scripts. Our approach achieves an impressive average F1-score of 84.23% across seven languages, leveraging a dataset of 567k training instances and 30k manually verified test instances. To advance multilingual content moderation for linguistically diverse regions, UnityAI-Guard also provides public API access to foster broader adoption and application.
The mechanical environment of a substrate plays a key role in influencing the behavior of adherent biological cells. Traditional tunable substrates have limitations because their mechanical properties cannot be dynamically altered in situ during cell culture. We present an alternative approach using compliant mechanisms that enable the realization of tunable substrate properties, specifically an invertible Poisson's ratio and tunable stiffness. These mechanisms transition between positive and negative Poisson's effects with tunable magnitude through a bistable Engaging-Disengaging Compliant Mechanism (EDCM). The EDCM allows the stiffness between two points of the substrate to switch between zero and a theoretically infinite value. In the stiffened state, the lateral deformation reverses under a constant axial load, while in the zero-stiffness state the deformation remains outward, as in a re-entrant structure. The EDCM, in conjunction with an offset mechanism, also allows tuning of the effective stiffness of the entire mechanism. We present analytical models correlating geometric parameters to displacement ratios in both bistable states and, through illustrative design cases, demonstrate their potential for designing dynamic and reconfigurable cell culture substrates.
This paper presents a novel rate-splitting sparse code multiple access (RS-SCMA) framework, where common messages are transmitted using quadrature phase-shift keying (QPSK) modulation, while private messages are sent using SCMA encoding. A key feature of RS-SCMA is its ability to achieve a tunable overloading factor by adjusting the splitting factor. This flexibility enables a favorable performance trade-off, allowing the system to maintain strong performance across a range of overloading factors. We present a detailed transceiver design and analyze the influence of rate-splitting on the overloading factor. Extensive simulation results, both with and without low-density parity-check (LDPC) codes, highlight RS-SCMA's potential as a strong candidate for next-generation multiple access technologies.
In the early stages of relativistic heavy-ion collisions, the momentum distribution of the quark-gluon plasma is anisotropic, leading to instabilities in the system due to chromomagnetic plasma modes. In this work, we consider the anisotropic momentum distribution of the medium constituents to investigate its effects on heavy quark dynamics using the nonperturbative Gribov resummation approach within the framework of the Fokker-Planck equation. Specifically, we study the influence of nonperturbative effects and weak anisotropies on the heavy quark transport coefficients, taking into account the angular dependence between the anisotropy vector and the direction of heavy quark motion. Furthermore, the calculated drag and diffusion coefficients are employed to estimate the energy loss of heavy quarks and the nuclear modification factor, incorporating both elastic collisions and inelastic processes. Our findings indicate that momentum anisotropy, angular dependence, and nonperturbative effects, captured through the scattering amplitudes, play a significant role in determining the transport properties of heavy quarks.
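For context, the Fokker-Planck description referred to above is commonly written in the following standard form (notation assumed here, not quoted from the paper), with the drag and momentum-diffusion coefficients being the transport quantities computed:

```latex
% f(\mathbf{p},t): heavy-quark momentum distribution; A_i: drag coefficient;
% B_{ij}: momentum-diffusion tensor (assumed standard notation).
\frac{\partial f(\mathbf{p},t)}{\partial t}
  = \frac{\partial}{\partial p_i}
    \left[ A_i(\mathbf{p})\, f(\mathbf{p},t)
         + \frac{\partial}{\partial p_j}\Big( B_{ij}(\mathbf{p})\, f(\mathbf{p},t) \Big) \right] .
```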
We propose a machine learning-driven optimisation framework for analog circuit design in this paper. The primary objective is to determine the device sizes for the optimal performance of analog circuits for a given set of specifications. Our methodology entails employing machine learning models and SPICE simulations to direct the optimisation algorithm towards achieving the optimal design for analog circuits. Machine learning based global offline surrogate models, with the circuit design parameters as the input, are built in the design space for the analog circuits under study and are used to guide the optimisation algorithm, resulting in faster convergence and a reduced number of SPICE simulations. Multi-layer perceptron and random forest regressors are employed to predict the required design specifications of the analog circuit. Since the saturation condition of transistors is vital for the proper working of analog circuits, multi-layer perceptron classifiers are used to predict the saturation condition of each transistor in the circuit. The feasibility of candidate solutions is verified using machine learning models before invoking SPICE simulations. We validate the proposed framework using three circuit topologies: a bandgap reference, a folded cascode operational amplifier, and a two-stage operational amplifier. The simulation results show better optimum values and lower standard deviations for the fitness functions after convergence. Incorporating the proposed machine learning-based predictions in the optimisation method has reduced the number of SPICE calls by 56%, 59%, and 83% compared with standard approaches in the three test cases considered in the study.
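A minimal sketch of the surrogate-guided evaluation loop described above is given below; the data, model sizes, thresholds, and the run_spice interface are hypothetical placeholders, not the circuits or tooling used in the paper.

```python
# Sketch: an MLP classifier screens the transistor-saturation condition, a regressor predicts
# the specs, and the expensive SPICE call is made only for feasible, promising candidates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_offline = rng.uniform(size=(500, 8))                   # placeholder device sizes from offline runs
sat_labels = (X_offline.sum(axis=1) > 4).astype(int)     # placeholder saturation labels
spec_values = X_offline @ rng.uniform(size=8)            # placeholder performance metric

sat_clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X_offline, sat_labels)
spec_reg = RandomForestRegressor(n_estimators=100).fit(X_offline, spec_values)

def evaluate(candidate, run_spice, spec_target=3.0):
    """Fitness of one candidate sizing; SPICE is invoked only if the surrogates approve."""
    x = np.atleast_2d(candidate)
    if sat_clf.predict(x)[0] == 0:            # surrogate predicts a transistor out of saturation
        return np.inf                         # reject without a SPICE call
    if spec_reg.predict(x)[0] < spec_target:  # surrogate predicts the spec will not be met
        return np.inf
    return run_spice(candidate)               # expensive simulation only for promising points

# Usage with a dummy stand-in for the SPICE call:
print(evaluate(rng.uniform(size=8), run_spice=lambda c: float(np.sum(c))))
```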
Maintaining research-related information in an organized manner can be challenging for a researcher. In this paper, we envision personal research knowledge graphs (PRKGs) as a means to represent structured information about the research activities of a researcher. PRKGs can be used to power intelligent personal assistants, and personalize various applications. We explore what entities and relations could be potentially included in a PRKG, how to extract them from various sources, and how to share a PRKG within a research group.
We introduce the $p_T$-differential radial flow $v_0(p_T)$ in the heavy-quark sector. Within an event-by-event Langevin framework, we show that this observable exhibits a strong sensitivity to the heavy quark-bulk interaction. It provides a powerful and novel tool to constrain the transport coefficients of heavy quarks in the QGP and, more generally, to assess the strength of the interaction of a Brownian particle in an expanding bulk medium. The results further indicate that heavy quarks exhibit collective behavior driven by the isotropic expansion of the QGP in heavy-ion collisions and that, at low $p_T$, the observable offers a marked signature of the heavy-quark hadronization mechanism.
In this work, the perturbative and non-perturbative contributions to the heavy quark (HQ) momentum ($\kappa$) and spatial ($D_s$) diffusion coefficients are computed in a weak background magnetic field. The formalism adopted here involves calculating the in-medium potential of the HQ in a weak magnetic field, which then serves as a proxy for the resummed gluon propagator in the calculation of the HQ self-energy ($\Sigma$). The self-energy determines the scattering rate of HQs with light thermal partons, which is subsequently used to evaluate $\kappa$ and $D_s$. It is observed that non-perturbative effects play a dominant role at low temperature. The spatial diffusion coefficient $2\pi T D_s$ exhibits good agreement with recent LQCD results. These findings can be applied to calculate the heavy quark directed flow at RHIC and LHC energies. An extension of this formalism to the case of finite HQ momentum has also been attempted.
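For reference, the momentum- and spatial-diffusion coefficients are commonly related through the standard static-limit fluctuation-dissipation relation (assumed here, not quoted from the abstract):

```latex
D_s = \frac{2T^2}{\kappa}
\qquad\Longrightarrow\qquad
2\pi T D_s = \frac{4\pi T^3}{\kappa} .
```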
One primary focus of next-generation wireless communication networks is the millimeter-wave (mmWave) spectrum, typically considered to span the 30 GHz to 300 GHz frequency range. Despite their promise of high data rates, mmWaves suffer from severe attenuation while passing through obstacles. Unmanned aerial vehicles (UAVs) have been proposed to offset this limitation on account of their additional degrees of freedom, which can be leveraged to provide line-of-sight (LoS) transmission paths. While some prior works have proposed analytical frameworks to compute the LoS probability for static ground users and a UAV, such frameworks are lacking for mobile users on the ground. In this paper, we consider the popular Manhattan point line process (MPLP) to model an urban environment, within which a ground user moves with a known velocity for a small time interval along the roads. We derive an expression for the expected duration of LoS between a static UAV in the air and a mobile ground user, and validate it through simulations. To demonstrate the efficacy of the proposed analysis, we propose a simple user association algorithm that greedily assigns UAVs to the users with the highest expected LoS time, and show that it outperforms existing benchmark schemes that assign users to the nearest UAVs with LoS without considering user mobility.
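The greedy association rule mentioned in the last sentence can be sketched as follows; the function name and the example durations are hypothetical and only illustrate the "assign each UAV to the unserved user with the largest expected LoS time" idea.

```python
# Greedy UAV-user association by expected LoS duration (illustrative sketch).
def greedy_associate(expected_los_time):
    """expected_los_time[u][k]: expected LoS duration between UAV u and user k."""
    assignment = {}
    taken_users = set()
    for u, times in enumerate(expected_los_time):
        candidates = [(t, k) for k, t in enumerate(times) if k not in taken_users]
        if not candidates:
            break
        best_time, best_user = max(candidates)   # pick the largest expected LoS duration
        assignment[u] = best_user
        taken_users.add(best_user)
    return assignment

# Example with 2 UAVs and 3 users (durations in seconds, made up for illustration):
print(greedy_associate([[3.0, 5.5, 1.0], [4.2, 2.0, 0.5]]))  # -> {0: 1, 1: 0}
```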
An automated sizing approach for analog circuits using evolutionary algorithms is presented in this paper. A targeted search of the design space is implemented using a particle generation function and a repair-bounds function, resulting in faster convergence to the optimal solution. The algorithms are tuned and modified to converge to a better optimal solution with lower standard deviation across multiple runs compared to their standard versions. Modified versions of the artificial bee colony optimisation algorithm, genetic algorithm, grey wolf optimisation algorithm, and particle swarm optimisation algorithm are tested and compared for the optimal sizing of two operational amplifier topologies. An extensive performance evaluation of all the modified algorithms shows that the modifications result in consistent performance with improved convergence for all the algorithms. The implementation of parallel computation in the algorithms has reduced run time. Among the considered algorithms, the modified artificial bee colony optimisation algorithm gave the most optimal solution with consistent results across multiple runs.
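As a rough illustration of the repair-bounds idea mentioned above (not the paper's exact operator), out-of-range candidate sizes can be clipped back to the nearest bound so that the population stays within the search space:

```python
# Repair-bounds step: pull each out-of-range design variable back to its nearest bound.
def repair_bounds(candidate, lower, upper):
    return [min(max(x, lo), hi) for x, lo, hi in zip(candidate, lower, upper)]

# Example: device sizes outside the allowed ranges are clipped back in.
print(repair_bounds([0.1, 7.5, 2.0], lower=[0.5, 1.0, 1.0], upper=[5.0, 5.0, 5.0]))
# -> [0.5, 5.0, 2.0]
```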
Millions of people use platforms such as YouTube, Facebook, Twitter, and other mass media. Due to the accessibility of these platforms, they are often used to establish a narrative, conduct propaganda, and disseminate misinformation. This work proposes an approach that uses state-of-the-art NLP techniques to extract features from video captions (subtitles). To evaluate our approach, we utilize a publicly accessible and labeled dataset for classifying videos as misinformation or not. The motivation behind exploring video captions stems from our analysis of video metadata: attributes such as the number of views, likes, dislikes, and comments are ineffective because videos are hard to differentiate using this information alone. Using the caption dataset, the proposed models can classify videos into three classes (Misinformation, Debunking Misinformation, and Neutral) with an F1-score of 0.85 to 0.90. To emphasize the relevance of the misinformation class, we re-formulate the classification problem as a two-class task: Misinformation vs. others (Debunking Misinformation and Neutral). In our experiments, the proposed models classify videos with an F1-score of 0.92 to 0.95 and an AUC ROC of 0.78 to 0.90.
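As a simplified, hypothetical baseline for caption-based classification (the paper itself uses stronger, state-of-the-art NLP feature extractors), a TF-IDF pipeline over caption text might look like the sketch below; the captions and labels are placeholders.

```python
# Baseline sketch: TF-IDF features over captions with a linear classifier (not the paper's models).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

captions = [
    "vaccines contain microchips that track you",   # placeholder caption
    "fact check: the microchip claim is false",     # placeholder caption
    "highlights from yesterday's football match",   # placeholder caption
]
labels = ["Misinformation", "Debunking Misinformation", "Neutral"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(captions, labels)
print(clf.predict(["they are hiding the truth about vaccines"]))
```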
Orthogonal Time Frequency Space (OTFS) is a 2-D modulation technique that has the potential to overcome the challenges faced by orthogonal frequency division multiplexing (OFDM) in high-Doppler environments. The performance of OTFS in a multi-user scenario with orthogonal multiple access (OMA) techniques has been impressive. Given the requirement of massive connectivity in 5G and beyond, it is essential to devise and examine the OTFS system with existing Non-orthogonal Multiple Access (NOMA) techniques. In this paper, we propose a multi-user OTFS system based on a code-domain NOMA technique called Sparse Code Multiple Access (SCMA). This system is referred to as the OTFS-SCMA model. The framework for OTFS-SCMA is designed for both downlink and uplink. First, the sparse SCMA codewords are strategically placed on the delay-Doppler plane such that the overall overloading factor of the OTFS-SCMA system is equal to that of the underlying basic SCMA system. The receiver in the downlink performs detection in two sequential phases: first, conventional OTFS detection using the linear minimum mean square error (LMMSE) method, and then conventional SCMA detection. For the uplink, we propose a single-phase detector based on the message-passing algorithm (MPA) to detect the multiple users' symbols. The performance of the proposed OTFS-SCMA system is validated through extensive simulations in both downlink and uplink. We consider delay-Doppler planes of different parameters and various SCMA systems with overloading factors of up to 200%. The performance of OTFS-SCMA is compared with that of existing OTFS-OMA techniques. The comprehensive investigation demonstrates the usefulness of OTFS-SCMA in future wireless communication standards.
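The first (LMMSE) detection phase of the downlink receiver corresponds to the standard linear MMSE estimate for a model y = Hx + n; the sketch below is generic, assumes the effective delay-Doppler channel matrix H is given, and uses illustrative dimensions and noise levels rather than the paper's simulation setup.

```python
# Generic LMMSE detection sketch: x_hat = (H^H H + sigma^2 I)^{-1} H^H y.
import numpy as np

def lmmse_detect(y, H, noise_var):
    HhH = H.conj().T @ H
    return np.linalg.solve(HhH + noise_var * np.eye(H.shape[1]), H.conj().T @ y)

rng = np.random.default_rng(1)
H = (rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))) / np.sqrt(2)  # assumed channel
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=16) / np.sqrt(2)       # QPSK-like symbols
y = H @ x + 0.05 * (rng.normal(size=16) + 1j * rng.normal(size=16))            # noisy observation
print(np.round(lmmse_detect(y, H, noise_var=0.005), 2))
```

In the OTFS-SCMA downlink, the soft estimates produced by this step would then be passed to the conventional SCMA detection stage described above.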
As electronically stored data grow in daily life, obtaining novel and relevant information becomes challenging in text mining. People have therefore sought statistical methods based on term frequency, matrix algebra, or topic modeling for text mining. Popular topic models have centered on a single text collection, which is insufficient for comparative text analyses. We consider a setting where the corpus can be partitioned into subcollections. Each subcollection shares a common set of topics, but there is relative variation in topic proportions among collections. Incorporating prior knowledge about the corpus (e.g., organizational structure), we propose the compound latent Dirichlet allocation (cLDA) model, improving on previous work, encouraging generalizability, and depending less on user-input parameters. To identify the parameters of interest in cLDA, we study Markov chain Monte Carlo (MCMC) and variational inference approaches extensively and suggest an efficient MCMC method. We evaluate cLDA qualitatively and quantitatively using both synthetic and real-world corpora. A usability study on real-world corpora illustrates the ability of cLDA not only to explore the underlying topics automatically but also to model their connections and variations across multiple collections.
Glutamate and glycine are important neurotransmitters in the brain. An action potential propagating in the terminal of a presynaptic neuron causes the release of glutamate and glycine into the synapse by vesicles fusing with the cell membrane, which then activate various receptors on the cell membrane of the postsynaptic neuron. Entry of Ca2+ through the activated NMDA receptors leads to a host of cellular processes, of which long-term potentiation is of crucial importance because it is widely considered to be one of the major mechanisms behind learning and memory. By analysing the readout of glutamate concentration by the postsynaptic neurons during Ca2+ signaling, we find that the average receptor density in hippocampal neurons has evolved to allow for accurate measurement of the glutamate concentration in the synaptic cleft.
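Sensing-accuracy arguments of this kind are typically of the Berg-Purcell type; a representative (and here merely illustrative, not necessarily the paper's exact) scaling for the relative error of a concentration readout by a population of receptors is:

```latex
% D: ligand diffusion constant, c: ligand concentration, a: receptor size,
% \tau: integration time, N: number of independent receptors (illustrative scaling only).
\frac{\delta c}{c} \;\sim\; \frac{1}{\sqrt{N\, D\, a\, c\, \tau}} .
```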
We lift metrics over words to metrics over word-to-word transductions by defining the distance between two transductions as the supremum, over all inputs, of the distances between their respective outputs. This allows transducers to be compared beyond equivalence. Two transducers are close (resp. $k$-close) with respect to a metric if their distance is finite (resp. at most $k$). Over integer-valued metrics, computing the distance between transducers is equivalent to deciding the closeness and $k$-closeness problems. For common integer-valued edit distances, such as the Hamming, transposition, conjugacy, and Levenshtein family of distances, we show that the closeness and $k$-closeness problems are decidable for functional transducers. Hence, the distance with respect to these metrics is also computable. Finally, we relate the notion of distance between functions to the notions of the diameter of a relation and the index of one relation in another. We show that computing the edit distance between functional transducers is equivalent to computing the diameter of a rational relation, and both are specific instances of the index problem for rational relations.
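The lifted metric described in the first sentence can be written compactly as follows (a direct formalization of the definition above, with the common input domain written as $\Sigma^*$):

```latex
% d is a metric on output words; f, g are the word-to-word transductions being compared.
d(f,g) \;=\; \sup_{w \in \Sigma^{*}} d\big(f(w),\, g(w)\big),
\qquad
f,g \text{ are } k\text{-close} \iff d(f,g) \le k .
```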