The advent of transformers with attention mechanisms and the associated
pre-trained models has revolutionized the field of Natural Language Processing
(NLP). However, such models are resource-intensive due to their highly complex
architectures, which limits their applicability in resource-constrained
environments. When choosing an appropriate NLP model, a major trade-off exists
between accuracy and efficiency. This paper presents a commentary on the
evolution of NLP and its applications, with emphasis on their accuracy as well
as their efficiency. Following this, we survey research contributions toward
enhancing the efficiency of transformer-based models at various stages of
model development, along with hardware considerations. The goal of this survey
is to determine how current NLP techniques contribute to a sustainable society
and to establish a foundation for future research.
Deep learning (DL) models have provided state-of-the-art performance in
various medical imaging benchmarking challenges, including the Brain Tumor
Segmentation (BraTS) challenges. However, the task of focal pathology
multi-compartment segmentation (e.g., tumor and lesion sub-regions) is
particularly challenging, and potential errors hinder translating DL models
into clinical workflows. Quantifying the reliability of DL model predictions in
the form of uncertainties could enable clinical review of the most uncertain
regions, thereby building trust and paving the way toward clinical translation.
Several uncertainty estimation methods have recently been introduced for DL
medical image segmentation tasks. Developing scores to evaluate and compare the
performance of uncertainty measures will assist the end-user in making more
informed decisions. In this study, we explore and evaluate a score developed
during the BraTS 2019 and BraTS 2020 task on uncertainty quantification
(QU-BraTS) and designed to assess and rank uncertainty estimates for brain
tumor multi-compartment segmentation. This score (1) rewards uncertainty
estimates that produce high confidence in correct assertions and low
confidence in incorrect assertions, and (2) penalizes
uncertainty measures that lead to a higher percentage of under-confident
correct assertions. We further benchmark the segmentation uncertainties
generated by 14 independent participating teams of QU-BraTS 2020, all of which
also participated in the main BraTS segmentation task. Overall, our findings
confirm the importance and complementary value that uncertainty estimates
provide to segmentation algorithms, highlighting the need for uncertainty
quantification in medical image analyses. Finally, in favor of transparency and
reproducibility, our evaluation code is made publicly available at:
this https URL
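To make the thresholding idea concrete, the sketch below computes a filtered-Dice curve in the spirit of this score (illustrative only, not the official QU-BraTS implementation; the toy arrays and the simple Dice routine are assumptions):

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def filtered_dice_curve(pred, truth, uncertainty, thresholds):
    """Dice computed only on voxels with uncertainty at or below each threshold.

    Confident, correct voxels keep the curve high; confident mistakes drag it
    down, and marking correct voxels as uncertain (under-confidence) is what
    the full score additionally penalizes via filtered-voxel ratios.
    """
    scores = []
    for t in thresholds:
        keep = uncertainty <= t  # retain only the confident voxels
        scores.append(dice(pred[keep], truth[keep]))
    return np.array(scores)

# Toy 1D "volume" with per-voxel uncertainty in [0, 1]
truth = np.array([0, 0, 1, 1, 1, 0], dtype=bool)
pred = np.array([0, 1, 1, 1, 0, 0], dtype=bool)
unc = np.array([0.1, 0.9, 0.2, 0.1, 0.8, 0.1])
print(filtered_dice_curve(pred, truth, unc, np.linspace(0, 1, 5)))
```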
An updated, transit-only analysis of the NASA Exoplanet Archive demonstrates that the exoplanet radius valley is not a universal feature, but varies significantly with the host star's spectral type. The valley is most distinct for G and K dwarfs, weaker and slightly shifted for M dwarfs, and less clear for F dwarfs.
We present the notion of a continuous controlled K-g-fusion frame in a Hilbert space, which generalizes the discrete controlled K-g-fusion frame. We discuss some characterizations of continuous controlled K-g-fusion frames. The relationship between continuous controlled K-g-fusion frames and quotient operators is studied. Finally, the stability of continuous controlled g-fusion frames is described.
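For orientation, defining inequalities in this area typically take the following standard shape (the notation is a common convention from the controlled g-fusion frame literature and may differ in detail from the paper's exact formulation): for a measure space $(\Omega, \mu)$, a weight $v$, closed subspaces $W_\omega$ with orthogonal projections $P_{W_\omega}$, bounded operators $\Lambda_\omega$, controlling operators $C, C'$, and a bounded operator $K$ on $H$,
$$A\,\|K^{*}f\|^{2} \;\le\; \int_{\Omega} v(\omega)^{2}\,\big\langle \Lambda_{\omega} P_{W_{\omega}} C f,\; \Lambda_{\omega} P_{W_{\omega}} C' f \big\rangle \, d\mu(\omega) \;\le\; B\,\|f\|^{2}, \qquad f \in H,$$
for some constants $0 < A \le B < \infty$.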
Diseases in plants pose a significant threat to productive and secure agriculture. Early and accurate detection of plant diseases can reduce crop losses and pesticide use. Traditional methods of plant disease identification, however, are generally time-consuming and require professional expertise. It would benefit farmers if they could detect a disease quickly by photographing the leaves directly, saving time and allowing remedial action to be taken immediately. To achieve this, a novel feature extraction approach for detecting tomato plant diseases from leaf photos using low-cost computing systems, such as mobile phones, is proposed in this study. The proposed approach integrates various deep learning techniques to extract robust and discriminative features from leaf images. The proposed feature extraction is then compared across five state-of-the-art deep learning models: AlexNet, ResNet50, VGG16, VGG19, and MobileNet. The dataset contains 10,000 leaf photos from ten classes of tomato diseases and one class of healthy leaves. Experimental findings demonstrate that AlexNet achieves an accuracy of 87%, with the benefit of being fast and lightweight, making it appropriate for embedded systems and other low-processing devices such as smartphones.
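As an illustration of the kind of low-cost feature extraction described above, the sketch below uses a pretrained MobileNetV2 from torchvision as a fixed feature extractor (the choice of MobileNetV2, its 1280-dimensional output, and the file name are assumptions, not the paper's exact pipeline):

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained MobileNetV2 as a fixed feature extractor (suits low-cost devices)
weights = models.MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights)
model.classifier = torch.nn.Identity()  # drop the ImageNet head, keep features
model.eval()

preprocess = weights.transforms()  # resize, crop, normalize as the model expects

def extract_features(image_path):
    """Return a 1280-dim feature vector for one leaf photo."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        return model(batch).squeeze(0)    # shape: (1280,)

# features = extract_features("tomato_leaf.jpg")  # hypothetical file name
```

These fixed features can then be fed to a lightweight classifier on-device.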
Transmission spectroscopy is an effective technique for probing exoplanetary atmospheres. While most observations have relied on space facilities such as HST and JWST, ground-based high-resolution transmission spectroscopy (HRTS) has also provided valuable insights by resolving individual atomic features. In this work, we present an initial performance assessment and feasibility test of the Hanle Echelle Spectrograph (HESP) on the 2 m Himalayan Chandra Telescope (HCT) for HRTS. As a benchmark, we observed the hot Jupiter HD 209458b during a single transit at a resolution of R = 30,000. We developed a Python-based, semi-automated data reduction and analysis pipeline that includes corrections for telluric contamination and stellar radial-velocity shifts. The achieved signal-to-noise ratio and spectral stability allow us to probe for features at the 0.1% level. This work establishes a methodology and demonstrates the operational capability of HESP on the HCT for obtaining high-resolution transmission spectra.
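As an illustrative fragment of such a pipeline, the sketch below removes the stellar radial-velocity shift from an observed spectrum (a minimal sketch assuming a non-relativistic Doppler correction and linear resampling; this is not the authors' HESP pipeline):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def shift_to_stellar_rest_frame(wave, flux, rv_kms):
    """Doppler-shift an observed spectrum into the stellar rest frame.

    wave: wavelength grid (Angstrom); flux: observed flux;
    rv_kms: stellar radial velocity (km/s, positive = receding).
    Returns the flux resampled onto the original wavelength grid.
    """
    # Non-relativistic Doppler: lambda_rest = lambda_obs / (1 + v/c)
    wave_rest = wave / (1.0 + rv_kms / C_KMS)
    # Linearly resample back onto the common grid
    return np.interp(wave, wave_rest, flux)
```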
The rapid emergence of Field Programmable Gate Arrays (FPGAs) has accelerated
research on hardware implementations of Deep Neural Networks (DNNs). Among DNN
processors, domain-specific architectures such as Google's Tensor Processing
Unit (TPU) have outperformed conventional GPUs. However, implementations of
TPUs in reconfigurable hardware should emphasize energy savings to serve
green-computing requirements. Voltage scaling, a popular approach to energy
savings, is delicate in FPGAs, as it may cause timing failures if not applied
appropriately. In this work, we present an ultra-low-power FPGA implementation
of a TPU for edge applications. We divide the systolic array of the TPU into
different FPGA partitions, where each partition uses a different
near-threshold computing (NTC) biasing voltage to run its FPGA cores. The
biasing voltage for each partition is first estimated roughly by the proposed
static schemes and then calibrated further by the proposed runtime scheme. The
partitioning of the FPGA is studied with four clustering algorithms based on
the minimum slack values of the design paths of the Multiply-Accumulate units
(MACs). To avoid the timing failures that NTC operation can cause, MACs with
higher minimum slack are placed in lower-voltage partitions, while MACs with
lower minimum slack are placed in higher-voltage partitions. The proposed
architecture is simulated on a commercial platform, Vivado with a Xilinx
Artix-7 FPGA, and on the academic platform VTR with 22 nm, 45 nm, and 130 nm
FPGAs. The simulation results substantiate the implementation of a
voltage-scaled TPU on FPGAs and demonstrate its power efficiency.
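A minimal sketch of the slack-driven partitioning idea follows (illustrative only: simple quantile binning stands in for the paper's four clustering algorithms, and the slack values are invented):

```python
import numpy as np

def partition_by_slack(min_slack_ns, n_partitions=4):
    """Group MAC units into voltage partitions by minimum path slack.

    MACs with large slack tolerate lower (near-threshold) voltages; MACs with
    tight slack stay at higher voltages to avoid timing failure.
    """
    slack = np.asarray(min_slack_ns, dtype=float)
    # Interior quantiles as bin edges; higher bin index = more slack
    edges = np.quantile(slack, np.linspace(0, 1, n_partitions + 1)[1:-1])
    return np.digitize(slack, edges)  # 0 = highest-voltage partition

# Toy example: minimum slack (ns) of the critical path of eight MACs
slacks = [0.12, 0.95, 0.40, 0.05, 0.77, 0.33, 0.60, 0.21]
print(partition_by_slack(slacks))  # partition index per MAC
```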
We show that a single-band tight-binding Hamiltonian defined on a self-similar corral substrate can give rise to a set of non-diffusive localized modes that follow the same hierarchical distribution. Like the lattice itself, the spatial extent of the quantum prison containing a cluster of atomic sites depends on the generation of the fractal structure. Apart from this quantum imprisonment of the excitation, a magnetic flux threading each elementary plaquette is shown to destroy the boundedness and generate an absolutely continuous sub-band populated by resonant eigenfunctions. This flux-induced engineering of quantum states is corroborated through the evaluation of the inverse participation ratio (IPR) and of quantum transport. Moreover, the robustness of the extended states has been checked in the presence of diagonal disorder and off-diagonal anisotropy. The flux-modulated single-particle mobility edge is characterized through multifractal analysis. Quantum interference is the essential mechanism reported here that manipulates the kinematics of the excitation, and this is manifested in the evaluation of the persistent current.
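As a hedged illustration of the IPR diagnostic, the sketch below computes the IPR for the eigenstates of a small flux-threaded tight-binding ring (the ring geometry and the parameter values are assumptions standing in for the paper's fractal corral):

```python
import numpy as np

def inverse_participation_ratio(psi):
    """IPR = sum_i |psi_i|^4 for a normalized state: ~1/N when the state is
    extended over N sites, O(1) when it is localized."""
    psi = psi / np.linalg.norm(psi)
    return np.sum(np.abs(psi) ** 4)

# Tight-binding ring with a Peierls phase from a threading magnetic flux
N, t, phi = 8, 1.0, 0.3  # sites, hopping amplitude, phase per bond (assumed)
H = np.zeros((N, N), dtype=complex)
for i in range(N):
    H[i, (i + 1) % N] = -t * np.exp(1j * phi)
    H[(i + 1) % N, i] = -t * np.exp(-1j * phi)
energies, states = np.linalg.eigh(H)
print([round(inverse_participation_ratio(states[:, k]), 3) for k in range(N)])
# All states are extended plane waves here: IPR = 1/N = 0.125
```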
In an SU(2) gauge theory, if the gauge bosons turn out to be degenerate after
spontaneous symmetry breaking, these mass terms are obviously invariant under
an unbroken global SU(2) symmetry. The pure gauge terms are also invariant
under this symmetry. This symmetry is called the custodial symmetry (CS).
In SU(2)×U(1) gauge theories, CS implies a mass relation between
the W and the Z bosons. The Standard Model (SM), as well as various
extensions of it in the scalar sector, possess such a symmetry. In this paper,
we critically examine the notion of CS and show that there may be three
different classes of CS, depending on the gauge couplings and self-couplings of
the scalars. Among old models that preserve CS, we discuss the Two-Higgs
Doublet Model and the one doublet plus two triplet model by Georgi and
Machacek. We show that for two-triplet extensions, the Georgi-Machacek model is
not the most general possibility with CS. Rather, we find, as the most general
extension, a new model with more parameters and hence a richer phenomenology.
Some consequences of this new model are also discussed.
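Concretely, the tree-level mass relation implied by CS is
$$\rho \;\equiv\; \frac{m_W^2}{m_Z^2 \cos^2\theta_W} \;=\; 1,$$
where $\theta_W$ is the weak mixing angle, so departures of $\rho$ from unity probe violations of CS.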
Segmentation and labeling of vertebrae in MRI images of the spine are critical for the diagnosis of illnesses and abnormalities. These steps are indispensable, as MRI provides detailed information about the tissue structure of the spine. Both supervised and unsupervised segmentation methods exist, yet acquiring sufficient data remains challenging for achieving high accuracy. In this study, we propose an enhanced approach based on a modified attention U-Net architecture for panoptic segmentation of 3D sliced MRI data of the lumbar spine. Our method achieves an impressive accuracy of 99.5% by incorporating novel masking logic, thus significantly advancing the state of the art in vertebral segmentation and labeling. This contributes to more precise and reliable diagnosis and treatment planning.
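For orientation, the sketch below shows a standard additive attention gate of the kind used in attention U-Nets (a minimal PyTorch sketch under common conventions; the paper's specific modifications and masking logic are not reproduced):

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: a coarse gating signal g from the decoder
    reweights the fine skip-connection features x from the encoder."""
    def __init__(self, in_ch_x, in_ch_g, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv3d(in_ch_x, inter_ch, kernel_size=1)
        self.phi_g = nn.Conv3d(in_ch_g, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, x, g):
        # alpha = sigmoid(psi(relu(theta(x) + phi(g)))), broadcast over channels
        alpha = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * alpha  # suppress irrelevant regions in the skip features

# Toy usage on a 3D patch (batch, channels, depth, height, width)
gate = AttentionGate(in_ch_x=32, in_ch_g=32, inter_ch=16)
x = torch.randn(1, 32, 16, 16, 16)
g = torch.randn(1, 32, 16, 16, 16)
print(gate(x, g).shape)  # torch.Size([1, 32, 16, 16, 16])
```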
SN 2024aecx is a nearby (∼11 Mpc) Type IIb supernova discovered within ∼1 d of explosion. In this paper we report high-cadence photometric and spectroscopic follow-up observations, conducted from as early as 0.27 d after discovery out to the nebular phase at 158.4 d. We analyze the environment of SN 2024aecx and derive a new distance, metallicity, and host extinction. The light curve exhibits a hot and luminous shock-cooling peak in the first few days, followed by a main peak with a very rapid post-maximum decline. The earliest spectra are blue and featureless, while from 2.3 d after discovery prominent P-Cygni profiles emerge. At the nebular phase, the emission lines exhibit asymmetric and double-peaked profiles, indicating asphericity and/or early dust formation in the ejecta. We modeled the progenitor and explosion using a two-component model of shock cooling and radioactive 56Ni heating; our model favors an extended, low-mass H-rich envelope with $M_{\rm e} = 0.08^{+0.02}_{-0.03}\,M_\odot$ and a low ejecta mass of $M_{\rm ej} = 2.65^{+1.21}_{-0.73}\,M_\odot$. The comprehensive monitoring of SN 2024aecx, coupled with the detailed characterization of its local environment, establishes it as a benchmark event for probing the progenitors and explosion mechanisms of Type IIb SNe.
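For reference, two-component fits of this kind typically adopt the standard ⁵⁶Ni→⁵⁶Co→⁵⁶Fe heating input for the radioactive term (quoted here in its full-trapping form as background, not from the paper itself):
$$L_{\rm Ni}(t) \;=\; M_{\rm Ni}\left[\epsilon_{\rm Ni}\, e^{-t/\tau_{\rm Ni}} \;+\; \epsilon_{\rm Co}\left(e^{-t/\tau_{\rm Co}} - e^{-t/\tau_{\rm Ni}}\right)\right], \qquad \tau_{\rm Ni} \simeq 8.8\,{\rm d}, \quad \tau_{\rm Co} \simeq 111.3\,{\rm d},$$
where $\epsilon_{\rm Ni}$ and $\epsilon_{\rm Co}$ are the specific heating rates of the two decays.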
Researchers at the Saha Institute of Nuclear Physics and Homi Bhabha National Institute developed a fast hybrid numerical model to estimate discharge probability in single and triple Gas Electron Multiplier (GEM) detectors. The model qualitatively reproduces experimental trends for discharge occurrence, including the reduction in discharge probability observed with asymmetric voltage settings in triple GEMs, while quantitatively overestimating discharge rates.
Environmental sound classification is an important problem in sound
recognition and is more complicated than speech recognition, as environmental
sounds are not well structured with respect to time and frequency. Over the
past years, researchers have used various CNN models to learn audio
representations from features such as log-mel spectrograms, gammatone spectral
coefficients, and mel-frequency cepstral coefficients generated from the audio
files. In this paper, we propose a new methodology, Two-Level Classification:
the Level 1 Classifier is responsible for classifying the audio signal into a
broader class, and the Level 2 Classifiers find the actual class to which the
audio belongs, based on the output of the Level 1 Classifier. We also show the
effects of different audio filters, among which a new method, Audio Crop, is
introduced in this paper; it gave the highest accuracies in most cases. We
used the ESC-50 dataset for our experiments and obtained a maximum accuracy of
78.75% for Level 1 classification and 98.04% for Level 2 classifications.
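To make the routing explicit, here is a minimal sketch of a two-level classifier (illustrative only: synthetic features and scikit-learn logistic regressions stand in for the paper's CNNs and audio features):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: one feature vector per clip, a broad label per clip
# (Level 1 target) and a fine label within each broad class (Level 2 target)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
broad = rng.integers(0, 2, size=200)
fine = rng.integers(0, 5, size=200)

# Level 1: route each clip to a broad class
level1 = LogisticRegression(max_iter=1000).fit(X, broad)

# Level 2: one specialist classifier per broad class
level2 = {
    b: LogisticRegression(max_iter=1000).fit(X[broad == b], fine[broad == b])
    for b in np.unique(broad)
}

def predict(x):
    """Level 1 picks the branch; the matching Level 2 model picks the class."""
    b = level1.predict(x.reshape(1, -1))[0]
    return b, level2[b].predict(x.reshape(1, -1))[0]

print(predict(X[0]))
```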
Materials that host zero-energy flat-band states on the surface are likely to
show surface superconductivity. Here we report a theoretical observation that
a Hamiltonian describing a thin slab of a topological nodal-line semimetal has
a zero-energy eigenstate spanning the entire surface under certain conditions,
namely that (i) the hopping probability of fermions in the direction of the
thickness is greater than that in the other directions, and (ii) the onsite
interaction is less than a limiting value determined by the hopping
probability. Our claim is substantiated by analytic and numerical approaches.
We also report new phase transitions in a region of parameter space and
indicate that the Hamiltonian can also be realized by stacked layers described
by the Kitaev model on the honeycomb lattice.
The Polyakov-Nambu-Jona-Lasinio model has been quite successful in describing various qualitative features of observables for strongly interacting matter that are measurable in heavy-ion collision experiments. The question remains, however, of the quantitative uncertainties in the model results. Such an estimation is possible only by contrasting these results with those obtained from first principles within the lattice QCD framework. Recently, a variety of lattice QCD data were reported in the realistic continuum limit. Here we make a first attempt at reparametrizing the model so as to reproduce these lattice data. We find excellent quantitative agreement for the equation of state. Certain discrepancies in the charge and strangeness susceptibilities, as well as in the baryon-charge correlation, still remain. We discuss their causes and outline possible directions for removing them.
We propose a model of continuous opinion dynamics in which mutual interactions can be both positive and negative. Different types of distributions for the interactions, all characterized by a single parameter p denoting the fraction of negative interactions, are considered. Results from an exact calculation of a discrete version and from numerical simulations of the continuous version of the model indicate the existence of a universal continuous phase transition at p = p_c, below which a consensus is reached. Although the order-disorder transition is analogous to a ferromagnetic-paramagnetic phase transition with comparable critical exponents, the model is characterized by some distinctive features relevant to a social system.
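A minimal simulation sketch of one such interaction distribution follows (the pairwise kinetic-exchange-style update, the bounding to [-1, 1], and the parameter values are assumptions; the paper's exact variants are not reproduced):

```python
import numpy as np

def order_parameter(p, N=500, steps=200_000, seed=1):
    """Average |mean opinion| after many pairwise updates.

    Each update picks agents (i, j); i's opinion shifts by mu * o_j, where mu
    is negative with probability p; opinions stay bounded in [-1, 1]. A
    nonzero order parameter signals consensus, expected only below p_c.
    """
    rng = np.random.default_rng(seed)
    o = rng.uniform(-1, 1, N)
    for _ in range(steps):
        i, j = rng.integers(N, size=2)
        mu = -rng.random() if rng.random() < p else rng.random()
        o[i] = np.clip(o[i] + mu * o[j], -1.0, 1.0)
    return abs(o.mean())

for p in (0.1, 0.25, 0.4):
    print(p, round(order_parameter(p), 3))
```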
Physical data (astrophysical, geophysical, meteorological, etc.) may appear as the output of an experiment, may come as a signal from a dynamical system, or may contain sociological, economic, or biological information. Whatever the source of a time series, some amount of noise is always expected to be embedded in it, and analysis of such data in the presence of noise may fail to give accurate information. Filtering a time series is a tool for removing these errors as far as possible, to make the data suitable for further analysis. Here we attempt to develop an adaptive approach to filtering a time series, and we show analytically that the present model can suppress the propagation of error and maintain positional importance in the time series very efficiently.
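As a generic illustration of adaptive filtering (explicitly not the paper's model), the sketch below implements a tracking-signal-style exponential smoother whose gain adapts to the recent prediction error, so genuine changes pass through while noise-like errors are damped rather than propagated:

```python
import numpy as np

def adaptive_ewma(x, g_min=0.05, g_max=0.9):
    """Exponential smoother with an error-adaptive gain (illustrative only)."""
    y = np.empty(len(x), dtype=float)
    y[0], err_smooth, abs_smooth = x[0], 0.0, 1e-12
    for t in range(1, len(x)):
        e = x[t] - y[t - 1]
        err_smooth = 0.9 * err_smooth + 0.1 * e        # smoothed signed error
        abs_smooth = 0.9 * abs_smooth + 0.1 * abs(e)   # smoothed error magnitude
        # Persistent one-sided errors push the gain up; noise keeps it low
        gain = np.clip(abs(err_smooth) / abs_smooth, g_min, g_max)
        y[t] = y[t - 1] + gain * e
    return y

t = np.linspace(0, 10, 500)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
smoothed = adaptive_ewma(noisy)
```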
Atmospheric characterization of exoplanets has traditionally relied on Low-Resolution Transmission Spectroscopy (LRTS), obtained from both space- and ground-based facilities, as well as on High-Resolution Transmission Spectroscopy (HRTS). Although HRTS can resolve individual spectral lines, it is subject to normalization degeneracies that limit the accurate retrieval of key atmospheric parameters such as pressure, abundance, and cloud opacity. A promising strategy to mitigate this issue is to combine ground-based HRTS with space-based LRTS. However, this approach depends on two separate datasets, thereby requiring two independent observations. In this study, we explore the feasibility of Multi-Object High-Resolution Transmission Spectroscopy (Mo-HRTS) as a means to constrain atmospheric parameters in retrievals using a single dataset. Through simulations based on existing spectrograph specifications for a well-studied target, we demonstrate that low-resolution broadband transmission spectra can be extracted from Mo-HRTS data.
In this paper, we explore some classical and quantum aspects of the nonlinear Liénard equation $\ddot{x} + kx\dot{x} + \omega^2 x + (k^2/9)x^3 = 0$, where $x = x(t)$ is a real variable and $k, \omega \in \mathbb{R}$. We demonstrate that such an equation can be derived from an equation of the Levinson-Smith kind, of the form $\ddot{z} + J(z)\dot{z}^2 + F(z)\dot{z} + G(z) = 0$, where $z = z(t)$ is a real variable and $\{J(z), F(z), G(z)\}$ are suitable functions to be specified. It can further be mapped to the harmonic oscillator by a nonlocal transformation, establishing its isochronicity. Computations employing the Jacobi last multiplier reveal that the system exhibits a bi-Hamiltonian character, i.e., there are two distinct types of Hamiltonians describing the system. For each of these, we perform a canonical quantization in the momentum representation and explore the possibility of bound states. While one of the Hamiltonians exhibits an equispaced spectrum with an infinite tower of states, the other exhibits branching but can be solved exactly for certain choices of the parameters.
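Isochronicity can also be checked numerically: every bounded trajectory should return to its initial condition after one period $2\pi/\omega$, independently of amplitude (a minimal sketch; the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, w = 1.0, 2.0  # arbitrary parameters; expected period is 2*pi/w

def rhs(t, y):
    # x'' = -(k x x' + w^2 x + (k^2/9) x^3), written as a first-order system
    x, v = y
    return [v, -(k * x * v + w**2 * x + (k**2 / 9.0) * x**3)]

T = 2 * np.pi / w
for x0 in (0.1, 0.5, 1.0):
    sol = solve_ivp(rhs, (0.0, T), [x0, 0.0], rtol=1e-10, atol=1e-12)
    # After one putative period, the trajectory should be back at (x0, 0)
    print(x0, np.allclose(sol.y[:, -1], [x0, 0.0], atol=1e-6))
```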
Measurement-induced phase transitions are often studied in random quantum circuits, with local measurements performed with a certain probability. We present here a model where a global measurement is performed with certainty at every time step of the measurement protocol. Each time step therefore consists of evolution under the transverse-field Ising Hamiltonian for a time τ, followed by a measurement that provides a "yes/no" answer to the question, "Are all spins up?". The survival probability after n time steps is defined as the probability that the answer is "no" in all n time steps. For various values of τ, we compute the survival probability, the entanglement across a bipartition, and the generalized geometric measure, a genuine multiparty entanglement, for a chain of size L∼26, and identify a transition at τc∼0.2 for field strength h=1/2. We then analytically derive a recursion relation that enables us to calculate the survival probability for system sizes up to 1000, which provides evidence of a scaling τc∼1/L. The transition at finite τc for L∼28 therefore seems to recede to τc=0 in the thermodynamic limit. Additionally, at large time steps, the survival probability decays logarithmically only when the ground state of the Hamiltonian is paramagnetic; such decay is absent when the ground state is ferromagnetic.
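A minimal numerical sketch of the protocol follows (conventions assumed here: a transverse-field Ising chain with open boundaries and a uniform initial superposition; the paper's exact conventions may differ):

```python
import numpy as np
from scipy.linalg import expm

# H = -sum_i Z_i Z_{i+1} - h sum_i X_i, evolved for time tau between global
# "are all spins up?" measurements; a "no" outcome projects out |up...up>.
L, h, tau, n_steps = 6, 0.5, 0.2, 50
dim = 2 ** L
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def site_op(op, i):
    """Embed a single-site operator at site i of the L-site chain."""
    out = np.array([[1.0]], dtype=complex)
    for s in range(L):
        out = np.kron(out, op if s == i else np.eye(2))
    return out

H = sum(-site_op(Z, i) @ site_op(Z, i + 1) for i in range(L - 1))
H = H + sum(-h * site_op(X, i) for i in range(L))
U = expm(-1j * tau * H)

psi = np.ones(dim, dtype=complex) / np.sqrt(dim)  # assumed initial state
survival = 1.0
for _ in range(n_steps):
    psi = U @ psi
    psi[0] = 0.0                      # "no": zero out the all-up amplitude
    p_no = np.linalg.norm(psi) ** 2   # probability of the "no" outcome
    survival *= p_no
    psi /= np.sqrt(p_no)
print(survival)  # survival probability after n_steps measurements
```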