This study quantifies the gravitational wave memory from binary neutron star mergers, for the first time incorporating previously neglected contributions from magnetic fields, neutrino emission, and baryonic ejecta using state-of-the-art GRMHD simulations. It finds that these phenomena can non-monotonically influence the total memory and may be detectable by next-generation observatories, offering a unique probe of extreme astrophysical conditions.
This research identifies and quantifies a "measurement imbalance" in agentic AI evaluation, where technical metrics are disproportionately emphasized over human-centered, temporal, and contextual factors. A four-axis evaluation framework is introduced to provide a more holistic assessment, aiming to align industry claims with actual real-world deployment value and mitigate adoption failures.
Researchers developed a Graph Neural Network (GNN) based recommendation algorithm incorporating initial residual connections and identity mapping to mitigate over-smoothing and enhance information flow. This approach achieved superior Recall@20 and NDCG@20 scores compared to other GNN and traditional baselines across Gowalla, Yelp-2018, and Amazon-Book datasets.
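The described combination of initial residual connections and identity mapping matches the GCNII-style propagation rule; below is a minimal sketch under that assumption. The layer sizes, hyperparameter values, and function names are illustrative, not the paper's exact definitions.

```python
import numpy as np

def gnn_layer(H, H0, A_hat, W, alpha=0.1, beta=0.5):
    """One propagation step with an initial residual connection (alpha)
    and identity mapping (beta), in the style of GCNII.

    H     : (n, d) current node embeddings
    H0    : (n, d) initial embeddings, the residual anchor against over-smoothing
    A_hat : (n, n) symmetrically normalized adjacency matrix with self-loops
    W     : (d, d) learned layer weight matrix
    """
    # Mix neighborhood-propagated features with the initial embeddings.
    support = (1 - alpha) * (A_hat @ H) + alpha * H0
    # Blend the identity map with the learned transformation so deep stacks
    # degrade gracefully toward the identity instead of over-smoothing.
    out = support @ ((1 - beta) * np.eye(W.shape[0]) + beta * W)
    return np.maximum(out, 0.0)  # ReLU
```

Stacking many such layers keeps every node's representation tethered to its initial features via `H0`, which is the mechanism the abstract credits for mitigating over-smoothing.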
Split Federated Learning (SFL) is a distributed machine learning paradigm that combines federated learning and split learning. In SFL, a neural network is partitioned at a cut layer, with the initial layers deployed on clients and remaining layers on a training server. There are two main variants of SFL: SFL-V1 where the training server maintains separate server-side models for each client, and SFL-V2 where the training server maintains a single shared model for all clients. While existing studies have focused on algorithm development for SFL, a comprehensive quantitative analysis of how the cut layer selection affects model performance remains unexplored. This paper addresses this gap by providing numerical and theoretical analysis of SFL performance and convergence relative to cut layer selection. We find that SFL-V1 is relatively invariant to the choice of cut layer, which is consistent with our theoretical results. Numerical experiments on four datasets and two neural networks show that the cut layer selection significantly affects the performance of SFL-V2. Moreover, SFL-V2 with an appropriate cut layer selection outperforms FedAvg on heterogeneous data.
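A minimal sketch of the mechanics being analyzed: partitioning a network at a cut layer and exchanging activations ("smashed data") and gradients between client and server. The toy model and cut index below are illustrative, not the paper's configurations.

```python
import torch
import torch.nn as nn

# Hypothetical toy network; the cut-layer index is the design knob under study.
full_model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
cut = 2  # layers before index `cut` run on the client

client_part = full_model[:cut]
server_part = full_model[cut:]

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))

# Client forward pass up to the cut layer produces the smashed data.
smashed = client_part(x)
smashed_srv = smashed.detach().requires_grad_(True)  # tensor sent to the server

# Server completes the forward and backward pass.
loss = nn.functional.cross_entropy(server_part(smashed_srv), y)
loss.backward()

# The gradient at the cut layer is returned to the client,
# which finishes backpropagation through its local layers.
smashed.backward(smashed_srv.grad)
```

In SFL-V1 the server would keep one `server_part` per client; in SFL-V2 all clients share the single `server_part` above. The cut index controls how much of the model is shared, which is consistent with V2 being the variant whose performance depends strongly on it.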
Precision measurements of space and time, like those made by the detectors of the Laser Interferometer Gravitational-wave Observatory (LIGO), are often confronted with fundamental limitations imposed by quantum mechanics. The Heisenberg uncertainty principle dictates that the position and momentum of an object cannot both be precisely measured, giving rise to an apparent limitation called the Standard Quantum Limit (SQL). Reducing quantum noise below the SQL in gravitational-wave detectors, where photons are used to continuously measure the positions of freely falling mirrors, has been an active area of research for decades. Here we show how the LIGO A+ upgrade reduced the detectors' quantum noise below the SQL by up to 3 dB while achieving a broadband sensitivity improvement, more than two decades after this possibility was first presented.
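For reference, the benchmark in question is the textbook free-mass SQL (a generic statement, not LIGO's full quantum noise budget):

```latex
% Standard quantum limit on the displacement power spectral density of a free
% test mass m continuously monitored at angular frequency \Omega:
S_x^{\rm SQL}(\Omega) = \frac{2\hbar}{m\,\Omega^{2}}
```

Surpassing this bound requires correlating the back-action (radiation pressure) noise with the imprecision (shot) noise; in A+ this is achieved with frequency-dependent squeezing.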
The generalized Ramsey number $r(G, H, q)$ is the minimum number of colors needed to color the edges of $G$ such that every isomorphic copy of $H$ has at least $q$ colors. In this note, we improve the upper and lower bounds on $r(K_{n,n}, C_{2k}, 3)$. Our upper bound answers a question of Lane and Morrison. For $k=3$ we obtain the asymptotically sharp estimate $r(K_{n,n}, C_6, 3) = \frac{7}{20} n + o(n)$.
The inspiral-merger-ringdown (IMR) consistency test checks the consistency of the final mass and final spin of a binary black hole merger remnant, independently inferred via the inspiral and merger-ringdown parts of the waveform. As binaries are expected to be nearly circularized when entering the frequency band of ground-based detectors, tests of general relativity (GR) currently employ quasicircular waveforms. We quantify the effect of residual orbital eccentricity on the IMR consistency test. We find that eccentricity causes a significant systematic bias in the inferred final mass and spin of the remnant black hole at an orbital eccentricity (defined at $10$ Hz) of $e_0 \gtrsim 0.1$ in the LIGO band (for a total binary mass in the range $65$-$200\,M_{\odot}$). For binary black holes observed by Cosmic Explorer (CE), the systematic bias becomes significant for $e_0 \gtrsim 0.015$ (for $200$-$600\,M_{\odot}$ systems). This eccentricity-induced bias on the final mass and spin leads to an apparent inconsistency in the IMR consistency test, manifesting as a false violation of GR. Hence, eccentric corrections to waveform models are important for constructing a robust test of GR, especially for third-generation detectors. We also estimate the eccentric corrections to the relationship between the inspiral parameters and the final mass and final spin; they are shown to be quite small.
The next generation of rare-event search experiments in nuclear and particle physics demands structural materials combining exceptional mechanical strength with ultra-low levels of radioactive contamination. This study evaluates chemical vapor deposition (CVD) nickel as a candidate structural material for such applications. Manufacturer-supplied CVD Ni grown on aluminum substrates underwent tensile testing before and after welding, alongside standard Ni samples. CVD Ni exhibited a planar tensile strength of ~600 MPa, significantly surpassing standard nickel. However, welding and heat treatment were found to reduce the tensile strength to levels comparable to standard Ni, with the porosity observed in the welds likely contributing to this reduction. Material assay via isotope-dilution inductively coupled plasma mass spectrometry (ICP-MS) yielded measured bulk concentrations of 232-Th, 238-U, and nat-K at levels of ~70 ppq, <100 ppq, and ~900 ppt, respectively, the lowest reported in nickel. Surface-etch profiling uncovered higher concentrations of these contaminants extending ~10 micrometers beneath the surface, likely associated with the aluminum growth substrate. These results are compared with the only other well-documented use of CVD Ni in a low-radioactive-background physics experiment, and we discuss how the differences may arise from changes in the CVD fabrication or testing process. The results establish CVD Ni as a promising low-radioactivity structural material, while outlining the need for further development of welding and surface-cleaning techniques to fully realize its potential in large-scale, low-background rare-event search experiments.
Brain stroke is a leading cause of mortality and long-term disability worldwide, underscoring the need for precise and rapid prediction techniques. Computed Tomography (CT) scanning is considered one of the most effective methods for diagnosing brain strokes. Most stroke classification techniques use a single slice-level prediction mechanism, requiring radiologists to manually select the most critical CT slice from the original CT volume. Although clinical evaluations remain central to traditional diagnostic procedures, machine learning (ML) has opened new avenues for improving stroke diagnosis. To supplement traditional diagnostic techniques, this study investigates machine learning models for early brain stroke prediction using CT scan images. This research proposes a novel machine learning approach to brain stroke detection, focusing on optimizing classification performance with pre-trained deep learning models and advanced optimization strategies. Pre-trained models, including DenseNet201, InceptionV3, MobileNetV2, ResNet50, and Xception, are used for feature extraction. Feature engineering techniques, including BFO, PCA, and LDA, further enhance model performance. These features are then classified using machine learning algorithms, including SVC, RF, XGB, DT, LR, KNN, and GNB. Our experiments demonstrate that the combination of MobileNetV2, LDA, and SVC achieved the highest classification accuracy, 97.93%, significantly outperforming other model-optimizer-classifier combinations. The results underline the effectiveness of integrating lightweight pre-trained models with robust optimization and classification techniques for brain stroke diagnosis.
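A hedged sketch of the shape of the winning pipeline (frozen MobileNetV2 features, LDA reduction, SVC classification). The synthetic stand-in data, input size, and preprocessing below are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Frozen ImageNet backbone used purely as a feature extractor.
backbone = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: (n, 224, 224, 3) uint8 CT slices replicated to 3 channels."""
    return backbone.predict(preprocess_input(images.astype("float32")), verbose=0)

# X_img and y stand in for the CT dataset (not available here).
X_img = np.random.randint(0, 255, (16, 224, 224, 3), dtype="uint8")
y = np.random.randint(0, 2, 16)

# LDA reduces the 1280-d features before the RBF support vector classifier.
clf = make_pipeline(LinearDiscriminantAnalysis(), SVC(kernel="rbf"))
clf.fit(extract_features(X_img), y)
```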
African languages have far less in-language content available digitally, making it challenging for question answering systems to satisfy the information needs of users. Cross-lingual open-retrieval question answering (XOR QA) systems -- those that retrieve answer content from other languages while serving people in their native language -- offer a means of filling this gap. To this end, we create AfriQA, the first cross-lingual QA dataset with a focus on African languages. AfriQA includes 12,000+ XOR QA examples across 10 African languages. While previous datasets have focused primarily on languages where cross-lingual QA augments coverage from the target language, AfriQA focuses on languages where cross-lingual answer content is the only high-coverage source of answer content. Because of this, we argue that African languages are one of the most important and realistic use cases for XOR QA. Our experiments demonstrate the poor performance of automatic translation and multilingual retrieval methods. Overall, AfriQA proves challenging for state-of-the-art QA models. We hope that the dataset enables the development of more equitable QA technology.
Idioms are figurative expressions whose meanings often cannot be inferred from their individual words, making them difficult to process computationally and posing challenges for human experimental studies. This survey reviews datasets developed in psycholinguistics and computational linguistics for studying idioms, focusing on their content, form, and intended use. Psycholinguistic resources typically contain normed ratings along dimensions such as familiarity, transparency, and compositionality, while computational datasets support tasks like idiomaticity detection/classification, paraphrasing, and cross-lingual modeling. We present trends in annotation practices, coverage, and task framing across 53 datasets. Although recent efforts have expanded language coverage and task diversity, there appears to be little connection yet between psycholinguistic and computational research on idioms.
A study explores the long-run and short-run impacts of AI innovation, economic growth, energy use, foreign direct investment, and urbanization on CO2 emissions in the United States from 1990 to 2022. It reveals that AI innovation significantly reduces CO2 emissions (e.g., a 1% rise in AI investment leads to a 0.054% long-run decrease), while economic growth, energy consumption, FDI, and urbanization contribute to their increase.
The detection of ~50 coalescing compact binaries with the Advanced LIGO and Virgo detectors has allowed us to test general relativity, constrain merger rates, and look for evidence of tidal effects, compact object spins, higher waveform modes, and black hole ringdowns. An effect that has not yet been confidently detected is binary eccentricity, which might be present in a small fraction of binaries formed dynamically. Here we discuss general limits on eccentricity that can, in principle, be placed on all types of compact object binaries by a detector operating at the design sensitivity of Advanced LIGO. Using a post-Newtonian model for gravitational-wave phasing valid in the small eccentricity regime, we assess the relative measurement error for eccentricity for a variety of spinning and non-spinning binaries. Errors and correlations involving the mass and spin parameters are also investigated. We find that decreasing the low frequency limit of a detector's observational frequency band is one of the key design factors for increasing the odds of measuring binary eccentricity. We also introduce and analytically explore the eccentric chirp mass parameter, which replaces the chirp mass as the key measurable parameter combination in eccentric gravitational waveform models. The eccentric chirp mass parameter explains a degeneracy between the chirp mass and the eccentricity. This degeneracy leads to a bias in the standard chirp mass parameter. We also investigate the systematic parameter bias that arises when eccentric systems are recovered using circular waveform templates. We use both Fisher matrix and Bayesian-inference-based Markov Chain Monte Carlo (MCMC) methods to investigate these parameter estimation issues, and we find good agreement between the two approaches (for both statistical and systematic errors) in the appropriate signal-to-noise ratio regime. (abridged)
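To illustrate the Fisher-matrix side of the method, here is a minimal numerical sketch with a toy frequency-domain signal. The real analysis uses a post-Newtonian eccentric phasing model and a detector noise curve, so the waveform, PSD, and parameter values below are placeholders.

```python
import numpy as np

f = np.linspace(20.0, 512.0, 4000)      # detector band (Hz)
Sn = 1e-46 * (1 + (50.0 / f) ** 4)      # toy one-sided noise PSD
df = f[1] - f[0]

def h(theta):
    """Toy frequency-domain signal with an amplitude and a chirp-mass-like
    phase parameter; stand-in for the eccentric PN waveform."""
    amp, mc = theta
    return amp * f ** (-7.0 / 6.0) * np.exp(1j * mc * f ** (-5.0 / 3.0))

def inner(a, b):
    # Noise-weighted inner product (a|b) = 4 Re \int a b^* / Sn df.
    return 4.0 * np.real(np.sum(a * np.conj(b) / Sn)) * df

theta0 = np.array([1e-22, 3000.0])
eps = theta0 * 1e-6
# Central finite differences for the waveform derivatives dh/dtheta_i.
dh = [(h(theta0 + e) - h(theta0 - e)) / (2 * e[i])
      for i, e in enumerate(np.diag(eps))]

gamma = np.array([[inner(di, dj) for dj in dh] for di in dh])
cov = np.linalg.inv(gamma)              # Cramer-Rao covariance bound
print("1-sigma errors:", np.sqrt(np.diag(cov)))
```

Off-diagonal entries of `cov` expose parameter correlations, which is how degeneracies such as the chirp-mass/eccentricity degeneracy discussed above show up in a Fisher analysis.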
Climate change has become one of the most pressing problems facing many countries in recent years, and specialized research on how to mitigate it has been taken up accordingly. Within this discussion, the influence of advanced technologies in achieving carbon neutrality has been examined. While several studies have investigated how AI and digital innovations could be used to reduce the environmental footprint, the actual influence of AI in reducing CO2 emissions (a proxy for carbon footprint) has yet to be investigated. This paper studies the role of advanced technologies in general, and Artificial Intelligence (AI) and ICT use in particular, in advancing carbon neutrality in the United States, over a sample period ending in 2021. Secondly, this paper examines how Stock Market Growth, ICT use, Gross Domestic Product (GDP), and Population affect CO2 emissions using the STIRPAT model. After examining stationarity among the variables using a variety of unit root tests, this study concluded that there are no unit root problems across all the variables, with a mixed order of integration. The ARDL bounds test for cointegration revealed that the variables in this study have a long-run relationship. Moreover, the estimates from the ARDL model indicated that economic growth, stock market capitalization, and population significantly contributed to carbon emissions in both the short run and long run. Conversely, AI and ICT use significantly reduced carbon emissions over both periods. The findings were confirmed to be robust using FMOLS, DOLS, and CCR estimations. Finally, diagnostic tests indicated the absence of serial correlation, heteroscedasticity, and specification errors, and thus the model was robust.
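A hedged sketch of the stationarity screening step described above: an augmented Dickey-Fuller (ADF) unit-root test on each series in levels and first differences, the standard prerequisite for an ARDL bounds test. The series names and synthetic data are placeholders, not the paper's dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "co2": rng.normal(size=60).cumsum(),    # I(1)-like random walk
    "ai_innovation": rng.normal(size=60),   # I(0)-like stationary noise
})

for name, series in data.items():
    stat_lvl, p_lvl = adfuller(series)[:2]
    stat_dif, p_dif = adfuller(series.diff().dropna())[:2]
    # Classify the integration order from the two p-values.
    order = "I(0)" if p_lvl < 0.05 else ("I(1)" if p_dif < 0.05 else ">I(1)")
    print(f"{name}: level p={p_lvl:.3f}, first-diff p={p_dif:.3f} -> {order}")
```

A mixed bag of I(0) and I(1) variables, with none I(2), is exactly the situation in which the ARDL bounds approach to cointegration is applicable.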
In this paper, we present a demo of an intelligent personal agent called Hey Dona (or just Dona) with virtual voice assistance in student course registration. It is a deployed project in the theme of AI for education. In this digital age with a myriad of smart devices, users often delegate tasks to agents. While pointing and clicking supersedes the erstwhile command-typing, modern devices allow users to speak commands for agents to execute tasks, enhancing speed and convenience. In line with this progress, Dona is an intelligent agent catering to student needs by automated, voice-operated course registration, spanning a multitude of accents, entailing task planning optimization, with some language translation as needed. Dona accepts voice input by microphone (Bluetooth, wired microphone), converts human voice to computer understandable language, performs query processing as per user commands, connects with the Web to search for answers, models task dependencies, imbibes quality control, and conveys output by speaking to users as well as displaying text, thus enabling human-AI interaction by speech cum text. It is meant to work seamlessly on desktops, smartphones etc. and in indoor as well as outdoor settings. To the best of our knowledge, Dona is among the first of its kind as an intelligent personal agent for voice assistance in student course registration. Due to its ubiquitous access for educational needs, Dona directly impacts AI for education. It makes a broader impact on smart city characteristics of smart living and smart people due to its contributions to providing benefits for new ways of living and assisting 21st century education, respectively.
Casting a ballot from a phone or laptop sounds appealing, but only if voters can be confident their choice remains secret and results cannot be altered in the dark. This paper proposes a hybrid blockchain-based voting model that stores encrypted votes on a private blockchain maintained by election organizers and neutral observers, while periodically anchoring hashes of these votes onto a public blockchain as a tamper-evident seal. The system issues voters one-time blind-signed tokens to protect anonymity, and provides receipts so they can confirm their vote was counted. We implemented a live prototype using common web technologies (this http URL, React, Firebase) to demonstrate end-to-end functionality, accessibility, and cost efficiency. Our contributions include developing a working demo, a complete election workflow, a hybrid blockchain design, and a user-friendly interface that balances privacy, security, transparency, and practicality. This research highlights the feasibility of secure, verifiable, and scalable online voting for organizations ranging from small groups to larger institutions.
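To illustrate the one-time blind-signed tokens, here is a toy RSA blind-signature round (textbook Chaum blinding with deliberately insecure toy parameters, not the paper's implementation): the authority signs a voter's token without ever seeing it, so the signed token authorizes one ballot while preserving anonymity.

```python
import hashlib
from math import gcd

# Authority's toy RSA key (tiny primes for readability only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

# Voter derives a token and blinds it with a random factor r coprime to n.
token = int.from_bytes(hashlib.sha256(b"ballot-token-42").digest(), "big") % n
r = 50
assert gcd(r, n) == 1
blinded = (token * pow(r, e, n)) % n

# Authority signs the blinded value without learning the underlying token.
blind_sig = pow(blinded, d, n)

# Voter unblinds; the result is a valid signature on the original token.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == token
print("valid blind-signed token:", sig)
```

The unblinding works because $(m r^e)^d = m^d r \pmod n$, so multiplying by $r^{-1}$ leaves $m^d$, an ordinary RSA signature on the token.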
Asymmetric emission of gravitational waves during a compact binary coalescence results in the loss of linear momentum and a corresponding "kick" or recoil on the binary's center of mass. This leads to a direction-dependent Doppler shift of the ringdown gravitational waveform. We quantify the measurability of the kick imparted to the remnant black hole in a binary black hole merger. Future ground- and space-based gravitational-wave detectors will measure this effect to within $\sim 2\%$ to $\sim 30\%$ for a subset of their expected observed sources. Certain binary configurations in the LISA band may allow a sub-percent-level measurement of this effect. This direct measurement of black hole kicks can also facilitate a novel test of general relativity based on linear momentum balance. We formulate this kick consistency test via measurement of a null variable that quantifies the difference between the inferred kick (using numerical relativity) and that observed via the Doppler-shifted ringdown signal. This null variable can be constrained (at 90% confidence) to $\sim 10\%$ to $30\%$ with Cosmic Explorer and to $\sim 3\%$ to $12\%$ with LISA.
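For intuition, the effect being measured is the ordinary first-order Doppler shift of the ringdown frequency; a minimal statement (standard kinematics, not the paper's full waveform treatment):

```latex
% First-order Doppler shift of the ringdown due to the remnant kick velocity
% \vec{v}_k, with \hat{N} the unit vector from the source to the observer:
f_{\rm obs} \simeq f_{\rm source}\left(1 + \frac{\vec{v}_k \cdot \hat{N}}{c}\right)
```

A kick of a few hundred km/s corresponds to $v_k/c \sim 10^{-3}$, which sets the scale of the fractional frequency shift that the percent-level constraints above must resolve.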
Black holes (BHs) with masses between $\sim 3$-$5\,M_{\odot}$, produced by a binary neutron star (BNS) merger, can further pair up with a neutron star or BH and merge again within a Hubble time. However, the astrophysical environments in which this can happen and the rate of such mergers are open questions in astrophysics. Gravitational waves may play an important role in answering these questions. In this context, we discuss the possibility that the primary of the recent LIGO-Virgo-KAGRA binary GW230529_181500 (GW230529, in short) is the product of a previous BNS merger. Invoking numerical relativity (NR)-based fitting formulas that map the binary constituents' masses and tidal deformabilities to the mass, spin, and kick velocity of the remnant BH, we investigate the potential parents of GW230529's primary. Our calculations using NR fits based on BNS simulations reveal that the remnant of a high-mass BNS merger similar to GW190425 is consistent with the primary of GW230529. This argument is further strengthened by the gravitational wave-based merger rate estimation of GW190425-like and GW230529-like populations. We show that around 18% (median) of the GW190425-like remnants could become the primary component in GW230529-like mergers. The dimensionless tidal deformability parameter of the heavier neutron star in the parent binary is constrained to $67^{+163}_{-61}$ at 90% credibility. Using estimates of the gravitational-wave kick imparted to the remnant, we also discuss the astrophysical environments in which these types of mergers can take place and the implications for their future observations.
This study proposes an automated data mining framework based on autoencoders and experimentally verifies its effectiveness in feature extraction and data dimensionality reduction. Through its encoding-decoding structure, the autoencoder can capture the data's latent characteristics and perform noise reduction and anomaly detection, providing an efficient and stable solution for the data mining process. The experiments compared the performance of the autoencoder with traditional dimensionality reduction methods (such as PCA, FA, t-SNE, and UMAP). The results showed that the autoencoder performed best in terms of reconstruction error and root mean square error, better retaining data structure and enhancing the generalization ability of the model. The autoencoder-based framework not only reduces manual intervention but also significantly improves the automation of data processing. In the future, with the advancement of deep learning and big data technology, autoencoders combined with generative adversarial networks (GANs) or graph neural networks (GNNs) are expected to be more widely used in complex data processing, real-time data analysis, and intelligent decision-making.
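A minimal sketch of the encode-decode structure described above, including reconstruction error reused as an anomaly score. The layer sizes, training schedule, and synthetic data are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=64, n_latent=8):
        super().__init__()
        # Encoder compresses to a low-dimensional latent code...
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_latent))
        # ...and the decoder reconstructs the original features from it.
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

X = torch.randn(256, 64)                        # stand-in tabular data
model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), X)  # reconstruction error
    loss.backward()
    opt.step()

# Per-row reconstruction error doubles as an anomaly score:
# rows the model reconstructs poorly are flagged as outliers.
with torch.no_grad():
    scores = ((model(X) - X) ** 2).mean(dim=1)
print("top anomaly score:", scores.max().item())
```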
African languages are severely under-represented in NLP research due to a lack of datasets covering several NLP tasks. While there are individual language-specific datasets that are being expanded to different tasks, only a handful of NLP tasks (e.g., named entity recognition and machine translation) have standardized benchmark datasets covering several geographically and typologically diverse African languages. In this paper, we develop MasakhaNEWS -- a new benchmark dataset for news topic classification covering 16 languages widely spoken in Africa. We provide an evaluation of baseline models by training classical machine learning models and fine-tuning several language models. Furthermore, we explore several alternatives to full fine-tuning of language models that are better suited for zero-shot and few-shot learning, such as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern-exploiting training (PET), prompting language models (like ChatGPT), and prompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API). Our evaluation in the zero-shot setting shows the potential of prompting ChatGPT for news topic classification in low-resource African languages, achieving an average performance of 70 F1 points without leveraging additional supervision like MAD-X. In the few-shot setting, we show that with as few as 10 examples per label, we achieve more than 90% (i.e., 86.0 F1 points) of the performance of fully supervised training (92.6 F1 points) using the PET approach.