We present \textbf{VADER} (Variational Autoencoder for Disks Embedded with Rings), a probabilistic deep learning model for inferring both planet mass and global disk properties from high-resolution ALMA dust continuum images of protoplanetary disks (PPDs). VADER enables uncertainty-aware inference of planet masses, $\alpha$-viscosity, dust-to-gas ratio, Stokes number, flaring index, and the number of planets directly from protoplanetary disk images. VADER is trained on over 100{,}000 synthetic images of PPDs generated from \texttt{FARGO3D} simulations post-processed with \texttt{RADMC3D}. Our trained model predicts physical planet and disk parameters with $R^2 > 0.9$ from dust continuum images of PPDs. Applied to 23 real disks, VADER's mass estimates are consistent with literature values and reveal latent correlations that reflect known disk physics. Our results establish VAE-based generative models as robust tools for probabilistic astrophysical inference, with direct applications to interpreting protoplanetary disk substructures in the era of large interferometric surveys.
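The uncertainty-aware inference described above rests on standard VAE machinery: the encoder outputs a mean and log-variance per latent parameter, sampling uses the reparameterization trick, and training penalizes a closed-form KL divergence against a standard normal prior. A minimal NumPy sketch of those two ingredients follows; it is illustrative only, and VADER's actual architecture and loss are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    # Sample z ~ N(mu, sigma^2) via the reparameterization trick, which keeps
    # the sampling step differentiable with respect to (mu, logvar).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

mu, logvar = np.zeros(4), np.zeros(4)
print(kl_to_standard_normal(mu, logvar))  # 0.0: the posterior already matches the prior
z = reparameterize(mu, logvar)            # one latent draw per call
```

In a trained model, the spread of `z` draws per latent parameter is what yields the per-parameter uncertainty estimates.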
With the proliferation of large language model (LLM) applications since 2022, their use in education has sparked both excitement and concern. Recent studies consistently highlight that students' (mis)use of LLMs can hinder learning outcomes. This work aims to teach students how to effectively prompt LLMs to improve their learning. We first proposed pedagogical prompting, a theoretically grounded new concept to elicit learning-oriented responses from LLMs. To move from concept design to a proof-of-concept learning intervention in real educational settings, we selected early undergraduate CS education (CS1/CS2) as the example context. We began with a formative survey study with instructors (N=36) teaching early-stage undergraduate-level CS courses to inform the instructional design based on classroom needs. Based on their insights, we designed and developed a learning intervention through an interactive system with scenario-based instruction to train pedagogical prompting skills. Finally, we evaluated its instructional effectiveness through a user study with CS novice students (N=22) using pre/post-tests. Through mixed-methods analyses, our results indicate significant improvements in learners' LLM-based pedagogical help-seeking skills, along with positive attitudes toward the system and increased willingness to use pedagogical prompts in the future. Our contributions include (1) a theoretical framework of pedagogical prompting; (2) empirical insights into current instructor attitudes toward pedagogical prompting; and (3) a learning intervention design with an interactive learning tool and scenario-based instruction leading to promising results on teaching LLM-based help-seeking. Our approach is scalable for broader implementation in classrooms and has the potential to be integrated into tools like ChatGPT as an onboarding experience to encourage learning-oriented use of generative AI.
The James Webb Space Telescope (JWST) has begun to revolutionize our view of the Cosmos. The discovery of Blue Monsters (i.e. ultracompact yet very bright high-z galaxies) and the Little Red Dots (i.e. very compact, dustless, strong-Balmer-break cosmic dawn sources) poses significant challenges to pre-JWST-era models of the assembly of the first stars and galaxies. In addition, JWST data further strengthen the problem posed by the origin of the supermassive black holes that power the most distant quasars observed. Stars powered by Dark Matter annihilation (i.e. Dark Stars) can form out of primordial gas clouds during the cosmic dawn era and subsequently might grow via accretion and become supermassive. In this paper we argue that Supermassive Dark Stars (SMDSs) offer natural solutions to the three puzzles mentioned above. Moreover, we present the best evidence so far for the existence of SMDSs: the identification of a He~II~$\lambda$2511~Å absorption feature at $S/N \sim 4$ in the spectrum of JADES-GS-z13-0.
Clumpy galaxies in the GEMS and GOODS fields are examined for clues to their evolution into modern spirals. The magnitudes of the clumps and the surface brightnesses of the interclump regions are measured and fitted to models of stellar age and mass. There is an evolutionary trend from clump clusters with no evident interclump emission to clump clusters with faint red disks, to spiral galaxies of the flocculent or grand design types. Along this sequence, the interclump surface density increases and the mass surface density contrast between the clumps and the interclump regions decreases, suggesting a gradual dispersal of clumps to form disks. Also along this sequence, the bulge-to-clump mass ratios and age ratios increase, suggesting a gradual formation of bulges. All of these morphological types occur in the same redshift range, indicating that the clump cluster morphology is not the result of bandshifting. Comparisons to local galaxies with the same rest wavelength and spatial resolution show that clump clusters resemble local dwarf Irregulars. This resemblance is consistent with a model in which the clumpy morphology comes from gravitational instabilities in gas with a high turbulent speed compared to the rotation speed and a high mass fraction compared to the stars.
Linear Programming (LP) is a foundational optimization technique with widespread applications in finance, energy trading, and supply chain logistics. However, traditional Central Processing Unit (CPU)-based LP solvers often struggle to meet the latency and scalability demands of dynamic, high-dimensional industrial environments, creating a significant computational challenge. This project addresses these limitations by accelerating linear programming on AMD Graphics Processing Units (GPUs), leveraging the ROCm open-source platform and PyTorch. The core of this work is the development of a robust, high-performance, open-source implementation of the Primal-Dual Hybrid Gradient (PDHG) algorithm, engineered specifically for general LP problems on AMD hardware. Performance is evaluated against standard LP test sets and established CPU-based solvers, with a particular focus on challenging real-world instances including the Security-Constrained Economic Dispatch (SCED) to guide hyperparameter tuning. Our results show a significant improvement, with up to a 36x speedup on GPU over CPU for large-scale problems, highlighting the advantages of GPU acceleration in solving complex optimization tasks.
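For reference, the PDHG iteration for a standard-form LP, $\min\, c^\top x$ subject to $Ax = b$, $x \ge 0$, alternates a projected primal gradient step with an extrapolated dual step, with step sizes satisfying $\tau\sigma\|A\|^2 < 1$. The sketch below is a plain NumPy illustration of the basic update rule; the paper's GPU implementation, step-size schedules, and any restart heuristics are not reproduced here.

```python
import numpy as np

def pdhg_lp(c, A, b, iters=20000):
    # Basic PDHG for  min c@x  s.t.  A@x = b, x >= 0.
    # Convergence requires tau * sigma * ||A||_2^2 < 1.
    norm_A = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / norm_A          # tau*sigma*||A||^2 = 0.81 < 1
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        # Primal step: gradient on the Lagrangian, then project onto x >= 0.
        x_new = np.maximum(0.0, x - tau * (c - A.T @ y))
        # Dual step on the extrapolated primal iterate 2*x_new - x.
        y = y + sigma * (b - A @ (2 * x_new - x))
        x = x_new
    return x, y

# Tiny example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum x* = (1, 0)).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = pdhg_lp(c, A, b)
print(x)  # converges toward the optimum (1, 0)
```

Every operation in the loop is a matrix-vector product or an elementwise maximum, which is exactly why PDHG maps so well onto GPUs.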
Researchers from Tumult Labs identified a new class of "precision-based attacks" that exploit floating-point arithmetic to compromise privacy in existing differentially private implementations like diffprivlib, SmartNoise Core, and OpenDP. They developed "interval refining," a robust technique ensuring provable privacy and minimal error, which can efficiently generate Laplace samples at a rate of 30,000 per second with most samples terminating in a single iteration.
A framework leverages large language models to automate the annotation of privacy policies with Governing Knowledge Commons and Contextual Integrity (GKC-CI) parameters. The system achieves 90.65% accuracy, rivaling human experts, while reducing annotation cost to as little as $0.23 per policy and completing the task in under a minute.
This work presents a systematic benchmark of differentially private synthetic data generation algorithms that can generate tabular data. Utility of the synthetic data is evaluated by measuring whether the synthetic data preserve the distributions of individual attributes and of pairs of attributes, as well as pairwise correlations and the accuracy of an ML classification model. In a comprehensive empirical evaluation we identify the top-performing algorithms and those that consistently fail to beat baseline approaches.
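One common way to score marginal preservation of this kind is to histogram each attribute's real and synthetic values on a common grid and take the total variation distance between the two empirical distributions. The following is a minimal sketch of that idea; the benchmark's exact binning and scoring choices are assumptions here, not taken from the paper.

```python
import numpy as np

def marginal_tvd(real, synth, bins=10):
    # Average total-variation distance between the per-column histograms of
    # real and synthetic data; 0 means identical one-way marginals.
    dists = []
    for j in range(real.shape[1]):
        lo = min(real[:, j].min(), synth[:, j].min())
        hi = max(real[:, j].max(), synth[:, j].max())
        pr, _ = np.histogram(real[:, j], bins=bins, range=(lo, hi))
        ps, _ = np.histogram(synth[:, j], bins=bins, range=(lo, hi))
        pr = pr / pr.sum()
        ps = ps / ps.sum()
        dists.append(0.5 * np.abs(pr - ps).sum())
    return float(np.mean(dists))

rng = np.random.default_rng(1)
real = rng.normal(size=(5000, 3))
synth = rng.normal(size=(5000, 3))     # stand-in for a generator's output
print(marginal_tvd(real, real))        # 0.0: a dataset matches its own marginals
print(marginal_tvd(real, synth))       # small but nonzero sampling difference
```

The same construction extends to pairs of attributes by histogramming on a 2-D grid, which is the two-way-marginal version of the metric.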
Divergences that occur in density matrices of decay and scattering processes are shown to be regularized by tracing and unitarity or the optical theorem. These divergences are regularized by the lifetime of the decaying particle or the total scattering cross section. Also, this regularization is shown to give the expected helicities of final particles. The density matrix is derived for the weak decay of a polarized muon at rest, $\mu^- \rightarrow \nu_{\mu} (e^- \bar{\nu}_e)$, with Lorentz invariant density matrix entries and unitarity upheld at tree level. The electron's von Neumann entanglement entropy distributions are calculated with respect to both the electron's emission angle and energy. The angular entropy distribution favors an electron emitted backwards with respect to the muon's polarization given a minimum volume regularization. The kinematic entropy distribution is maximal at half the muon's rest mass energy. These results are similar to the electron's angular and kinematic decay rate distributions. Both the density matrix and entanglement entropy can be cast either in terms of ratios of areas or volumes.
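The von Neumann entropies discussed above follow from the eigenvalues of the (regularized) density matrix via $S = -\mathrm{Tr}\,\rho \ln \rho$. A generic numerical sketch of that computation, not specific to the muon-decay density matrix derived in the paper:

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho ln rho), computed from the eigenvalues of a
    # Hermitian, trace-one density matrix; zero eigenvalues contribute 0.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

pure = np.array([[1.0, 0.0],
                 [0.0, 0.0]])             # pure state: S = 0
mixed = np.eye(2) / 2.0                   # maximally mixed qubit: S = ln 2
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))
```

The pure state carries no entanglement entropy, while the maximally mixed qubit saturates at $\ln 2$; the paper's angular and kinematic distributions interpolate between such limits as functions of emission angle and energy.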
In this work we explore the potential for Neutron Stars (NSs) at the Galactic center and Population~III stars to constrain Asymmetric Dark Matter (ADM). We demonstrate that for NSs in an environment of sufficiently high DM density ($\rho_x \gtrsim 10^{9}\,\mathrm{GeV/cm^3}$), the effects of both multiscatter capture and DM evaporation cannot be neglected. If a Bose-Einstein Condensate (BEC) forms from ADM, then its low temperature and densely cored profile render evaporation from the BEC negligible, strengthening detectability of low-mass DM. Because of this, we find that the most easily observable Population III stars could be highly effective at constraining high-$\sigma$, low-$m_x$ DM, maintaining efficacy below $m_x = 10^{-15}\,\mathrm{GeV}$ thanks to their far lower value of $m_x$ at which capture saturates to the geometric limit. Finally, we derive closed-form approximations for the evaporation rate of DM from arbitrary polytropic objects.
Maxwell's demon (MD) has proven an instructive vehicle by which to explore the relationship between information theory and thermodynamics, fueling the possibility of information-driven machines. A long-standing debate has been the concern of entropy violation, now resolved by the introduction of a quantum MD, but this theoretical suggestion has proven experimentally challenging. Here, we use classical vectorially structured light that is non-separable in spin and orbital angular momentum to emulate a quantum MD experiment. Our classically entangled light fields have all the salient properties necessary of their quantum counterparts but without the experimental complexity of controlling quantum entangled states. We use our experiment to show that the demon's entropy increases during the process while the system's entropy decreases, so that the total entropy is conserved through an exchange of information, confirming the theoretical prediction. We show that our MD is able to extract useful work from the system in the form of orbital angular momentum, opening a path to information-driven optical spanners for the mechanical rotation of objects with light. Our synthetic dimensions of angular momentum can easily be extrapolated to other degrees of freedom, for scalable and robust implementations of MDs in both the classical and quantum realms, illuminating the role of a structured-light MD and its capability to control and measure information.
Magnetic fields in pre-main-sequence (PMS) stars regulate angular momentum evolution, drive magnetic activity, and modify stellar structure, yet their surface distributions remain poorly constrained. Traditional single-component Zeeman broadening analyses typically yield mean field strengths of 2-4 kG, sometimes exceeding the photospheric equipartition limit, and assume complete magnetic coverage. These assumptions conflict with evidence that strong fields are concentrated in cool starspots. Here we present the first systematic separation of photospheric and starspot magnetic field strengths in PMS stars, using high-resolution (R=45,000) H- and K-band spectra from the Raw and Reduced IGRINS Archive. By modeling temperature and magnetic field strength simultaneously for a vetted sample of 33 Class II-III young stellar objects, we find median photospheric field strengths of 1.2 kG and median spot field strengths more than twice as strong, at 2.56 kG, resolving the apparent super-equipartition tension and removing the need for a unity magnetic filling factor. Our results show that PMS surfaces are permeated by concentrated, kG-strength spot fields covering 27-83% of the visible hemisphere. This two-component framework offers a physically motivated means to reconcile spectroscopic and imaging-based magnetic diagnostics and enables large-scale magnetic population studies across young clusters and star-forming regions.
We report on a theoretical study showing that the leak conductance density, $G_L$, in the squid giant axon appears to be optimal for the action potential firing frequency. More precisely, the standard assumption that the leak current is composed of chloride ions leads to the result that the experimental value for $G_L$ is very close to the optimal value in the Hodgkin-Huxley model which minimizes the absolute refractory period of the action potential, thereby maximizing the maximum firing frequency under stimulation by sharp, brief input current spikes to one end of the axon. The measured value of $G_L$ also appears to be close to optimal for the frequency of repetitive firing caused by a constant current input to one end of the axon, especially when temperature variations are taken into account. If, by contrast, the leak current is assumed to be composed of separate voltage-independent sodium and potassium currents, then these optimizations are not observed.
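The kind of optimization described above can be probed with the textbook space-clamped Hodgkin-Huxley equations. Note the simplifications: the paper studies propagation along the axon (a cable equation), whereas the sketch below is a single-compartment model, and the parameters are the standard squid-axon set rather than anything fitted in the study. Varying `g_L` in such a sketch changes the firing behavior under constant current drive.

```python
import numpy as np

def hh_spike_count(g_L=0.3, I_ext=10.0, T=100.0, dt=0.01):
    # Forward-Euler integration of the space-clamped Hodgkin-Huxley model
    # (standard squid-axon parameters, units mV / ms / mS/cm^2 / uA/cm^2).
    # Returns the number of spikes (upward crossings of 0 mV) in T ms.
    g_Na, g_K = 120.0, 36.0
    E_Na, E_K, E_L, C = 50.0, -77.0, -54.4, 1.0
    V, m, h, n = -65.0, 0.053, 0.596, 0.318    # approximate resting state
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        b_m = 4.0 * np.exp(-(V + 65) / 18)
        a_h = 0.07 * np.exp(-(V + 65) / 20)
        b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
        a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        b_n = 0.125 * np.exp(-(V + 65) / 80)
        I_ion = (g_Na * m**3 * h * (V - E_Na) + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        if V > 0 and not above:
            spikes, above = spikes + 1, True
        elif V < 0:
            above = False
    return spikes

print(hh_spike_count())  # repetitive firing at the standard leak conductance
```

Sweeping `g_L` around 0.3 mS/cm² and recording the firing rate is a rough single-compartment analogue of the frequency optimization the paper performs on the full axon.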
Spectroscopic analysis of four high-redshift objects observed by the James Webb Space Telescope indicates they are candidates for Supermassive Dark Stars, with their spectra matching theoretical models and JADES-GS-z14-0 showing a tentative He II absorption feature. The research, led by Katherine Freese, provides observational evidence for these exotic objects, which are powered by dark matter annihilation.
One powerful technique to solve NP-hard optimization problems in practice is branch-and-reduce search---which is branch-and-bound that intermixes branching with reductions to decrease the input size. While this technique is known to be very effective in practice for unweighted problems, very little is known for weighted problems, in part due to a lack of known effective reductions. In this work, we develop a full suite of new reductions for the maximum weight independent set problem and provide extensive experiments to show their effectiveness in practice on real-world graphs of up to millions of vertices and edges. Our experiments indicate that our approach is able to outperform existing state-of-the-art algorithms, solving many instances that were previously infeasible. In particular, we show that branch-and-reduce is able to solve a large number of instances up to two orders of magnitude faster than existing (inexact) local search algorithms---and is able to solve the majority of instances within 15 minutes. For those instances remaining infeasible, we show that combining kernelization with local search produces higher-quality solutions than local search alone.
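A representative example of the kind of reduction used in branch-and-reduce for maximum weight independent set is neighborhood removal: if a vertex outweighs its entire neighborhood, some maximum weight independent set contains it, so it can be taken and its closed neighborhood deleted. The sketch below illustrates this classical rule only; it is not one of the paper's specific new reductions.

```python
def neighborhood_removal(adj, w):
    # Classical MWIS reduction: if w[v] >= sum of its neighbors' weights,
    # some maximum weight independent set contains v, so take v and
    # delete its closed neighborhood N[v] from the graph.
    # adj: dict vertex -> set of neighbors; w: dict vertex -> weight.
    chosen, changed = [], True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and w[v] >= sum(w[u] for u in adj[v]):
                chosen.append(v)
                for u in list(adj[v]) + [v]:      # remove N[v] entirely
                    for x in adj.pop(u, set()):
                        adj[x].discard(u)
                changed = True
                break
    return chosen, adj

# Path a-b-c with weights 3, 1, 3: both endpoints dominate their neighborhoods,
# so the reduction alone solves the instance (optimal weight 6).
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
w = {'a': 3, 'b': 1, 'c': 3}
chosen, rest = neighborhood_removal(adj, w)
print(sorted(chosen), rest)  # ['a', 'c'] and an empty residual graph
```

In branch-and-reduce, rules like this are applied exhaustively before and after each branching step, shrinking the kernel that the exponential search actually has to explore.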
The recent discovery of examples of intermediate-mass helium stars has offered new insights into interacting binaries. These observations will allow significant improvements in our understanding of helium stars. However, in the creation of these stars their companions may accrete a significant amount of helium-rich stellar material. This creates stars with unusual composition profiles -- stars with helium-rich cores, hydrogen-rich lower envelopes and helium-rich outer envelopes. Thus the mean molecular weight reaches a minimum in the middle of the star rather than continuously decreasing outwards in mass. To demonstrate this structure we present Cambridge STARS model calculations of an example interacting binary system in which the helium-rich material is transferred, and compare it to one where the composition of the accreted mass is fixed to the companion's surface composition. We show that the helium-rich material leads to the accretor being 0.2 dex hotter and 0.15 dex more luminous than models where the composition is not helium rich. We use a simple BPASS v2.2 population model to estimate that helium-rich mass transfer occurs in 23 per cent of massive binaries that undergo mass transfer, suggesting that this is a common process. This binary process has implications for the discrepancy between spectroscopic and gravitational masses of stars, the production of ionizing photons and possibly the modelling of high-redshift galaxies.
Regularized quantum information metrics are calculated for the scattering process $e^- e^+ \rightarrow \gamma, Z \rightarrow \mu^- \mu^+$ that has a witness photon entangled with the initial electron-positron state. Unitarity implies the correct regularization of divergences that appear in both the final density matrix and von Neumann entanglement entropies. The entropies are found to quantify uncertainty or randomness. The variation of information, entanglement entropy, and correlation between the muon's and witness photon's helicities are found to convey equivalent information. The magnitude of the muon's expected helicity rises (falls) as the helicity entropy falls (rises). Area, or the scattering cross section, is a source of entropy for the muon's helicity entropy and momentum entropy. The muon's differential angular entropy distribution is similar to the differential angular cross section distribution, capturing the forward-backward asymmetry at high center of mass energies.
It is well established that supermassive black hole (SMBH) feedback is crucial for regulating the evolution of massive, if not all, galaxies. However, modelling the interplay between SMBHs and their host galaxies is challenging due to the vast dynamic range. Previous simulations have utilized simple subgrid models for SMBH accretion, while recent advancements track the properties of the unresolved accretion disc, usually based on the thin $\alpha$-disc model. However, this neglects accretion in the radiatively inefficient regime, expected to occur through a thick disc for a significant portion of an SMBH's lifetime. To address this, we present a novel 'unified' accretion disc model for SMBHs, harnessing results from the analytical advection-dominated inflow-outflow solution (ADIOS) model and state-of-the-art GR(R)MHD simulations. Going from low to high Eddington ratios, our model transitions from an ADIOS flow to a thin $\alpha$-disc via a truncated disc, self-consistently incorporating SMBH spin evolution due to Lense-Thirring precession. Utilizing the moving mesh code AREPO, we perform simulations of single and binary SMBHs within gaseous discs to validate our model and assess its impact. The disc state significantly affects observable luminosities, and we predict markedly different electromagnetic counterparts in SMBH binaries. Crucially, the assumed disc model shapes SMBH spin magnitudes and orientations, parameters that gravitational wave observatories like LISA and IPTA are poised to constrain. Our simulations emphasize the importance of accurately modelling SMBH accretion discs and spin evolution, as they modulate the available accretion power, profoundly shaping the interaction between SMBHs and their host galaxies.
This paper aims to initiate new conversations about the use of physiological indicators when assessing the welfare of dogs. There are significant concerns about construct validity - whether the measures used accurately reflect welfare. The goal is to provide recommendations for future inquiry and encourage debate. We acknowledge that the scientific understanding of animal welfare has evolved and bring attention to the shortcomings of commonly used biomarkers like cortisol. These indicators are frequently used in isolation and with limited salient dog descriptors, and so fail to reflect the canine experience adequately. Using a systems approach, we explore various physiological systems and alternative indicators, such as heart rate variability and oxidative stress, to address this limitation. It is essential to consider factors like age, body weight, breed, and sex when interpreting these biomarkers correctly, and researchers should report on these in their studies. This discussion identifies possible indicators for both positive and negative experiences. In conclusion, we advocate for a practical, evidence-based approach to assessing indicators of canine welfare, including non-invasive collection methods. We acknowledge the complexity of evaluating experiential responses in dogs across different situations and the need for continued work to improve practices and refine terminology. This will enhance our ability to accurately understand welfare and improve the wellbeing of dogs, serving to inform standards of animal welfare assessment. We hope this will promote more fundamental research in canine physiology to improve construct validity, leading to better practices, ultimately improving the lives of dogs.
Structural priming is a widely used psycholinguistic paradigm to study human sentence representations. In this work we introduce SPAWN, a cognitively motivated parser that can generate quantitative priming predictions from contemporary theories in syntax which assume a lexicalized grammar. By generating and testing priming predictions from competing theoretical accounts, we can infer which assumptions from syntactic theory are useful for characterizing the representations humans build when processing sentences. As a case study, we use SPAWN to generate priming predictions from two theories (Whiz-Deletion and Participial-Phase) which make different assumptions about the structure of English relative clauses. By modulating the reanalysis mechanism that the parser uses and the strength of the parser's prior knowledge, we generated nine sets of predictions from each of the two theories. Then, we tested these predictions using a novel web-based comprehension-to-production priming paradigm. We found that while some of the predictions from the Participial-Phase theory aligned with human behavior, none of the predictions from the Whiz-Deletion theory did, thus suggesting that the Participial-Phase theory might better characterize human relative clause representations.