Center for Theoretical Physics of the Universe, IBS
The potential social harms that large language models pose, such as generating offensive content and reinforcing biases, are steeply rising. Existing works focus on coping with this concern while interacting with ill-intentioned users, such as those who explicitly make hate speech or elicit harmful responses. However, discussions on sensitive issues can become toxic even if the users are well-intentioned. For safer models in such scenarios, we present the Sensitive Questions and Acceptable Response (SQuARe) dataset, a large-scale Korean dataset of 49k sensitive questions with 42k acceptable and 46k non-acceptable responses. The dataset was constructed leveraging HyperCLOVA in a human-in-the-loop manner based on real news headlines. Experiments show that acceptable response generation significantly improves for HyperCLOVA and GPT-3, demonstrating the efficacy of this dataset.
The research investigates the couplings of matter currents to scalarons induced by torsion in Einstein-Cartan gravity, providing general formulas for these interactions. It introduces a novel gauge-invariant coupling for gauge-dependent currents, which generates "$\phi F\tilde{F}$"-type interactions, but highlights that the scalaron's inherent couplings to dimensionful parameters typically prevent it from serving as a QCD axion.
The dS swampland conjecture $|\nabla V|/V \geq c$, where $c$ is presumed to be a positive constant of order unity, implies that the dark energy density of our Universe cannot be a cosmological constant, but must be mostly the potential energy of an evolving quintessence scalar field. As the dark energy includes the effects of the electroweak symmetry breaking and the QCD chiral symmetry breaking, if the dS swampland conjecture is applicable to the low energy quintessence potential, it can be applied to the Higgs and pion potential as well. On the other hand, the Higgs and pion potential has well-known dS extrema, and applying the dS swampland conjecture to those dS extrema may provide stringent constraints on the viable quintessence, as well as on the conjecture itself. We examine this issue and find that the pion dS extremum at $\cos(\pi_0/f_\pi)=-1$ implies $c\lesssim {\cal O}(10^{-2}-10^{-5})$ for an \emph{arbitrary} form of the quintessence potential and couplings, where the weaker bound ($10^{-2}$) is available \emph{only} for a specific type of quintessence whose couplings respect the equivalence principle, while the stronger bound ($10^{-5}$) applies for generic quintessence violating the equivalence principle. We also discuss the possibility of relaxing this bound with an additional scalar field, e.g. a light modulus which has a runaway behavior at the pion dS extremum. We argue that such a possibility is severely constrained by a variety of observational constraints which do not leave room to significantly relax the bound. We make a similar analysis for the Higgs dS extremum at $H=0$, which results in a weaker bound on $c$.
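To see why the pion extremum is so constraining, one can write the standard leading-order chiral potential for the neutral pion (an illustrative form; the abstract does not spell out the precise potential or quintessence couplings used in the paper):

```latex
V(\pi_0) \simeq m_\pi^2 f_\pi^2 \left[1 - \cos\!\left(\frac{\pi_0}{f_\pi}\right)\right] + V_{\rm DE},
\qquad
\left.\frac{\partial V}{\partial \pi_0}\right|_{\cos(\pi_0/f_\pi) = -1} = 0,
\qquad
V_{\rm ext} \simeq 2\, m_\pi^2 f_\pi^2 > 0 .
```

At this extremum the pion-direction gradient vanishes while the potential sits at a QCD-scale positive value, so $|\nabla V|/V$ is controlled by the slope along the quintessence direction divided by $m_\pi^2 f_\pi^2$, which is what pushes $c$ far below order unity.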
Whether to grant rights to artificial intelligence (AI) and robots has been a sensitive topic since the European Parliament proposed that advanced robots could be granted "electronic personalities." Numerous scholars who favor or disfavor its feasibility have joined the debate. This paper presents an experiment (N=1270) that 1) collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future and 2) examines whether debunking common misconceptions about the proposal modifies one's stance toward the issue. The results indicate that even though online users mainly disfavor AI and robot rights, they are supportive of protecting electronic agents from cruelty (i.e., they favor the right against cruel treatment). Furthermore, people's perceptions became more positive when given information about rights-bearing non-human entities or myth-refuting statements. The style used to introduce AI and robot rights significantly affected how the participants perceived the proposal, similar to the way metaphors function in creating laws. For robustness, we repeated the experiment on a more representative sample of U.S. residents (N=164) and found that the perceptions gathered from online users and those of the general population are similar.
Composite asymmetric dark matter scenarios naturally explain why the dark matter mass density is comparable with the visible matter mass density. Such scenarios generically require some entropy transfer mechanism below the composite scale; otherwise, their late-time cosmology is incompatible with observations. A tiny kinetic mixing between a dark photon and the visible photon is a promising example of the low-energy portal. In this paper, we demonstrate that grand unifications in the dark and the visible sectors explain the origin of the tiny kinetic mixing. We particularly consider an ultraviolet completion of a simple composite asymmetric dark matter model, where asymmetric dark matter carries a $B-L$ charge. In this setup, the longevity of asymmetric dark matter is explained by the $B-L$ symmetry, while the dark matter asymmetry originates from the $B-L$ asymmetry generated by thermal leptogenesis. In our minimal setup, the Standard Model sector and the dark sector are unified into $SU(5)_\mathrm{GUT} \times SU(4)_\mathrm{DGUT}$ gauge theories, respectively. This model generates the required $B-L$ portal operators while suppressing unwanted higher-dimensional operators that could wash out the generated $B-L$ asymmetry.
We investigate Euclidean wormholes in Gauss-Bonnet-dilaton gravity to explain the creation of the universe from nothing. We consider two types of dilaton couplings (i.e., the string-inspired model and the Gaussian model) and obtain qualitatively similar results. There can exist Euclidean wormholes that explain the possible origin of our universe, in which the dilaton field is located over the barrier of the dilaton potential. This solution can exist even if the dilaton potential does not satisfy slow-roll conditions. In addition, its probability is higher than that of the Hawking-Moss instanton with the same final condition. Therefore, Euclidean wormholes in Gauss-Bonnet-dilaton gravity constitute a possible and probable scenario that explains the origin of our universe.
About 80\% of the mass of the present Universe is made up of the unknown (dark matter), while the rest is made up of ordinary matter. It is a very intriguing question why the {\it mass} densities of dark matter and ordinary matter (mainly baryons) are close to each other. This may hint at the identity of dark matter and, furthermore, at the structure of a dark sector. A mirrored world provides a natural explanation to this puzzle. On the other hand, if the mirror-symmetry breaking scale is low, it tends to cause cosmological problems. In this letter, we propose a mirrored unification framework which breaks mirror symmetry at the grand unified scale but still addresses the puzzle. The dark matter mass is strongly tied to the dynamical scale of QCD, which explains the closeness of the dark matter and baryon masses. Intermediate-energy portal interactions share the generated asymmetry between the visible and dark sectors. Furthermore, our framework is safe from cosmological issues, as it provides low-energy portal interactions to release the superfluous entropy of the dark sector into the visible sector.
We study a number of well-motivated theories of modified gravity with the common overarching theme that they predict the existence of compact objects such as black holes and wormholes endowed with scalar hair. We compute the shadow radius of the resulting compact objects and demonstrate that black hole images such as that of M87$^*$ or the more recent Sgr A$^*$ by the Event Horizon Telescope (EHT) collaboration may provide a powerful way to constrain deviations of the metric functions from what is expected from general relativity (GR) solutions. We focus our attention on Einstein-scalar-Gauss-Bonnet (EsGB) theory with three well-motivated couplings, including the dilatonic and $Z_2$-symmetric cases. We then analyze the shadow radius of black holes in the context of the spontaneous scalarization scenario within EsGB theory with an additional coupling to the Ricci scalar (EsRGB). Finally, we turn our attention to spontaneous scalarization in the Einstein-Maxwell-Scalar (EMS) theory and demonstrate the impact of the parameters on the black hole shadow. Our results show that black hole imaging is an important tool for constraining black holes with scalar hair, and that for some part of the parameter space, black hole solutions with scalar hair may be marginally favoured compared to solutions of GR.
We study hidden sector and long-lived particles at past (CHARM and NuCal), present (NA62 and SeaQuest/DarkQuest), and future (LongQuest) experiments at the high-energy frontier of the intensity frontier. We focus on exploring the minimal vector portal and variable-lifetime particles (VLP). VLP models have mostly been devised to explain experimental anomalies while avoiding existing constraints, and we demonstrate that proton fixed-target experiments provide one of the most powerful probes for the sub-GeV to few-GeV mass range of the VLP models, using inelastic dark matter (iDM) as an example. We consider an iDM model with a small mass splitting that yields the observed dark matter (DM) relic abundance, and a scenario with a sizable mass splitting that can also explain the muon $g-2$ anomaly. We set strong limits based on the CHARM and NuCal experiments, which come close to excluding iDM as a full-abundance thermal DM candidate in the MeV to GeV mass range, for the mass arrangements and small mass splittings we consider. We also study future projections based on NA62 and SeaQuest/DarkQuest, and update the constraints on the minimal dark photon parameter space. We find that NuCal sets the only existing constraint in the $\epsilon \sim 10^{-8}-10^{-4}$ regime reaching $\sim 800$ MeV in dark photon mass, owing to the resonant enhancement of proton bremsstrahlung production. Finally, we propose LongQuest, a thorough three-stage retooling of the SeaQuest experiment with short ($\lesssim 5$ m), medium ($\sim 5$ m), and long-baseline ($\gtrsim 35$ m) tracking stations/detectors, as a multi-purpose machine to explore dark sector particles with a wide range of couplings to the Standard Model sector.
Unsupervised image clustering methods often introduce alternative objectives to indirectly train the model and are subject to faulty predictions and overconfident results. To overcome these challenges, the current research proposes an innovative model, RUC, that is inspired by robust learning. RUC's novelty lies in utilizing pseudo-labels of existing image clustering models as a noisy dataset that may include misclassified samples. Its retraining process can revise misaligned knowledge and alleviate the overconfidence problem in predictions. The model's flexible structure makes it possible to use RUC as an add-on module to other clustering methods, helping them achieve better performance on multiple datasets. Extensive experiments show that the proposed model can adjust the model confidence with better calibration and gain additional robustness against adversarial noise.
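The "pseudo-labels as a noisy dataset" idea can be sketched as a confidence-based sample-selection step: treat the base clusterer's predictions as noisy labels and keep only the confident ones for retraining. This is a minimal illustration only; RUC's actual selection combines several criteria beyond a single confidence threshold.

```python
import numpy as np

def select_clean_samples(probs, pseudo_labels, threshold=0.9):
    """Return a boolean mask marking 'clean' samples: those whose pseudo-label
    confidence (the predicted probability of the assigned cluster) exceeds the
    threshold. The remainder is treated as potentially mislabeled noise.
    Illustrative sketch of the noisy-label filtering that RUC-style
    retraining builds on, not the paper's exact procedure."""
    conf = probs[np.arange(len(pseudo_labels)), pseudo_labels]
    return conf >= threshold

# Toy example: 4 samples, 3 clusters; two confident, two ambiguous.
probs = np.array([
    [0.95, 0.03, 0.02],
    [0.40, 0.35, 0.25],
    [0.05, 0.92, 0.03],
    [0.34, 0.33, 0.33],
])
pseudo = probs.argmax(axis=1)            # pseudo-labels from the base clusterer
mask = select_clean_samples(probs, pseudo, threshold=0.9)
print(mask)                              # only samples 0 and 2 pass the filter
```

A retraining loop would then fit a classifier on the masked subset and use it to relabel (or down-weight) the noisy remainder.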
The axion is expected to solve the strong CP problem of quantum chromodynamics and is one of the leading candidates for dark matter. CAPP in South Korea runs several axion search experiments based on cavity haloscopes in the frequency range of 1-6 GHz. The main effort focuses on operating the experiments with the highest possible sensitivity. This requires maintaining the haloscopes at the lowest physical temperatures, in the mK range, and using low noise components to amplify the weak axion signal. We report the development and operation of low noise amplifiers for 5 haloscope experiments targeting different frequency ranges. The amplifiers show noise temperatures approaching the quantum limit.
Information presented in Wikipedia articles must be attributable to reliable published sources in the form of references. This study examines over 5 million Wikipedia articles to assess the reliability of references in multiple language editions. We quantify the cross-lingual patterns of the perennial sources list, a collection of reliability labels for web domains identified and collaboratively agreed upon by Wikipedia editors. We discover that some sources (or web domains) deemed untrustworthy in one language (e.g., English) continue to appear in articles in other languages. This trend is especially evident for sources tailored to smaller communities. Furthermore, non-authoritative sources found in the English version of a page tend to persist in other language versions of that page. We finally present a case study on the Chinese, Russian, and Swedish Wikipedias to demonstrate a discrepancy in reference reliability across cultures. Our findings highlight future challenges in coordinating global knowledge on source reliability.
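The cross-lingual persistence measurement can be sketched as a simple count: take the domains an article cites in each language edition, flag those labeled unreliable in an anchor edition's perennial-sources list, and count how many other editions still cite them. The data below are hypothetical toy values, and the label scheme ("deprecated"/"blacklisted") is a simplification of the actual perennial-sources categories.

```python
from collections import Counter

# Hypothetical toy data: domains cited by one article in three language
# editions, plus (English) perennial-sources reliability labels.
citations = {
    "en": {"nytimes.com", "dailymail.co.uk"},
    "ru": {"dailymail.co.uk", "lenta.ru"},
    "sv": {"dailymail.co.uk", "svt.se"},
}
labels = {"nytimes.com": "reliable", "dailymail.co.uk": "deprecated"}

def persisting_unreliable(citations, labels, anchor="en"):
    """For each domain deemed unreliable in the anchor edition, count how many
    other language editions of the same article still cite it."""
    bad = {d for d in citations[anchor]
           if labels.get(d) in {"deprecated", "blacklisted"}}
    return Counter(d for lang, doms in citations.items()
                   if lang != anchor for d in doms if d in bad)

print(persisting_unreliable(citations, labels))  # Counter({'dailymail.co.uk': 2})
```

Aggregating such counters over millions of articles yields the cross-lingual persistence statistics the study reports.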
We study electroweak baryogenesis driven by the top quark in a general two Higgs doublet model with flavor-changing Yukawa couplings, keeping the Higgs potential $CP$ invariant. With Higgs sector couplings and the additional top Yukawa coupling $\rho_{tt}$ all of $\mathcal{O}(1)$, one naturally has sizable $CP$ violation that fuels the cosmic baryon asymmetry. Even if $\rho_{tt}$ vanishes, the flavor-changing coupling $\rho_{tc}$ can still lead to successful baryogenesis. Phenomenological consequences such as $t\to ch$, $\tau \to \mu\gamma$, the electron electric dipole moment, $h\to\gamma\gamma$, and the $hhh$ coupling are discussed.
LUX-ZEPLIN (LZ) is a second-generation direct dark matter experiment with spin-independent WIMP-nucleon scattering sensitivity above $1.4 \times 10^{-48}$ cm$^{2}$ for a WIMP mass of 40 GeV/c$^{2}$ and a 1000 d exposure. LZ achieves this sensitivity through a combination of a large 5.6 t fiducial volume, active inner and outer veto systems, and radio-pure construction using materials with inherently low radioactivity content. The LZ collaboration performed an extensive radioassay campaign over a period of six years to inform material selection for construction and provide an input to the experimental background model against which any possible signal excess may be evaluated. The campaign and its results are described in this paper. We present assays of dust and radon daughters depositing on the surface of components as well as cleanliness controls necessary to maintain background expectations through detector construction and assembly. Finally, examples from the campaign to highlight fixed contaminant radioassays for the LZ photomultiplier tubes, quality control and quality assurance procedures through fabrication, radon emanation measurements of major sub-systems, and bespoke detector systems to assay scintillator are presented.
A new experiment is described to detect a permanent electric dipole moment of the proton with a sensitivity of $10^{-29}\,e\cdot$cm by using polarized "magic" momentum $0.7$ GeV/c protons in an all-electric storage ring. Systematic errors relevant to the experiment are discussed and techniques to address them are presented. The measurement is sensitive to new physics beyond the Standard Model at the scale of 3000 TeV.
Anomaly detection aims at identifying deviant instances from the normal data distribution. Many advances have been made in the field, including the innovative use of unsupervised contrastive learning. However, existing methods generally assume clean training data and are limited when the data contain unknown anomalies. This paper presents Elsa, a novel semi-supervised anomaly detection approach that unifies the concept of energy-based models with unsupervised contrastive learning. Elsa instills robustness against any data contamination by a carefully designed fine-tuning step based on the new energy function that forces the normal data to be divided into classes of prototypes. Experiments on multiple contamination scenarios show the proposed model achieves SOTA performance. Extensive analyses also verify the contribution of each component in the proposed model. Beyond the experiments, we also offer a theoretical interpretation of why contrastive learning alone cannot detect anomalies under data contamination.
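The energy-based view over class prototypes can be sketched with a simple score: the negative log-sum-exp of an embedding's cosine similarities to the prototypes, so points near some prototype get low energy and points far from all prototypes get high energy. This is a minimal sketch of the general idea only; Elsa's actual energy function and fine-tuning objective are more involved than this toy scorer.

```python
import numpy as np

def energy_score(z, prototypes, tau=0.1):
    """Energy of embedding z with respect to class prototypes: negative
    log-sum-exp (temperature tau) of cosine similarities. Low energy means
    z sits close to at least one prototype ('normal'); high energy means
    it is far from all of them (anomaly candidate)."""
    z = z / np.linalg.norm(z)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = protos @ z                      # cosine similarity to each prototype
    return -tau * np.log(np.sum(np.exp(sims / tau)))

protos = np.array([[1.0, 0.0], [0.0, 1.0]])   # two unit prototypes
normal = np.array([0.9, 0.1])                 # close to the first prototype
anomaly = np.array([-1.0, -1.0])              # far from both
print(energy_score(normal, protos) < energy_score(anomaly, protos))  # True
```

Thresholding this score then separates in-distribution samples from anomalies; training would shape the embedding so that contaminating anomalies cannot hide near any prototype.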
1
We characterise the class of exponentiable $\infty$-toposes: $\mathcal{X}$ is exponentiable if and only if $\mathcal{S}\mathrm{h}(\mathcal{X})$ is a continuous $\infty$-category. The heart of the proof is the description of the $\infty$-category of $\mathcal{C}$-valued sheaves on $\mathcal{X}$ as an $\infty$-category of functors that satisfy finite limit conditions as well as filtered colimit conditions (instead of limit conditions purely); we call such functors $\omega$-continuous sheaves. As an application, we show that when $\mathcal{X}$ is exponentiable, its $\infty$-category of stable sheaves $\mathcal{S}\mathrm{h}(\mathcal{X}, \mathrm{Sp})$ is a dualisable object in the $\infty$-category of presentable stable $\infty$-categories.
Utilizing gravitational-wave (GW) lensing opens a new way to understand the small-scale structure of the universe. We show that, in spite of its coarse angular resolution and short duration of observation, LIGO can detect the GW lensing induced by compact structures, in particular by compact dark matter (DM) or primordial black holes of $10-10^5\,M_\odot$, which remain interesting DM candidates. The lensing is detected through GW frequency chirping, which creates a natural and rapid change of lensing patterns: \emph{frequency-dependent amplification and modulation} of GW waveforms. As the highest-frequency GW detector, LIGO is a unique GW lab to probe such light compact DM. With the design sensitivity of Advanced LIGO, one-year observation by three detectors can optimistically constrain the compact DM density fraction $f_{\rm DM}$ to the level of a few percent.
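The frequency dependence can be illustrated with the standard point-mass-lens amplification factor in the geometric-optics limit (a textbook expression used here for illustration; the paper's full wave-optics treatment is not reproduced):

```latex
F(f) \;=\; \sqrt{|\mu_+|} \;-\; i\,\sqrt{|\mu_-|}\, e^{2\pi i f\, \Delta t_d},
\qquad
|\mu_\pm| \;=\; \frac{1}{2} \pm \frac{y^2 + 2}{2\, y \sqrt{y^2 + 4}},
```

where $y$ is the dimensionless impact parameter, $\mu_\pm$ are the magnifications of the two lensed images, and $\Delta t_d$ is their time delay. As the chirp sweeps $f$ upward, the phase $2\pi f \Delta t_d$ winds, producing the frequency-dependent amplification and modulation of the waveform that the search exploits.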
The purposes of the present paper are two-fold. Firstly, we further develop the interplay between contact Hamiltonian geometry and the geometric analysis of Hamiltonian-perturbed contact instantons with the Legendrian boundary condition, which was initiated by the present author in \cite{oh:contacton-Legendrian-bdy}. We introduce the class of \emph{tame contact manifolds} $(M,\lambda)$, which includes compact ones but is not restricted to them, and establish uniform a priori $C^0$-estimates for the contact instantons. Then we study the problem of estimating the Reeb-untangling energy of one Legendrian submanifold from another, and formulate a specially designed parameterized moduli space for the study of the problem. We establish the Gromov-Floer-Hofer type convergence result for contact instantons of finite energy and construct a compactification of the moduli space, first by defining the correct energy and then by proving uniform a priori energy bounds in terms of the oscillation of the relevant contact Hamiltonian. Secondly, as an application of this geometry and analysis of contact instantons, we prove that the \emph{self Reeb-untangling energy} of a compact Legendrian submanifold $R$ in any tame contact manifold $(M,\lambda)$ is greater than the period gap $T_\lambda(M,R)$ of the Reeb chords of $R$. This is an optimal result in general. In a sequel \cite{oh:shelukhin-conjecture}, we also prove Shelukhin's conjecture by specializing to the Legendrianization of contactomorphisms of a closed coorientable contact manifold $(Q,\xi)$ and utilizing its $\mathbb{Z}_2$-symmetry as the fixed point set of an anti-contact involution to overcome the \emph{nontameness} of the contact product $M = Q \times Q \times \mathbb{R}$.