University of New South Wales Canberra
An adaptive non-Gaussian measurement scheme is proposed for implementing a quantum cubic gate on a traveling light beam, utilizing one squeezed vacuum state and a non-Gaussian ancilla. The work demonstrates that classical adaptive control generates the gate's nonlinearity while the non-Gaussian ancilla suppresses noise, and identifies the optimal photon-number superposition states that minimize this noise.
Among first-order optimization methods, Polyak's heavy-ball method has long been known to guarantee an asymptotic convergence rate matching Nesterov's lower bound for functions defined on an infinite-dimensional space. In this paper, we use results on the robust gain margin of linear uncertain feedback control systems to show that the heavy-ball method is provably worst-case asymptotically optimal when applied to quadratic functions on a finite-dimensional space.
Quantum computation promises applications that are thought to be impossible with classical computation. To realize practical quantum computation, three properties are necessary: universality, scalability, and fault tolerance. Universality is the ability to execute arbitrary multi-input quantum algorithms. Scalability means that computational resources such as logical qubits can be increased without requiring an exponential increase in physical resources. Lastly, fault tolerance is the ability to perform quantum algorithms in the presence of imperfections and noise. A promising approach to scalability was demonstrated with the generation of a one-million-mode 1-dimensional cluster state, a resource for one-input computation in measurement-based quantum computation (MBQC). The demonstration was based on a time-domain multiplexing (TDM) approach using continuous-variable (CV) optical flying qumodes (the CV analogue of qubits). Demonstrating universality, however, has been a challenging task for any physical system and approach. Here, we present, for the first time in any physical system, the experimental realization of a scalable resource state for universal MBQC: a 2-dimensional cluster state. We also prove its universality and give the methodology for utilizing this state in MBQC. Our state is based on the TDM approach, which allows unlimited resource generation regardless of the coherence time of the system. As a demonstration of our method, we generate and verify a 2-dimensional cluster state capable of about 5,000 operation steps on 5 inputs.
A rainfall-runoff model predicts surface runoff using either a physically-based approach or a systems-based approach. Takagi-Sugeno (TS) fuzzy models are systems-based approaches and have been a popular modeling choice for hydrologists in recent decades due to several advantages and improved prediction accuracy over other existing models. In this paper, we propose a new rainfall-runoff model developed using a Gustafson-Kessel (GK) clustering-based TS fuzzy model. We present comparative performance measures of the GK algorithm against two other clustering algorithms: (i) Fuzzy C-Means (FCM) and (ii) Subtractive Clustering (SC). Our proposed TS fuzzy model predicts surface runoff using (i) observed rainfall in a drainage basin and (ii) previously observed precipitation flow at the basin outlet. The proposed model is validated using rainfall-runoff data collected from sensors installed on the campus of the Indian Institute of Technology Kharagpur. The optimal number of rules of the proposed model is obtained by different validation indices. A comparative study of four performance criteria, Root Mean Square Error (RMSE), Coefficient of Efficiency (CE), Volumetric Error (VE), and Coefficient of Determination (R), is presented quantitatively for each clustering algorithm.
Privacy directly concerns the user as the data owner (data subject), and hence privacy in systems should be implemented in a manner that centers on the user (user-centered). There are many concepts and guidelines that support the development of privacy and the embedding of privacy into systems. However, none of them approaches privacy in a user-centered manner. Through this research we propose a framework that would enable developers and designers to grasp privacy in a user-centered manner and implement it throughout the software development life cycle.
All-dielectric nanophotonics attracts ever-increasing attention due to the possibility of controlling and configuring light scattering on high-index semiconductor nanoparticles. It opens up opportunities for designing novel types of nanoscale elements and devices, and paves the way to advanced technologies of light energy manipulation. One of the exciting and promising prospects is associated with utilizing the so-called toroidal moment, the result of poloidal current excitation, and anapole states corresponding to the interference of the dipole and toroidal electric moments. Here, we present and investigate in detail, via the direct Cartesian multipole decomposition, higher-order toroidal moments of both types (up to the electric octupole toroidal moment), allowing us to obtain new near- and far-field configurations. Poloidal currents can be associated with vortex-like distributions of the displacement currents inside nanoparticles, revealing the physical meaning of the high-order toroidal moments and the convenience of the Cartesian multipoles as an auxiliary tool for analysis. We demonstrate high-order nonradiating anapole states (with vanishing contribution to the far-field zone) accompanied by the excitation of intense near fields. We believe our results are of high importance both for a fundamental understanding of light scattering by high-index particles and for a variety of nanophotonics applications and light manipulation at the nanoscale.
On the one hand, artificial neural networks (ANNs) are commonly labelled as black-boxes, lacking interpretability; an issue that hinders human understanding of ANNs' behaviors. A need exists to generate a meaningful sequential logic of the ANN for interpreting a production process of a specific output. On the other hand, decision trees exhibit better interpretability and expressive power due to their representation language and the existence of efficient algorithms to transform the trees into rules. However, growing a decision tree based on the available data could produce larger than necessary trees or trees that do not generalise well. In this paper, we introduce two novel multivariate decision tree (MDT) algorithms for rule extraction from ANNs: an Exact-Convertible Decision Tree (EC-DT) and an Extended C-Net algorithm. They both transform a neural network with Rectified Linear Unit activation functions into a representative tree, which can further be used to extract multivariate rules for reasoning. While the EC-DT translates an ANN in a layer-wise manner to represent exactly the decision boundaries implicitly learned by the hidden layers of the network, the Extended C-Net combines the decompositional approach from EC-DT with a C5 tree learning algorithm to form decision rules. The results suggest that while EC-DT is superior in preserving the structure and the fidelity of ANN, Extended C-Net generates the most compact and highly effective trees from ANN. Both proposed MDT algorithms generate rules including combinations of multiple attributes for precise interpretations for decision-making.
The Southern Hemisphere Asteroid Research Project is an active and informal entity comprising the University of New South Wales, the University of Tasmania, the University of Western Australia, and Curtin University, which performs asteroid research in collaboration with federal agencies, including the Commonwealth Scientific and Industrial Research Organisation and the National Aeronautics and Space Administration (JPL). Since 2015, we have used Australian infrastructure to characterize more than 50 near-Earth asteroids through bistatic radar observations. On 29 June 2024, we used four very long baseline interferometry (VLBI) radio telescopes to follow the close approach of 2024 MK to the Earth. In this paper, we describe the detections and the analysis of the VLBI data, and how these observations can help to improve the understanding of the asteroid's composition and the characterization of its orbit.
The popularity of generative text AI tools in answering questions has led to concerns regarding their potential negative impact on students' academic performance and the challenges that educators face in evaluating student learning. To address these concerns, this paper introduces an evolutionary approach that aims to identify the best set of Bloom's taxonomy keywords to generate questions that these tools have low confidence in answering. The effectiveness of this approach is evaluated through a case study that uses questions from a Data Structures and Representation course being taught at the University of New South Wales in Canberra, Australia. The results demonstrate that the optimization algorithm is able to find keywords from different cognitive levels to create questions that ChatGPT has low confidence in answering. This study is a step forward to offer valuable insights for educators seeking to create more effective questions that promote critical thinking among students.
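The evolutionary search over Bloom's taxonomy keyword sets can be sketched as a small genetic algorithm. Everything below is illustrative: the keyword list, the per-keyword confidence scores (a stand-in for measured ChatGPT answer confidence), and the operator choices are assumptions for this sketch, not the paper's actual implementation.

```python
import random

def fitness(subset, confidence):
    """Mean model confidence over the chosen keywords; lower is fitter."""
    if not subset:
        return float("inf")  # an empty keyword set is not a usable question template
    return sum(confidence[k] for k in subset) / len(subset)

def evolve(confidence, pop_size=30, generations=40, seed=0):
    """Evolve keyword subsets toward low model confidence (elitist GA)."""
    rng = random.Random(seed)
    keys = list(confidence)
    pop = [frozenset(k for k in keys if rng.random() < 0.5) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, confidence))
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = set(a | b) if rng.random() < 0.5 else set(a & b)  # union/intersection crossover
            if rng.random() < 0.3:
                child ^= {rng.choice(keys)}       # mutation: flip one keyword in or out
            children.append(frozenset(child))
        pop = survivors + children
    return min(pop, key=lambda s: fitness(s, confidence))

# Hypothetical per-keyword answer-confidence scores (assumed values for illustration only)
confidence = {"define": 0.9, "summarise": 0.8, "apply": 0.5,
              "analyse": 0.3, "evaluate": 0.2, "design": 0.25}
best = evolve(confidence)
```

In this toy fitness landscape the search settles on higher-order cognitive keywords ("evaluate", "design"), mirroring the paper's finding that such keywords yield questions the model answers with low confidence.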
Little publicly available data exists for polarimetric measurements. When designing task-specific polarimetric systems, the statistical properties of the task-specific data become important. Until better polarimetric datasets are available to deduce statistics from, the statistics must be simulated to test instrument performance. Most imaged scenes, both natural and city scenes, have been shown to follow a power-law power spectral density distribution. Furthermore, imaged data appears to follow a power-law power spectral distribution temporally. We are interested in generating image sets which change over time and are simultaneously correlated between different components (spectral or polarimetric). In this brief communication, we present a framework and provide code to generate such data.
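A minimal sketch of such a generator, assuming a power-law amplitude spectrum with random phases and a simple linear mixing step to impose correlation between two components; this is an illustration of the idea, not the authors' released code:

```python
import numpy as np

def power_law_field(n, alpha, rng):
    """Random n-by-n field whose power spectral density falls off as f^-alpha."""
    fx = np.fft.fftfreq(n)
    fy = np.fft.fftfreq(n)
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    f[0, 0] = f[0, 1]                         # avoid division by zero at the DC bin
    amplitude = f ** (-alpha / 2.0)           # amplitude ~ f^(-alpha/2) gives PSD ~ f^-alpha
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    field = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
    return (field - field.mean()) / field.std()  # zero mean, unit variance

def correlated_pair(n, alpha, rho, rng):
    """Two power-law fields with target inter-component correlation rho."""
    a = power_law_field(n, alpha, rng)
    b = power_law_field(n, alpha, rng)
    return a, rho * a + np.sqrt(1.0 - rho ** 2) * b

rng = np.random.default_rng(0)
x, y = correlated_pair(64, alpha=2.0, rho=0.8, rng=rng)
r = np.corrcoef(x.ravel(), y.ravel())[0, 1]   # close to the requested rho
```

The same mixing trick extends to more components (via a Cholesky factor of a desired correlation matrix) and to the temporal axis by drawing the phase screens as a power-law process in time.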
For an exponentially decaying potential, the analytic structure of the s-wave S-matrix can be determined down to the finest detail, including the positions of all its poles and their residues. Beautiful hidden structures can be revealed by its domain coloring. A fundamental property of the S-matrix is that any bound state corresponds to a pole of the S-matrix on the physical sheet of the complex energy plane. For a repulsive exponentially decaying potential, none of the infinite number of poles of the s-wave S-matrix on the physical sheet corresponds to any physical state. On the second sheet of the complex energy plane, the S-matrix has an infinite number of poles corresponding to virtual states and a finite number of poles corresponding to complementary pairs of resonances and anti-resonances. The origin of redundant poles and zeros is confirmed to be related to peculiarities of the analytic continuation of a parameter of two linearly independent analytic functions. The overall contribution of redundant poles to the asymptotic completeness relation, provided that the residue theorem can be applied, is determined to be an oscillating function.
This paper presents a short review on the current state of SN Ia progenitor origin. Type Ia supernova explosions are observed to be widely diverse in peak luminosity, lightcurve width and shape, spectral features, and host stellar population environment. In the last decade alone, theoretical simulations and observational data have come together to seriously challenge the long-standing paradigm that all SNe Ia arise from explosions of Chandrasekhar mass white dwarfs. In this review I highlight some of the major developments (and changing views) of our understanding of the nature of SN Ia progenitor systems. I give a brief overview of binary star configurations and their plausible explosion mechanisms, and infer links between some of the various (observationally-categorized) SN Ia sub-classes and their progenitor origins from a theoretical standpoint.
This paper updates the explicit interval estimate for primes between consecutive powers. It is shown that there is at least one prime between n^155 and (n+1)^155 for all n ≥ 1. This result is obtained in part with a new explicit version of Goldston's 1983 estimate for the error in the truncated Riemann-von Mangoldt explicit formula.
Deep neural networks (DNNs) have achieved state-of-the-art performance on face recognition (FR) tasks in the last decade. In real scenarios, the deployment of DNNs requires taking various face accessories into consideration, like glasses, hats, and masks. In the COVID-19 pandemic era, wearing face masks is one of the most effective ways to defend against the novel coronavirus. However, DNNs are known to be vulnerable to adversarial examples with small but elaborately crafted perturbations. Thus, a facial mask with adversarial perturbations may pose a great threat to the widely used deep learning-based FR models. In this paper, we consider a challenging adversarial setting: targeted attack against FR models. We propose a new stealthy physical masked FR attack via adversarial style optimization. Specifically, we train an adversarial style mask generator that hides adversarial perturbations inside style masks. Moreover, to ameliorate the phenomenon of sub-optimization with one fixed style, we propose to discover the optimal style given a target through style optimization in a continuous relaxation manner. We simultaneously optimize the generator and the style selection for generating strong and stealthy adversarial style masks. We evaluated the effectiveness and transferability of our proposed method via extensive white-box and black-box digital experiments. Furthermore, we also conducted physical attack experiments against local FR models and online platforms.
Historically, various methods have been employed to understand the origin of the elements, including observations of elemental abundances which have been compared to Galactic Chemical Evolution (GCE) models. It is also well known that 1D Local Thermodynamic Equilibrium (LTE) measurements fail to accurately capture elemental abundances. Non-LTE (NLTE) effects may play a significant role, and neglecting them leads to erroneous implications in galaxy modelling. In this paper, we calculate 3D NLTE abundances of seven key iron-peak and neutron-capture elements (Mn, Co, Ni, Sr, Y, Ba, Eu) based on carefully assembled 1D LTE literature measurements, and investigate their impact within the context of the OMEGA+ GCE model. Our findings reveal that 3D NLTE abundances are significantly higher for iron-peak elements at [Fe/H] < -3, with [Ni/Fe] (for the first time) and [Co/Fe] (confirming previous studies) on average reaching 0.6-0.8 dex, and [Mn/Fe] reaching -0.1 dex, which current 1D core-collapse supernova (CCSN) models cannot explain. We also observe a slightly higher production of neutron-capture elements at low metallicities, with 3D NLTE abundances of Eu being higher by +0.2 dex at [Fe/H] = -3. 3D effects are most significant for iron-peak elements in the very metal-poor regime, with average differences between 3D NLTE and 1D NLTE reaching up to 0.15 dex. Thus, ignoring 3D NLTE effects introduces significant biases, so including them should be considered whenever possible.
The 511 keV line from positron annihilation in the Galaxy was the first γ-ray line detected to originate from outside our solar system. Going into the fifth decade since the discovery, the source of positrons is still unconfirmed and remains one of the enduring mysteries in γ-ray astronomy. With a large flux of ~10⁻³ γ/cm²/s, after 15 years in operation INTEGRAL/SPI has detected the 511 keV line at >50σ and has performed high-resolution spectral studies which conclude that Galactic positrons predominantly annihilate at low energies in warm phases of the interstellar medium. The results from imaging are less certain, but show a spatial distribution with a strong concentration in the center of the Galaxy. The observed emission from the Galactic disk has low surface brightness and the scale height is poorly constrained; therefore, the sheer number of annihilating positrons in our Galaxy is still not well known. Positrons produced in the β⁺-decay of nucleosynthesis products, such as ²⁶Al, can account for some of the annihilation emission in the disk, but the observed spatial distribution, in particular the excess in the Galactic bulge, remains difficult to explain. Additionally, one of the largest uncertainties in these studies is the unknown distance that positrons propagate before annihilation. In this paper, we summarize the current knowledge base of Galactic positrons and discuss how next-generation instruments could finally provide the answers.
The popularity of IoT smart things is rising, due to the automation they provide and its effects on productivity. However, it has been proven that IoT devices are vulnerable to both well-established and new IoT-specific attack vectors. In this paper, we propose the Particle Deep Framework (PDF), a new network forensic framework for IoT networks that utilises Particle Swarm Optimisation (PSO) to tune the hyperparameters of a deep MLP model and improve its performance. The PDF is trained and validated using the Bot-IoT dataset, a contemporary network-traffic dataset that combines normal IoT and non-IoT traffic with well-known botnet-related attacks. Through experimentation, we show that the performance of the deep MLP model is vastly improved, achieving an accuracy of 99.9% and a false alarm rate close to 0%.
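The PSO-based hyperparameter search can be sketched as below. The `val_loss` surface is a toy stand-in for the validation loss of the deep MLP on Bot-IoT (training a real model is out of scope for a sketch), and the swarm coefficients are common textbook defaults, not values from the paper:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iters=60, seed=0):
    """Minimal global-best particle swarm optimiser over box-constrained parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration coefficients
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)      # keep particles inside the search box
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy validation-loss surface over (log10 learning rate, hidden units);
# its minimum at (-3, 64) is an arbitrary assumption for this illustration.
def val_loss(h):
    log_lr, units = h
    return (log_lr + 3.0) ** 2 + 0.001 * (units - 64.0) ** 2

best, loss = pso(val_loss, bounds=[(-6, 0), (8, 256)])
```

In the framework the objective would instead train the MLP with the candidate hyperparameters and return its validation error, so each swarm evaluation is one short training run.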
Space launches produce ionospheric disturbances which can be observed through measurements such as Global Navigation Satellite System signal delays. Here we report observations and numerical simulations of the ionospheric depletion due to a Small-Lift Launch Vehicle. The case examined was the launch of a Rocket Lab Electron at 22:30 UTC on March 22, 2021. Despite the very small launch vehicle, ground stations in the Chatham Islands measured decreases in line-of-sight total electron content for navigation satellite signals following the launch. General Circulation Model results indicated ionospheric depletions which were comparable with these measurements. Line-of-sight measurements showed a maximum decrease of 2.7 TECU in vertical total electron content, compared with a simulated decrease of 2.6 TECU. Advection of the exhaust plume due to its initial velocity and subsequent effects of neutral winds are identified as some remaining challenges for this form of modelling.
The paper develops a methodology for the design of coherent equalizing filters for quantum communication channels. Given a linear quantum system model of a quantum communication channel, the aim is to obtain another quantum system which, when coupled with the original system, mitigates degrading effects of the environment. The main result of the paper is a systematic equalizer synthesis algorithm which relies on methods of state-space robust control design via semidefinite programming.
A central capability of a long-lived reinforcement learning (RL) agent is to incrementally adapt its behavior as its environment changes, and to incrementally build upon previous experiences to facilitate future learning in real-world scenarios. In this paper, we propose LifeLong Incremental Reinforcement Learning (LLIRL), a new incremental algorithm for efficient lifelong adaptation to dynamic environments. We develop and maintain a library that contains an infinite mixture of parameterized environment models, which is equivalent to clustering environment parameters in a latent space. The prior distribution over the mixture is formulated as a Chinese restaurant process (CRP), which incrementally instantiates new environment models without any external information to signal environmental changes in advance. During lifelong learning, we employ the expectation maximization (EM) algorithm with online Bayesian inference to update the mixture in a fully incremental manner. In EM, the E-step involves estimating the posterior expectation of environment-to-cluster assignments, while the M-step updates the environment parameters for future learning. This method allows for all environment models to be adapted as necessary, with new models instantiated for environmental changes and old models retrieved when previously seen environments are encountered again. Experiments demonstrate that LLIRL outperforms relevant existing methods, and enables effective incremental adaptation to various dynamic environments for lifelong learning.
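The Chinese restaurant process prior that governs when LLIRL instantiates a new environment model can be illustrated with a short sampler. This is a generic CRP sketch (the concentration parameter `alpha` and customer count are arbitrary), not the LLIRL implementation itself:

```python
import numpy as np

def crp_assignments(n_customers, alpha, rng):
    """Sample a partition from the Chinese restaurant process with concentration alpha."""
    counts = []                       # customers per table (cluster sizes)
    assignments = []
    for n in range(n_customers):
        # P(existing table k) = n_k / (n + alpha); P(new table) = alpha / (n + alpha)
        probs = np.array(counts + [alpha], dtype=float) / (n + alpha)
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(0)          # open a new table: instantiate a new environment model
        counts[table] += 1
        assignments.append(table)
    return assignments, counts

rng = np.random.default_rng(1)
z, sizes = crp_assignments(500, alpha=2.0, rng=rng)
```

The "rich get richer" term n_k / (n + alpha) is what lets the agent reuse an existing environment model when a previously seen environment recurs, while the alpha / (n + alpha) term leaves nonzero probability of spawning a fresh model when the environment changes.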