Marmara University
This paper presents the (MARUN)² underwater robotic simulator. The simulator architecture enables seamless integration with the ROS-based mission software and web-based user interface of URSULA, a squid-inspired biomimetic robot designed for dexterous underwater manipulation and seabed intervention tasks. (MARUN)² utilizes the Unity game engine for physics-based rigid-body dynamics simulation and underwater environment modeling. Building on Unity also enables the integration of virtual reality (VR) and haptic feedback capabilities, providing a more immersive and realistic experience that improves operator dexterity. The utility of the simulator and the improved dexterity provided by the VR module are validated through user experiments.
We study first-order electroweak phase transitions nonperturbatively, assuming any particles beyond the Standard Model are sufficiently heavy to be integrated out at the phase transition. Utilising high temperature dimensional reduction, we perform lattice Monte-Carlo simulations to calculate the main quantities characterising the transition: the critical temperature, the latent heat, the surface tension and the bubble nucleation rate, updating and extending previous lattice studies. We focus on the region where the theory gives first-order phase transitions due to an effective reduction in the Higgs self-coupling and give a detailed comparison with perturbation theory.
Research from institutions in Istanbul demonstrates that a standard U-Net architecture, when paired with an extensive and strategically developed data augmentation pipeline, achieves an AUC of 0.9855 and an accuracy of 0.9712 on the DRIVE dataset for retinal vessel segmentation. This approach matches or exceeds the performance of many more complex, state-of-the-art models while requiring significantly less computational resources and training time.
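The augmentation idea behind this result can be sketched with a few geometric transforms applied jointly to an image and its vessel mask. This is a minimal stand-in, not the paper's actual pipeline: the function name, the choice of transforms, and the toy arrays are illustrative assumptions.

```python
import numpy as np

def augment(image, mask, rng):
    """Apply one random geometric augmentation to an image/mask pair.

    A minimal stand-in for an extensive augmentation pipeline: the same
    transform must hit image and mask so vessel labels stay aligned.
    """
    op = rng.integers(4)
    if op == 0:                       # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    elif op == 1:                     # vertical flip
        image, mask = image[::-1, :], mask[::-1, :]
    elif op == 2:                     # 90-degree rotation
        image, mask = np.rot90(image), np.rot90(mask)
    # op == 3: identity (keep the original sample)
    return image.copy(), mask.copy()

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)   # toy "retina" patch
msk = (img > 7).astype(np.uint8)      # toy vessel mask
aug_img, aug_msk = augment(img, msk, rng)
print(aug_img.shape, aug_msk.shape)
```

In a real pipeline the same pattern extends to elastic deformations, rotations at arbitrary angles, and intensity jitter, each applied identically to image and mask.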
Using extensive numerical analysis of 20,000 randomly generated two-qubit states, we provide a quantitative analysis of the connection between entanglement measures and Maximized Quantum Fisher Information (MQFI). Our systematic study shows strong empirical relationships between the metrological capacity of quantum states and three different entanglement measures: concurrence, negativity, and relative entropy of entanglement. Using statistical analysis techniques such as bootstrap resampling, systematic data binning, and multiple model comparisons, we show that optimization over local unitary transformations produces substantially more predictable relationships than fixed-generator quantum Fisher information approaches. With exponential fits reaching R^2 > 0.99 and polynomial models reaching R^2 = 0.999, we offer thorough empirical support for saturation behavior in quantum metrological advantage. With immediate applications to real-world quantum sensing protocols, our findings empirically validate important predictions from quantum resource theory and set fundamental bounds for quantum sensor optimization and resource allocation. These relationships are quantitatively described by the polynomial and exponential fit equations, which offer crucial practical guidance for the design of quantum sensors.
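One of the entanglement measures named above, concurrence, is straightforward to compute for a two-qubit density matrix via the standard Wootters formula. This sketch shows only that measure (not MQFI or the paper's statistical fits), with two textbook sanity checks:

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])   # Pauli-Y
YY = np.kron(SY, SY)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho."""
    rho_tilde = YY @ rho.conj() @ YY
    # eigenvalues of rho @ rho_tilde are real and non-negative in theory;
    # abs() guards against tiny negative numerical noise
    evals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(evals)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) is maximally entangled
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())
print(round(concurrence(rho_bell), 6))   # 1.0

# Product state |00> carries no entanglement
prod = np.zeros(4, dtype=complex)
prod[0] = 1
rho_prod = np.outer(prod, prod.conj())
print(round(concurrence(rho_prod), 6))   # 0.0
```

Repeating this over many random states and comparing against a Fisher-information quantity is the kind of numerical experiment the abstract describes.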
Predicting the stock market trend has always been challenging since its movement is affected by many factors. Here, we approach the future trend prediction problem as a machine learning classification problem by creating a tomorrow_trend feature as the label to be predicted. Different features are given to help the machine learning model predict whether a given day is an uptrend or a downtrend; these features are technical indicators generated from the stock's price history. In addition, as financial news plays a vital role in changing investor behavior, an overall sentiment score for a given day is computed from all news released on that day and added to the model as another feature. Three different machine learning models are tested on Spark (a big-data computing platform): Logistic Regression, Random Forest, and Gradient Boosting Machine. Random Forest was the best-performing model, with a 63.58% test accuracy.
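The label-and-feature construction can be sketched in a few lines. The indicator choices here (a 5-day SMA gap and 1-day momentum) and the toy price series are illustrative assumptions; the paper's feature set and the news-sentiment score are richer.

```python
import numpy as np

def make_features_and_label(close):
    """Build simple technical-indicator features and a next-day trend label.

    tomorrow_trend is 1 when the next day's close is higher, else 0.
    """
    close = np.asarray(close, dtype=float)
    sma5 = np.convolve(close, np.ones(5) / 5, mode="valid")   # 5-day SMA
    aligned = close[4:]                                        # align with sma5
    sma_gap = aligned - sma5                                   # price vs its SMA
    momentum = aligned - close[3:-1]                           # 1-day change
    label = (np.diff(aligned) > 0).astype(int)                 # tomorrow_trend
    X = np.column_stack([sma_gap[:-1], momentum[:-1]])         # last day has no label
    return X, label

prices = [10, 11, 12, 11, 13, 14, 13, 15, 16, 15]
X, y = make_features_and_label(prices)
print(X.shape, y.tolist())
```

From here, any classifier (the abstract's Logistic Regression, Random Forest, or Gradient Boosting Machine on Spark) can be fit on X against y.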
The capital asset pricing model (CAPM) is readily used to capture a linear relationship between the daily returns of an asset and a market index. We extend this model to an intraday high-frequency setting by proposing a functional CAPM estimation approach. The functional CAPM is a stylized example of a function-on-function linear regression with a bivariate functional regression coefficient. The two-dimensional regression coefficient measures the cross-covariance between cumulative intraday asset returns and market returns. We apply it to the Standard and Poor's 500 index and its constituent stocks to demonstrate its practicality. We investigate the functional CAPM's in-sample goodness-of-fit and out-of-sample prediction for an asset's cumulative intraday return. The findings suggest that the proposed functional CAPM methods have superior model goodness-of-fit and forecast accuracy compared to the traditional CAPM empirical estimation. In particular, the functional methods produce better model goodness-of-fit and prediction accuracy for stocks traditionally considered less price-efficient or more information-opaque.
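The core objects of the functional CAPM can be illustrated on a discrete intraday grid: cumulative intraday returns per day, and a bivariate coefficient surface estimated by least squares. This is a loose, grid-based sketch under stated assumptions (synthetic data, a small ridge penalty for stability), not the paper's basis-expansion estimator.

```python
import numpy as np

def cumulative_intraday_returns(prices):
    """Log cumulative returns within each day, relative to the day's first
    observation. Input: days x intraday-grid price matrix."""
    prices = np.asarray(prices, dtype=float)
    return np.log(prices / prices[:, :1])

def functional_capm_beta(market, asset, lam=1e-3):
    """Estimate a discretized bivariate coefficient beta(s, t) by ridge
    least squares: asset curve at t regressed on the market curve over s.

    A minimal grid-based sketch; roughness penalties and basis
    expansions from the paper are omitted.
    """
    X = cumulative_intraday_returns(market)    # n_days x p grid points
    Y = cumulative_intraday_returns(asset)     # n_days x q grid points
    p = X.shape[1]
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)
    return beta                                # p x q coefficient surface

rng = np.random.default_rng(1)
mkt = 100 * np.exp(np.cumsum(rng.normal(0, 0.001, (50, 20)), axis=1))
ast = mkt * np.exp(rng.normal(0, 0.0005, (50, 20)))   # asset tracks the market
beta = functional_capm_beta(mkt, ast)
print(beta.shape)   # (20, 20)
```

The estimated surface plays the role of the cross-covariance between cumulative intraday asset and market returns described in the abstract.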
A comprehensive survey provides an up-to-date overview of Quantum Generative Adversarial Networks (QGANs), detailing their theoretical underpinnings, categorizing diverse architectures, and summarizing a wide array of applications from image synthesis to drug discovery. The authors review experimental demonstrations on various quantum hardware platforms, emphasizing advancements and challenges in the field, particularly since early 2023.
We present two innovative functional partial quantile regression algorithms designed to accurately and efficiently estimate the regression coefficient function within the function-on-function linear quantile regression model. Our algorithms utilize functional partial quantile regression decomposition to effectively project the infinite-dimensional response and predictor variables onto a finite-dimensional space. Within this framework, the partial quantile regression components are approximated using a basis expansion approach. Consequently, we approximate the infinite-dimensional function-on-function linear quantile regression model using a multivariate quantile regression model constructed from these partial quantile regression components. To evaluate the efficacy of our proposed techniques, we conduct a series of Monte Carlo experiments and analyze an empirical dataset, demonstrating superior performance compared to existing methods in finite-sample scenarios. Our techniques have been implemented in the ffpqr package in R.
Challenges exist in learning and understanding religions, such as the complexity and depth of religious doctrines and teachings. Chatbots built as question-answering systems can help address these challenges. LLM-based chatbots use NLP techniques to establish connections between topics and respond accurately to complex questions, which makes them well suited as question-answering chatbots on religion. However, LLMs also tend to generate false information, known as hallucination. Moreover, chatbot responses can include content that insults personal religious beliefs or touches on interfaith conflicts and controversial or sensitive topics. A chatbot must avoid such cases without promoting hate speech or offending certain groups of people or their beliefs. This study uses a vector database-based Retrieval Augmented Generation (RAG) approach to enhance the accuracy and transparency of LLMs. Our question-answering system is called "MufassirQAS". We created a database consisting of several open-access books with Turkish content, containing Turkish translations and interpretations of Islam. This database is used to answer religion-related questions and to ensure our answers are trustworthy. The relevant parts of the dataset, which the LLM also uses, are presented along with the answer. We have put careful effort into creating system prompts that instruct the model to avoid harmful, offensive, or disrespectful responses, so as to respect people's values and provide reliable results. The system also shares additional information, such as the page number from the respective book and the articles referenced in obtaining the information. MufassirQAS and ChatGPT were also tested with sensitive questions, and our system achieved better performance. The study and its enhancements are still in progress; results and future work are presented.
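The retrieval half of a RAG system like the one described can be sketched with a tiny in-memory "vector database". Everything here is an illustrative assumption: the bag-of-words embedding stands in for a real sentence-embedding model, and the passages and metadata are invented; the key idea is that each retrieved chunk carries its book and page so the answer can cite its source.

```python
import numpy as np

def bow_vector(text, vocab):
    """Toy bag-of-words embedding; a real system would use a sentence
    embedding model and a proper vector database."""
    vec = np.zeros(len(vocab))
    for token in text.lower().split():
        if token in vocab:
            vec[vocab[token]] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query, passages, k=2):
    """Return the k passages most similar to the query (cosine similarity),
    each with its source metadata for citation in the final answer."""
    vocab = {}
    for p in passages:
        for tok in p["text"].lower().split():
            vocab.setdefault(tok, len(vocab))
    q = bow_vector(query, vocab)
    scored = sorted(passages,
                    key=lambda p: -float(q @ bow_vector(p["text"], vocab)))
    return scored[:k]

passages = [
    {"text": "interpretation of charity in the translation", "book": "B1", "page": 12},
    {"text": "commentary on fasting and its conditions", "book": "B2", "page": 87},
    {"text": "charity and almsgiving explained in detail", "book": "B1", "page": 45},
]
top = retrieve("what does the book say about charity", passages, k=2)
print([p["page"] for p in top])
```

The retrieved chunks would then be injected into the LLM prompt, alongside system instructions constraining tone and scope, exactly as the abstract describes.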
This paper proposes a new decentralized framework, named EDGChain-E (Encrypted-Data-Git Chain for Energy), designed to manage version-controlled, encrypted energy data using blockchain and the InterPlanetary File System. The framework incorporates a Decentralized Autonomous Organization (DAO) to orchestrate collaborative data governance across the lifecycle of energy research and operations, such as smart grid monitoring, demand forecasting, and peer-to-peer energy trading. In EDGChain-E, initial commits capture the full encrypted datasets, such as smart meter readings or grid telemetry, while subsequent updates are tracked as encrypted Git patches, ensuring integrity, traceability, and privacy. This versioning mechanism supports secure collaboration across multiple stakeholders (e.g., utilities, researchers, regulators) without compromising sensitive or regulated information. We highlight the framework's capability to maintain FAIR-compliant (Findable, Accessible, Interoperable, Reusable) provenance of encrypted data. By embedding hash-based content identifiers in Merkle trees, the system enables transparent, auditable, and immutable tracking of data changes, thereby supporting reproducibility and trust in decentralized energy applications.
The function-on-function regression model is fundamental for analyzing relationships between functional covariates and responses. However, most existing function-on-function regression methodologies assume independence between observations, which is often unrealistic for spatially structured functional data. We propose a novel penalized spatial function-on-function regression model to address this limitation. Our approach extends the generalized spatial two-stage least-squares estimator to functional data, while incorporating a roughness penalty on the regression coefficient function using a tensor product of B-splines. This penalization ensures optimal smoothness, mitigates overfitting, and improves interpretability. The proposed penalized spatial two-stage least-squares estimator effectively accounts for spatial dependencies, significantly improving estimation accuracy and predictive performance. We establish the asymptotic properties of our estimator, proving its \sqrt{n}-consistency and asymptotic normality under mild regularity conditions. Extensive Monte Carlo simulations demonstrate the superiority of our method over existing non-penalized estimators, particularly under moderate to strong spatial dependence. In addition, an application to North Dakota weather data illustrates the practical utility of our approach in modeling spatially correlated meteorological variables. Our findings highlight the critical role of penalization in enhancing robustness and efficiency in spatial function-on-function regression models. To implement our method we used the \texttt{robflreg} package on CRAN.
The rapidly increasing use of electric vehicles (EVs) has made it even more important to manage the charging infrastructure sustainably. The expansion of charging station networks, especially in large cities, creates serious logistical challenges for charging point operators (CPOs) in planning maintenance and repair activities. Inefficient field personnel management can lead to time loss, high operational costs, and resource waste. This study presents an integrated method to optimize the planning of EV charging network maintenance operations. The proposed approach groups charging stations according to geographical proximity using the K-means clustering algorithm and calculates the shortest routes between clusters using a genetic algorithm. The method was developed in Python and applied to a dataset consisting of 100 EV charging stations in Istanbul. Considering the population density, traffic density, and resource constraints of Istanbul, the route planning approach presented in this study has great potential, especially for such metropolises. According to the different parameter configurations tested, the most efficient scenario provided approximately 35% distance savings compared to the reference route constructed from the sequential data layout. While the reference route provides a simple comparison, the study presents a solution that will enable field operations in metropolitan cities such as Istanbul to be conducted in a more efficient, planned, and scalable manner. In future studies, it is planned to integrate real-time factors such as traffic conditions and field technician constraints.
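The cluster-then-route idea can be sketched compactly. To keep the example short, a greedy nearest-neighbor route stands in for the paper's genetic algorithm, and the station coordinates are synthetic; only the overall two-stage structure matches the abstract.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's K-means: group stations by geographic proximity."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

def nearest_neighbor_route(points):
    """Greedy route through one cluster; a simple stand-in for the
    genetic-algorithm route search used in the study."""
    unvisited = list(range(1, len(points)))
    route = [0]
    while unvisited:
        last = points[route[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

rng = np.random.default_rng(42)
stations = rng.uniform(0, 10, (30, 2))        # synthetic station coordinates
labels, centers = kmeans(stations, k=3)
big = int(np.argmax(np.bincount(labels)))     # route the largest cluster
route = nearest_neighbor_route(stations[labels == big])
print(len(set(labels.tolist())), len(route))
```

A genetic algorithm would replace the greedy routing stage, evolving permutations of each cluster's stations toward shorter total distance.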
Smart retail stores are becoming a fact of our lives. Several computer vision and sensor-based systems work together to achieve such a complex and automated operation. Moreover, the retail sector already has several open and challenging problems that can be solved with the help of pattern recognition and computer vision methods. One important problem to be tackled is planogram compliance control. In this study, we propose a novel method to solve it. The proposed method is based on object detection, planogram compliance control, and focused and iterative search steps. The object detection step is formed by local feature extraction and implicit shape model formation. The planogram compliance control step is formed by sequence alignment via a modified Needleman-Wunsch algorithm. The focused and iterative search step aims to improve the performance of the object detection and planogram compliance control steps. We tested all three steps on two different datasets. Based on these tests, we summarize the key findings as well as the strengths and weaknesses of the proposed method.
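The sequence-alignment step can be illustrated with the classic Needleman-Wunsch recursion, aligning the planned shelf sequence against the detected one so that gaps flag missing or extra products. The paper uses a modified variant; the standard algorithm with arbitrary example scores is shown here.

```python
def needleman_wunsch(planned, detected, match=2, mismatch=-1, gap=-1):
    """Global alignment score between a planned product sequence and the
    sequence of products detected on the shelf."""
    n, m = len(planned), len(detected)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):           # aligning against an empty sequence
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (
                match if planned[i - 1] == detected[j - 1] else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,    # gap in detected
                              score[i][j - 1] + gap)    # gap in planned
    return score[n][m]

# Shelf plan vs. what the object detector found (product IDs):
planned = ["A", "B", "B", "C", "D"]
detected = ["A", "B", "C", "D"]          # one facing of B is missing
print(needleman_wunsch(planned, detected))   # 7: four matches, one gap
```

Tracing back through the score matrix (omitted here) recovers which shelf positions are non-compliant, which is the information the compliance-control step needs.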
Functional logistic regression is a popular model to capture a linear relationship between binary response and functional predictor variables. However, many methods used for parameter estimation in functional logistic regression are sensitive to outliers, which may lead to inaccurate parameter estimates and inferior classification accuracy. We propose a robust estimation procedure for functional logistic regression, in which the observations of the functional predictor are projected onto a set of finite-dimensional subspaces via robust functional principal component analysis. This dimension-reduction step reduces the outlying effects in the functional predictor. The logistic regression coefficient is estimated using an M-type estimator based on binary response and robust principal component scores. In doing so, we provide robust estimates by minimizing the effects of outliers in the binary response and functional predictor variables. Via a series of Monte-Carlo simulations and using hand radiograph data, we examine the parameter estimation and classification accuracy for the response variable. We find that the robust procedure outperforms some existing robust and non-robust methods when outliers are present, while producing competitive results when outliers are absent. In addition, the proposed method is computationally more efficient than some existing robust alternatives.
The definition of the Sparling-Thirring form is extended to the Brans-Dicke theory. By writing the Brans-Dicke field equations in a formally Maxwell-like form, a superpotential and a corresponding pseudo energy-momentum form are defined. The general energy expression provided by the superpotential in the Jordan frame is discussed in relation to the corresponding expression in the Einstein frame. In order to substantiate its formal definition, the generalized Sparling-Thirring form is used to calculate the energy for the spherically symmetric vacuum solution in the Brans-Dicke theory.
SDR (Software Defined Radio) provides flexible, reproducible, and longer-lasting radio tools for military and civilian wireless communications infrastructure. SDR is a radio communication system whose components are implemented as software. This study aims to establish multi-channel wireless communication within a FANET between two SDRs to share location information, and to examine it in a realistic test environment. We used multi-channel token circulation as the channel access protocol and the GNU Radio platform for SDR software development. The communication-layer structures suggested in the literature, including protocols, communication systems, and network structures, are generally tested only in simulation environments. Simulation environments offer researchers fast and easy development and testing, but they have disadvantages: they isolate a product from the hardware, software, and cost effects encountered during development, and from the environmental factors that affect the communication channel during testing. Another contribution of this study is presenting the developed block diagrams and code in a clear and reproducible form. The developed software and block diagrams are available at this http URL.
The development of artificial intelligence has made significant contributions to the financial sector. One of the main interests of investors is price prediction. Technical, fundamental, and econometric analyses are conducted for price prediction; recently, the use of AI-based methods has become more prevalent. This study examines daily Dollar/TL exchange rates from January 1, 2020, to October 4, 2024. Among the artificial intelligence models examined, random forest, support vector machine, k-nearest neighbors, decision tree, and gradient boosting models were found unsuitable, whereas multilayer perceptron and linear regression models showed appropriate fit. Despite the sharp increase in Dollar/TL rates in Turkey since 2019, the suitability of the valid models has been maintained.
Measurement of environment interaction forces during robotic minimally-invasive surgery would enable haptic feedback to the surgeon, thereby solving one long-standing limitation. Estimating this force from existing sensor data avoids the challenge of retrofitting systems with force sensors, but is difficult due to mechanical effects such as friction and compliance in the robot mechanism. We have previously shown that neural networks can be trained to estimate the internal robot joint torques, thereby enabling estimation of external forces. In this work, we extend the method to estimate external Cartesian forces and torques, and also present a two-step approach to adapt to the specific surgical setup by compensating for forces due to the interactions between the instrument shaft and cannula seal and between the trocar and patient body. Experiments show that this approach provides estimates of external forces and torques within a mean root-mean-square error (RMSE) of 2 N and 0.08 Nm, respectively. Furthermore, the two-step approach can add as little as 5 minutes to the surgery setup time, with about 4 minutes to collect intraoperative training data and 1 minute to train the second-step network.
This paper presents a functional linear Cox regression model with frailty to tackle unobserved heterogeneity in survival data with functional covariates. While traditional Cox models are common, they struggle to incorporate frailty effects that represent individual differences not captured by observed covariates. Our model combines scalar and functional covariates with a frailty term to address these unmeasured influences, creating a robust framework for high-dimensional survival analysis. We estimate parameters using functional principal component analysis and apply penalized partial likelihood for the frailty structure. A simulation study shows that our model outperforms traditional approaches in estimation accuracy and predictive capacity, especially with high frailty. We also analyze data from the National Health and Nutrition Examination Survey, highlighting significant links between physical activity and mortality in frail subpopulations. Our findings demonstrate the model's effectiveness in managing complex survival data, with potential applications in biomedical research related to unobserved heterogeneity. The method is available as an R package.
Instrumental variables (IVs) are widely used to adjust for measurement error (ME) bias when assessing associations of health outcomes with ME-prone independent variables. IV approaches addressing ME in longitudinal models are well established, but few methods exist for functional regression. We develop two methods to adjust for ME bias in scalar-on-function linear models. We regress a scalar outcome on an ME-prone functional variable using a functional IV for model identification and propose two least-squares-based methods to adjust for ME bias. Our methods alleviate potential computational challenges encountered when applying classical regression calibration methods for bias adjustment in high-dimensional settings and account for potential serial correlations across time. Simulations demonstrate faster run times, lower bias, and lower AIMSE for the proposed methods compared to existing approaches. The proposed methods were applied to investigate the association between body mass index and wearable-device-based physical activity intensity among community-dwelling adults living in the United States.
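The IV correction for measurement error is easiest to see in the scalar case: naive least squares on an error-prone covariate is attenuated toward zero, while two-stage least squares with a valid instrument recovers the true slope. This scalar sketch with synthetic data illustrates the principle; the paper works with functional covariates and instruments projected onto basis functions.

```python
import numpy as np

def two_stage_least_squares(y, x_err, z):
    """2SLS: regress the error-prone covariate on the instrument, then
    regress the outcome on the first-stage fitted values."""
    Z1 = np.column_stack([np.ones(len(z)), z])
    x_hat = Z1 @ np.linalg.lstsq(Z1, x_err, rcond=None)[0]   # first stage
    X1 = np.column_stack([np.ones(len(x_hat)), x_hat])
    return np.linalg.lstsq(X1, y, rcond=None)[0]             # second stage

rng = np.random.default_rng(7)
n = 5000
x_true = rng.normal(size=n)                 # latent true covariate
z = x_true + rng.normal(0, 0.3, n)          # instrument: correlated with x_true,
                                            # independent of the measurement error
x_obs = x_true + rng.normal(0, 1.0, n)      # heavy measurement error
y = 1.0 + 2.0 * x_true + rng.normal(0, 0.2, n)   # true slope is 2

naive = np.linalg.lstsq(np.column_stack([np.ones(n), x_obs]), y, rcond=None)[0]
iv = two_stage_least_squares(y, x_obs, z)
print(round(naive[1], 2), round(iv[1], 2))  # naive slope attenuated; IV near 2
```

With equal signal and noise variances here, the naive slope is attenuated by roughly a factor of two, while the 2SLS estimate stays close to the true value.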