Researchers at Carnegie Mellon University developed a detect-then-impute framework for conformal prediction that maintains valid coverage guarantees even when test data contains cellwise outliers. The approach ensures exchangeability by applying the same outlier detection and imputation steps to both the test point and the calibration set, demonstrating reliable coverage on synthetic and real-world datasets.
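The core idea above — applying one fixed detect-then-impute map to the calibration points and the test point alike so that exchangeability, and hence coverage, is preserved — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the robust z-score detector, median imputation, and the stand-in linear model are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_and_impute(x, col_med, col_mad, z_thresh=3.0):
    """Flag cellwise outliers via a robust z-score and impute with the
    column median. (Hypothetical detector/imputer for illustration; the
    paper's actual choices may differ.)"""
    x = x.copy()
    z = np.abs(x - col_med) / (col_mad + 1e-12)
    x[z > z_thresh] = col_med[z > z_thresh]
    return x

# Synthetic regression data for the calibration set.
n_cal, d = 200, 3
beta = np.array([1.0, -2.0, 0.5])          # stand-in "fitted" model
X_cal = rng.normal(size=(n_cal, d))
y_cal = X_cal @ beta + rng.normal(scale=0.1, size=n_cal)

# Robust column statistics (assumed to come from clean training data).
col_med = np.median(X_cal, axis=0)
col_mad = np.median(np.abs(X_cal - col_med), axis=0) * 1.4826

# Exchangeability step: the SAME map is applied to every calibration
# point and to the test point.
X_cal_clean = np.array([detect_and_impute(x, col_med, col_mad) for x in X_cal])

x_test = rng.normal(size=d)
x_test[0] = 50.0                            # cellwise outlier in the test point
x_test_clean = detect_and_impute(x_test, col_med, col_mad)

# Standard split-conformal interval on the imputed features.
alpha = 0.1
scores = np.abs(y_cal - X_cal_clean @ beta)  # calibration residuals
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal,
                method="higher")
pred = x_test_clean @ beta
interval = (pred - q, pred + q)              # valid 90% prediction interval
```

Because detection and imputation are deterministic functions of each point (given the fixed training statistics), the cleaned calibration scores and the cleaned test point remain exchangeable, which is what the coverage guarantee rests on.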
Researchers from Renmin University of China and collaborators developed the Multi-level Fine-grained Detection (MFD) framework, which precisely quantifies the degree of LLM involvement at the sentence level across lexical, grammatical, and syntactic dimensions. The framework achieved superior detection accuracy and robustness, outperforming state-of-the-art models with a Mean Absolute Error of 0.1347 and maintaining strong performance against texts generated by advanced LLMs like GPT-4.
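To make the reported metric concrete: the Mean Absolute Error of 0.1347 is an average, over sentences, of the gap between the predicted degree of LLM involvement and the annotated ground truth. A minimal sketch, with illustrative scores that are assumptions (not data from the paper):

```python
# Hypothetical per-sentence LLM-involvement scores in [0, 1]; the paper
# reports MAE against human-annotated ground truth at this granularity.
predicted    = [0.10, 0.85, 0.40, 0.00, 0.95]
ground_truth = [0.00, 0.90, 0.55, 0.10, 1.00]

mae = sum(abs(p - g) for p, g in zip(predicted, ground_truth)) / len(predicted)
print(f"MAE = {mae:.4f}")  # → MAE = 0.0900
```

A lower MAE means the sentence-level involvement estimates track the annotations more closely, which is why 0.1347 on a [0, 1] scale supports the accuracy claim.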