Ziqi Zhong

Selected works

Zhong, Z. and X. Li, “Re-Visiting the Green Puzzle: The Effect of Eco-Positioning on Service Adoption” (2024), ISMS 2024, EMAC 2024, under review at International Journal of Research in Marketing (AJG/ABS: 4), available at http://dx.doi.org/10.2139/ssrn.4138686

Abstract: Service providers are increasingly employing eco-positioning strategies to promote sustainable products. However, the effectiveness of these strategies in driving service adoption, particularly among consumers with varying levels of inertia, remains unexplored. Drawing on the elaboration likelihood model, this research investigates the differential effects of eco-positioning on inertial and new consumers in energy service adoption. In five studies, including a large-scale study of U.S. households’ energy consumption (N = 15,568) and four experimental studies (N = 1,078), we document a consistent inertia utility in green service adoption ($7.69 in the field, $7.44 in the lab), demonstrating that eco-positioning is more effective in increasing service adoption intentions among inertial consumers than among new consumers. Through experiments and topic modelling (BERTopic, LDA), we find that eco-positioning more effectively elicits a warm glow and reduces inertial consumers’ service concerns, while the sustainability liability effect has a stronger negative impact on new consumers. Through a novel discrete choice model, we quantify that a 3.12% claimed reduction in emissions is equivalent to a $1 incentive in motivating inertial consumers. These findings provide actionable guidance for service providers and policymakers in tailoring eco-positioning intensity to consumers’ inertia utility levels and leveraging quantitative benchmarks to optimise resource allocation between environmental and monetary incentives.
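
The dollar equivalences in this abstract come from comparing estimated utility coefficients against the price coefficient. As a rough illustration of that logic only, here is a minimal multinomial-logit sketch in Python; the coefficients and choice set are made up for illustration and this is not the paper’s actual discrete choice specification.

# Illustrative sketch (not the paper's model): a simple multinomial-logit
# utility with an inertia term, showing how a claimed emissions reduction
# can be converted into a dollar-equivalent incentive via coefficient ratios.
# All coefficient values below are invented for illustration.
import numpy as np

def choice_probabilities(utilities):
    """Softmax over alternative-specific utilities (standard logit)."""
    expu = np.exp(utilities - utilities.max())
    return expu / expu.sum()

# Hypothetical estimated coefficients
beta_price   = -0.40   # disutility per $ of monthly cost
beta_eco     =  0.125  # utility per percentage point of claimed emissions cut
beta_inertia =  3.0    # extra utility of sticking with the incumbent provider

# Utility of (incumbent green plan, new green plan, status-quo plan)
price        = np.array([30.0, 28.0, 29.0])     # $/month
eco_claim    = np.array([20.0, 20.0,  0.0])     # % claimed emissions reduction
is_incumbent = np.array([1.0,  0.0,  1.0])

u = beta_price * price + beta_eco * eco_claim + beta_inertia * is_incumbent
print("choice probabilities:", choice_probabilities(u).round(3))

# Dollar value of inertia and of a 1% emissions claim (marginal rates of
# substitution against price); with the made-up numbers above, roughly
# 3.2% of claimed reduction is worth $1.
print("inertia utility in $:", beta_inertia / abs(beta_price))
print("% reduction worth $1:", abs(beta_price) / beta_eco)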

Zhong, Z., X. Li, and B. Chen, “AI-Driven Privacy Policy Optimisation for Sustainable Data Strategy” (JMP), AMA Winter 2025, EMAC 2025, ISMS 2025, targeting Journal of Marketing Research (UTD24, AJG/ABS: 4*)

Abstract: We first develop analytical models demonstrating that firms can enhance both profits and consumer surplus by adopting moderate data strategies responsive to privacy sensitivities, promoting sustainable data strategy. Building on this insight, we introduce PrivaAI, a state-of-the-art AI agent incorporating Multi-model Retrieval-Augmented Generation, enhanced Chain-of-Thought reasoning, and Human-AI Fine-tuning loops, alongside a validated multi-dimensional framework for measuring privacy policy effectiveness. Applying PrivaAI to analyse privacy policies from the top 5,000,000 websites according to OpenPageRank (a unique dataset categorised using the Global Industry Classification Standard, GICS), we find that lower-ranked sites consistently demonstrate poorer policy scores and less sustainable data practices, confirming our model's prediction. Through controlled eye-tracking experiments, we validate PrivaAI's effectiveness in capturing how consumers process firms’ data strategy claims within privacy policies. Our framework offers capabilities including industry-specific benchmarking, revealing consumer preferences toward data practices, and providing actionable recommendations for optimising sustainable data strategies. These insights enable firms to design data strategies that balance consumer trust, data minimisation, and sustainability, ultimately helping companies reduce digital carbon footprints while gaining competitive advantages through improved sustainability performance and increased social welfare.
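
For intuition about what a multi-dimensional policy score might look like, the toy Python sketch below aggregates keyword-based sub-scores into a weighted overall score. The dimensions, weights, and keyword heuristics are hypothetical placeholders; they stand in for the paper’s validated framework and PrivaAI’s LLM-based pipeline rather than reproducing either.

# Illustrative sketch only: a toy multi-dimensional scoring scheme for
# privacy-policy texts, not PrivaAI itself. Dimensions, weights, and keyword
# heuristics are invented stand-ins for the paper's validated framework.
from dataclasses import dataclass

DIMENSIONS = {
    "transparency":      ["we collect", "we share", "third part"],
    "data_minimisation": ["only the data", "necessary", "retain"],
    "user_control":      ["opt out", "delete your data", "your rights"],
}
WEIGHTS = {"transparency": 0.4, "data_minimisation": 0.35, "user_control": 0.25}

@dataclass
class PolicyScore:
    site: str
    per_dimension: dict
    overall: float

def score_policy(site: str, policy_text: str) -> PolicyScore:
    text = policy_text.lower()
    per_dim = {
        dim: sum(kw in text for kw in kws) / len(kws)
        for dim, kws in DIMENSIONS.items()
    }
    overall = sum(WEIGHTS[d] * s for d, s in per_dim.items())
    return PolicyScore(site, per_dim, overall)

example = score_policy(
    "example.com",
    "We collect only the data necessary to provide the service. "
    "You may opt out or delete your data at any time.",
)
print(example.overall, example.per_dimension)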

Zhong, Z., Y. Li, X. Wang, X. Tang, and K. Zhang, “ReasonBridge: Efficient Reasoning Transfer from Closed to Open-Source Language Models” (Module in JMP), under review at the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) (CORE ranking: A*, flagship NLP conference)

Abstract: Recent advancements in Large Language Models (LLMs) have revealed a significant performance gap between closed-source and open-source models, particularly in tasks requiring complex reasoning and precise instruction following. This paper introduces ReasonBridge, a methodology that efficiently transfers reasoning capabilities from powerful closed-source models to open-source models through a novel hierarchical knowledge distillation framework. We develop a tailored dataset of only 1,000 carefully curated reasoning traces emphasizing difficulty, diversity, and quality. These traces are filtered from multiple domains using a structured multi-criteria selection algorithm. Our transfer learning approach incorporates: (1) a hierarchical distillation process capturing both strategic abstraction and tactical implementation patterns, (2) a sparse reasoning-focused adapter architecture requiring only 0.3% additional trainable parameters, and (3) a test-time compute scaling mechanism using guided inference interventions. Comprehensive evaluations demonstrate that ReasonBridge improves reasoning capabilities in open-source models by up to 23% on benchmark tasks, significantly narrowing the gap with closed-source models. Notably, the enhanced Qwen2.5-14B outperforms Claude 3.5 Sonnet on MATH500 and matches its performance on competition-level AIME problems. Our methodology generalizes effectively across diverse reasoning domains and model architectures, establishing a sample-efficient approach to reasoning enhancement for instruction following.
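
As a rough illustration of multi-criteria trace selection, the sketch below greedily picks traces by a weighted mix of difficulty, quality, and a simple domain-diversity bonus. The fields, weights, and greedy rule are assumed for illustration and are not the paper’s structured selection algorithm.

# Minimal sketch of multi-criteria selection of reasoning traces, assuming
# each trace already carries difficulty and quality scores; the greedy
# diversity bonus is a simple stand-in, not the paper's method.
from dataclasses import dataclass

@dataclass
class Trace:
    text: str
    domain: str
    difficulty: float  # 0..1, e.g. from solver pass rate
    quality: float     # 0..1, e.g. from an automatic grader

def select_traces(pool, budget=1000, w_diff=0.4, w_qual=0.4, w_div=0.2):
    """Greedily select traces balancing difficulty, quality, and domain diversity."""
    chosen, domain_counts = [], {}
    candidates = list(pool)
    while candidates and len(chosen) < budget:
        def score(t):
            # Diversity bonus decays as a domain becomes over-represented
            diversity = 1.0 / (1 + domain_counts.get(t.domain, 0))
            return w_diff * t.difficulty + w_qual * t.quality + w_div * diversity
        best = max(candidates, key=score)
        candidates.remove(best)
        chosen.append(best)
        domain_counts[best.domain] = domain_counts.get(best.domain, 0) + 1
    return chosen

pool = [Trace(f"trace {i}", d, 0.5 + 0.1 * (i % 5), 0.6 + 0.05 * (i % 7))
        for i, d in enumerate(["math", "code", "logic"] * 10)]
print(len(select_traces(pool, budget=5)))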

Zhong, Z., X. Bi, and X. Tang, “MANTA: Multi-modal Abstraction and Normalisation via Textual Alignment” (Module in JMP), targeting Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (CORE ranking: A*, flagship AI conference)

Abstract: While multi-modal learning has advanced significantly, current approaches often treat modalities separately, creating inconsistencies in representation and reasoning. We introduce MANTA (Multi-modal Abstraction and Normalisation via Textual Alignment), a theoretically grounded framework that unifies visual and auditory inputs into a structured textual space for seamless processing with large language models. MANTA addresses three key challenges: (1) semantic alignment across modalities, (2) temporal synchronization, and (3) efficient retrieval of sparse information from long sequences. We provide a formal theoretical analysis of our approach, proving its optimality for context selection under information-theoretic constraints. Experimental results on the challenging task of Long Video Question Answering show that MANTA improves state-of-the-art models by up to 13.7% in overall accuracy, with particularly strong gains (16.9%) on long-duration videos. Our framework introduces a novel optimization method for redundancy minimization while preserving rare signals, demonstrating how structured textual representation can serve as a unifying abstraction for multi-modal reasoning.
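
The context-selection idea can be illustrated with a greedy relevance-versus-redundancy heuristic (MMR-style) over embedded segment descriptions, sketched below. This is an assumed stand-in for MANTA’s information-theoretic selection, not the actual method; the embeddings are random placeholders.

# Illustrative sketch: greedily select textual segment descriptions that are
# relevant to the question while penalising redundancy with segments already
# chosen (an MMR-style heuristic). Not MANTA's actual selection procedure.
import numpy as np

def greedy_select(query_vec, segment_vecs, k=5, lam=0.7):
    """Pick k segments maximising lam*relevance - (1-lam)*max similarity to chosen."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    chosen = []
    remaining = list(range(len(segment_vecs)))
    while remaining and len(chosen) < k:
        def mmr(i):
            relevance = cos(query_vec, segment_vecs[i])
            redundancy = max((cos(segment_vecs[i], segment_vecs[j]) for j in chosen),
                             default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        remaining.remove(best)
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
segments = rng.normal(size=(50, 64))   # placeholder embeddings of per-segment captions
query = rng.normal(size=64)            # placeholder question embedding
print(greedy_select(query, segments, k=5))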

Ye, H., S. Chen, Z. Zhong, Y. Zhang, H. Zhang, and J. Yao, “Seeing through the Conflict: Transparent Knowledge Conflict Handling in Retrieval-Augmented Generation” (Module in JMP), under review at the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) (CORE ranking: A*, flagship NLP conference)

Abstract: Large language models (LLMs) equipped with retrieval, the Retrieval-Augmented Generation (RAG) paradigm, should combine their parametric knowledge with external evidence, yet in practice they often hallucinate, over-trust noisy snippets, or ignore vital context. We introduce TCR (Transparent Conflict Resolution), a plug-and-play framework that makes this decision process observable and controllable. TCR (i) disentangles semantic match and factual consistency via dual contrastive encoders, (ii) estimates self-answerability to gauge confidence in internal memory, and (iii) feeds the three scalar signals to the generator through a lightweight soft prompt with SNR-based weighting. Across seven benchmarks, TCR improves conflict detection, raises knowledge-gap recovery by 21.4%, and cuts misleading-context overrides by 29.3%, while adding only 0.3% parameters. The signals align with human judgements and expose temporal decision patterns.
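
To illustrate the signal-combination step only: the sketch below weights the three scalar signals by a simple signal-to-noise estimate over a calibration batch before conditioning the generator. Both the SNR estimate and the placeholder "conditioning vector" are assumptions for illustration, not TCR’s implementation.

# Toy sketch of combining three scalar signals (semantic match, factual
# consistency, self-answerability) with SNR-based weights. The SNR estimate
# and the conditioning vector below are illustrative placeholders only.
import numpy as np

def snr_weights(signal_history):
    """Weight each signal by mean/std over a calibration batch (higher SNR -> larger weight)."""
    history = np.asarray(signal_history)          # shape: (batch, 3)
    snr = np.abs(history.mean(axis=0)) / (history.std(axis=0) + 1e-6)
    return snr / snr.sum()

def conditioning_vector(semantic_match, factual_consistency, self_answerability, weights):
    signals = np.array([semantic_match, factual_consistency, self_answerability])
    return weights * signals   # would be projected into soft-prompt embeddings downstream

calibration = np.random.default_rng(1).uniform(size=(256, 3))
w = snr_weights(calibration)
print("weights:", w.round(3))
print("conditioning:", conditioning_vector(0.82, 0.35, 0.61, w).round(3))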

+44 7865577777 / +86 17876000000
Z.Zhong6@lse.ac.uk
5.04 Marshall Building, London School of Economics, 44 Lincoln's Inn Fields, London WC2A 3LY


Copyright © Ziqi Zhong 2023. All rights reserved.
