Ziqi Zhong

Selected works

Zhong, Z. and X. Li, “Re-Visiting the Green Puzzle: The Effect of Eco-Positioning on Service Adoption” (2024), ISMS 2024, EMAC 2024, under review at Journal of the Academy of Marketing Science (FT50, AJG/ABS: 4*) [Full Paper] Abstract: Service providers are increasingly employing eco-positioning strategies to promote sustainable products. However, the effectiveness of these strategies in driving service adoption, particularly among consumers with varying levels of inertia, remains unexplored. Drawing on the elaboration likelihood model, this research investigates the differential effects of eco-positioning on inertial and new consumers in energy service adoption. In five studies, including a large-scale study of U.S. households’ energy consumption (N = 15,568) and four experimental studies (N = 1,078), we document a consistent inertia utility in green service adoption ($7.69 in the field, $7.44 in the lab), demonstrating that eco-positioning is more effective in increasing service adoption intentions among inertial consumers than among new consumers. Through experiments and topic modelling (BERTopic, LDA), we find that eco-positioning more effectively elicits a warm glow and reduces inertial consumers’ service concerns, while the sustainability liability effect has a stronger negative impact on new consumers. Through a novel discrete choice model, we quantify that a 3.12% claimed reduction in emissions is equivalent to a $1 incentive in motivating inertial consumers. These findings provide actionable guidance for service providers and policymakers in tailoring eco-positioning intensity to consumers’ inertia utility levels and leveraging quantitative benchmarks to optimise resource allocation between environmental and monetary incentives.
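
To make the discrete-choice equivalence concrete, here is a minimal sketch of the trade-off computation in a linear-in-attributes multinomial logit model. The coefficient values are hypothetical, chosen only so that the ratio reproduces the abstract's headline 3.12%-per-$1 figure; they are not the paper's estimates, and the paper's actual model is richer.

```python
# Hypothetical illustration of willingness-to-pay logic in a logit
# discrete choice model. Coefficients below are made up for exposition;
# they are NOT the paper's estimates.
beta_price = 0.4000      # utility per $1 of monetary incentive (assumed)
beta_emissions = 0.1282  # utility per 1% claimed emissions reduction (assumed)

# In a linear-in-attributes logit, the marginal rate of substitution
# between two attributes is the ratio of their coefficients.
dollars_per_point = beta_emissions / beta_price   # $ value of a 1% claim
points_per_dollar = beta_price / beta_emissions   # % claim matching $1

print(f"1% claimed reduction ~= ${dollars_per_point:.2f} incentive")
print(f"$1 incentive ~= {points_per_dollar:.2f}% claimed reduction")  # ~3.12%
```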

Zhong, Z. and B. Chen, “From Privacy Washing to Sustainable Data Strategies: A Theory-Based AI Approach” (JMP), AMA Winter 2025, EMAC 2025, ISMS 2025, targeting Journal of Marketing Research (UTD24, AJG/ABS: 4*) Abstract: We first develop analytical models showing that firms can enhance profits and consumer surplus by adopting sustainable data strategies. To validate the model and enable scalable policy measurement, we introduce PrivaAI, an AI-based agent built on a novel framework derived from regulation, the literature, expert input, and human ratings. PrivaAI leverages structured chain-of-thought reasoning and multimodal retrieval-augmented generation, fine-tuned on 1,418 human-labeled policies (8,508 evaluations) to align with consumer perceptions. Applying it to the policies of 10 million websites, classified by Open PageRank and GICS sector, we find: (i) lower-ranked providers systematically underperform, consistent with the model’s assumption; (ii) marked industry heterogeneity; and (iii) regulatory asymmetries: under weaker regulation (CCPA), higher-ranked firms often show uneven improvements suggesting privacy washing, whereas stronger regulation (GDPR) yields broader, more balanced improvements. A pre-registered lab experiment with 610 US-representative participants further shows that PrivaAI evaluations explain intentions regarding usage, data sharing, and trust, and validates that the features it flags as suggestive of privacy washing align with consumer perceptions. Together, these findings provide a scalable foundation and actionable insights for benchmarking practices, detecting privacy washing, and guiding firms and regulators toward sustainable data strategies that reinforce long-term trust and social responsibility.
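
A hedged sketch of the rubric-style scoring loop that an agent like PrivaAI implies. The rubric dimensions and the llm_score() stub below are illustrative assumptions, not the actual framework: the real system uses chain-of-thought prompting, multimodal retrieval-augmented generation, and a model fine-tuned on the human-labeled policies described above.

```python
# Illustrative sketch of rubric-based privacy-policy scoring. The rubric
# dimensions and the llm_score() stub are assumptions for exposition,
# not the actual PrivaAI framework.
RUBRIC = {
    "collection_minimization": "Is data collection limited to what the service needs?",
    "third_party_sharing": "Is sharing with third parties clearly disclosed?",
    "user_control": "Can users access, correct, and delete their data?",
    "retention": "Are retention periods specific and justified?",
}

def llm_score(policy_text: str, question: str) -> float:
    """Placeholder for the fine-tuned model's rating on a 1-5 scale.
    A real implementation would prompt the model with the policy text,
    the rubric question, and retrieved regulatory context (GDPR/CCPA)."""
    return 3.0  # dummy mid-scale rating

def evaluate_policy(policy_text: str) -> dict:
    scores = {dim: llm_score(policy_text, q) for dim, q in RUBRIC.items()}
    scores["overall"] = sum(scores.values()) / len(RUBRIC)
    return scores

print(evaluate_policy("We collect only the data required to provide the service ..."))
```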

Zhong, Z. and X. Tang, “ReasonBridge: Efficient Reasoning Transfer from Closed to Open-Source Language Models” (Module in JMP), under review at Transactions of the Association for Computational Linguistics (TACL) (flagship NLP journal, #1 in JCR Linguistics) [Full Paper] Abstract: Recent advancements in Large Language Models (LLMs) have revealed a significant performance gap between closed-source and open-source models, particularly in tasks requiring complex reasoning and precise instruction following. This paper introduces ReasonBridge, a methodology that efficiently transfers reasoning capabilities from powerful closed-source models to open-source models through a novel hierarchical knowledge distillation framework. We develop a tailored dataset, Reason1K, with only 1,000 carefully curated reasoning traces emphasizing difficulty, diversity, and quality. These traces are filtered from multiple domains using a structured multi-criteria selection algorithm. Our transfer learning approach incorporates: (1) a hierarchical distillation process capturing both strategic abstraction and tactical implementation patterns, (2) a sparse reasoning-focused adapter architecture requiring only 0.3% additional trainable parameters, and (3) a test-time compute scaling mechanism using guided inference interventions. Comprehensive evaluations demonstrate that ReasonBridge improves reasoning capabilities in open-source models by up to 23% on benchmark tasks, significantly narrowing the gap with closed-source models. Notably, the enhanced Qwen2.5-14B outperforms Claude-3.5-Sonnet on MATH500 and matches its performance on competition-level AIME problems. Our methodology generalizes effectively across diverse reasoning domains and model architectures, establishing a sample-efficient approach to reasoning enhancement for instruction following.
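
The "0.3% additional trainable parameters" budget is in the range of standard low-rank adapters. The sketch below is a generic LoRA-style construction, not the paper's exact adapter design, showing how such a parameter budget arises when the base weights are frozen and only two small low-rank matrices are trained.

```python
import torch
import torch.nn as nn

# Generic low-rank adapter sketch (assumed design, not the paper's).
class LowRankAdapter(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weight
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

layer = LowRankAdapter(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.4%}")  # ~0.39% at rank 8
```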

Zhong, Z. and X. Tang, “MANTA: Cross-Modal Semantic Alignment and Information-Theoretic Optimization for Long-form Multimodal Understanding” (Module in JMP), under review at Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (CORE rankings: A*, flagship AI conference) [Full Paper] Abstract: While multimodal learning has advanced significantly, current approaches often treat modalities separately, creating inconsistencies in representation and reasoning. We introduce MANTA (Multi-modal Abstraction and Normalization via Textual Alignment), a theoretically grounded framework that unifies visual and auditory inputs into a structured textual space for seamless processing with large language models. MANTA addresses four key challenges: (1) semantic alignment across modalities with information-theoretic optimization, (2) adaptive temporal synchronization for varying information densities, (3) hierarchical content representation for multi-scale understanding, and (4) context-aware retrieval of sparse information from long sequences. We formalize our approach within a rigorous mathematical framework, proving its optimality for context selection under token constraints. Extensive experiments on the challenging task of Long Video Question Answering show that MANTA improves state-of-the-art models by up to 22.6% in overall accuracy, with particularly significant gains (27.3%) on videos exceeding 30 minutes. Additionally, we demonstrate MANTA's superiority on temporal reasoning tasks (23.8% improvement) and cross-modal understanding (25.1% improvement). Our framework introduces novel density estimation techniques for redundancy minimization while preserving rare signals, establishing new foundations for unifying multimodal representations through structured text.
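
To give a flavour of context selection under a token budget, here is a greedy value-density heuristic, a standard knapsack approximation. It illustrates the problem shape only; the paper proves optimality for its own formulation, which this sketch does not reproduce, and the segment scores below are invented.

```python
# Greedy token-budgeted context selection (standard knapsack heuristic,
# used purely for illustration of the problem the paper formalizes).
def select_context(segments, budget):
    """segments: list of (text, relevance_score, token_cost) tuples."""
    ranked = sorted(segments, key=lambda s: s[1] / s[2], reverse=True)
    chosen, used = [], 0
    for text, score, cost in ranked:
        if used + cost <= budget:  # keep segment if it fits the budget
            chosen.append(text)
            used += cost
    return chosen

segments = [
    ("speaker introduces the main claim", 0.9, 120),
    ("long sponsor message", 0.1, 300),
    ("key counter-example at 31:40", 0.8, 90),
]
print(select_context(segments, budget=256))
```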

Ye, H., S. Chen, and Z. Zhong, “Seeing through the Conflict: Transparent Knowledge Conflict Handling in Retrieval-Augmented Generation” (Module in JMP), under review at Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (CORE rankings: A*, flagship AI conference) Abstract: Large language models (LLMs) equipped with retrieval, the Retrieval-Augmented Generation (RAG) paradigm, should combine their parametric knowledge with external evidence, yet in practice they often hallucinate, over-trust noisy snippets, or ignore vital context. We introduce TCR (Transparent Conflict Resolution), a plug-and-play framework that makes this decision process observable and controllable. TCR (i) disentangles semantic match and factual consistency via dual contrastive encoders, (ii) estimates self-answerability to gauge confidence in internal memory, and (iii) feeds the three scalar signals to the generator through a lightweight soft prompt with SNR-based weighting. Across seven benchmarks, TCR improves conflict detection, raises knowledge-gap recovery by 21.4%, and cuts misleading-context overrides by 29.3%, while adding only 0.3% additional parameters. The signals align with human judgements and expose temporal decision patterns.
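
A minimal sketch of how three scalar signals can reach a generator through a lightweight soft prompt. All dimensions are assumed, and the deviation-based weighting below is only a stand-in for the paper's SNR-based scheme.

```python
import torch
import torch.nn as nn

# Illustrative soft-prompt injection of scalar control signals.
# Dimensions and the weighting rule are assumptions for exposition,
# not the TCR implementation.
class SignalSoftPrompt(nn.Module):
    def __init__(self, hidden_size: int, n_signals: int = 3, n_tokens: int = 4):
        super().__init__()
        self.proj = nn.Linear(n_signals, n_tokens * hidden_size)
        self.n_tokens, self.hidden = n_tokens, hidden_size

    def forward(self, signals: torch.Tensor) -> torch.Tensor:
        """signals: (batch, 3) = [semantic_match, factual_consistency,
        self_answerability]; returns (batch, n_tokens, hidden) prompt
        embeddings to prepend to the generator's input sequence."""
        # Stand-in for SNR weighting: emphasize signals that deviate
        # from an uninformative mid-scale value of 0.5.
        snr = (signals - 0.5).abs() / 0.5
        weighted = signals * torch.softmax(snr, dim=-1)
        return self.proj(weighted).view(-1, self.n_tokens, self.hidden)

prompt = SignalSoftPrompt(hidden_size=768)
out = prompt(torch.tensor([[0.92, 0.31, 0.55]]))
print(out.shape)  # torch.Size([1, 4, 768])
```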

+44 7865577777
+86 17876000000
Z.Zhong6@lse.ac.uk
5.04 Marshall Building, London School of Economics, 44 Lincoln's Inn Fields, London WC2A 3LY
