Memory-Efficient Diffusion Image Generation: A Hybrid Framework for Low-Resource Environments

Keywords:

Text-to-Image Generation, Low-Resource AI, Image Generation Optimization

Abstract

This paper presents a hardware-efficient framework for optimizing text-to-image diffusion models in low-resource environments. Experiments were run solely on CPUs and free-tier cloud services using Stable Diffusion 1.5 and SDXL-Turbo. The hybrid pipeline (SD1.5 → SDXL-Turbo) combines the compositional stability of SD1.5 with the detail refinement of SDXL-Turbo, achieving an LPIPS score of 0.3716 compared to 0.6743 for SDXL-Turbo alone. A pixel-average ensemble algorithm produced smoother images but at the cost of greater perceptual difference. The framework demonstrates that high-quality image generation and viable quantitative evaluation are possible without resource-intensive GPUs, providing practical guidance to resource-constrained researchers and developers.
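The pixel-average ensemble mentioned in the abstract can be sketched as below. This is a minimal illustration, not the authors' implementation: the function name, the grayscale list-of-lists image representation, and the equal-weight averaging are all assumptions made for clarity.

```python
def pixel_average_ensemble(images):
    """Average corresponding pixel values across several candidate images.

    `images` is a list of same-sized images, each represented as a list of
    rows of (grayscale) pixel values. Per-pixel averaging smooths the
    output, which is consistent with the paper's observation that the
    ensemble yields smoother images but a larger perceptual (LPIPS)
    difference, since averaging tends to blur fine detail.
    """
    n = len(images)
    height, width = len(images[0]), len(images[0][0])
    return [
        [sum(img[y][x] for img in images) / n for x in range(width)]
        for y in range(height)
    ]


# Averaging two tiny 1x2 "images":
blended = pixel_average_ensemble([[[0, 2]], [[2, 4]]])
# blended == [[1.0, 3.0]]
```

In practice the same operation would be applied per channel to RGB arrays (e.g. via NumPy), but the scalar version above captures the idea.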

Published

2026-05-06

How to Cite

Mukiibi, M., Singh, K. B., Ali, H. F. M., & Al-Absi, A. A. (2026). Memory-Efficient Diffusion Image Generation: A Hybrid Framework for Low-Resource Environments. Environment-Behaviour Proceedings Journal, 11(37). Retrieved from https://ebpj.e-iph.co.uk/index.php/EBProceedings/article/view/7889