Memory-Efficient Diffusion Image Generation: A Hybrid Framework for Low-Resource Environments
Keywords: Text-to-Image Generation, Low-Resource AI, Image Generation Optimization

Abstract
This paper presents a hardware-efficient framework for optimizing text-to-image diffusion models in low-resource environments. All experiments were run solely on CPUs and free-tier cloud services using Stable Diffusion 1.5 (SD1.5) and SDXL-Turbo. The hybrid pipeline (SD1.5 → SDXL-Turbo) combines the compositional stability of SD1.5 with the detail refinement of SDXL-Turbo, achieving an LPIPS score of 0.3716 versus 0.6743 for SDXL-Turbo alone. A pixel-average ensemble algorithm produced smoother images but larger perceptual differences. The framework demonstrates that high-quality image generation and meaningful quantitative evaluation are achievable without resource-intensive GPUs, offering practical guidance to resource-constrained researchers and developers.
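The pixel-average ensemble mentioned in the abstract can be sketched as follows. This is a minimal illustration of pixel-wise averaging of two model outputs, not the paper's actual code; the function name `pixel_average_ensemble` and the dummy arrays are assumptions for demonstration.

```python
import numpy as np

def pixel_average_ensemble(images):
    """Average a list of same-shaped uint8 images pixel-wise.

    Hypothetical helper sketching the pixel-average ensemble idea:
    stack the images, take the mean per pixel, and convert back to uint8.
    """
    stacked = np.stack([img.astype(np.float32) for img in images])
    return np.clip(stacked.mean(axis=0), 0, 255).astype(np.uint8)

# Example: blend an SD1.5 output with an SDXL-Turbo output
# (dummy constant-valued arrays stand in for real generations here).
a = np.full((2, 2, 3), 100, dtype=np.uint8)
b = np.full((2, 2, 3), 200, dtype=np.uint8)
blended = pixel_average_ensemble([a, b])  # every pixel -> 150
```

Averaging in float32 before casting back avoids the overflow and truncation that naive uint8 arithmetic would introduce.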
License
Copyright (c) 2026 MOSES MUKIIBI, Dr Khadak, Hussein Fouad Mohamed Ali, Ahmed Abdulhakim Al-Absi

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.