
🚀 Z-Engineer V4 (4B)

The Z-Engineer returns — now with a PhD in "not being mid."

This is Z-Engineer V4, the culmination of extensive research into what makes an AI prompt engineer actually good at its job. Built on the Qwen 3 architecture and trained using a novel SMART Training methodology, this 4B parameter model doesn't just describe scenes—it understands the craft of visual storytelling down to the lens flare.


🧠 What is this?

Z-Engineer V4 is a fully fine-tuned (not LoRA, we went all in) version of the text encoder from Tongyi-MAI/Z-Image-Turbo. It's been specifically trained to understand the nuances of AI Image Generation workflows.

It excels at:

  • Expanding Concepts: Turn "sad robot in rain" into a cinematic fever dream with chromatic aberration, shallow depth of field, and a melancholic color grade that would make Blade Runner jealous.
  • Technical Precision: It knows the difference between an 85mm portrait lens and a 24mm wide—and will use them appropriately. Lighting? Rembrandt, split, volumetric fog? It's got opinions.
  • Stylistic Consistency: It writes with a creative voice, not that robotic "hyperrealistic, 8k, trending on artstation" energy.

🔑 Key Use Cases

  • Prompt Enhancement: A low-VRAM powerhouse for turning your braindead 3AM ideas into detailed visual narratives.
  • 🔌 Z-Image Turbo Encoder: Fully backward compatible as a drop-in CLIP text encoder for Z-Image Turbo workflows, producing varied and unique results from the same seed.
  • 🛡️ Local & Private: Runs entirely on your machine. No API fees, no data logging, no corporate overlords judging your prompts.
  • Hybrid Power: Use it to expand a prompt, then use the model itself as the encoder for generation. It's turtles all the way down.

🧬 What's New in V4: SMART Training

This version introduces SMART Training (Smart Mode with Adaptive Regularization Topologer)—a custom training methodology that goes beyond standard cross-entropy optimization.

The secret sauce? Four auxiliary regularizers that operate on hidden states, logits, and weight matrices:

| Regularizer | What It Does | Why It Matters |
|---|---|---|
| Entropic | Prevents mode collapse, encourages diversity | No more repetitive "cinematic, 8k, masterpiece" loops |
| Holographic | Enforces depth-wise information compression | Clean feature hierarchy from surface to abstract |
| Topological | Encourages coherent latent trajectories | Prompts flow logically instead of word salad |
| Manifold | Stabilizes weight distributions | Rock-solid training dynamics |

The result? A model that generalizes better, outputs more varied responses, and doesn't collapse into repetitive patterns even after 55,000 training examples.
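The exact SMART regularizers aren't published, but the entropic term can be sketched in miniature. The following is an illustrative, assumption-heavy toy (the function name, the weight, and the exact formulation are mine, not the training code's): a penalty equal to the negative mean token entropy, so adding it to the cross-entropy loss nudges the model away from collapsed, one-hot-like output distributions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropic_penalty(token_logits, weight=0.01):
    """Illustrative entropy regularizer: the penalty is the negative mean
    token entropy, so adding it to the loss rewards more diverse (higher
    entropy) output distributions and discourages mode collapse."""
    total = 0.0
    for logits in token_logits:
        p = softmax(logits)
        total += -sum(pi * math.log(pi + 1e-12) for pi in p)
    return -weight * (total / len(token_logits))

# A near-uniform distribution earns a lower (better) penalty than a
# collapsed, one-hot-like distribution.
uniform = [[0.0] * 8]
peaky = [[10.0, 0, 0, 0, 0, 0, 0, 0]]
assert entropic_penalty(uniform) < entropic_penalty(peaky)
```

In a real trainer this term would be computed on the model's logits and summed into the loss alongside cross-entropy; the other three regularizers operate analogously on hidden states and weight matrices.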


📉 Key Improvements Over V2.5

  • Full Fine-Tune: V2.5 was a merged LoRA. V4 is a full parameter fine-tune—every single weight has been updated.
  • Bigger Dataset: Trained on 55,000 examples (vs 34,678 for V2.5)—60% more data.
  • SMART Regularization: Novel training methodology that actively prevents the failure modes that plagued earlier versions.
  • Longer Training: 7,500+ optimizer steps with extensive validation checkpointing.
  • Loss Reduction: 55% decrease in validation loss (2.80 → 1.27) compared to baseline.

🔌 ComfyUI Integration (Recommended)

I have a custom node for seamless integration with ComfyUI:

  • Features: Optimized for local OpenAI-compatible API backends (LM Studio, Ollama, etc.)
  • Get it here: ComfyUI-Z-Engineer

📝 Recommended System Prompt

For best results, use this system prompt:

```
Interpret the user seed as production intent, then build a definitive 200-250 word single-paragraph image prompt that preserves every explicit constraint while intelligently expanding missing details. First infer the core subject, action, setting, and emotional tone; treat these as non-negotiable anchors. Then enhance with precise visual staging (explicit foreground, midground, background), clear visual hierarchy and eye path, physically plausible lighting (source, direction, softness, color temperature), and optical strategy (if lens/aperture are provided, preserve exactly; if absent, choose fitting lens and aperture and imply their depth-of-field effect). Integrate organic, manufactured, and environmental textures with realistic material behavior, add motion/atmospheric cues only when they support the scene, and apply a coherent color grade consistent with mood and environment. Keep the prose vivid but controlled: no contradictions, no overstuffing, no generic filler. Do not mention camera body brands. Output one polished paragraph only, no bullets, no line breaks, no meta commentary.
```
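If you're driving the model from your own script rather than the ComfyUI node, the system prompt slots into a standard OpenAI-style chat payload. A minimal sketch (the model name, temperature, and endpoint below are placeholders, not fixed by this repo):

```python
import json

# Paste the full recommended system prompt from above here.
SYSTEM_PROMPT = "Interpret the user seed as production intent, then build ..."

def build_request(seed, model="z-engineer-v4", temperature=0.7):
    """Pair the system prompt with a short user seed in an OpenAI-style
    chat payload. Model name and temperature are illustrative; use
    whatever your backend expects."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": seed},
        ],
    }

# POST the payload to an OpenAI-compatible endpoint, e.g. LM Studio's
# default http://localhost:1234/v1/chat/completions (port may differ).
print(json.dumps(build_request("sad robot in rain"), indent=2))
```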


💻 Training Facts

I believe in open science. Here's exactly how this was built:

Hardware:

  • Trained locally on an AMD Strix Halo system (Ryzen AI Max+ 395, 128GB Unified RAM)
  • AMD Radeon 8060S Graphics (ROCm/HIP)

Dataset:

  • Size: 55,000 high-quality examples
  • 25,000 Vision-Grounded Samples: Real professional photographs transcribed into the training format using Qwen3-VL-30B-A3B—teaching the model what actually good cinematography looks like
  • 30,000 Synthetic Samples: Generated prompt enhancement pairs for diverse concept coverage
  • Content: Curated mix teaching the model to extrapolate seed concepts into cinematic prompts grounded in real photographic technique

Training Configuration:

| Parameter | Value |
|---|---|
| Method | Full Fine-Tune (not LoRA) |
| Base Model | Qwen3-4b-Z-Image-Turbo-AbliteratedV1 |
| Optimizer Steps | 7,500+ |
| Batch Size | 2 × 8 accumulation = 16 effective |
| Learning Rate | 1e-5 (cosine decay with 5% warmup) |
| Precision | BFloat16 |
| Sequence Length | 640 tokens |
| Total Training Time | ~90 hours |
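For the curious, "cosine decay with 5% warmup" looks roughly like this, a sketch assuming linear warmup to the 1e-5 peak over the first 5% of steps and cosine decay to zero afterwards (the exact schedule implementation isn't published):

```python
import math

def lr_at(step, total_steps=7500, peak_lr=1e-5, warmup_frac=0.05, min_lr=0.0):
    """Cosine-decay learning-rate schedule with linear warmup, matching
    the 1e-5 peak and 5% warmup from the table (details assumed)."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        # Linear ramp from ~0 up to the peak learning rate.
        return peak_lr * (step + 1) / warmup_steps
    # Cosine decay from the peak down to min_lr over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + (peak_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * progress))
```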

📦 GGUF & Quantization

I provide a full suite of GGUF quantizations for use with llama.cpp, Ollama, and LM Studio:

| Quantization | Size | Notes |
|---|---|---|
| F16 | 8.0 GB | Full precision, maximum quality |
| Q8_0 | 4.3 GB | Near-lossless, recommended for most users |
| Q6_K | 3.3 GB | Great balance of quality and size |
| Q5_K_M | 2.9 GB | Good quality, smaller footprint |
| Q5_K_S | 2.8 GB | Slightly smaller Q5 variant |
| Q4_K_M | 2.5 GB | Solid 4-bit, good for VRAM-limited setups |
| Q4_K_S | 2.4 GB | Smaller 4-bit variant |
| Q3_K_L | 2.2 GB | Lower-quality 3-bit, for the desperate |
| Q3_K_M | 2.1 GB | Medium 3-bit |
| Q2_K | 1.7 GB | Emergency-only tier. But it exists! |
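Not sure which file to grab? A simple rule of thumb, sketched below using the sizes from the table: pick the largest quant that fits your memory budget with some headroom left for context/KV cache. The 1 GB headroom default is my assumption, not a measured number; actual usage depends on context length and backend.

```python
# File sizes from the table above, listed largest (highest quality) first.
QUANTS = [
    ("F16", 8.0), ("Q8_0", 4.3), ("Q6_K", 3.3), ("Q5_K_M", 2.9),
    ("Q5_K_S", 2.8), ("Q4_K_M", 2.5), ("Q4_K_S", 2.4),
    ("Q3_K_L", 2.2), ("Q3_K_M", 2.1), ("Q2_K", 1.7),
]

def pick_quant(budget_gb, headroom_gb=1.0):
    """Return the highest-quality quant whose file fits in the memory
    budget after reserving headroom for context, or None if nothing fits.
    The headroom figure is an illustrative assumption."""
    usable = budget_gb - headroom_gb
    for name, size_gb in QUANTS:
        if size_gb <= usable:
            return name
    return None

assert pick_quant(10.0) == "F16"   # 9.0 GB usable fits the 8.0 GB F16
assert pick_quant(6.0) == "Q8_0"   # 5.0 GB usable -> Q8_0 (4.3 GB)
assert pick_quant(2.0) is None     # 1.0 GB usable fits nothing
```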

🎯 Quick Start

With Ollama:

```shell
ollama run BennyDaBall/Qwen3-4b-Z-Image-Engineer-V4
```

With LM Studio:

  1. Download the GGUF of your choice
  2. Load it in LM Studio
  3. Use the ComfyUI node or chat directly

⚠️ Disclaimer

This model generates text for image prompts. While I have filtered the dataset to the best of my ability, users should exercise their own judgment. I am not responsible for the content you generate.

Also, if you use this to generate prompts for images that get you in trouble, that's a you problem. The model is just vibing.


🙏 Acknowledgements

  • Qwen Team for the excellent base architecture
  • Tongyi-MAI for Z-Image-Turbo
  • The open source AI community for making this kind of work possible
  • My electricity bill, which now classifies me as a small industrial facility

Built with ❤️ and way too much GPU time by BennyDaBall