After a long period of underlying refactoring and training on tens of millions of data points, ChenkinNoob-XL-V0.5 is officially released!
As the latest major milestone from the ckn Mainline Lab, V0.5 completely shakes off the "cheap AI-generated look" of traditional anime models and truly steps up to an industrial-grade productivity standard. This update not only takes a massive lead in the breadth of its data knowledge base but also achieves a qualitative leap in compositional tension and lighting texture.
Independence Statement: The ChenkinNoob team operates independently; this project is not an official release of Laxhar Lab, but is built upon their excellent open-source base model.
Key Upgrades
1. Comprehensive Evolution of Data and Cognition
To wash away the homogenized "AI look" common on the market, we made drastic reforms on the data side. We removed datasets that deviated too far from the core 2D anime distribution, and on top of V0.2 we added 2.17 million strictly filtered open-source game concept designs and high-quality Western datasets (core data cutoff: January 2026). This not only massively expands the model's total knowledge base but also lets it effortlessly master the latest trending art styles and popular characters.
2. True Productivity for Game Devs
ckn is by no means just a "gacha toy." During the R&D of V0.5, we collaborated deeply with real AI game development teams, directly listening to the pain points of frontline concept artists and lead artists. The model's understanding of complex clothing, specific perspectives, and character designs has significantly improved, making it fully capable of integrating into modern game art workflows.
3. Rebuilding the Underlying Training Architecture
Facing a dataset of tens of millions, we completely abandoned the original open-source training scripts and built ckn's exclusive underlying training architecture from scratch. This resulted in an epic improvement in our training efficiency! At the same time, we fully matured the Hierarchical Dropout and Repeat Tag Resampling strategies explored in V0.3-BETA, endowing the model with extremely strong generalization capabilities.
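The release notes do not detail how Hierarchical Dropout or Repeat Tag Resampling work internally. Under one common interpretation of hierarchical dropout in tag-based caption training (whole tag groups are dropped with independent, per-group probabilities so the model still performs well from sparse prompts), a rough sketch might look like the following. The group names, probabilities, and function are illustrative assumptions, not ckn's actual recipe, and repeat tag resampling (reweighting samples with over-represented tags) is not shown:

```python
import random

# Illustrative sketch of "hierarchical dropout" for tag captions:
# each tag GROUP (quality, style, character, general) is dropped as a
# whole with its own probability. All names/values are assumptions.

def dropout_caption(
    groups: dict[str, list[str]],
    drop_p: dict[str, float],
    rng: random.Random,
) -> str:
    """Build a training caption, dropping whole tag groups at random."""
    kept: list[str] = []
    for name, tags in groups.items():
        # keep the group when the random draw clears its drop probability
        if rng.random() >= drop_p.get(name, 0.0):
            kept.extend(tags)
    return ", ".join(kept)

# Example: quality tags always dropped, general tags always kept.
caption = dropout_caption(
    {"quality": ["masterpiece"], "general": ["1girl", "solo"]},
    {"quality": 1.0, "general": 0.0},
    random.Random(0),
)
```

With probabilities between 0 and 1, each training epoch sees a different subset of tag groups, which is what drives the generalization claimed above.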
4. Exclusive UniControl Ecosystem
Alongside the release of V0.5, we have open-sourced a killer ControlNet trained on the ckn V0.5 base model: Chenkin-UniControl-XL.
It fuses 8 control modes (lineart, depth, pose, etc.) into a single base model and pioneers the Fuse (Multi-Condition Fusion Control) feature. It achieves precise control without polluting the original art style, all at extremely low VRAM usage. (Note: This requires pairing with our dedicated ComfyUI advanced node.)
Roadmap
The ckn ecosystem is expanding rapidly:
Ecosystem Expansion: Currently, the IP-Adapter (IPA), Style Transfer, and Character Transfer models based on V0.5 have officially entered the training schedule.
Multimodal & New Architectures: The Chenkin Edit Lab's image editing model is under intensive development. Meanwhile, we are actively researching entirely new model architectures: this year, we will absolutely not stop at training SDXL!
Recommended Settings
To achieve the best generation results, please refer to the following settings:
CFG Scale: 5 ~ 6
Steps: 25 ~ 30
Sampler: Euler a (or comparable ancestral samplers)
Resolution: Total pixel area around 1024x1024 (e.g., 832x1216, 1024x1024, 1216x832, etc.)
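The settings above can be collected into a small generation config helper. This is a minimal sketch, not an official loader: only the numeric values (CFG 5–6, steps 25–30, Euler a, ~1 MP resolutions) come from the release notes, while the function and tolerance are illustrative assumptions:

```python
# Recommended V0.5 settings from the release notes; the dict keys and
# the validation helper are illustrative, not an official API.
RECOMMENDED = {
    "cfg_scale": 5.5,   # recommended range: 5 ~ 6
    "steps": 28,        # recommended range: 25 ~ 30
    "sampler": "Euler a",
}

def make_config(width: int, height: int) -> dict:
    """Return a generation config, checking that the total pixel area
    stays near the 1024x1024 (~1 megapixel) training resolution."""
    target = 1024 * 1024
    area = width * height
    # allow ~5% deviation from the ~1 MP training area (assumed tolerance)
    if abs(area - target) / target > 0.05:
        raise ValueError(f"{width}x{height} is far from the ~1 MP training area")
    return {**RECOMMENDED, "width": width, "height": height}

# All three suggested aspect ratios pass the area check:
for w, h in [(832, 1216), (1024, 1024), (1216, 832)]:
    make_config(w, h)
```

The same numbers map directly onto common UIs (e.g., `guidance_scale` and `num_inference_steps` in diffusers-based pipelines, or the CFG/steps fields in WebUI frontends).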
Prompting Guide
Positive Prompt Seeds:
masterpiece, best quality, newest, high resolution, aesthetic, excellent, year 2026,
(Note: V0.5 still retains the comprehensive Quality Tags and Date Tags system. You can precisely control image quality using tags like aesthetic, excellent, newest, etc.)
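As a convenience, the seed tags above can be prepended to a subject prompt programmatically. The tag string is taken verbatim from the release notes; the helper function itself is a hypothetical example, not part of any official tooling:

```python
# Positive prompt seed, verbatim from the V0.5 release notes.
SEED_TAGS = ("masterpiece, best quality, newest, high resolution, "
             "aesthetic, excellent, year 2026")

def build_prompt(subject: str) -> str:
    """Prepend the recommended quality/date tags to a subject prompt."""
    return f"{SEED_TAGS}, {subject}"

prompt = build_prompt("1girl, solo, outdoors")
```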
V0.5 Tag System
[TBD: The latest table defining resolution, quality, and year tags for V0.5]
Community & Support
Join our community to get the latest updates, share your artworks, or report issues:
This model inherits the fair-ai-public-license-1.0-sd license from noobai-XL-1.1.
Core Guidelines:
Any unauthorized or illegal commercial use is strictly prohibited.
Disclaimer: Do not use the model to generate illegal, harmful, or unethical content. Users assume all legal responsibility for the content they generate and its consequences.
Participants and Contributors
The ckn Team
We are an independent, passionate, collaborative, and continuously improving open-source geek team. We consist of:
Mainline Lab: Responsible for the stable iteration of major foundational versions like V0.5.
Frontier Tech Lab: Responsible for R&D of the latest architectures and UniControl.
Image Editing Lab: Exploring the multimodal future such as inpainting and style transfer.
Platform Support: Special thanks to ModelScope for their support in the release of V0.5.
Ecosystem Co-creation: A grand thank you to our deep partner—the Deconstruct Original (解构原典) community. As a hardcore anime AI community active since 2023, you provided the most authentic feedback and the hottest promotional battleground.
Core Ecosystem Contributors: Thanks to MIAOKA (喵咔), silvermoong (银月), nian__gao233 (年糕), yuno779 (九月) for their immense efforts in model testing, semantic alignment, and ecosystem building.
Technical Advisors (Laxhar Lab): Special thanks to @LAX and @Nebulae from Laxhar Lab for serving as long-term advisors and providing continuous guidance on model design and training.
Art Advisors: Thanks to MLiang, BLACKDUO, and Sdwang for their professional guidance on model aesthetics and industrial-grade implementation.
V0.5 Contributors:
Discord Beta Testers: Thanks to Bluvoll, Anzhc, Drac (Special Thanks), talan, Panchovix, itterative, Ryusho, Ly, Silvelter, and others for their invaluable feedback during the V0.3~V0.4 closed testing phase.
QQ Group Testers & Visual Support: Thanks to heathcliff, boundless, 2222k, suqingwei114514, 三费武装白色人种, vv--laov, and others for providing cover art, and thanks to all group members for participating in the model's closed testing!
Community Support & Feedback: Thanks to 孤辰, 昊天, 米豆粒, 乾杯君 (Snke), 砚青, 双月丸‖soutsukimaru, 大尾立人间体, 青空, 喵九 (Kojya), and others for their enthusiastic support and help during the R&D and testing of V0.5.
Cover Design and Promotion Support: Thanks to poi, neko, MMX and others.
Open-Source Pioneers
The development of ckn relies on the exploration of predecessors in the open-source community. Special thanks to the following teams and individuals for laying the foundation of the anime AI ecosystem:
AngelBottomless: Thanks to the core contributor of the Illustrious series for providing an excellent foundation and guidance to the open-source community.
DeepGHS: Thanks to the deepghs team for open-sourcing various training sets, image processing tools, and models.
OnomaAI: Thanks to OnomaAI for open-sourcing their powerful base model.
Mikubill: Thanks for developing the Naifu trainer.
"The romance of open source lies in the fact that you are never fighting alone. Join us to define the future of anime AI!"