🧠 Model

Tifa-Deepsex-14b-CoT-GGUF-Q4

by ValueFX9507

- **Base model**: deepseek-ai/deepseek-r1-14b
- **Languages**: Chinese (zh), English (en)
- **Library**: transformers
- **Tags**: incremental-pretraining, sft, reinforcement-learning, roleplay, cot, sex
- **License**: apache-2.0

🕐 Updated 12/18/2025


About

- **HF Model**: ValueFX9507/Tifa-Deepsex-14b-CoT
- **GGUF**: F16 | Q8 (Q4 loses noticeable quality; Q8 recommended)
- **Demo APK**: click to download
- **Simple frontend**: GitHub link

This model is a deeply optimized build of Deepseek-R1-14B. Trained on datasets generated by Tifa_220B using a triple training strategy, it significantly strengthens roleplay, novel-style text generation, and chain-of-thought (CoT) capabilities. It is especially suited to creative scenarios that require long-range context.

- **Algorithms and compute provided by Shanghai Zuobei Technology** (company website)
- **GRPO algorithm shared by the DeepSeek team**
- **Excellent open-source base provided by the Qwen team**

...
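The Q4-vs-Q8 recommendation above comes down to how many bits each weight keeps on disk. A minimal sketch of the arithmetic, assuming approximate bits-per-weight figures for common GGUF quant types (exact values vary by quant mix):

```python
# Rough GGUF file-size estimate for a 14B-parameter model.
# Bits-per-weight values below are approximations, not exact quant specs.
PARAMS = 14e9  # parameter count for a 14B model

def gguf_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate on-disk size in GB (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9

for name, bpw in [("F16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
    print(f"{name}: ~{gguf_size_gb(bpw):.1f} GB")
```

The halving from ~15 GB (Q8) to ~8.5 GB (Q4) is what buys the smaller download, at the quality cost the model card warns about.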

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • Data source: huggingface — https://huggingface.co/ValueFX9507/Tifa-Deepsex-14b-CoT-GGUF-Q4 (fetched 2025-12-18T04:21:59Z, adapter v3.2.0)
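The note that VRAM usage depends on quantization and batch size can be made concrete with the standard KV-cache formula. A hedged sketch; the architecture numbers below (48 layers, 8 KV heads, head_dim 128) are assumptions for a 14B-class base and should be checked against the model's actual config:

```python
# Hedged VRAM sketch: total usage is roughly weight size + KV cache.
# Layer/head counts are ASSUMED for a 14B-class model, not taken from
# this model's published config.

def kv_cache_gb(context_len: int, n_layers: int = 48, n_kv_heads: int = 8,
                head_dim: int = 128, bytes_per_elem: int = 2,
                batch: int = 1) -> float:
    """Size in GB of the K and V tensors (factor 2) at fp16 precision."""
    return (2 * n_layers * n_kv_heads * head_dim
            * context_len * bytes_per_elem * batch) / 1e9

# On top of ~15 GB of Q8 weights, an 8k context adds roughly:
print(f"KV cache @ 8192 ctx: ~{kv_cache_gb(8192):.1f} GB")
```

Because the cache grows linearly with both context length and batch size, long-context creative sessions can add several GB beyond the weight footprint.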

📚 Related Resources

📄 Related Papers

No related papers linked yet. Check the model's official documentation for research papers.

📊 Training Datasets

Training data information not available. Refer to the original model card for details.

🔗 Related Models

Data unavailable
