🧠 Model

Mixtral-8x7B-Instruct-v0.1

by mistralai

• Library: vllm
• Languages: French, Italian, German, Spanish, English
• License: Apache 2.0
• Base model: mistralai/Mixtral-8x7B-v0.1
• Hosted inference widget: disabled
• Example widget prompt (user): "What is your favorite condiment?"
• Gated access notice: "If you want to learn more about how we process your personal d..."
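The widget prompt above doubles as a quick smoke test. The sketch below runs it through the standard transformers chat-template API; the dtype, device placement, and generation settings are illustrative choices, not values taken from the card.

```python
# Minimal sketch: run the card's widget prompt through the instruct model.
# Assumptions: a recent transformers release and enough GPU memory for the
# full checkpoint; float16 and device_map="auto" are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What is your favorite condiment?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# Greedy decoding keeps the smoke test deterministic.
output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```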

πŸ• Updated 12/18/2025


About

> [!TIP]
> PRs to correct the transformers tokenizer so that it gives 1-to-1 the same results as the mistral_common reference implementation are very welcome!
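For contributors who want to check that parity, a minimal comparison sketch is shown below. It assumes mistral_common's v1 tokenizer classes (MistralTokenizer, ChatCompletionRequest, UserMessage) and the chat template shipped with the Hugging Face checkpoint; treat it as an outline rather than the project's official test.

```python
# Parity-check sketch (assumption: mistral_common installed with the v1
# tokenizer, which is the generation used by Mixtral-8x7B-Instruct-v0.1).
from transformers import AutoTokenizer
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

prompt = "What is your favorite condiment?"

# Token ids produced by the transformers chat template.
hf_tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
hf_ids = hf_tok.apply_chat_template([{"role": "user", "content": prompt}], tokenize=True)

# Token ids produced by the mistral_common reference tokenizer.
ref_tok = MistralTokenizer.v1()
ref_ids = ref_tok.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content=prompt)])
).tokens

# Any mismatch here is the kind of discrepancy the tip above asks PRs for.
print("transformers  :", hf_ids)
print("mistral_common:", ref_ids)
print("identical     :", hf_ids == ref_ids)
```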

πŸ“ Limitations & Considerations

• Benchmark scores may vary based on evaluation methodology and hardware configuration.
• VRAM requirements are estimates; actual usage depends on quantization and batch size.
• FNI scores are relative rankings and may change as new models are added.
• Data source: Hugging Face, https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 (fetched 2025-12-18, adapter version 3.2.0)

📚 Related Resources

📄 Related Papers

No related papers linked yet. Check the model's official documentation for research papers.

📊 Training Datasets

Training data information not available. Refer to the original model card for details.

🔗 Related Models

Related model information not available. Refer to the original model card for details.

🚀 What's Next?