| Provider | Meta |
|---|---|
| Model ID | meta-llama/llama-4-maverick |
| Prompt Price per 1M tokens | $0.15 |
| Completion Price per 1M tokens | $0.60 |
| Sample Workload Cost (1M input + 500K output) | $0.45 |
| Context Window | 1.05M |
| Release Date | 2025-04-05 |
| Popularity Rank | Unranked |
| Daily Demand | N/A |
Llama 4 Maverick
Meta model details for pricing, context, and release tracking.
Estimate your workload cost
This estimate uses normalized public API pricing per 1M tokens. It is a planning aid, not a billing quote. Verify provider pricing, limits, and terms before production use.
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass.
Llama 4 Maverick is best suited for long-context workloads.
A 1M input token plus 500K output token workload is estimated at $0.45.
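The sample workload figure follows directly from the per-1M-token prices in the table above. A minimal sketch of that arithmetic (the function name and structure are illustrative, not a site API):

```python
def estimate_cost(input_tokens, output_tokens,
                  prompt_price_per_1m, completion_price_per_1m):
    """Estimated USD cost for a workload, given per-1M-token prices."""
    return (input_tokens / 1_000_000 * prompt_price_per_1m
            + output_tokens / 1_000_000 * completion_price_per_1m)

# Llama 4 Maverick: $0.15 per 1M prompt tokens, $0.60 per 1M completion tokens
cost = estimate_cost(1_000_000, 500_000, 0.15, 0.60)
print(f"${cost:.2f}")  # → $0.45
```

As with the table, this is a planning aid only; actual provider billing may differ.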
Decision Shortcuts
Compare this model
Search head-to-head pages that include Llama 4 Maverick and review input price, output price, context, and sample workload cost.
Find comparisons
Meta catalog
See other Meta models before narrowing your shortlist.
Open provider hub
Cheaper alternatives
Start from models ranked by a standard cost estimate when budget is the first constraint.
Browse low-cost models
Long-context alternatives
Compare large-context models for retrieval, documents, and codebase review.
Browse long-context models
Popular Comparisons
Search all comparisons
| Comparison | Newest Release |
|---|---|
| No related comparisons are available yet. | |