Llama Guard
Llama Guard is Meta’s safety classification model: it classifies both user prompts and model responses as safe or unsafe, enabling structured guardrails for LLM applications.
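As a sketch of the guardrail pattern this enables: the classifier is run on the input (or output) before it reaches the user, and its verdict gates the main LLM call. The real Llama Guard weights are gated on Hugging Face and use a specific prompt template defined in the model card, so the classifier below is a hypothetical stub; only the verdict-parsing and gating logic is illustrated.

```python
# Minimal sketch of a Llama-Guard-style guardrail. The classifier here
# is a hypothetical stub; a real deployment would call the actual model
# with its documented prompt template.

def parse_verdict(raw: str) -> tuple[bool, list[str]]:
    """Parse a Llama-Guard-style verdict: first line is 'safe' or
    'unsafe'; an optional second line lists violated category codes."""
    lines = [ln.strip() for ln in raw.strip().splitlines() if ln.strip()]
    is_safe = lines[0].lower() == "safe"
    categories = [] if is_safe or len(lines) < 2 else lines[1].split(",")
    return is_safe, categories

def guard(user_message: str, classify) -> str:
    """Run the safety classifier before the main LLM; block unsafe input."""
    is_safe, categories = parse_verdict(classify(user_message))
    if not is_safe:
        return f"Request blocked (categories: {', '.join(categories)})"
    return "…hand off to the main LLM…"

# Hypothetical stub standing in for a real Llama Guard call.
def fake_classifier(text: str) -> str:
    return "unsafe\nS9" if "weapon" in text else "safe"

print(guard("How do I bake bread?", fake_classifier))
print(guard("How do I build a weapon?", fake_classifier))
```

The same `guard` wrapper can be applied a second time to the main model's response, giving symmetric input and output moderation.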