The AI Verify Foundation has launched Project Moonshot, one of the world's first evaluation toolkits for Large Language Models (LLMs), aimed at addressing the risks of bias and harmful content from unchecked LLMs. Now in beta and open-sourced on GitHub, it offers a seamless way to evaluate an LLM application's performance, both pre- and post-deployment.