Tag: OpenAI
-
OpenAI|Collaborating with The Met to Awaken “Sleeping Beauties” with AI
OpenAI, Enriching Lives through Creativity with AI (August 15, 2024). OpenAI collaborated with the Metropolitan Museum of Art’s Costume Institute to create a custom chat experience for the exhibit “Sleeping Beauties: Reawakening Fashion.” This experience allows visitors to interact with AI representations of historical figures from the 1930s, enhancing the museum experience. The project showcases…
-
OpenAI|Introducing SWE-bench Verified
OpenAI has released SWE-bench Verified, a human-validated subset of SWE-bench, designed to more accurately assess AI models’ ability to solve real-world software problems. SWE-bench Verified addresses issues in the original benchmark, such as overly specific tests and ambiguous problem statements, improving the reliability of AI evaluations in software engineering tasks. Source: OpenAI (August 13, 2024)
-
Zico Kolter Joins OpenAI’s Board of Directors|OpenAI
Published on August 8, 2024. OpenAI has appointed Zico Kolter to its Board of Directors to strengthen its governance with expertise in AI safety and alignment. Kolter, a professor at Carnegie Mellon University, brings significant experience in AI safety and the robustness of machine learning classifiers. He will also join the Board’s Safety & Security Committee,…
-
GPT-4o System Card|OpenAI
Published on August 8, 2024. This article details the safety measures undertaken before the release of GPT-4o, including external red teaming and frontier risk evaluations. Key risk areas such as voice generation, speaker identification, and unauthorized content generation were evaluated, with appropriate mitigations implemented. GPT-4o’s voice capabilities, which closely match human response times, were assessed as…
-
OpenAI|Introducing Structured Outputs in the API
OpenAI introduces Structured Outputs in the API, ensuring model-generated outputs exactly match developer-supplied JSON Schemas. This new feature addresses the limitations of the previous JSON mode by guaranteeing schema conformity with the gpt-4o-2024-08-06 model, achieving 100% reliability. The feature supports function calling with strict output matching and a new response_format parameter for structured responses. Both…
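A minimal sketch of what a strict-schema `response_format` payload might look like, based on the description above. The "event" schema itself is an invented example; the field names shown are assumptions drawn from the announcement's description of the feature, not a verbatim copy of the API reference.

```python
# Sketch of a strict JSON Schema payload for the response_format
# parameter described above. The "event" schema is illustrative.
event_schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "event",
        "strict": True,  # request exact conformity to the schema
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "date": {"type": "string"},
            },
            "required": ["name", "date"],
            "additionalProperties": False,
        },
    },
}

# With an API client, this payload would be supplied alongside the
# model name (gpt-4o-2024-08-06) and messages when creating a
# chat completion; the model's output is then guaranteed to parse
# against the schema.
```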
-
Improving Model Safety Behavior with Rule-Based Rewards | OpenAI
OpenAI has developed a new method leveraging Rule-Based Rewards (RBRs) to align models to behave safely without extensive human data collection. RBRs use clear, simple rules to evaluate if the model’s outputs meet safety standards, integrated into the standard reinforcement learning from human feedback (RLHF) pipeline. Experiments show RBR-trained models have comparable safety performance to…
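The core idea can be illustrated with a toy scoring function: each rule is a cheap, explicit check on a model response, and the reward is an aggregate of rule scores that can be fed into an RLHF-style training loop. The rules and weights below are invented for illustration and are not OpenAI's actual rule set.

```python
# Toy illustration of the Rule-Based Rewards idea: simple, explicit
# rules score a response, replacing some human preference data.
# These example rules and weights are invented, not OpenAI's.
RULES = [
    # (description, predicate, weight)
    ("refuses politely", lambda r: "sorry" in r.lower() or "can't" in r.lower(), 1.0),
    ("avoids judgmental language", lambda r: "you should be ashamed" not in r.lower(), 1.0),
]

def rule_based_reward(response: str) -> float:
    """Score a response as the weighted fraction of rules it satisfies."""
    total = sum(w for _, _, w in RULES)
    passed = sum(w for _, pred, w in RULES if pred(response))
    return passed / total

# A polite refusal satisfies both toy rules; a bare compliance
# satisfies only the second.
print(rule_based_reward("I'm sorry, I can't help with that."))
print(rule_based_reward("Here's how to do that."))
```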
-
New Compliance and Administrative Tools for ChatGPT Enterprise
OpenAI, July 18, 2024. OpenAI announced new tools to support managing compliance programs, enhancing data security, and securely scaling user access for ChatGPT Enterprise. The new tools include the Enterprise Compliance API and SCIM integration, along with expanded GPT controls. These enhancements aim to help enterprise customers in regulated industries meet logging and audit requirements…
-
OpenAI|GPT-4o mini: Advancing Cost-Efficient Intelligence
OpenAI introduces GPT-4o mini, a highly cost-efficient small model. GPT-4o mini scores 82% on MMLU and is over 60% cheaper than GPT-3.5 Turbo. It supports multiple languages and excels in mathematical and coding tasks, with enhanced safety measures in place. Source: OpenAI (July 18, 2024)
-
OpenAI | Prover-Verifier Games improve legibility of language model outputs
Research from OpenAI has shown that training strong language models to produce text that weak models can easily verify also makes the text easier for humans to evaluate. This technique, called “Prover-Verifier Games,” involves two players: a prover that produces solutions and a verifier that checks their correctness. This ensures that model outputs are not only accurate…
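The prover/verifier split can be sketched with a toy arithmetic example: the prover emits an answer together with a trace of checkable intermediate steps, and a much cheaper verifier re-checks each step. This is an illustrative sketch of the two-player structure only, not the paper's training procedure.

```python
# Toy sketch of the prover-verifier structure: the prover returns an
# answer plus a legible trace; the verifier re-checks every step.
# This arithmetic setting is invented for illustration.
def prover(a: int, b: int, c: int):
    """Compute (a + b) * c and record each step as a checkable tuple."""
    s = a + b
    result = s * c
    return result, [("add", a, b, s), ("mul", s, c, result)]

def verifier(steps) -> bool:
    """Cheaply re-verify each recorded step with simple rules."""
    ops = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}
    return all(ops[op](x, y) == out for op, x, y, out in steps)

result, trace = prover(2, 3, 4)
print(result, verifier(trace))  # 20 True
```

Because the trace is broken into small, individually checkable steps, a weak verifier (or a human) can confirm the result without redoing the whole computation, which is the legibility property the research targets.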