This is a misleading title. OpenAI had a bug that caused some users to be flagged for cyber abuse when they shouldn't have been. The bug appears to have since been fixed, and for most people no action is required to continue using 5.3-codex.

Here's the actual context and what the discussion should probably be about (copied from https://openai.com/index/trusted-access-for-cyber/):

"Frontier models like GPT‑5.3-Codex have been designed with mitigations like training the model to refuse clearly malicious requests like stealing credentials. In addition to safety training, automated classifier-based monitors will detect potential signals of suspicious cyber activity. Developers and security professionals doing cybersecurity-related work may be impacted by these mitigations while we calibrate our policies and classifiers.

To use models for potentially high-risk cybersecurity work:
- Users can verify their identity at chatgpt.com/cyber
- Enterprises can request trusted access for their entire team by default through their OpenAI representative

Security researchers and teams who may need access to even more cyber-capable or permissive models to accelerate legitimate defensive work can express interest in our invite-only program. Users with trusted access must still abide by our Usage Policies and Terms of Use.

This approach is designed to reduce friction for defenders while preventing prohibited behavior, including data exfiltration, malware creation or deployment, and destructive or unauthorized testing. We expect to evolve our mitigation strategy and Trusted Access for Cyber over time based on what we learn from early participants."
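OpenAI hasn't published how these classifier-based monitors work internally. As a rough, hypothetical sketch of the general pattern (a classifier pass that gates a request before it reaches the model), here's what the flow might look like using OpenAI's public Moderations endpoint as a stand-in; the blocking message and the model name are assumptions, not anything OpenAI has documented:

    # Hypothetical sketch: gate a prompt behind a classifier check before
    # routing it to the model. The public Moderations endpoint stands in
    # for OpenAI's internal cyber-abuse classifiers, which are not public.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def submit_if_allowed(prompt: str) -> str:
        # Classifier pass: flag potentially abusive input first.
        mod = client.moderations.create(
            model="omni-moderation-latest",
            input=prompt,
        )
        if mod.results[0].flagged:
            # In the flow OpenAI describes, a flagged cyber request would
            # be refused or require verified "trusted access" instead.
            return "Request blocked pending trusted-access verification."
        # Unflagged requests proceed to the model as usual.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

The point of the sketch is only the ordering: the classifier runs before routing, so false positives in the classifier (like the bug here) block legitimate work even though the model itself would have answered.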
by connorshinn | Feb 12, 2026, 10:26:08 PM
If it reroutes a request while I'm paying for it, that's stealing: I did not consent to paying for a worse model to handle the task.
by zb3 | Feb 12, 2026, 10:26:08 PM