The National Information Technology Development Agency (NITDA) has warned Nigerian users of ChatGPT about newly identified security flaws that could expose personal and organizational data to malicious actors.
In an advisory released on Thursday through its Computer Emergency Readiness and Response Team (CERRT.NG), the agency highlighted seven vulnerabilities affecting OpenAI’s GPT-4o and GPT-5 models. These weaknesses could allow attackers to carry out “indirect prompt injection” attacks, manipulating the AI tool’s behavior through hidden instructions embedded in webpages, URLs, or online comments.
The advisory explained that during routine ChatGPT use, such as browsing, searching, or content summarization, these hidden commands could trigger unintended actions, potentially leading to data leakage or unauthorized operations.
According to NITDA, some of the vulnerabilities also enable attackers to bypass ChatGPT’s safety controls by disguising malicious content behind trusted domains. Others exploit markdown rendering bugs, allowing hidden instructions to go undetected by the system.
The warning comes amid growing reliance on AI-powered tools for business, research, and public-sector tasks across Nigeria.
NITDA urged individuals and organizations to exercise caution when using ChatGPT for sensitive activities and to stay informed about software updates from OpenAI that may address these security gaps.

