
Deploy AI with Confidence: Four Strategies for Managing Generative AI Security and Privacy

Generative AI is not just a trend, but a growing force that is expected to continue accelerating over the next few years. Enterprises are increasingly learning to integrate this latest technology into their automation, decision-making, and content generation processes.
A well-structured and thoughtful risk management strategy is critical to enabling companies to continue ensuring data security and privacy. This allows them to confidently adopt AI, which in turn leads to further innovation.
This paper identifies six AI security and privacy issues and suggests four approaches to managing these costs effectively.
6 Data Security and Privacy Risks When Using GenAI
To effectively control risks, organizations must first understand the key drivers contributing to the overall risk of generative AI platforms. This knowledge will empower you to make informed decisions and implement cost-effective strategies, ensuring you are prepared for the challenges ahead.
- Prompt Data Leakage -This is probably the issue most people think about, especially when companies leverage publicly available generative AI models as unlicensed or “free” versions. GenAI services are great tools for analyzing large amounts of tabularized data or textual documents and summarizing them for the user. By uploading internal information into external AI platforms, these platforms become a data exit point for firms, resulting in the loss of control of proprietary information.
- Training Data Leakage - Generative AI models are trained on massive amounts of data. While generative AI models generally do not explicitly reproduce raw training data in their responses, it can happen, especially if malicious or poorly designed prompts are used. While most publicly available models are trained on large amounts of general publicly available data, meaning the risk is lower, proprietary models trained on specific internal datasets that include sensitive data can inadvertently memorize and regenerate confidential information. This leakage risk is exacerbated when models are exposed through APIs or used in customer-facing applications.
- Prompt Injection and Model Exploitation - Expanding on the above, malicious actors can manipulate model inputs (prompts) to trick models into returning inappropriate sensitive data, even after explicit guardrail controls are in place to attempt to prevent the models from responding directly to inappropriate prompts. This class of attacks is one of the risks unique to generative AI compared to other technologies.
- Shadow AI Usage - Employees may use unauthorized GenAI tools outside the corporate IT ecosystem. These “unofficial” solutions are unlikely to be integrated fully into the formal IT environment, and key security controls such as monitoring or data loss prevention may be missing. This further increases the risk of internal proprietary data being leaked accidentally or deliberately through these channels.
- Inadequate Access Controls - Given the ease with which generative AI systems can be implemented to help employees answer questions about their work, it is important to ensure that the models know who is allowed to see what information in the responses. This risk is amplified when the same AI solutions are leveraged to support external users. Without robust identity and access management (IAM) protocols, GenAI systems can be misused by internal or external actors. Improper configuration can expose confidential outputs or allow unauthorized model interaction.
- Regulatory and Compliance Exposure - Given GenAI’s data-hungry nature and opaqueness, legal requirements around transparency, privacy, and decision-making mean extra care must be taken when including AI services in customer-related processes, especially in healthcare, finance, and public services. GDPR, CCPA, HIPAA, and other data protection laws all include elements of explicit consent, transparency, and data minimization.
Mitigation Strategies and Solutions
After identifying some of the key drivers behind the risks associated with generative AI, enterprises can develop strategies to minimize their exposure to these risks.
Leverage Advanced Training Strategies
For situations where you are directly training proprietary models on proprietary data, various strategies exist for ensuring that individual data elements used within the training set are not reproducible in model outputs and the privacy of personal records is maintained.
- Data Anonymization and Minimization - Pre-processing data to remove or obfuscate personally identifiable information (PII) helps reduce privacy risks. Synthetic data generation and tokenization can be leveraged to maintain context and relevance through model training without compromising the true underlying sensitive records.
- Differential Privacy - Differential privacy works by introducing a carefully calibrated amount of statistical noise into data inputs, during training, or in the output in a well-structured approach that provides mathematical guarantees of privacy.
- Federated Learning - Federated learning allows training on localized data to remain on localized systems, with only the resulting model parameters centralized without transferring raw data, thus preserving local privacy. Federated learning also enables collaboration across organizations while ensuring that sensitive or regulated data remains within its originating environment.
Use AI Proxies and API Gateways
In situations where you are leveraging proprietary and third-party models, adding additional control points in the communication flow can improve security by providing specific access and content controls.
- Model Entitlements and Access Controls - Best practice across any data and technology environment is to actively minimize access to any application, function or data only to those individuals and systems that have a legitimate use. Limiting individuals only to the individual generative AI models that they need for their role reduces the risk of data exposure through model outputs. Centralized management of access controls across multiple functions and capabilities in one place enables organizations to implement broad role-based access control (RBAC) strategies.
- AI Web Application Firewall Controls - Web Application Firewalls (WAFs) protect API endpoints from attacks through various rate limiting, protocol validation, and SQL injection checks. AI-focused services extend these capabilities by focusing on the content to mitigate prompt-injection attacks, validate data formats, restrict specific keywords or phrases, etc. Behavioral anomaly detection approaches can also monitor inputs and outputs for sudden changes in usage patterns that indicate an attack.
Implement Attack Simulations and Security Monitoring
Like any other technologies, AI models should be constantly monitored and periodically evaluated for security issues.
- IT Performance Monitoring - While overall performance metrics (e.g., CPU, Memory, Response Rates, Traffic, etc.) are good indicators of model operational health, they can also be reactive indicators of potential security issues. For example, a sudden increase in traffic could indicate an attack.
- SEIM Monitoring – In addition to IT performance monitoring, SEIM monitoring focuses on elements such as individual logins, geography, type of access, and data content, which can be correlated to suggest something is not right in the environment.
- AI Red-Teaming - Red teaming and other penetration or adversarial-style testing can be applied to AI models to help identify potential vulnerabilities. Red-teaming exercises simulate real-world attack methods that malicious actors use, especially relevant in client-facing or sensitive data situations.
Invest in Governance, Design, and People
Finally, don’t forget about the people involved. Responsible AI starts with the individuals at all levels tasked with identifying opportunities for leveraging AI and executing against these processes daily.
- Governance, Risk Management, and Compliance (GRC) Frameworks - There is no substitute for well-thought-through implementations. Establish processes for identifying appropriate use cases, align the correct model and deployment approach for the specific circumstances that consider data sensitivity, security, and privacy, and implement continuous monitoring and supervision processes to track long-term performance and ensure good controls remain in place.
- Employee Education and Usage Policies - Last but not least, invest time and resources in employees who you expect to leverage AI. Implement education programs that teach everyone the opportunities that AI provides them and the risks and issues that arise from this new type of technology. Maintain and enhance these training and awareness programs over time so everyone feels confident in their use and knows how to implement responsible AI.
Conclusion
Deploying generative AI introduces new and complex data privacy and security challenges that cannot be addressed solely through conventional IT controls. However, with education, awareness, and investments in well-designed controls, AI can be leveraged safely and respect privacy within any organization.
To learn more about how TrustBolt can help you deploy AI with confidence, visit us www.trustbolt.ai/learn
Credits: Research for this report was conducted with help from various AI services, including Perplexity, ChatGPT and Copilot, and was edited with the help of Grammarly