Security Cipher

Security Cipher

Adversarial Risk
AI Security and Privacy Training
Establish Business Cases
Governance
Legal
Regulatory
Using or Implementing Large Language Model Solutions
Testing
Evaluation
Verification
and Validation (TEVV)
Model Cards and Risk Cards
RAG: Large Language Model Optimization
AI Red Teaming
ComponentsDescription
Adversarial RiskScrutinize how competitors are investing in artificial intelligence. Although there are risks in AI adoption, there are also business benefits that may impact future market positions.
Adversarial RiskThreat Model: how attackers may accelerate exploit attacks against the organization, employees, executives, or users.
Adversarial RiskThreat models potential attacks on customers or clients through spoofing and generative AI.
Adversarial RiskInvestigate the impact of current controls, such as password resets, which use voice recognition.
Adversarial RiskUpdate the Incident Response Plan and playbooks for LLM incidents.
AI Asset InventoryCatalog existing AI services, tools, and owners. Designate a tag in asset management for specific inventory
AI Asset InventoryInclude AI components in the Software Bill of Material (SBOM), a comprehensive list of all the software components, dependencies, and metadata associated with applications
AI Asset InventoryCatalog AI data sources and the sensitivity of the data (protected, confidential, public)
AI Asset InventoryEstablish if pen testing or red teaming of deployed AI solutions is required to determine the current attack surface risk.
AI Asset InventoryCreate an AI solution onboarding process
AI Asset InventoryEnsure skilled IT admin staff is available either internally or externally, in accordance to the SBoM
AI Security and Privacy TrainingTrain all users on ethics, responsibility, and legal issues such as warranty, license, and copyright.
AI Security and Privacy TrainingUpdate security awareness training to include GenAI related threats. Voice cloning and image cloning, as well as in anticipation of increased spear phishing attacks
AI Security and Privacy TrainingAny adopted GenAI solutions should include training for both DevOps and cybersecurity for the deployment pipeline to ensure AI safety and security assurances.
Establish Business CasesEnhance customer experience
Establish Business CasesBetter operational efficiency
Establish Business CasesBetter knowledge management
Establish Business CasesEnhanced innovation
Establish Business CasesMarket Research and Competitor Analysis
Establish Business CasesDocument creation, translation, summarization, and analysis
GovernanceEstablish the organizationś AI RACI chart (who is responsible, who is accountable, who should be consulted, and who should be informed)
GovernanceDocument and assign AI risk, risk assessments, and governance responsibility within the organization.
GovernanceEstablish data management policies, including technical enforcement, regarding data classification and usage limitations. Models should only leverage data classified for the minimum access level of any user of the system. For example, update the data protection policy to emphasize not to input protected or confidential data into nonbusiness-managed tools.
GovernanceCreate an AI Policy supported by established policy (e.g., standard of good conduct, data protection, software use)
GovernancePublish an acceptable use matrix for various generative AI tools for employees to use.
GovernanceDocument the sources and management of any data that the organization uses from the generative LLM models.
LegalConfirm product warranties are clear in the product development stream to assign who is responsible for product warranties with AI.
LegalReview and update existing terms and conditions for any GenAI considerations.
LegalReview AI EULA agreements. End-user license agreements for GenAI platforms are very different in how they handle user prompts, output rights and ownership, data privacy, compliance and liability, privacy, and limits on how output can be used.
LegalReview existing AI-assisted tools used for code development. A chatbotś ability to write code can threaten a companyś ownership rights to its own product if a chatbot is used to generate code for the product. For example, it could call into question the status and protection of the generated content and who holds the right to use the generated content.
LegalReview any risks to intellectual property. Intellectual property generated by a chatbot could be in jeopardy if improperly obtained data was used during the generative process, which is subject to copyright, trademark, or patent protection. If AI products use infringing material, it creates a risk for the outputs of the AI, which may result in intellectual property infringement.
LegalReview any contracts with indemnification provisions. Indemnification clauses try to put the responsibility for an event that leads to liability on the person who was more at fault for it or who had the best chance of stopping it. Establish guardrails to determine whether the provider of the AI or its user caused the event, giving rise to liability
LegalReview liability for potential injury and property damage caused by AI systems
LegalReview insurance coverage. Traditional (D&O) liability and commercial general liability insurance policies are likely insufficient to fully protect AI use.
LegalIdentify any copyright issues. Human authorship is required for copyright. An organization may also be liable for plagiarism, propagation of bias, or intellectual property infringement if LLM tools are misused.
LegalEnsure agreements are in place for contractors and appropriate use of AI for any development or provided services
LegalRestrict or prohibit the use of generative AI tools for employees or contractors where enforceable rights may be an issue or where there are IP infringement concerns.
LegalAssess and AI solutions used for employee management or hiring could result in disparate treatment claims or disparate impact claims.
LegalMake sure the AI solutions do not collect or share sensitive information without proper consent or authorization.
RegulatoryDetermine State specific compliance requirements
RegulatoryDetermine compliance requirements for restricting electronic monitoring of employees and employment-related automated decision systems (Vermont)
RegulatoryDetermine compliance requirements for consent for facial recognition and the AI video analysis required (Illinois, Maryland)
RegulatoryReview any AI tools in use or being considered for employee hiring or management.
RegulatoryConfirm the vendorś compliance with applicable AI laws and best practices.
RegulatoryAsk and document any products using AI during the hiring process. Ask how the model was trained, how it is monitored, and track any corrections made to avoid discrimination and bias.
RegulatoryAsk and document what accommodation options are included.
RegulatoryAsk and document whether the vendor collects confidential data.
RegulatoryAsk how the vendor or tool stores and deletes data and regulates the use of facial recognition and video analysis tools during pre-employment.
RegulatoryReview other organization-specific regulatory requirements with AI that may raise compliance issues. The Employee Retirement Income Security Act of 1974, for instance, has fiduciary duty requirements for retirement plans that a chatbot might not be able to meet.
Using or Implementing Large Language Model SolutionsThreat Model: LLM components and architecture trust boundaries
Using or Implementing Large Language Model SolutionsData Security: Verify how data is classified and protected based on sensitivity, including personal and proprietary business data. (How are user permissions managed, and what safeguards are in place?)
Using or Implementing Large Language Model SolutionsAccess Control: Implement least privilege access controls and implement defense-in-depth measures
Using or Implementing Large Language Model SolutionsTraining Pipeline Security: Require rigorous control around training data governance, pipelines, models, and algorithms.
Using or Implementing Large Language Model SolutionsInput and Output Security: Evaluate input validation methods, as well as how outputs are filtered, sanitized, and approved.
Using or Implementing Large Language Model SolutionsMonitoring and Response: Map workflows, monitoring, and responses to understand automation, logging, and auditing. Confirm audit records are secure.
Using or Implementing Large Language Model SolutionsInclude application testing, source code review, vulnerability assessments, and red teaming in the production release process.
Using or Implementing Large Language Model SolutionsConsider vulnerabilities in the LLM model solutions (Rezilion OSFF Scorecard).
Using or Implementing Large Language Model SolutionsLook into the effects of threats and attacks on LLM solutions, such as prompt injection, the release of sensitive information, and process manipulation.
Using or Implementing Large Language Model SolutionsInvestigate the impact of attacks and threats to LLM models, including model poisoning, improper data handling, supply chain attacks, and model theft.
Using or Implementing Large Language Model SolutionsSupply Chain Security: Request third-party audits, penetration testing, and code reviews for third-party providers. (both initially and on an ongoing basis)
Using or Implementing Large Language Model SolutionsInfrastructure Security: How often does the vendor perform resilience testing? What are their SLAs in terms of availability, scalability, and performance?
Using or Implementing Large Language Model SolutionsUpdate incident response playbooks and include an LLM incident in tabletop exercises.
Using or Implementing Large Language Model SolutionsIdentify or expand metrics to benchmark generative cybersecurity AI against other approaches to measure expected productivity improvements.
Testing, Evaluation, Verification, and Validation (TEVV)Establish continuous testing, evaluation, verification, and validation throughout the AI model lifecycle.
Testing, Evaluation, Verification, and Validation (TEVV)Provide regular executive metrics and updates on AI Model functionality, security, reliability, and robustness.
Model Cards and Risk CardsReview a models model card
Model Cards and Risk CardsReview risk card if available
Model Cards and Risk CardsEstablish a process to track and maintain model cards for any deployed model including models used through a third party.
RAG: Large Language Model OptimizationRetrieval Augmented Generation (RAG) & LLM: Examples
RAG: Large Language Model Optimization12 RAG Pain Points and Proposed Solutions
AI Red TeamingIncorporate Red Team testing as a standard practice for AI Models and applications.