Comprehensive AI Policy

A Framework for Trust and Responsibility

Part I: Foundational Principles and Policy Scope

1.0 Our Commitment to Responsible AI

Policy Purpose Statement

This policy outlines ManoByte's comprehensive framework for the responsible, ethical, and legal development, deployment, and use of Artificial Intelligence (AI). Our primary objective is to harness the transformative potential of AI to augment human capabilities, drive innovation, and deliver exceptional value to our customers, while proactively managing risks and upholding an unwavering commitment to stakeholder trust. At its core, this policy is guided by the belief that AI should serve as a powerful assistant to human intelligence and judgment, not as a replacement. This document serves as the definitive roadmap for ManoByte's AI adoption journey, ensuring that all our practices are transparent, accountable, and rigorously aligned with our core values, the expectations of our customers, and the highest standards of social responsibility.

Strategic Alignment

This policy is intentionally designed as a catalyst for responsible innovation, not as a roadblock to progress. It provides the essential guardrails to foster cross-functional collaboration, enable the safe and efficient integration of new AI tools, and establish a clear governance structure that aligns our AI initiatives with our broader business strategy. By creating a culture of curiosity and providing clear guidance, we aim to empower our employees with the tools and knowledge necessary to leverage AI effectively. This will drive productivity, create transformative change for our business and our clients, and solidify our position as a leader in the ethical application of technology.

2.0 Guiding Principles

 

The following six principles are the non-negotiable ethical pillars that govern all AI activities at ManoByte. Synthesized from the established best practices of global technology leaders and international standards organizations, these principles form the "north star" for our AI governance and decision-making processes. Every AI system we design, deploy, or use will be measured against these standards.  

2.1 Fairness and Non-Discrimination

We commit to designing, training, and deploying AI systems that treat all individuals equitably and justly. ManoByte will actively and continuously work to identify, measure, and mitigate harmful biases in our data, algorithms, and operational practices to prevent discriminatory outcomes. This commitment extends to all protected characteristics, including but not limited to race, gender, age, disability, ethnicity, sexual orientation, and religion. Our approach to fairness is multifaceted, involving the use of diverse and representative datasets for training, the incorporation of quantitative fairness metrics into the development and testing lifecycle, and the execution of regular, independent bias audits on our systems, particularly those deemed high-risk.  

2.2 Reliability and Safety

Our AI systems will be engineered for robust, secure, and reliable performance, ensuring they operate safely and as intended under a wide range of conditions. This principle mandates building resilience against adversarial attacks, data poisoning, and other malicious attempts to compromise system integrity. It also requires that our systems are designed to degrade gracefully when encountering unexpected inputs or conditions, thereby preventing unintentional harm. Rigorous testing, validation, verification, and continuous post-deployment monitoring are mandatory components throughout the entire AI system lifecycle to ensure sustained reliability and safety.

2.3 Privacy and Security

The protection of personal data and confidential information is a foundational requirement for all AI activities at ManoByte. We will adhere to a strict "privacy-by-design and by-default" approach, embedding robust data protection and security measures into the architecture of our AI systems from their inception. This commitment includes the rigorous application of data protection principles such as data minimization, purpose limitation, and storage limitation. We will employ strong encryption for data in transit and at rest, enforce granular access controls, and ensure full compliance with all applicable data protection laws globally, including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).  

2.4 Transparency and Explainability

We commit to being transparent about our use of AI. For our customers and the public, this means providing clear, accessible, and timely disclosures when they are interacting with an AI system. For AI-driven decisions that have a significant effect on individuals, we will provide meaningful explanations of the outcomes. Internally, this principle drives us to build AI systems that are as interpretable as possible. This technical explainability is crucial for facilitating effective debugging, auditing, security assessments, bias detection, and ultimately, ensuring human accountability for the system's behavior.

2.5 Accountability and Human Oversight

AI is a tool to augment and enhance human capabilities; ultimate responsibility and accountability for the outputs and impacts of any AI system rest with people. ManoByte will ensure that meaningful human oversight is integrated into all AI-driven processes. The level of oversight will be proportional to the risk posed by the system, with the most stringent human-in-the-loop requirements applied to high-risk applications. We will maintain clear and comprehensive audit trails of AI system performance and decisions. Furthermore, we will designate specific, named individuals and teams as responsible for the governance, performance, and ethical adherence of each AI system to ensure that accountability is unambiguous and enforced.  

2.6 Inclusiveness and Societal Benefit

 

We will strive to develop and deploy AI in ways that are inclusive, accessible to people with diverse needs and abilities, and contribute positively to society and the environment. We will proactively consider the broader societal impacts of our AI systems, from their environmental footprint to their potential effects on employment and social equity. We commit to aligning our AI development efforts with principles of sustainability and human flourishing, ensuring that the technologies we create serve the greater good.  

3.0 Scope and Applicability

Personnel

 

This policy applies universally to all ManoByte full-time and part-time employees, temporary employees, contractors, consultants, and any other third parties acting on behalf of the company, regardless of their location or role. Adherence to this policy is a condition of employment and engagement with ManoByte.  

AI Systems

 

This policy governs the entire lifecycle—from conception and design to development, procurement, deployment, operation, and retirement—of all AI systems. The term "AI System" is defined broadly to include, but is not limited to, machine learning (ML) models, generative AI, large language models (LLMs), natural language processing (NLP), computer vision, and advanced analytics systems that infer or generate predictions, content, recommendations, or decisions. This policy covers:  

  • AI technologies and features integrated into any product or service offered to ManoByte customers.

  • Internal AI systems used for business operations, including but not limited to HR, finance, marketing, and IT.

  • Third-party and publicly available AI tools (e.g., ChatGPT, Microsoft Copilot, Google Gemini) used by employees for any work-related purpose.  

Interaction with Other Policies

 

This AI Policy is designed to complement and operate in conjunction with other established ManoByte corporate policies. It does not supersede them. Key related policies include, but are not limited to:

  • Cybersecurity Policy

  • Data Privacy Policy

  • Employee Code of Conduct

  • Data Classification Policy

  • Diversity, Equity, and Inclusion (DEI) Policy

  • Intellectual Property Policy

  • Records Retention Policy

Where any provision in this policy conflicts with another company policy or with applicable law, the most restrictive provision that ensures the highest level of compliance and ethical integrity shall apply. All employees are responsible for understanding how this policy integrates with their other professional obligations.  

4.0 Core Definitions

 

A comprehensive glossary of key terms is provided in Appendix A of this document. This glossary is essential for ensuring a clear, consistent, and shared understanding of AI-related concepts across the organization, which is a prerequisite for effective policy adherence and fostering AI literacy. Key defined terms include: Artificial Intelligence (AI), Generative AI, Large Language Model (LLM), Algorithm, Machine Learning (ML), Personal Data, Sensitive Personal Information (SPI), Protected Health Information (PHI), Bias (Systemic, Computational, Human-Cognitive), Explainability, Transparency, Accountability, AI System Lifecycle, and others relevant to this policy's scope.  

Part II: AI Governance and Accountability

5.0 The ManoByte AI Governance Committee (AIGC)

 

Effective AI governance requires more than just a written policy; it demands a living, empowered structure to interpret, apply, and enforce that policy. An organizational system is necessary to ensure accountability, as ad-hoc decision-making by individual teams presents a significant liability in the face of rapidly evolving technology and regulations. Leading technology firms have demonstrated the value of formal governance bodies to operationalize ethical principles. Therefore, to translate the principles of this policy into practice and build demonstrable trust, ManoByte hereby establishes the AI Governance Committee (AIGC).  

5.1 Mandate and Authority

 

The AIGC is established as the central body responsible for the strategic oversight and tactical governance of all AI activities at ManoByte. It is vested with the authority to review and approve AI projects, establish and enforce technical and ethical standards, investigate incidents of non-compliance, and recommend the modification or termination of any AI system that violates this policy or is found to pose an unacceptable risk to ManoByte, its customers, or society. The AIGC will provide regular, strategic counsel to ManoByte's executive leadership on emerging AI-related risks, opportunities, and regulatory changes, ensuring that our AI strategy remains both ambitious and responsible.  

5.2 Composition

 

The AIGC will be a permanent, cross-functional committee composed of senior representatives from key departments to ensure that a diverse range of perspectives—technical, legal, ethical, and commercial—informs its decisions. This multidisciplinary approach is critical for holistic and effective risk management. The AIGC shall be chaired by the Chief Legal Officer or a designated senior legal counsel. At a minimum, standing membership shall include senior representatives (Director-level or above) from:  

  • Legal and Compliance (Chair)

  • Information Security (CISO or delegate)

  • Data Privacy (DPO or delegate)

  • Technology and Engineering (AI/ML leadership)

  • Product Management

  • Human Resources

To ensure impartiality and access to external expertise, the AIGC is authorized to retain an independent Ethics Advisor to provide unbiased perspectives on complex cases and evolving ethical norms.  

5.3 Responsibilities

 

The core responsibilities of the AIGC are to:

  • Maintain, review, and update this AI Policy on at least an annual basis, or more frequently as required by legal or technological developments.  

  • Develop, implement, and oversee the ManoByte AI System Risk Classification Framework detailed in Part IV.

  • Conduct mandatory pre-development and pre-deployment reviews of all AI projects classified as "High-Risk".  

  • Establish and oversee a rigorous AI vendor due diligence and procurement process.  

  • Serve as the primary escalation point for unresolved ethical concerns and policy violations reported by employees or identified through monitoring.  

  • Commission, receive, and review the findings of regular internal and external AI audits, and ensure that all corrective action plans are implemented and tracked to completion.  

  • Maintain a comprehensive, centralized inventory of all AI systems designed, developed, procured, or deployed by ManoByte.

 

Table: AI Governance Committee (AIGC) Charter Summary

 

The following table summarizes the AIGC's core operational framework, providing a clear and accessible reference for all ManoByte stakeholders. This structure codifies our commitment to governance and transparency.  

Mandate: To ensure all AI activities at ManoByte align with our Guiding Principles, comply with all legal and regulatory obligations, and serve the best interests of our customers and stakeholders.

Composition: A permanent, cross-functional committee of senior leadership from Legal (Chair), Information Security, Data Privacy, Technology, Product Management, and Human Resources. May be supplemented by an external Ethics Advisor.

Key Responsibilities: Policy maintenance and enforcement; risk classification framework management; mandatory review of all high-risk AI systems; vendor governance; incident investigation and oversight; commissioning and review of AI audits.

Decision Authority: The AIGC has the authority to approve, deny, or halt AI projects based on its risk assessment. Its decisions are binding across the organization unless formally overruled by the CEO with a documented justification that is recorded in the Committee's minutes.

Meeting Cadence: The AIGC will meet on a regular quarterly basis. Ad-hoc meetings will be convened as needed to address urgent high-risk project reviews or significant incident investigations.

6.0 AI System Lifecycle Governance

 

AI risk management is not a static, one-time check but a continuous, dynamic process that must be integrated into every stage of an AI system's existence, from initial concept to final decommissioning. The AIGC will define and enforce specific review gates, documentation standards, and approval requirements for each stage of the lifecycle. This ensures that our ethical and technical principles are applied consistently and proactively. The key stages are:  

  • Plan & Design: This initial phase requires a documented purpose definition, an initial risk assessment and classification, a data sourcing plan that addresses privacy and bias, and an evaluation of potential societal impact.

  • Develop & Test: This phase includes mandatory bias testing, security vulnerability assessments, robustness and reliability testing, and checks for explainability. All testing procedures and results must be documented.

  • Deploy & Operate: This phase requires a plan for post-deployment monitoring of performance, accuracy, and fairness. It also includes the implementation of incident response protocols and mechanisms for gathering user feedback.

  • Modify & Update: Any significant modification to a model or its training data requires re-evaluation through the appropriate lifecycle gates, with the level of scrutiny determined by the nature of the change and the system's risk level.

  • Retire: This final phase involves a formal decommissioning plan, including secure deletion of personal data, model archiving procedures, and clear communication to any dependent systems or affected stakeholders.

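The stage gates above could be tracked with a simple record per system. The following is a minimal, hypothetical sketch: the stage names mirror the list above, but the LifecycleRecord schema, its field names, and the strictly sequential gate check are simplifying assumptions for illustration, not a prescribed ManoByte implementation.

```python
# Minimal, hypothetical sketch of lifecycle gate tracking for an AI system.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Stage(Enum):
    PLAN_DESIGN = "Plan & Design"
    DEVELOP_TEST = "Develop & Test"
    DEPLOY_OPERATE = "Deploy & Operate"
    MODIFY_UPDATE = "Modify & Update"
    RETIRE = "Retire"


@dataclass
class GateApproval:
    stage: Stage
    approved_by: str                   # e.g., the AIGC or a delegated reviewer
    approved_on: date
    evidence: list[str] = field(default_factory=list)  # risk assessments, test reports


@dataclass
class LifecycleRecord:
    system_name: str
    risk_level: str                    # "High", "Limited", "Minimal", ...
    approvals: list[GateApproval] = field(default_factory=list)

    def may_enter(self, stage: Stage) -> bool:
        """Allow entry to a stage only once the preceding gate is approved."""
        order = list(Stage)
        idx = order.index(stage)
        if idx == 0:
            return True                # Plan & Design has no prior gate
        prior = order[idx - 1]
        return any(a.stage is prior for a in self.approvals)


record = LifecycleRecord("lead-scoring-model", risk_level="High")
record.approvals.append(GateApproval(Stage.PLAN_DESIGN, "AIGC", date(2025, 1, 15),
                                     ["risk-assessment", "data-sourcing-plan"]))
assert record.may_enter(Stage.DEVELOP_TEST)        # prior gate approved
assert not record.may_enter(Stage.DEPLOY_OPERATE)  # Develop & Test gate still open
```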

7.0 Roles and Responsibilities

 

To ensure clear lines of accountability for every AI system at ManoByte, the following roles and responsibilities are formally defined:

  • AI System Owner: A designated business leader (e.g., a Product Manager, Department Head) who is ultimately accountable for a specific AI system. The Owner is responsible for the system's alignment with business goals, its overall performance and ROI, its budget, and its adherence to this policy throughout its lifecycle.

  • AI Developer/Provider: The technical team or individual responsible for the hands-on design, building, testing, and maintenance of the AI system. They are responsible for implementing the technical requirements of this policy, including security controls, bias mitigation techniques, and documentation.

  • AI User: Any employee, contractor, or other authorized individual who interacts with or uses an AI system as part of their work. Users are responsible for operating the system in accordance with the Acceptable Use guidelines (Part III), for applying critical judgment, and for verifying the accuracy and appropriateness of its outputs before reliance or dissemination.

  • AI Auditor: A functionally independent role, which may be fulfilled by Internal Audit or a qualified external firm, responsible for conducting periodic, objective assessments of AI systems. The Auditor evaluates systems against the criteria set forth in this policy and relevant legal and industry standards, reporting findings directly to the AIGC.

 

8.0 Training and AI Literacy

 

A foundational commitment to responsible AI requires a workforce that is knowledgeable and skilled in its application and risks. A culture of responsibility is built on a culture of competence. ManoByte will therefore design and implement a mandatory, tiered AI literacy and training program for all personnel.  

  • Level 1 (All Employees & Contractors): Mandatory annual training covering the core tenets of this AI Policy. This foundational module will focus on our Guiding Principles, the rules for acceptable use of AI tools, data confidentiality obligations, and how to identify and report potential ethical concerns or policy violations.

  • Level 2 (AI Users & People Managers): Role-specific training for all employees who regularly use AI tools in their work and for all managers who oversee them. This training will cover the specific capabilities and limitations of AIGC-approved tools, best practices for effective and safe prompt engineering, and practical methods for identifying and flagging potential inaccuracies, bias, or inappropriate outputs.  

  • Level 3 (AI Developers, System Owners & AIGC Members): Advanced, in-depth technical and procedural training for personnel directly involved in building and governing AI systems. Curricula will include advanced topics such as privacy-by-design engineering for AI, secure machine learning development practices, statistical techniques for bias detection and mitigation, explainability methods, and the detailed compliance requirements for developing and documenting high-risk systems.  

Part III: Rules for Internal Use of AI Systems

9.0 Acceptable and Prohibited Use

 

The widespread availability of powerful public AI tools creates immediate risks and opportunities. This section provides clear rules to guide employees in harnessing these tools productively while safeguarding ManoByte's assets and reputation.

 

9.1 Approved AI Tools

 

The AIGC, in close collaboration with the Information Technology and Information Security departments, will establish, maintain, and publish a list of approved AI tools and platforms. These tools will have undergone a rigorous vetting process to assess their security posture, data privacy policies, intellectual property terms, and overall compliance with ManoByte's standards. The use of any AI tool, application, or service that is not on this official list for any company-related business is strictly prohibited unless an exception is granted through the formal approval process.  

9.2 Authorized Use Cases

 

AI tools are intended to enhance productivity, augment creativity, and automate mundane tasks. They are not a substitute for human critical thinking, professional judgment, or accountability. Authorized use cases for approved tools include:  

  • Brainstorming ideas and exploring concepts.

  • Drafting routine, non-confidential communications and documents.

  • Summarizing publicly available or non-confidential information.

  • Generating, debugging, and optimizing code snippets, provided no proprietary code is exposed.

Prohibited use cases for any AI tool, unless explicitly sanctioned for a specific, high-risk-assessed internal system, include:

  • Making or assisting in any employment-related decisions, including recruitment, hiring, performance evaluation, promotion, or termination.  

  • Performing technical, legal, or financial research without independent verification from primary, authoritative sources.

  • Engaging in any activity that requires the processing of sensitive, confidential, or personal data in a non-approved tool.  

9.3 Requesting New Tools

 

To foster innovation while maintaining security, ManoByte will provide clear guardrails, not insurmountable roadblocks. A formal process will be established and managed by the AIGC for employees to request the evaluation and potential approval of new AI tools. This process will require a clear business case, an initial risk assessment, and will trigger the standard vendor due diligence procedure. This structured pathway allows ManoByte to adapt to new technologies safely and efficiently.  

10.0 Protection of Confidential Information and Intellectual Property

 

The inadvertent leakage of sensitive data into public AI models represents one of the most significant and immediate threats to ManoByte's security, intellectual property, and customer trust. Studies have shown that a significant percentage of employees admit to inputting sensitive company data into public AI tools, making this a critical area for strict enforcement.  

10.1 Strict Prohibition on Data Input

 

It is strictly and unequivocally prohibited to input, upload, or otherwise provide any of the following categories of information into any public, third-party, or non-AIGC-approved AI system:

  • Customer Data: Any information, in any form, provided by or related to a ManoByte customer. This includes, but is not limited to, names, contact details, business plans, technical data, or any content of their communications with us.

  • Personal Information: Any Personally Identifiable Information (PII), Sensitive Personal Information (SPI), or Protected Health Information (PHI) related to any individual, including employees, customers, partners, or their end-users.  

  • Company Confidential Information: Any non-public information belonging to ManoByte. This includes, but is not limited to, trade secrets, internal financial data, strategic plans, product roadmaps, proprietary source code, security vulnerabilities, or any document or data marked "Confidential," "Proprietary," or "Internal Use Only".  

  • Intellectual Property (IP): Any unpublished or proprietary information, inventions, or creative works belonging to ManoByte or its clients that could be subject to IP protection.

 

10.2 The Public Disclosure Rule

 

To ensure absolute clarity, all employees must operate under the following guiding principle: Treat every piece of information you enter into a third-party generative AI tool as if it were being published on the public internet with your name and ManoByte's name attached. The terms of service for many public AI tools grant the provider broad rights to use input data for model training and other purposes, effectively nullifying any expectation of confidentiality.  

10.3 Use of Approved Enterprise Tools

 

For specific, approved business use cases that require the processing of confidential or personal data, employees must exclusively use AIGC-approved, enterprise-grade AI solutions. These are platforms where ManoByte has a negotiated contractual agreement that explicitly guarantees data privacy, security, IP ownership, and strict confidentiality, preventing the use of our data for training the vendor's public models.  

11.0 Verification and Responsibility for AI-Generated Content

11.1 Human Accountability

 

The individual employee using an AI tool is ultimately and fully responsible for the content, quality, and consequences of the work product they create with its assistance. AI is a tool, and accountability for its use remains with the user. It is a violation of this policy to represent AI-generated content as one's own original work without appropriate review and modification.  


11.2 Mandatory Verification

 

All substantive outputs from generative AI tools must be meticulously reviewed, fact-checked, and verified for accuracy, appropriateness, and potential bias by a qualified human before being used in any official capacity. Generative AI systems are known to be prone to "hallucinations"—producing plausible but factually incorrect, misleading, or outdated information. Relying on unverified AI output is a serious lapse in professional judgment and a violation of this policy. All sources, data, and claims must be independently validated against authoritative primary sources.  

11.3 Disclosure of Use

 

Transparency is key to maintaining trust internally and externally. Employees must inform their direct supervisor when a generative AI tool has been used to provide a significant contribution to a work product. For certain external-facing documents or client deliverables, explicit disclosure of AI use may be required, as detailed in Part VII of this policy.  

Part IV: Development and Deployment of AI-Enabled Products & Services

12.0 Risk-Based Classification of AI Systems

 

To ensure that governance efforts are proportional to the potential for harm, ManoByte adopts a risk-based approach to AI development, aligned with emerging global regulatory standards. This framework allows us to focus the most stringent controls on applications that pose the greatest potential risk, while fostering innovation in lower-risk areas. All AI systems developed, co-developed, or deployed by ManoByte for customer use must be formally classified according to the following framework. The AIGC holds the final authority on the classification of any system.  

12.1 Unacceptable Risk (Prohibited)

 

ManoByte will not, under any circumstances, develop, deploy, use, or sell AI systems that are designed for purposes deemed to pose an unacceptable risk to fundamental rights and societal values. This prohibition includes, but is not limited to, AI systems that:

  • Employ subliminal, manipulative, or deceptive techniques to materially distort a person's behavior in a manner that causes or is likely to cause physical or psychological harm.  

  • Are intended for general-purpose social scoring of individuals by public authorities.

  • Exploit the vulnerabilities of a specific group of persons due to their age, physical or mental disability, or social or economic situation.  

12.2 High Risk

 

High-risk AI systems are those whose failure or misuse could pose a significant risk to an individual's health, safety, or fundamental rights. These systems are permissible but are subject to the most stringent requirements for governance, testing, documentation, and oversight as detailed in Section 13.0. An AI system is classified as high-risk if it is intended to be used in any of the following contexts, among others specified by the AIGC:

  • Employment and Workforce Management: Systems used for recruitment, screening, hiring, promotion, performance monitoring, or termination of employees.  

  • Access to Essential Services: Systems used to evaluate creditworthiness, determine eligibility for loans, or assess entitlement to public assistance benefits or essential private services like insurance.  

  • Education and Vocational Training: Systems that determine access to or assign individuals to educational institutions.

  • Law Enforcement and Administration of Justice.

  • Medical Applications (if applicable): Systems used for medical diagnostics, treatment planning, or as a component of a medical device.  

12.3 Limited Risk

 

Limited-risk AI systems are those that do not meet the high-risk criteria but pose a specific risk related to transparency. These systems are subject to specific disclosure obligations to ensure users are aware they are interacting with an AI. Examples include:

  • Chatbots and virtual assistants intended for direct interaction with humans.  

  • Systems that generate or manipulate image, audio, or video content (e.g., "deepfakes").  

12.4 Minimal Risk

 

This category includes all other AI systems that do not fall into the Unacceptable, High, or Limited risk categories. These systems pose a low or negligible risk to individual rights and safety. Examples may include AI for spam filtering, inventory management optimization, or internal workflow automation. While still subject to the general principles of this policy, these systems have fewer specific, mandatory compliance obligations.  

Table: ManoByte AI System Risk Classification Framework

 

This table serves as a practical decision-making tool for product and development teams, clarifying the governance pathway for any proposed AI system based on its risk classification.

Unacceptable
Definition & Examples: Systems that violate fundamental rights and company values. Examples: social scoring, manipulative systems that cause harm.
Mandatory Requirements: Development, deployment, and use are strictly prohibited.

High
Definition & Examples: Systems whose failure could cause significant harm to health, safety, or fundamental rights. Examples: AI for hiring, credit scoring, medical diagnostics.
Mandatory Requirements: AIGC pre-approval required. Full compliance with all requirements in Section 13.0, including: risk management system, rigorous data governance, bias audits, comprehensive technical documentation, automatic logging, human oversight mechanisms, and pre- and post-deployment audits.

Limited
Definition & Examples: Systems that pose a transparency risk to users. Examples: chatbots, deepfake generation tools.
Mandatory Requirements: Must comply with the Transparency & Disclosure Obligations (Part VII). Users must be informed they are interacting with an AI or that content is AI-generated.

Minimal
Definition & Examples: All other AI systems with low or negligible risk. Examples: spam filters, internal process automation, predictive maintenance.
Mandatory Requirements: Must adhere to the General Principles of this policy (Part I). Subject to standard security and privacy reviews. Exempt from mandatory AIGC pre-approval and high-risk documentation requirements.

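To complement the table, the sketch below shows one way a team might encode a first-pass triage of a proposed use case against these tiers. The category strings and the classify function are hypothetical simplifications; under Section 12.0, the AIGC always holds final classification authority.

```python
# Illustrative first-pass triage of an AI use case against the risk framework.
# The category sets are simplified paraphrases of Sections 12.1-12.4.

PROHIBITED = {"subliminal manipulation", "social scoring", "exploiting vulnerable groups"}
HIGH_RISK = {"employment decisions", "credit scoring", "education access",
             "law enforcement", "medical diagnostics", "essential services eligibility"}
LIMITED_RISK = {"chatbot", "virtual assistant", "synthetic media generation"}


def classify(use_case: str) -> str:
    """Map a declared use case to a provisional risk tier."""
    if use_case in PROHIBITED:
        return "Unacceptable (prohibited)"
    if use_case in HIGH_RISK:
        return "High (AIGC pre-approval and Section 13.0 controls required)"
    if use_case in LIMITED_RISK:
        return "Limited (transparency and disclosure obligations apply)"
    return "Minimal (general principles and standard reviews apply)"


print(classify("credit scoring"))   # High (...)
print(classify("chatbot"))          # Limited (...)
print(classify("spam filtering"))   # Minimal (...)
```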

13.0 Requirements for High-Risk Systems

 

Any AI system classified as "High-Risk" must, without exception, adhere to a strict set of development, documentation, and operational requirements before it can be deployed to any customer. These requirements are designed to ensure maximum safety, fairness, and accountability, and are aligned with emerging global regulations like the EU AI Act.  

  • Continuous Risk Management System: A formal, documented risk management system must be established and maintained throughout the AI system's entire lifecycle. This process must identify, estimate, evaluate, and mitigate reasonably foreseeable risks to health, safety, and fundamental rights.  

  • Rigorous Data Governance: The datasets used for training, validation, and testing must be subject to stringent governance practices. They must be relevant, representative, free of errors, and as complete as possible. A documented process for examining datasets for potential biases and implementing mitigation strategies is mandatory.  

  • Comprehensive Technical Documentation: Detailed technical documentation must be created and kept up-to-date. This documentation must provide regulators and auditors with the necessary information to assess the system's compliance, including its purpose, capabilities, limitations, algorithmic design choices, and testing procedures and results.  

  • Automatic Record-Keeping & Logging: The system must be designed with the technical capability to automatically log events during its operation. These logs are essential for ensuring the traceability of the system's functioning, monitoring for anomalies, and investigating incidents or unexpected outcomes (see the illustrative logging sketch following this list).

  • Transparency & Explainability for Deployers: The system must be designed to be sufficiently transparent to enable those who deploy it to understand and interpret its outputs. Clear and comprehensive instructions for use, including the system's intended purpose, limitations, and the role of human oversight, must be provided to all customers.

  • Effective Human Oversight: The system must be designed to be effectively overseen by humans. This includes implementing appropriate human-in-the-loop, human-on-the-loop, or human-in-command interfaces, which must allow for human intervention, the ability to override a decision, or the ability to stop the system's operation entirely.  

  • Accuracy, Robustness, and Cybersecurity: The system must achieve a high level of accuracy and performance, as defined and tested for its specific intended purpose. It must be robust enough to perform consistently during use and resilient against both errors and attempts to compromise its security through malicious inputs or other cyberattacks.  

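As a concrete illustration of the Automatic Record-Keeping & Logging requirement above, the following minimal sketch writes one structured, append-only log entry per decision event. The field names, file destination, and log_event helper are illustrative assumptions; a production implementation would add tamper protection, retention controls, and schema governance.

```python
# Minimal, hypothetical sketch of structured audit logging for an AI system.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_audit.log"))  # append-only audit trail


def log_event(system_id: str, model_version: str, input_ref: str,
              output: str, confidence: float, operator: str | None = None) -> None:
    """Record one decision event with enough context to reconstruct it later."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,      # pointer to stored input, not raw personal data
        "output": output,
        "confidence": confidence,
        "human_operator": operator,  # populated when a human reviews or overrides
    }
    logger.info(json.dumps(event))


log_event("credit-model-7", "2.3.1", "case/4821", "refer_to_human", 0.58)
```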

14.0 Third-Party AI and Vendor Due Diligence

 

When ManoByte procures or integrates AI systems, models, or components from third-party vendors, we extend our commitment to responsibility to our supply chain. We will conduct rigorous due diligence to ensure that our vendors' practices align with our own ethical principles and legal obligations. The procurement process for any AI-enabled technology, overseen by the AIGC, will include:  

  • Vendor Risk Assessment: A comprehensive evaluation of the vendor's AI governance framework, security practices, data handling policies, and methodologies for bias detection and mitigation.

  • Contractual Safeguards: Ensuring that all contracts and licensing agreements include robust clauses that:

    • Guarantee the protection and confidentiality of any ManoByte or customer data.

    • Grant ManoByte the right to audit the vendor's compliance with these terms.

    • Require transparency from the vendor regarding the AI model's training data, architecture, and performance limitations.  

  • HIPAA Compliance (as applicable): For any AI tool that will create, receive, maintain, or transmit Protected Health Information (PHI) on behalf of ManoByte or its customers, a formal Business Associate Agreement (BAA) is mandatory. The vendor must provide evidence of robust technical, physical, and administrative safeguards that are fully compliant with the HIPAA Security Rule.  

Part V: Data Governance and Security in AI

15.0 Data Handling for AI

 

Data is the lifeblood of AI systems, and its responsible handling is the foundation of trustworthy AI. The principles of modern data protection law are not suspended for AI; rather, they are amplified in their importance due to the scale and complexity of AI data processing. Our policy explicitly interprets and applies these principles to the unique context of AI.

 

15.1 Lawful Basis for Processing

 

All personal data used for any stage of the AI lifecycle, including training, testing, and operation, must have a clearly documented lawful basis for processing under applicable data protection law (e.g., Article 6 of the GDPR). For sensitive personal information, an additional condition for processing (e.g., GDPR Article 9) must be met and documented.  

15.2 Purpose Limitation

 

Data collected for a specific, explicit, and legitimate purpose shall not be repurposed for training an AI model for a new, incompatible purpose without establishing a new lawful basis, which may require obtaining fresh, specific consent from the individuals concerned. Before any such reuse, the AI System Owner must conduct and document a formal compatibility assessment, which must be reviewed and approved by ManoByte's Data Protection Officer (DPO). This prevents "function creep" and ensures that data usage remains aligned with the expectations of the individuals who provided it.  

15.3 Data Minimization and De-identification

 

We are committed to the principle of data minimization. We will process only the personal data that is adequate, relevant, and limited to what is necessary for the defined purpose of the AI system. To operationalize this principle in an AI context:  

  • Whenever technically and operationally feasible, personal data must be anonymized or strongly pseudonymized before being used to train or test AI systems (a minimal pseudonymization sketch follows this list).

  • Where anonymization is not possible, we will explore the use of privacy-enhancing technologies (PETs), such as differential privacy or the generation of synthetic data, to reduce privacy risks.  

  • For any data subject to HIPAA, de-identification must strictly adhere to either the Safe Harbor method (removal of 18 specific identifiers) or the Expert Determination method (formal statistical assessment of re-identification risk).  

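To make the pseudonymization option above concrete, here is a minimal sketch using keyed hashing (HMAC-SHA256) so that direct identifiers are replaced by stable tokens before data enters a training set. The key handling and field choices are illustrative assumptions; on its own, this technique does not constitute HIPAA Safe Harbor de-identification and would still require the risk assessments this section describes.

```python
# Illustrative pseudonymization via keyed hashing (HMAC-SHA256).
# Assumption: the key lives in a managed secrets vault, never in source code.
import hashlib
import hmac

SECRET_KEY = b"replace-with-managed-secret"


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()


source_record = {"email": "jane@example.com", "purchase_total": 219.40}
training_row = {
    "user_token": pseudonymize(source_record["email"]),  # raw email never enters the set
    "purchase_total": source_record["purchase_total"],
}
print(training_row)
```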

15.4 Data Quality and Accuracy

 

The reliability of an AI system is directly dependent on the quality of its underlying data. We will implement reasonable measures to ensure that the data used to train and operate our AI systems is accurate, complete, and kept up-to-date to the extent necessary for the system's intended purpose. Processes will be in place to rectify or erase inaccurate data promptly upon discovery.  

15.5 Storage Limitation

 

Personal data will not be kept in an identifiable form for longer than is necessary for the purposes for which it is processed. This principle applies directly to AI training datasets. Clear data retention periods for all AI-related data, including training, validation, and testing sets, must be defined, documented, and technically enforced.  

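As one illustration of how retention periods might be technically enforced, the sketch below sweeps a dataset and separates records older than their documented retention period. The 24-month period and record layout are invented assumptions for illustration, not mandated values.

```python
# Illustrative retention sweep over a training dataset (values are invented).
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=730)      # e.g., a documented 24-month retention period

records = [
    {"id": "r1", "collected_at": datetime(2022, 3, 1, tzinfo=timezone.utc)},
    {"id": "r2", "collected_at": datetime.now(timezone.utc) - timedelta(days=30)},
]

now = datetime.now(timezone.utc)
kept = [r["id"] for r in records if now - r["collected_at"] <= RETENTION]
purged = [r["id"] for r in records if now - r["collected_at"] > RETENTION]
print(f"kept: {kept}, purged: {purged}")  # r1 is past retention for runs after early 2024
```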

16.0 Bias Detection and Mitigation

 

Achieving fairness is not a one-time task but a continuous commitment to identifying and mitigating unwanted bias throughout the AI lifecycle. Our strategy for combating bias is proactive and multi-layered.  

16.1 Diverse Development Teams

 

We recognize that a crucial defense against bias is a diversity of perspectives. We will actively strive to build diverse and interdisciplinary teams for AI development, including individuals with varied backgrounds in technology, ethics, social sciences, and the relevant domain. This diversity helps to surface blind spots and challenge assumptions that could otherwise lead to biased outcomes.  

16.2 Data and Model Auditing

 

All AI systems, with the most rigorous requirements applied to high-risk systems, will undergo regular, documented audits to test for discriminatory bias. This process includes:  

  • Data Audits: Analyzing training data for skews, underrepresentation of demographic groups, or the presence of historical societal biases.

  • Model Outcome Audits: Testing model predictions and decisions across different demographic subgroups using established statistical fairness metrics (e.g., demographic parity, equalized odds) to identify any disparate impacts.

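To illustrate the model outcome audits above, the sketch below computes a demographic parity gap and the true-positive-rate component of equalized odds on a toy sample. The data is invented for illustration; real audits would use production-scale samples and an established fairness toolkit.

```python
# Illustrative computation of demographic parity and equalized-odds gaps.
# y_true: actual outcomes, y_pred: model decisions, group: subgroup labels.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]


def selection_rate(g):
    """Share of members of group g who receive a positive decision."""
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)


def true_positive_rate(g):
    """Share of actual positives in group g that the model identifies."""
    pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, group) if grp == g]
    positives = [p for t, p in pairs if t == 1]
    return sum(positives) / len(positives) if positives else 0.0


# Demographic parity: selection rates should be similar across groups.
dp_gap = abs(selection_rate("A") - selection_rate("B"))
# Equalized odds (TPR component): error rates should be similar across groups.
tpr_gap = abs(true_positive_rate("A") - true_positive_rate("B"))
print(f"demographic parity gap: {dp_gap:.2f}, TPR gap: {tpr_gap:.2f}")
```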

16.3 Mitigation Techniques

 

Where unacceptable bias is detected, the AI System Owner is responsible for ensuring that appropriate mitigation techniques are implemented and their effectiveness is validated. These techniques may include, but are not limited to, re-sampling or re-weighting the training data to correct for imbalances, or applying algorithmic adjustments and post-processing techniques to the model's outputs. All mitigation strategies and their results must be documented as part of the system's technical records.  

17.0 AI Security and Resilience

 

AI systems introduce novel security vulnerabilities that require a specialized and proactive approach to defense. Our AI security program, which aligns with the principles of the NIST AI Risk Management Framework, is designed to protect the confidentiality, integrity, and availability of our AI systems.  

17.1 Secure Development Lifecycle (SDLC) for AI

 

We will integrate security considerations into every phase of the AI development lifecycle. This includes conducting threat modeling specifically for AI-related attack vectors, such as:

  • Data Poisoning: Malicious manipulation of training data to corrupt the model.

  • Model Inversion: Attacks designed to extract sensitive training data from a deployed model.

  • Adversarial Examples: Specially crafted inputs designed to cause the model to make incorrect classifications or decisions.  

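To ground the adversarial-example threat named above, the sketch below crafts a worst-case, FGSM-style perturbation against a toy linear scoring model. The weights and inputs are invented for illustration; real red-team testing under this policy would target the actual deployed models with dedicated tooling.

```python
# Illustrative FGSM-style adversarial perturbation against a toy linear scorer.
# For a linear score w.x + b, the worst-case perturbation within an L-infinity
# budget epsilon shifts each feature by epsilon against the sign of its weight.

w = [0.8, -0.5, 0.3]   # invented model weights
b = -0.2
x = [0.6, 0.4, 0.5]    # a legitimate input
epsilon = 0.1          # attacker's per-feature budget


def score(features):
    return sum(wi * xi for wi, xi in zip(w, features)) + b


def sign(v):
    return (v > 0) - (v < 0)


# Push the score downward as far as the budget allows.
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

print(f"clean score:     {score(x):+.3f}")
print(f"perturbed score: {score(x_adv):+.3f}")  # small input change, noticeably lower score
```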

17.2 Access Controls

 

Strict, role-based access controls (RBAC) will be implemented for all components of the AI ecosystem, including datasets, model files, development environments, and deployment infrastructure. The principle of least privilege will be enforced to ensure that personnel can only access the data and systems absolutely necessary to perform their duties.  

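A minimal sketch of the least-privilege check this section describes appears below; the role names, permission strings, and is_allowed helper are hypothetical placeholders rather than ManoByte's actual access model.

```python
# Illustrative role-based access control with least privilege (hypothetical roles).
ROLE_PERMISSIONS = {
    "ml_engineer":  {"read:training_data", "write:model_artifacts"},
    "auditor":      {"read:audit_logs", "read:model_artifacts"},
    "data_steward": {"read:training_data", "write:training_data"},
}


def is_allowed(role: str, action: str) -> bool:
    """Grant only permissions explicitly assigned to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())


assert is_allowed("auditor", "read:audit_logs")
assert not is_allowed("ml_engineer", "read:audit_logs")  # least privilege: not granted
```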

17.3 Vulnerability Management

 

We will conduct regular vulnerability scanning of AI systems and their underlying software and hardware infrastructure. Identified vulnerabilities must be patched or otherwise remediated promptly, with prioritization based on the severity of the vulnerability and the risk level of the affected system.  

17.4 Incident Response

 

ManoByte will develop, maintain, and regularly test incident response plans that are specifically tailored to address AI-related security incidents. These plans will outline procedures for detecting, containing, and responding to events such as a model producing harmful or toxic content, a data breach through a compromised AI API, or the detection of a successful adversarial attack.  

Part VI: Compliance, Auditing, and Redress

18.0 Regulatory Compliance

 

ManoByte is unequivocally committed to full compliance with all applicable laws and regulations governing artificial intelligence and data protection in every jurisdiction where we operate. The AIGC is charged with the responsibility of continuously monitoring the evolving legal landscape and ensuring this policy and our operational practices are updated accordingly.  

18.1 GDPR & Data Subject Rights

 

We fully uphold all rights granted to individuals under the General Data Protection Regulation (GDPR). We will maintain clear, efficient, and transparent procedures to respond to Data Subject Requests (DSRs) within the statutory timeframes. This includes the right of access, rectification, erasure ("right to be forgotten"), restriction of processing, and data portability, and extends to all personal data used in our AI systems, whether for training or operation.  

18.2 CCPA & Consumer Rights

 

We fully uphold all rights granted to California residents under the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA). This includes the right to know what personal information is collected, the right to delete that information, and the right to opt out of the sale or sharing of personal information. We will provide a clear and conspicuous "Do Not Sell or Share My Personal Information" link on our digital properties and will honor opt-out requests, including those related to the use of personal information for AI-driven cross-context behavioral advertising.

18.3 EU AI Act

 

We will proactively monitor the implementation timeline and evolving guidance related to the EU AI Act. We commit to ensuring that all our AI systems that fall within its scope comply with its requirements based on their risk classification by the applicable deadlines. Our internal risk classification framework (Part IV) is designed to align with the Act's structure to facilitate future compliance.  

18.4 HIPAA (as applicable)

 

For any ManoByte product, service, or internal system that creates, receives, maintains, or transmits Protected Health Information (PHI), we will ensure full and strict compliance with the Health Insurance Portability and Accountability Act (HIPAA). This includes adherence to the Privacy Rule, the Security Rule, and the Breach Notification Rule. It is mandatory to have a signed Business Associate Agreement (BAA) in place with any customer, vendor, or subcontractor involved in the handling of PHI in connection with an AI system.  

19.0 AI Audits and Monitoring

 

To verify compliance with this policy, ensure the ongoing effectiveness of our AI systems, and maintain stakeholder trust, ManoByte will conduct regular, systematic audits of its AI ecosystem.  

19.1 Scope of Audits

 

AI audits will be comprehensive, assessing systems against the principles and rules set forth in this policy. The scope will include, but is not limited to, evaluations of:

  • Model performance, accuracy, and reliability against defined benchmarks.

  • Data quality, relevance, and governance practices.

  • Fairness, including testing for harmful bias and discriminatory impacts.

  • Security, including vulnerability assessments and resilience to adversarial attacks.

  • Compliance with applicable legal and regulatory requirements.

  • Adequacy and effectiveness of human oversight mechanisms.  


19.2 Audit Frequency

 

The frequency of audits will be determined by the system's risk classification.

  • High-Risk Systems: Must undergo a comprehensive audit by a qualified independent auditor (internal or external) at least annually, and following any significant modification.  

  • Limited-Risk and Minimal-Risk Systems: Will be subject to periodic audits on a less frequent schedule, as determined by the AIGC, based on their complexity and potential impact.

 

19.3 Independence and Reporting

 

Audits will be conducted by a qualified function or firm that is independent of the AI system's development team and system owner to ensure objectivity. All audit findings, recommendations, and management responses will be formally documented and reported directly to the AIGC. The AIGC is responsible for tracking all corrective action plans to completion and reporting on the overall state of AI compliance to executive leadership.

 

20.0 Customer Redress and Challenge Mechanisms

 

Preventing harm is our primary goal, but we recognize that complex systems can sometimes produce unintended or erroneous outcomes. A transparent, accessible, and fair redress mechanism is a cornerstone of accountability and is critical for building and maintaining customer trust. It transforms a potential negative experience into an opportunity to demonstrate our commitment to our customers' rights.  

20.1 Right to Explanation and Human Review

 

In any instance where a ManoByte AI system makes a decision based solely on automated processing that produces a legal or similarly significant effect on a customer (e.g., related to credit, finance, insurance, or access to essential services), that customer has the right to:

  • Receive a clear, concise, and meaningful explanation of the decision reached. This explanation will be provided in plain language and will include the main factors and general logic that contributed to the outcome.  

  • Obtain intervention from a qualified ManoByte employee who has the authority and competence to review the decision and all the data that informed it.  

  • Express their point of view, provide additional information, and formally challenge the automated decision. The human reviewer has the authority to override the AI's decision.  

20.2 Redress Procedure

 

ManoByte will establish a dedicated "AI Decision Review" process. This process will be clearly signposted and easily accessible through our primary customer support channels and website. The procedure is as follows:

  1. A customer submits a review request through the designated channel.

  2. The request is logged, and an acknowledgment is sent to the customer with an estimated timeline for review.

  3. The case is assigned to a trained team that is independent of the original AI system's development and operational teams.

  4. The team conducts a thorough review of the automated decision, the data used, and any additional information provided by the customer.

  5. A formal response, including the outcome of the human review and a clear explanation, is provided to the customer within a defined and reasonable timeframe.

 

20.3 AI Ombudsman

 

The AIGC will serve as ManoByte's internal AI Ombudsman. It will act as the final, internal point of appeal for customer grievances related to AI systems that are not satisfactorily resolved through the standard redress procedure. The AIGC has the authority to mandate remedial actions, including changes to AI systems or processes, to address systemic issues identified through individual or collective complaints.  

Part VII: Stakeholder Engagement and Trust

21.0 Transparency and Disclosure to Customers

 

Proactive transparency is fundamental to our goal of instilling customer trust. We believe that clarity and honesty about how we use AI are not just legal obligations but are essential for building strong, lasting relationships with our customers. This means moving beyond legalistic privacy policies to provide contextual, just-in-time information.

 

21.1 Disclosure of AI Interaction

 

When a customer is interacting directly with an AI system, such as a customer service chatbot or a virtual assistant, we will provide a clear and conspicuous disclosure at the beginning of the interaction to inform them that they are communicating with an AI, unless it is already patently obvious from the context of the application.  

21.2 Disclosure of AI-Generated Content

 

Any image, audio, or video content that is synthetically generated or significantly modified by an AI system (e.g., "deepfakes") must be clearly and persistently labeled as such, so that users are aware of its origin. This helps prevent deception and misinformation.

21.3 AI Disclosure Statements

 

For ManoByte products and services that rely significantly on AI to deliver their core functionality, we will provide a clear, concise, and easy-to-understand AI Disclosure Statement. This statement, written in non-technical language, will be made readily accessible to customers and will outline:

  • The purpose and scope of the AI system and the value it provides.

  • A high-level overview of its key capabilities and known, material limitations.

  • A general description of the types of data the AI uses to function.

  • A clear statement of our commitment to human oversight and a direct link to this policy and our customer redress mechanism.  


21.4 Measuring ROI and Value

 

We will measure the Return on Investment (ROI) of our AI implementations using a balanced scorecard that includes not only financial metrics but also key indicators of customer value. These metrics will include improvements in customer satisfaction scores (CSAT), reductions in error rates, faster service delivery, and enhanced product capabilities. We will communicate this value proposition to our customers, demonstrating how our investment in AI is being used to directly enhance their experience and success.  

22.0 Communicating AI's Impact

22.1 Internal Communication

 

We recognize that the integration of AI into the workplace can be a source of both excitement and apprehension for our employees. We commit to an open, honest, and ongoing dialogue about how AI is transforming our business and the future of work at ManoByte. We will be transparent about our AI strategy, the introduction of new tools, and the potential impact on job roles and responsibilities. Our communication will consistently frame AI as a tool for augmentation and empowerment, designed to free our employees from mundane tasks to focus on more strategic, creative, and fulfilling work. To support this, we will make significant investments in the upskilling, reskilling, and training necessary to ensure our entire workforce can thrive in an AI-enabled environment.  

22.2 Public Communication

 

We believe that leadership in the AI era requires public accountability. We will be transparent with the public, our partners, and all stakeholders about our commitment to responsible AI. This comprehensive AI Policy will be made publicly available on our website as a testament to our principles. Furthermore, the AIGC will periodically publish a summary report on our AI governance activities, progress, and learnings, fostering a broader dialogue on the ethical development and deployment of this transformative technology.

 

23.0 Policy Enforcement and Review

23.1 Enforcement

 

Adherence to this policy is a mandatory condition of employment and engagement with ManoByte. Any violation of this policy by an employee may result in disciplinary action, up to and including termination of employment, in accordance with company procedures and applicable law. Any violation by a contractor, vendor, or other third party may result in the termination of their contract and legal action where appropriate.  

23.2 Policy Review

 

The field of artificial intelligence is evolving at an unprecedented pace, as are the legal and societal norms that govern it. Therefore, this policy is a living document. The AI Governance Committee will formally review and, if necessary, update this policy at least annually. More frequent reviews will be conducted as needed in response to significant new technological developments, major regulatory changes, or evolving ethical best practices to ensure that ManoByte's AI governance framework remains relevant, effective, and at the forefront of responsible innovation.