
Determining Appetite for Risk: Frameworks for Drafting AI Policy in the Workplace

  • Writer: Nicole Clayton
  • Oct 13
  • 6 min read
Robot butlers, yes please. Robot overlord, no thank you.

by Productivity Labs


The New Frontier of Work Is Already Here

Artificial Intelligence has entered the workplace not through the front door of policy, but through the side door of experimentation. It showed up in meeting notes, marketing drafts, and spreadsheet formulas long before it appeared in governance documents or leadership briefings.


What’s happening now is a cultural shift as much as a technological one. Individuals, not institutions, are deciding when and how to use AI. Some are embracing it as a creative amplifier, a shortcut through repetitive work, or a partner in brainstorming. Others are opting out entirely, wary of accuracy concerns, ethical implications, or the loss of human craftsmanship.


In every case, behavior precedes policy. That means most organizations already have an AI policy; it’s just invisible, written through the habits and ethics of their employees.


Leaders who want to stay competitive can’t afford to ignore this. Whether an organization fully embraces AI or deliberately limits its use, it must know its appetite for risk, the degree to which it’s willing to accept uncertainty in pursuit of innovation. The answer to that question doesn’t come from technology. It comes from culture, ethics, and the willingness to design thoughtful systems of accountability.


AI won’t replace people. However, people who learn to leverage AI with discernment, curiosity, and ethical rigor will outperform those who don’t. If organizations don’t define how AI should be used, employees will define it for them.


From Experimentation to Intention: Why AI Needs a Framework

Every prompt typed into an AI system is a decision about ownership, accuracy, data privacy, and representation. The more those decisions happen without structure, the more risk compounds quietly behind the scenes.


AI policy isn’t about restriction; it’s about calibration: setting clear boundaries, testing what works, and resetting expectations through feedback loops. Organizations that take a measured, iterative approach to AI governance will not only protect their reputation but also cultivate a workforce that’s confident, skilled, and future-ready.


The Human Side of Risk Appetite

AI use in the workplace often follows personality as much as policy:

  • High-adoption, low-effort users treat AI as a shortcut. They generate outputs quickly but skip the verification that gives those outputs credibility, like fact-checking, tone alignment, and human voice. It looks productive, but it’s often hollow work.

  • High-adoption, high-intent users use AI as an amplifier. They prompt strategically, cross-check outputs, and refine results through human insight. Their work reflects both speed and discernment, the hallmark of ethical productivity.

  • Low-adoption users often abstain altogether, citing ethical uncertainty, lack of trust, or fear of obsolescence.

A “high-use” environment isn’t automatically an innovative one. The distinction lies in effort and intention. Organizations should reward thoughtful use, not volume of use, encouraging workflows where AI assists quality work rather than accelerates shallow work.


Where Organizations Need to Step In

To move from accidental to intentional use, organizations must define their AI risk appetite, the level of uncertainty they’re willing to accept in pursuit of innovation.


A practical sequence (sketched in code after the list):

  1. Define use cases – Identify where AI is already in play, formally or informally.

  2. Map the risks – Evaluate confidentiality, integrity, and accuracy needs for each.

  3. Set approval tiers – Low-risk (auto-approved), moderate (manager sign-off), high-risk (executive or legal review).

  4. Establish accountability – Assign responsibility for human verification and oversight.

  5. Align with top authorities and frameworks – Reference guidance from legal counsel, recognized regulatory bodies, and industry councils that govern your field. Examples include the EU AI Act (for transparency and risk classification), the NIST AI Risk Management Framework (for governance and accountability), and sector-specific organizations such as the AMA, AICPA, AAM, IAPP, or SHRM. Aligning with your organization's top authority ensures your policy reflects both ethical standards and evolving best practices.
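
As an illustration only, steps 1 through 4 can be captured in a lightweight use-case register. The minimal Python sketch below shows one way to encode that register; the use cases, tiers, and owners are hypothetical placeholders, not recommendations for any specific organization.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Approval tiers from step 3: who signs off before AI output is used."""
    LOW = "auto-approved"
    MODERATE = "manager sign-off"
    HIGH = "executive or legal review"


@dataclass
class AIUseCase:
    """One entry in the AI use-case register (steps 1, 2, and 4)."""
    name: str               # where AI is already in play
    data_sensitivity: str   # confidentiality/integrity considerations
    risk_tier: RiskTier     # approval path from step 3
    accountable_owner: str  # who verifies outputs before release (step 4)


# Hypothetical entries; real entries come from your own inventory.
REGISTER = [
    AIUseCase("Meeting-note summarization", "internal", RiskTier.LOW, "Team lead"),
    AIUseCase("First-draft marketing copy", "public", RiskTier.MODERATE, "Comms manager"),
    AIUseCase("Contract clause drafting", "confidential", RiskTier.HIGH, "General counsel"),
]


def approval_path(use_case_name: str) -> str:
    """Look up the sign-off required before a given AI use case ships."""
    for case in REGISTER:
        if case.name == use_case_name:
            return f"{case.name}: {case.risk_tier.value} (owner: {case.accountable_owner})"
    return f"{use_case_name}: not registered; treat as high-risk until reviewed"


if __name__ == "__main__":
    print(approval_path("Contract clause drafting"))
    print(approval_path("Unregistered chatbot experiment"))
```

Even a simple register like this makes the approval path explicit, and an unregistered use case defaults to the most cautious tier until someone reviews it.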


Failing to adopt such frameworks or educate the workforce on the organization’s official stance creates a “Wild West” scenario where AI is used inconsistently, often without context or care. In a vacuum, three things happen:

  • Some employees use AI with precision and ethics, generating impressive, high-quality results.

  • Others avoid it altogether, missing out on efficiency and innovation gains.

  • And most concerning, a subset uses AI carelessly, producing unverified or poorly crafted work that can compromise the organization’s credibility, data integrity, and reputation.


Clear frameworks prevent this drift. They set expectations for how and when AI should be used, creating structure for innovation rather than chaos disguised as creativity.


Balancing Ethics and Efficiency

Recently, I helped draft an AI Use Policy for a creative organization, a space where the line between inspiration and originality matters deeply. After researching best practices, we designed a policy with intentional nuance:

  • AI is prohibited for generating original creative works such as visual art, writing, or curatorial statements.

  • AI is permitted for enhancement and efficiency tasks like improving photo resolution, adjusting lighting, or editing imagery.


The reasoning was simple. In reputation-driven fields like the arts, authenticity is non-negotiable. The organization wanted to ensure that artists and curators remain the true authors of their work while still embracing AI tools that streamline production and logistics.


At the same time, the policy encourages AI use for administrative efficiency, such as taking a photo of handwritten meeting notes and uploading it to ChatGPT for accurate, time-efficient transcription. In that use case, AI supports humans in refining and organizing ideas; it doesn’t create them.
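
For teams that want to automate that workflow, here is a minimal sketch using the OpenAI Python SDK. The model name, file name, and prompt wording are assumptions to adapt, and the output is a draft, not a finished record.

```python
# Minimal sketch: send a photo of handwritten meeting notes to an image-capable
# model for transcription. Assumes the `openai` package is installed and an
# OPENAI_API_KEY is set in the environment; "gpt-4o" and the file name are
# illustrative assumptions, not a prescribed setup.
import base64
from openai import OpenAI

client = OpenAI()

with open("meeting_notes.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Transcribe these handwritten meeting notes verbatim. "
                     "Flag any words you cannot read instead of guessing."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

# Draft only: a human reviews and revises before the notes are shared.
print(response.choices[0].message.content)
```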


A human must still review and revise, ensuring context, nuance, and intent are preserved before dissemination.

AI should not replace good work; it should support humans in doing better work.

This approach satisfied both ethical and operational priorities. Creatives felt protected, administrators felt empowered, and leadership gained confidence that AI use was aligned with institutional values.


Designing Guardrails and Feedback Loops

AI policy should be treated as a living document, one that evolves with use, evaluation, and feedback.


To ensure responsible implementation:

  • Build double-check protocols: Require human review before decisions or public release.

  • Log prompts and outputs: Maintain records for auditing and training improvements (a minimal logging sketch follows this list).

  • Encourage red-teaming: Periodically test for bias, hallucination, or data exposure.

  • Run feedback cycles: Gather employee input and recalibrate boundaries quarterly.
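
As one way to implement the "log prompts and outputs" guardrail, here is a minimal Python sketch of a JSON-lines audit log; the file path and the fields captured are assumptions to adapt to your own tooling.

```python
# Minimal sketch of a prompt/output audit log for later review, red-teaming,
# and quarterly feedback cycles. The file path and fields are illustrative.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_audit_log.jsonl")  # hypothetical location


def log_interaction(user: str, tool: str, prompt: str, output: str,
                    human_reviewed: bool = False) -> None:
    """Append one AI interaction to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        # Hash the output so the log stays compact but remains verifiable.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "human_reviewed": human_reviewed,  # double-check protocol flag
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: record a drafted newsletter blurb that still awaits human review.
log_interaction("nclayton", "chat-assistant",
                "Draft a 3-sentence newsletter intro",
                "AI-generated draft text...", human_reviewed=False)
```

Hashing the full output keeps the log small while still letting auditors confirm that a reviewed draft matches what was originally logged.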


Policies that evolve alongside behavior remain relevant and trusted, the key to sustainable innovation.


Training for a Risk-Aware Culture

Effective AI governance isn’t about restriction; it’s about designing for discernment. The goal is to create a culture where employees can innovate safely, ethically, and with confidence.


Organizations can:

  • Offer AI literacy training to demystify tools and limitations.

  • Require transparency when AI contributes to a deliverable.

  • Use ethical reflection checklists before applying AI in sensitive work: Who benefits? Who could be harmed? Can this output be verified independently?


When teams are equipped to ask those questions, risk awareness becomes second nature, and compliance becomes culture.


Practical Applications by Industry

AI’s usefulness, and its risk, change dramatically by context. The best policies recognize that not all work should involve AI in the same way. Here’s a guide to help organizations calibrate by sector.


Academia and Research

  • Appropriate use: Editing, grammar correction, formatting citations, summarizing literature, organizing ideas.

  • Restricted use: Generating original research, analysis, or academic argumentation.

  • Why: Academic integrity demands human authorship of ideas. AI may assist in polishing, but it cannot originate claims or conclusions.


Creative and Fine Arts

  • Appropriate use: Technical editing of imagery (color correction, lighting, resizing), workflow automation, metadata tagging, or marketing copy support.

  • Restricted use: Creating original artworks, curatorial statements, or creative writing presented as human-authored work.

  • Why: Authenticity and integrity are core to artistic value. AI can support production, not replace artistic intent.


Marketing, Communications, and Client Relations

  • Appropriate use: Drafting first-pass copy, generating taglines or headlines, segmenting audiences, creating ideas for newsletters, branding, and brainstorming responses to challenging customer interactions.

  • Restricted use: Publishing AI-generated content without human review or attribution.

  • Why: AI can accelerate creativity and responsiveness, but tone, empathy, and nuance must still be verified by humans.


Healthcare and Life Sciences

  • Appropriate use: Summarizing clinical documentation, routing referrals and appointments, powering chatbots to triage FAQs or provide non-diagnostic guidance, automating scheduling and follow-ups, and analyzing anonymized datasets for quality improvement.

  • Restricted use: Making diagnostic or treatment recommendations, generating individualized patient communications, or interpreting clinical data without clinician oversight.

  • Why: Patient safety, privacy, and regulatory compliance demand human oversight. AI can streamline workflows but must never replace licensed judgment.


Legal and Financial Services

  • Appropriate use: Drafting templates, analyzing precedent patterns, summarizing case files or financial reports, conducting initial risk scans.

  • Restricted use: Producing binding language, forecasts, or compliance statements without professional verification.

  • Why: These outputs have legal and fiduciary implications that demand licensed human sign-off.


Corporate Operations and Human Resources

  • Appropriate use: Transcribing meeting notes, synthesizing feedback, drafting policy outlines, designing surveys, summarizing performance metrics.

  • Restricted use: Making personnel decisions or publishing official HR communications without review.

  • Why: AI improves clarity and efficiency but cannot substitute for human empathy or discretion.


Technology, Engineering, and Data Analytics

  • Appropriate use: Code generation, test script creation, documentation automation, drafting user manuals, summarizing analytics.

  • Restricted use: Deploying unverified code or using AI for autonomous product or security decisions.

  • Why: Technical precision and security require rigorous testing and human QA.


Business Development and Entrepreneurship

  • Appropriate use: Market research summaries, competitor analysis, pitch-deck brainstorming, grant or proposal drafting.

  • Restricted use: Financial forecasting or investor communication without validation.

  • Why: AI can amplify insights but must not distort financial or strategic credibility.


The Strategic Imperative: Humans + AI = Ethical Productivity

AI isn’t replacing human intelligence; it’s redistributing where and how we apply it. The organizations that thrive will be those that intentionally pair AI’s best output with human judgment, creativity, and integrity.


If leaders don’t define how AI should be used, AI will define how work gets done. And when that happens, organizations risk either over-reliance or irrelevance.
