At a moment of heightened scrutiny over data privacy, intellectual property rights, and the rapid pace of artificial intelligence advancement, Matt Calkins, cofounder and chief executive of Appian, has laid out a bold, four-principle framework for responsible AI that aims to restore trust between AI providers and end users. His plan emphasizes transparency, consent, compensation, and privacy-preserving practices as the core pillars of a more trustworthy AI era. While the industry wrestles with how to regulate and govern AI, Calkins argues that regulation alone will not unlock AI’s full potential unless it is anchored in principled practice. Appian’s stance places the company at the center of a shifting landscape where enterprise AI must be both powerful and principled, particularly for organizations that rely on low-code platforms to build and deploy AI-powered applications. The broader context includes persistent concerns about data governance, the risk of misuse, and the societal impact of automation on jobs and equity. In this environment, Calkins’ guidelines are presented as a pragmatic blueprint for aligning innovation with user rights and organizational responsibility, reducing friction for enterprises trying to scale responsible AI.
The Bold Proposal: Appian’s Four Guiding Principles for Responsible AI
Appian’s guidelines center on four explicit, interlocking principles designed to address core pain points in current AI practice: where data comes from, how private data is used, how personally identifiable information is handled, and how copyrighted material is managed. The four principles are described as follows:
- Disclosure of data sources. Entities building and deploying AI should clearly disclose where data originates, including the provenance of datasets and the historical contexts that shape them. This transparency is intended to help users understand biases, limitations, and potential legal or ethical concerns tied to the data feeding an AI system. The practical implications include the need for auditable data lineage, documentation that accompanies AI outputs, and a framework for customers to assess whether a model’s training data aligns with their own governance and compliance standards. The principle aims to empower organizations to scrutinize the data behind AI recommendations and decisions, rather than accepting opaque results.
- Use of private data only with consent and compensation. Private or sensitive data should be utilized in ways that require explicit user consent, and where appropriate, compensation should be provided. This principle addresses the common tension between leveraging data for improved AI performance and protecting individuals’ privacy and economic interests. Implementing it would involve clear consent mechanisms, user-friendly disclosures about how data will be used, and compensation models that reflect the value extracted from private data. The practical challenges include designing consent flows that are meaningful and revocable, aligning compensation with data usage, and enforcing these terms across complex data ecosystems.
- Anonymization and permission for personally identifiable data. When personal data is involved, processes must emphasize anonymization and obtain permission for the use of personally identifiable information (PII). The goal is to minimize privacy risks while preserving data utility for AI development and deployment. Techniques such as strong de-identification, differential privacy, and robust access controls would be central to this principle, accompanied by governance that continually evaluates the balance between data utility and privacy protection. The approach would require ongoing risk assessment to ensure that anonymization remains effective in changing data environments and that permissions are respected in real time as data flows through systems.
- Consent and compensation for copyrighted information. AI systems that rely on copyrighted material should do so under clearly defined licensing and compensation terms. This principle seeks to ensure that creators’ rights are respected and that the use of copyrighted content in model training or downstream AI outputs is properly licensed, documented, and compensated where necessary. Implementing this principle would likely involve establishing transparent licensing models, fair-use assessments, and mechanisms to reconcile the needs of AI developers with the rights holders who contribute content to the training or inference processes.
Calkins argues that these four pillars collectively foster trust by making data origins, user rights, and rights management explicit rather than implicit. The intended outcome is a stronger alignment between AI systems and the values of individuals and organizations that rely on them. In his view, trust is not merely a soft add-on; it is a practical enabler that makes AI more relevant to daily business use, improves data stewardship, and supports better decision-making. By building trust around consent, data provenance, privacy protections, and fair use, AI providers can unlock access to more meaningful data and more precise contexts for decision support, while mitigating risk for users and regulators alike. This approach contrasts with approaches that prioritize scale or speed at the expense of governance and accountability, presenting a path toward more sustainable adoption of AI across industries.
Practical implications for implementation
- Establishing data provenance documentation: Implementing the disclosure principle requires systematic cataloging of datasets, including sources, transformations, and governing policies. Organizations would need to embed data lineage tools into their AI pipelines, generate easily accessible lineage reports, and integrate provenance information into model documentation and governance dashboards (a minimal sketch of such lineage-bearing model documentation follows this list). The expectation is that stakeholders—from data engineers to compliance teams and business leaders—will have clear visibility into how data informs model outputs.
- Creating consent and compensation frameworks: Organizations would design consent workflows that are clear, revocable, and auditable, with explicit terms on how data is used and what benefits or compensations may apply. This could involve user dashboards where individuals can adjust their preferences over time and monitor how their data contributes to AI capabilities. Compensation mechanisms might range from monetary payments to enhanced services, value-sharing arrangements, or other incentives that reflect the value of private data used for model improvement.
- Deploying robust anonymization and privacy protections: The anonymization principle would push for implementing strong privacy-preserving techniques and access controls. This includes not only removing direct identifiers but also mitigating re-identification risks through differential privacy, data minimization, and secure-by-design architectures. It would also require ongoing risk assessments as data landscapes evolve and as models are updated.
- License-aware use of copyrighted material: The final principle demands that AI developers secure licenses or otherwise establish fair-use arrangements for copyrighted material used in training or generation. Organizations would need processes to verify licensing status, manage rights, and ensure that outputs do not infringe on creators’ rights. Clear documentation and compliance checks would be essential components of governance.
- Integrating principles into governance and culture: Beyond technical changes, these four principles imply a shift in governance culture. Companies would need to embed responsible AI practices into corporate policies, risk management frameworks, and performance metrics. This means cross-functional collaboration among product teams, legal, compliance, privacy officers, and external partners to ensure that guidelines are not merely theoretical but actively enforced in day-to-day operations.
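To make the provenance-documentation item above more concrete, here is a minimal sketch of a dataset record and a model card that carry their own lineage and render a governance-facing report. The class names, fields, and example values are hypothetical assumptions for illustration, not an Appian feature or any established model-card standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical structures for illustration; names and fields are not Appian's API.

@dataclass
class DatasetRecord:
    """Provenance entry for one dataset feeding a model."""
    name: str
    source: str                                           # where the data originated
    license_terms: str                                     # governing license or usage terms
    collected_on: str                                      # acquisition date, ISO 8601
    transformations: list = field(default_factory=list)   # ordered processing steps
    known_limitations: list = field(default_factory=list)

@dataclass
class ModelCard:
    """Model documentation that carries its data lineage."""
    model_name: str
    version: str
    datasets: list                                         # list of DatasetRecord
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def lineage_report(self) -> str:
        """Render a human-readable lineage report for governance dashboards."""
        return json.dumps(asdict(self), indent=2)

card = ModelCard(
    model_name="claims-triage",
    version="1.3.0",
    datasets=[
        DatasetRecord(
            name="claims_2021_2023",
            source="internal claims warehouse",
            license_terms="internal-use-only",
            collected_on="2024-01-15",
            transformations=["deduplicated", "direct identifiers removed", "stratified sample"],
            known_limitations=["underrepresents claims filed by phone"],
        )
    ],
)
print(card.lineage_report())
```

A real catalog would likely add ownership, retention policy, and links back to the governing consent and licensing records.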
The Critique of Current Regulation and the Data-Provenance Debate
Calkins does not mince words when discussing the current regulatory landscape. He contends that existing regulation often fails to address foundational issues such as data provenance and fair use, leaving a “gray zone” in which large tech firms can accelerate AI development with limited accountability. In this view, regulation that focuses primarily on broad risk categories without mandating transparent data practices may inadvertently slow meaningful progress or shift risk onto smaller players who lack leverage to shape policy. He points to high-profile statements from influential policymakers as indicative of a regulatory discourse that emphasizes speed and data accumulation over structural safeguards.
This critique hinges on a few core ideas. First, data provenance—the origin, history, and transformation of data as it enters AI systems—matters for both performance and governance. Without robust provenance, it is difficult to assess bias, trace the lineage of model outputs, or assign responsibility for incorrect or harmful conclusions. Second, fair use in AI is a dynamic and contested area; simply labeling AI training as lawful under broad interpretations may overlook the rights and expectations of content creators whose work underpins model capabilities. Third, national-level policy statements can have a chilling effect that stifles collaboration or innovation, especially among smaller players who may be disproportionately affected by ambiguity and broad compliance requirements.
To this end, Appian’s proposal aims to fill gaps left by current policy by proposing concrete rules that guide responsible AI development while remaining adaptable to the evolving regulatory environment. By coupling data provenance with consent, compensation, anonymization, and fair-use licensing, the approach seeks to create a more transparent, accountable, and privacy-preserving AI ecosystem. The underlying assumption is that clear governance can expand the legitimate use cases for AI in enterprise settings, enabling organizations to pursue higher-risk, higher-reward AI initiatives with a structured risk-management framework. This perspective emphasizes internal governance tools and market incentives as complements to external regulation, arguing that responsible practices can coexist with aggressive AI innovation.
What “proof of responsible AI” would entail in practice
- Transparent data governance programs: Enterprises would implement comprehensive data governance programs that produce accessible reports about data sources, data quality, and data-use policies. These programs would also document any data transformations, model training inputs, and the checks used to mitigate bias or discrimination.
- Rights-respecting AI development: The approach would promote collaboration with content creators, privacy advocates, and regulatory bodies to define licensing models and consent standards that respect rights while enabling AI progress. It would encourage ongoing dialogue about what counts as fair and equitable use of copyrighted material in the context of machine learning and generation tasks.
- Public- and private-sector alignment: The guidelines would require alignment across sectors, ensuring that public-sector data governance standards, privacy norms, and IP considerations inform enterprise AI adoption. This alignment would be essential for cross-border use cases and for industries with stringent compliance requirements, such as healthcare, finance, and critical infrastructure.
- Measurable trust indicators: Instead of relying on abstract assurances, responsible AI would be evaluated using concrete metrics related to transparency, consent rates, data-source disclosures, the rate and quality of user permissions, and the frequency of rights-related issues. Organizations could benchmark their performance against industry norms and regulatory expectations.
By reframing regulation as a set of actionable, auditable practices rather than a one-size-fits-all mandate, Calkins’ perspective invites regulators and industry players to converge toward a shared standard for responsible AI. It envisions a regulatory environment where compliance is directly tied to governance capabilities, enabling more predictable, scalable, and trustworthy AI deployments in enterprise contexts.
The Next Phase of AI: Trust as the New Currency
A central tenet of Calkins’ argument is the proposition that the next phase of AI will be defined not by who can amass the most data or build the largest models, but by who can cultivate the deepest trust with users. He characterizes the current era as the end of a phase focused on data dominance and says we are entering a second phase where trust becomes the primary differentiator and value driver. He emphasizes that trust unlocks access to more personal and context-rich information while simultaneously requiring stronger protections around privacy and consent. In his view, trust is not an abstract virtue; it is a measurable, enforceable standard that shapes user behavior, adoption rates, and the practical utility of AI systems in real-world settings.
This reframing carries significant implications for both providers and customers. For AI developers, trust implies designing products with built-in privacy protections, clearer disclosures, and offerings that provide meaningful control to users. It also means that performance alone is no longer sufficient to win in the market; resilience to data breaches, safeguards against biased or manipulative outputs, and transparent governance mechanisms become competitive differentiators. For customers—whether enterprises, developers, or end users—trust translates into a greater willingness to share data and participate in collaborative AI ecosystems when explicit consent, fair compensation, and robust privacy safeguards are in place. In practice, trust can expand the practical envelope of AI capabilities by enabling access to richer, more relevant data under controlled conditions, which could enhance accuracy, personalization, and decision support without compromising rights or safety.
Trust-building elements in practice
- Transparent model behavior: Users should be able to understand why AI is making certain recommendations, with explanations that are accurate, accessible, and useful. This includes providing interpretable outputs, rationale summaries, and the ability to audit model decisions where necessary to detect bias or misalignment with user expectations.
- Strong consent architectures: Trust requires that users have control over how their data is used and can easily modify or revoke consent. This implies dynamic consent mechanisms, clear purposes for data use, and readily visible records of permissions granted and withdrawn.
- User-centric data governance: Organizations should center governance around user rights, data minimization, and purpose limitation. This includes implementing principled data retention policies, facilitating user requests for data deletion or correction, and ensuring data flows respect specified uses.
- Rights-based compensation models: When private data is used or when copyrighted materials contribute to AI capabilities, compensation or value-sharing mechanisms should be integrated into product and service offerings. These mechanisms should be transparent, fair, and enforceable, strengthening the trust equation.
- Accountability and redress: Clear channels for accountability, complaint handling, and remediation are essential to trust. Users should have access to straightforward processes to report concerns and to receive timely responses and solutions when issues arise.
Calkins argues that a trust-centric model does not diminish AI capability; it reframes how capability translates into value. By prioritizing user consent, data provenance, and fair use protections, AI systems can become more context-aware and effective because they operate within a framework that respects boundaries and rights. He sees trust as a catalyst that makes AI more valuable to individuals and organizations alike by aligning incentives, enabling legitimate data sharing, and reducing the risk of regulatory or reputational damage. This perspective suggests that the winners in the AI race will be those who balance technical prowess with ethical governance, rather than those who pursue scale at any cost. In essence, trust becomes the new operating principle that shapes who can innovate, how they innovate, and to whom they are accountable.
Implications for enterprise AI strategy
- Strategic alignment with governance objectives: Enterprises will need to embed responsible AI principles into their strategic planning, ensuring that product roadmaps, risk management practices, and compliance programs reflect a trust-forward posture. This alignment will influence vendor selection, data management approaches, and the design of AI-powered applications.
- Data-sharing ecosystems built on trust: When trust is foundational, organizations may be more willing to participate in data-sharing arrangements with clear consent terms and compensation models. Such ecosystems can accelerate AI development while maintaining guardrails that protect privacy and rights.
- Customer-centric innovation: Customers who experience AI that respects their privacy, explains its decisions, and compensates data contributions are likely to value the partnership more highly. This can translate into deeper adoption, longer-term relationships, and more sustainable revenue streams for AI solutions.
- Risk management as a product feature: Trust-related governance becomes a product differentiator. Corporations may market AI products not just on accuracy or speed but on transparency, accountability, and ethical compliance, thereby appealing to risk-sensitive industries.
Appian’s Position: Trustworthy AI as a Competitive Advantage in Low-Code
Appian is positioned as a leading provider of low-code automation solutions, with a platform designed to help organizations rapidly build and deploy AI-powered applications while maintaining robust control over data privacy and security. Calkins argues that the company’s emphasis on responsible AI aligns naturally with market demand for trustworthy AI that can be deployed in enterprise environments with complex governance needs. This alignment could offer Appian a distinct competitive advantage as more enterprises seek AI solutions that not only deliver value but also reflect their values and regulatory obligations.
How Appian’s platform could accelerate trustworthy AI adoption
- Rapid development with governance by design: Low-code platforms streamline the creation of AI-enabled business applications, enabling rapid iteration and deployment. When governance, consent, and data provenance are integrated into the platform by design, development teams can produce compliant AI solutions without sacrificing speed.
- Fine-grained data privacy controls: A platform that provides built-in data privacy controls—such as configurable access levels, data minimization options, and robust encryption—supports the four guiding principles. Users can manage who sees what data and under which conditions, reducing privacy risk in production deployments.
- Clear data lineage and auditable workflows: By exposing data provenance information within the development environment and runtime, Appian could help customers demonstrate compliance to regulators and internal stakeholders. This transparency supports accountability and trust for AI-driven decisions.
- Rights-respecting data usage: Integrating licensing and compensation considerations into the platform could simplify the management of copyrighted materials used during model training or content generation. A platform that supports licensing workflows and attribution can help organizations meet fair-use expectations and creator rights obligations.
- Customer-centric trust features: Features that empower end users—such as explainability tools, consent dashboards, and data-use disclosures—would support the trust narrative. These capabilities could increase user adoption and satisfaction in enterprise contexts where stakeholders demand visibility into AI behavior.
Industry positioning and potential competitive dynamics
Appian’s emphasis on responsible AI dovetails with broader industry trends toward governance, risk management, and regulatory compliance as competitive differentiators. Firms that can demonstrate clear data provenance, consent-first data practices, and fair-use licensing in a scalable manner may attract enterprise buyers who require auditable and compliant AI deployments. In markets where regulatory scrutiny is intensifying, the ability to provide transparent data lineage, verifiable consent, and clear licensing terms could reduce procurement risk and speed up sales cycles. However, sustaining this advantage will require continuous investment in governance tooling, licensing infrastructures, and cross-functional collaboration to maintain alignment with evolving policies and standards.
Implications for Regulators, Enterprises, and the Public
The push for a trust-centered AI paradigm intersects with public policy, corporate governance, and consumer protection in meaningful ways. Regulators face the challenge of balancing the drive for innovation with the imperative to protect privacy, prevent abuse, and ensure fair competition. For enterprises, the shift toward trustworthy AI necessitates new capabilities, processes, and metrics to manage data provenance, consent, compensation, and licensing. For the public, the emphasis on transparency and user rights could improve confidence in AI systems and clarify expectations around data use and creator rights.
Regulatory scrutiny and public debate
- Privacy protections and data rights: As AI systems increasingly rely on large data ecosystems, robust privacy protections and clear rights for individuals become central to ongoing policy discussions. Regulators may seek to codify expectations around consent, data minimization, and the responsible use of private data.
- Intellectual property considerations: The use of copyrighted material in AI training remains a contentious area. Policymakers may push for licensing standards and compensation frameworks to ensure that content creators are fairly recognized and compensated for the use of their works in AI systems.
- Accountability and liability: Questions about who is responsible for AI decisions—developers, operators, or deployers—will continue to shape policy debates. Transparent data provenance and auditable decision paths can support clearer accountability and easier enforcement of standards.
Industry response and adoption dynamics
- Early adopters and launch partners: While the original proposal suggested outreach for collaboration, the broader industry may look for concrete demonstrations of value, risk reduction, and governance maturity before committing to formal partnerships. Organizations that implement responsible AI in a scalable manner could set benchmarks for others to emulate.
- Barriers to adoption: Key obstacles include the complexity of implementing comprehensive data provenance, consent, and licensing frameworks across heterogeneous data ecosystems; the potential cost of compliance; and the challenge of aligning disparate stakeholder expectations across business units, regulators, and customers.
- Opportunities for differentiation: Vendors that can offer integrated governance features, transparent data lineage, auditable outputs, and rights-based licensing mechanisms may differentiate themselves in markets demanding higher levels of trust and regulatory compliance.
Technical Foundations for Trustworthy AI: Data Sources, Consent, Anonymization, and IP Considerations
The Appian framework emphasizes concrete technical practices to translate the four guiding principles into actionable safeguards. This section unpacks the practical aspects behind each principle and how organizations can operationalize them in enterprise AI deployments.
Data-source disclosure and provenance
- Data lineage instrumentation: Implement end-to-end tracing of data from source to model input, including transformations, aggregations, and sampling. This requires instrumented pipelines, standardized metadata, and centralized catalogs that are accessible to governance teams (see the instrumentation sketch after this list).
- Dataset documentation: Maintain comprehensive documentation for datasets used in training and inference, including source information, licensing terms, data quality metrics, and known limitations. This documentation should accompany model cards and be easily interpretable by non-technical stakeholders.
- Impact assessment workflows: Integrate data provenance with risk assessments that evaluate the potential for bias, unfairness, and unintended consequences. Regular reviews should inform model updates and governance decisions.
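In its simplest form, the lineage-instrumentation item above could look like the sketch below: a decorator that appends a provenance event each time a pipeline step runs. This is an illustrative assumption rather than a production design; real pipelines would typically emit events to a metadata catalog or an open standard such as OpenLineage instead of an in-memory list, and every name here is hypothetical.

```python
from functools import wraps
from datetime import datetime, timezone

# Illustrative only: each wrapped pipeline step appends a provenance event.
# A production system would write to a metadata catalog (or follow a lineage
# standard such as OpenLineage) instead of an in-memory list.

LINEAGE_LOG = []

def traced_step(source: str):
    """Record which step ran, against which source, and when."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            LINEAGE_LOG.append({
                "step": fn.__name__,
                "source": source,
                "ran_at": datetime.now(timezone.utc).isoformat(),
            })
            return result
        return wrapper
    return decorator

@traced_step(source="crm_exports")
def deduplicate(rows):
    return list({r["id"]: r for r in rows}.values())

@traced_step(source="crm_exports")
def sample_first(rows, n=2):
    return rows[:n]

rows = [{"id": 1}, {"id": 1}, {"id": 2}, {"id": 3}]
prepared = sample_first(deduplicate(rows))
for event in LINEAGE_LOG:   # the resulting trail is the lineage report
    print(event)
```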
Consent and compensation for private data
- Clear consent interfaces: Build user-facing controls that explain how data will be used, the purposes of AI applications, and the potential consequences of data sharing. Consent should be granular, revocable, and time-bound where appropriate.
- Value-sharing mechanisms: Design compensation structures that reflect the contribution of private data to AI enhancement. Compensation could take the form of enhanced services, monetary payments, or alternative incentives agreed upon by data contributors.
- Auditable consent records: Maintain immutable or tamper-evident logs of consent events to support compliance audits and dispute resolution. Users should be able to access their consent histories and modify preferences easily (a tamper-evident logging sketch follows this list).
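As one way to realize the auditable-consent-records item above, the following sketch hash-chains consent events so that any retroactive edit breaks verification. It is a simplified illustration under assumed field names, not a reference implementation; production systems would also need durable storage, identity verification, and retention controls.

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of a tamper-evident consent log: each entry carries the hash of the
# previous one, so editing an earlier record breaks the chain on verification.

class ConsentLog:
    def __init__(self):
        self.entries = []

    def record(self, user_id: str, purpose: str, granted: bool) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "user_id": user_id,
            "purpose": purpose,          # e.g. "model training"
            "granted": granted,          # False represents a revocation
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Re-compute every hash to confirm the chain is intact."""
        prev = "0" * 64
        for entry in self.entries:
            payload = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev:
                return False
            expected = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if expected != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = ConsentLog()
log.record("user-42", "model training", granted=True)
log.record("user-42", "model training", granted=False)   # revocation event
print(log.verify())   # True; altering any earlier field would make this False
```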
Anonymization and protection of PII
- Privacy-preserving techniques: Apply robust anonymization, aggregation, and differential privacy where feasible to minimize re-identification risks. Ensure that even in combined data workflows, privacy protections remain strong (a minimal differential-privacy sketch follows this list).
- Access controls and encryption: Use strict access controls, role-based permissions, and encryption in transit and at rest. Regular security reviews and penetration testing should verify the resilience of data protections.
- Ongoing privacy risk assessments: Treat privacy as a continuous discipline, updating anonymization strategies as data landscapes evolve and as new re-identification threats emerge.
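The differential-privacy mention above can be illustrated with a minimal Laplace-mechanism example on a single count query. The epsilon value and sample data are arbitrary assumptions; real deployments would use a vetted library (for example, OpenDP) and account for the cumulative privacy budget across queries rather than hand-rolling the noise.

```python
import math
import random

# Minimal Laplace-mechanism illustration for one count query.
# Not production code: no budget accounting, composition, or clamping.

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Noisy count of matching records; a count query has sensitivity 1."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(scale=1.0 / epsilon)

# Hypothetical records: the true answer is 3, the released answer is noisy.
patients = [{"age": a} for a in (34, 51, 67, 72, 45)]
print(private_count(patients, lambda r: r["age"] > 50, epsilon=0.5))
```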
Copyright and licensing considerations
- Licensing workflows: Establish clear processes for obtaining licenses to use copyrighted materials in training or deployment contexts. Maintain records of licenses, terms, and permissible uses.
- Attribution and rights management: Provide attribution where license terms require it, and implement mechanisms to track and honor those requirements across training data and generated outputs.
- Compliance checks in deployment: Integrate right-to-use checks into model deployment pipelines to prevent outputs that would infringe rights or violate licensing terms (a simple deployment-gate sketch follows this list).
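A deployment-time rights check, as described in the last item above, might reduce to a gate as simple as the following: block promotion of any model whose recorded training datasets lack cleared license terms. The registry, allowed terms, and dataset names are hypothetical placeholders, not an existing tool or standard.

```python
# Illustrative pre-deployment gate; all names and terms are hypothetical.

ALLOWED_TERMS = {"licensed-commercial", "public-domain", "internal-use-only"}

LICENSE_REGISTRY = {
    "claims_2021_2023": "internal-use-only",
    "press_archive_2020": "licensed-commercial",
    "scraped_forum_dump": None,          # provenance unknown -> blocks deployment
}

def check_rights(dataset_names):
    """Return (ok, issues); issues lists datasets with missing or disallowed terms."""
    issues = []
    for name in dataset_names:
        terms = LICENSE_REGISTRY.get(name)
        if terms not in ALLOWED_TERMS:
            issues.append(f"{name}: license terms {terms!r} not cleared for use")
    return (not issues, issues)

ok, issues = check_rights(["claims_2021_2023", "scraped_forum_dump"])
if not ok:
    print("Deployment blocked:")
    for issue in issues:
        print(" -", issue)
```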
Together, these technical foundations provide a concrete toolkit for companies pursuing responsible AI. They offer paths to measurable accountability, auditable governance, and enforceable rights management—core ingredients in a trust-based AI ecosystem.
Roadmap to a Trust-Centric AI Era: Building Trust in Practice
Turning the four guiding principles into day-to-day operations requires a deliberate, phased approach. A practical roadmap would include the following components:
- Phase 1: Establish governance and baseline capabilities. Build the core data provenance, consent, anonymization, and licensing infrastructure. Create initial governance policies, model cards, and transparency dashboards. Ensure executive sponsorship and cross-functional alignment across product, privacy, legal, security, and engineering.
- Phase 2: Integrate into product development and deployment. Embed trust-focused checks into the product lifecycle, including data source disclosures, consent capture, and licensing verification. Develop user-facing interfaces that explain AI behavior and provide control over data usage.
- Phase 3: Scale and optimize. Expand provenance coverage to more data sources, broaden consent models to cover additional use cases, and refine compensation frameworks. Implement continuous monitoring for bias, privacy risks, and rights-related compliance.
- Phase 4: Measure, audit, and iterate. Establish quantitative metrics for trust, such as consent rates, data-source transparency indices, licensing compliance, and incident response times. Conduct independent audits and publish insights to build external credibility while maintaining appropriate confidentiality.
- Phase 5: Ecosystem collaboration. Seek partnerships with content creators, regulators, and industry groups to harmonize standards, share best practices, and align incentives for responsible AI across sectors.
Metrics and success indicators
- Transparency scores: Composite metrics that measure how well data sources and model decisions are disclosed to stakeholders (a sketch of computing a few of these indicators follows this list).
- Consent engagement: Rates of user consent, granularity of preferences, and ability to revoke consent without friction.
- Data rights compliance: The proportion of data usage that adheres to licensing, compensation agreements, and rights management policies.
- Privacy risk indicators: Frequency and severity of privacy incidents, residual re-identification risk, and effectiveness of anonymization techniques.
- Trust-driven adoption: Uptake of AI features by customers who prioritize governance, with measurable improvements in user satisfaction and risk posture.
- Business impact: Correlations between trust practices and ROI, customer retention, and regulatory alignment.
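To suggest how a few of these indicators might be computed from raw governance events, the short sketch below derives consent engagement, transparency, and rights compliance from example records. The event shapes, field names, and equal-weight percentages are illustrative assumptions; real benchmarks would be defined against industry norms and regulatory expectations.

```python
# Hypothetical governance events; structure and fields are illustrative only.
consent_events = [
    {"user": "u1", "granted": True},
    {"user": "u2", "granted": True},
    {"user": "u3", "granted": False},
]
data_sources = [
    {"name": "claims_2021_2023", "disclosed": True, "license_cleared": True},
    {"name": "scraped_forum_dump", "disclosed": False, "license_cleared": False},
]

# Simple ratios; a real program would weight, trend, and benchmark these.
consent_rate = sum(e["granted"] for e in consent_events) / len(consent_events)
transparency_score = sum(s["disclosed"] for s in data_sources) / len(data_sources)
rights_compliance = sum(s["license_cleared"] for s in data_sources) / len(data_sources)

print(f"Consent engagement:     {consent_rate:.0%}")
print(f"Transparency score:     {transparency_score:.0%}")
print(f"Data rights compliance: {rights_compliance:.0%}")
```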
Conclusion
In a landscape characterized by rapid AI advancement and heightened concerns about data privacy, IP rights, and societal impact, Appian’s four guiding principles offer a structured path toward responsible AI that centers trust. By foregrounding disclosure of data sources, consent and compensation for private data, anonymization of PII, and licensed use of copyrighted materials, these guidelines aim to transform how AI is developed, deployed, and governed in enterprise settings. Calkins’ broader vision reframes the AI race as a pursuit of trust rather than merely data accumulation or speed, arguing that trust will become the critical differentiator and value driver in the next phase of AI. Appian’s standing as a leader in low-code automation positions the company to demonstrate how trustworthy AI can be integrated into practical enterprise solutions without sacrificing performance or agility.
The implications extend beyond one company or one framework. Regulators, enterprises, content creators, and the public all stand to gain from a governance approach that makes AI data provenance, consent, compensation, and licensing explicit and auditable. If the industry embraces these principles, the winners will be those who can deliver not only the most powerful algorithms but also the most trustworthy ones. As the AI landscape evolves, trust—woven into governance, policy, and product design—will shape the trajectory of AI adoption, unlock broader, more responsible uses of data, and define the ecosystem in which innovation can thrive with accountability. The path ahead invites collaboration, careful experimentation, and ongoing dialogue among all stakeholders as the industry moves toward a future where AI serves the common good while respecting individual rights and creators’ contributions.

