Appian CEO Matt Calkins Urges AI Industry to Put Trust First and Pave a New Era of Responsible Development

Matt Calkins, the cofounder and CEO of Appian, has put forward a bold, comprehensive set of guidelines aimed at steering AI development toward greater responsibility and stronger trust between providers and users. His proposals arrive at a moment when concerns about data privacy, intellectual property rights, and the rapid pace of AI advancement are fueling intense public and regulatory scrutiny. He frames his stance not as opposition to AI progress, but as a proactive blueprint to ensure AI flourishes in a way that respects individuals, protects assets, and maintains social license to operate. This article explores his four guiding principles in depth, contextualizing them within the broader regulatory discourse, the evolving business landscape, and the practical implications for organizations striving to deploy AI responsibly at scale.

Four guiding principles for responsible AI: a practical, enforceable framework

Appian’s proposal centers on four core principles designed to create a transparent, consent-driven, and fair foundation for AI systems. Each principle is presented as a concrete obligation, accompanied by practical considerations for implementation, governance, and accountability.

Principle 1: Disclosure of data sources

The first pillar mandates clear disclosure of the data sets and sources that underpin AI models and outputs. In practice, this means organizations should provide traceable documentation detailing where training data originates, how it was collected, and the specific pipelines used to curate the material that informs model behavior and decision-making. The rationale is to enable users, auditors, and regulators to assess potential biases, provenance, and rights ownership associated with AI outputs. This transparency is not merely informational; it becomes a governance mechanism that allows affected parties to understand the lifecycle of data as it flows through AI systems. For enterprises, implementing this principle requires robust data lineage instrumentation, metadata management, and auditable records that can be reviewed in real time or through periodic assessments. It also implies a commitment to explainable AI practices, where decisions are not opaque but can be traced back to the data sources and modeling choices that produced them. In practical terms, disclosure of data sources becomes a trust signal, showing customers that the organization is attentive to where information comes from, how it is used, and what constraints apply to its use in AI applications.
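
As a concrete illustration, the sketch below shows one way such a disclosure might be captured as a machine-readable record. It is a minimal example under assumed requirements, not part of Calkins' proposal or the Appian platform; the DataSourceRecord structure and its field names are hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class DataSourceRecord:
    """One auditable entry describing a data source used to train or prompt a model (illustrative)."""
    source_id: str           # stable identifier for the dataset
    origin: str              # where the data came from (URL, vendor, internal system)
    collection_method: str   # how it was gathered (export, licensed scrape, user upload)
    license_terms: str       # rights or license governing use
    curation_pipeline: str   # pipeline or job that filtered and transformed the data
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize the record for an audit log or disclosure report."""
        return json.dumps(asdict(self), indent=2)


# Example: documenting one source feeding a customer-support model.
record = DataSourceRecord(
    source_id="support-tickets-2023",
    origin="internal CRM export",
    collection_method="consented export of closed tickets",
    license_terms="internal use only; PII removed before training",
    curation_pipeline="ticket_dedup_and_redaction_v2",
)
print(record.to_json())
```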

Principle 2: Use of private data only with consent and compensation

The second principle focuses on the ethical and lawful use of private data. It requires that any private data employed by AI systems must be accessed with the explicit consent of the data subject and, where appropriate, with compensation for use. This principle acknowledges the privacy and property interests at stake when personal information is incorporated into AI training or inference processes. From a governance perspective, it demands consent management capabilities, clear articulation of data-use purposes, and transparent mechanisms for individuals to opt in or opt out. It also invites consideration of compensation models for data contributors, which could range from direct payments to value-based arrangements that recognize the contributions of individuals or organizations whose data informs AI outcomes. For enterprises, operationalizing this principle entails designing consent workflows, ensuring data-sharing agreements reflect fair value, and incorporating privacy-by-design practices into product and platform development. It also requires ongoing monitoring to ensure that consent remains valid as contexts change and as AI applications evolve over time.
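
A minimal sketch of what a consent record with purpose, expiry, and compensation flags might look like follows; the ConsentRecord structure and its is_valid check are illustrative assumptions, not a prescribed design from the proposal.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ConsentRecord:
    """Records whether a data subject has agreed to a specific use of their data (illustrative)."""
    subject_id: str
    purpose: str              # the specific AI use the subject consented to
    granted_on: date
    expires_on: date | None   # None means consent has no fixed expiry
    compensated: bool         # whether compensation terms were agreed
    revoked: bool = False

    def is_valid(self, purpose: str, on: date) -> bool:
        """Consent is valid only for the stated purpose, within its window, and not revoked."""
        return (
            self.purpose == purpose
            and not self.revoked
            and self.granted_on <= on
            and (self.expires_on is None or on <= self.expires_on)
        )


consent = ConsentRecord(
    subject_id="user-123",
    purpose="personalize support recommendations",
    granted_on=date(2024, 1, 15),
    expires_on=date(2025, 1, 15),
    compensated=True,
)
print(consent.is_valid("personalize support recommendations", date(2024, 6, 1)))  # True
print(consent.is_valid("train marketing model", date(2024, 6, 1)))                # False: different purpose
```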

Principle 3: Anonymization and permission for personally identifiable data

A third principle centers on the responsible handling of personally identifiable information (PII). It prescribes robust anonymization techniques and requires explicit permission for the processing or reuse of any PII, even in aggregated or anonymized forms where re-identification risks may exist. The emphasis is on balancing the utility of data for AI development with the imperative to protect individual privacy. Implementing this principle involves evaluating the effectiveness of anonymization methods, maintaining controls to prevent re-identification, and ensuring compliance with evolving privacy standards and regulations. It also implies governance around data minimization and selective data retention, so that only what is necessary for the intended AI use case is processed and stored. For organizations, this principle drives architectural choices—such as data partitioning, access controls, and privacy-preserving computation techniques—that minimize privacy risk while preserving the ability to derive meaningful AI insights.
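
To make the idea tangible, here is a small sketch of pseudonymization combined with data minimization. The salted-hash approach and field names are assumptions for illustration; real deployments would pair such controls with formal re-identification risk assessments and key management.

```python
import hashlib
import hmac

# Secret salt kept outside the dataset; in practice this would come from a key-management system.
PSEUDONYM_SALT = b"replace-with-a-secret-from-a-key-vault"

# Only the fields the AI use case actually needs are retained (data minimization).
FIELDS_TO_KEEP = {"region", "product", "issue_category"}


def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash that cannot be reversed without the salt."""
    return hmac.new(PSEUDONYM_SALT, user_id.encode(), hashlib.sha256).hexdigest()[:16]


def minimize(record: dict) -> dict:
    """Drop everything except approved fields and swap the identifier for a pseudonym."""
    slim = {k: v for k, v in record.items() if k in FIELDS_TO_KEEP}
    slim["pseudonym"] = pseudonymize(record["user_id"])
    return slim


raw = {
    "user_id": "alice@example.com",
    "region": "EMEA",
    "product": "billing",
    "issue_category": "refund",
    "phone": "+1-555-0100",   # dropped: not needed for the model
}
print(minimize(raw))
```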

Principle 4: Consent and compensation for copyrighted information

The fourth principle addresses the use of copyrighted material in AI training and outputs. It asserts that copyrighted content should be used only with appropriate consent and fair compensation to rights holders. This is a critical issue in AI ecosystems where vast corpora of text, images, code, and other media may be used to train language models, image generators, and related systems. Implementation requires transparent licensing approaches, clear attribution when feasible, and mechanisms to secure licensing or compensation agreements from the outset. It also calls for collaborative engagement with rights holders to align incentives and reduce friction in deployment. For AI providers and enterprises, this principle means building licensing frameworks into product roadmaps, negotiating terms with publishers and authors, and maintaining auditable records of permissions and payments. It reinforces the broader objective of aligning AI development with legitimate intellectual property rights while preserving the innovative potential of AI technologies.
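
One way an organization might operationalize this is with a registry of licensing grants that is consulted before any work enters a training corpus. The LicenseGrant structure and can_ingest check below are hypothetical, shown only to illustrate the kind of auditable gate the principle implies.

```python
from dataclasses import dataclass


@dataclass
class LicenseGrant:
    """A recorded permission to use a copyrighted work for AI training (illustrative)."""
    work_id: str
    rights_holder: str
    scope: str                    # e.g. "training" or "training+generation"
    compensated: bool
    attribution: str | None = None


# Hypothetical registry of negotiated grants, keyed by work identifier.
LICENSE_REGISTRY: dict[str, LicenseGrant] = {
    "textbook-001": LicenseGrant(
        work_id="textbook-001",
        rights_holder="Example Press",
        scope="training",
        compensated=True,
        attribution="Example Press, 2023",
    ),
}


def can_ingest(work_id: str, intended_scope: str) -> bool:
    """Only ingest a work if a grant exists, covers the intended scope, and was compensated."""
    grant = LICENSE_REGISTRY.get(work_id)
    return grant is not None and intended_scope in grant.scope and grant.compensated


print(can_ingest("textbook-001", "training"))   # True: licensed and compensated
print(can_ingest("novel-042", "training"))      # False: no grant on record
```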

Together, these four principles form an integrated governance framework designed to promote trust, accountability, and fair use in AI systems. Calkins argues that broad adoption of these rules would create a more transparent, user-centric AI ecosystem, enabling individuals and organizations to engage with AI more confidently and to recognize the value created by responsible data practices. The core claim is that when data provenance is clear, consent is respected and compensated where appropriate, and privacy protections are robust, AI becomes more relevant and beneficial to users and enterprises alike, rather than a source of anxiety and suspicion. The practical challenge, of course, lies in translating these principles into scalable, enforceable policies across diverse sectors and geographies, while maintaining the pace of AI innovation. The proposed rules are meant to be a starting point for constructive dialogue and practical implementation that can evolve through collaboration among industry players, regulators, and civil society.

A critique of the current regulatory approach and the data governance landscape

Calkins does not sugarcoat the tensions between rapid AI advancement and the existing regulatory framework. He argues that measures currently proposed or enacted by leading policymakers often overlook essential issues—most notably data provenance and fair use—which he believes are central to building a trustworthy AI ecosystem. In his view, a regulatory approach that fails to address these core elements risks leaving critical gaps that could erode public trust and hinder meaningful, broad-based adoption of AI technologies. He points to recent public statements from the White House and influential lawmakers as examples of this perceived shortcoming: a tendency to address high-level concerns about AI deployment without adequately grappling with the technical and ethical specifics that govern how data is sourced, transformed, and utilized in AI systems. According to him, this omission creates a “gray zone” in which large tech firms may continue operating with limited constraints on data provenance and fair use, while smaller players and potential partners find themselves cautious or excluded from collaborations that could otherwise accelerate responsible innovation. This situation, he contends, could impede the whole industry by delaying the development of robust practices that protect rights holders, users, and society at large.

The critique reflects a broader concern within enterprise technology about governance that is both rigorous and adaptable. The regulatory landscape is evolving under pressure to balance innovation with accountability. Policymakers and industry leaders are increasingly debating how to set standards for data rights, consent, licensing, and accountability mechanisms for AI systems. The tension centers on achieving a policy environment that encourages experimentation and deployment at scale while ensuring that data rights are respected, bias is mitigated, and transparency is maintained. Calkins’ stance contributes to this debate by reframing the dialogue around data provenance and fair use as foundational elements of responsible AI governance. He argues that if these core issues are resolved through clear rules and practical governance mechanisms, the path to trustworthy AI becomes more navigable for organizations of all sizes. The aim is to move beyond a binary choice between laissez-faire development and heavy-handed regulation toward a nuanced approach that fosters trust, aligns incentives, and provides measurable accountability.

In his analysis, the current regulatory posture tends to emphasize high-level risk assessments and broad compliance requirements without giving sufficient attention to the specifics of data lineage and usage rights. This can create gaps where models are technically compliant on the surface yet remain ethically problematic because the provenance of training data is opaque, or because user privacy protections are inadequately safeguarded in practice. The four guiding principles proposed by Appian are intended to complement existing regulatory efforts by providing a concrete, operational framework that organizations can implement in parallel with compliance activities. The goal is to establish a shared baseline for responsible AI that reduces ambiguity and fosters trust among users, customers, and partners. In short, the critique emphasizes the need for governance mechanisms that can be audited, demonstrated to stakeholders, and continuously improved as AI technologies evolve. Those mechanisms would ideally bridge the gap between rapidly advancing AI capabilities and the evolving expectations and requirements of privacy, copyright, and data rights regimes.

The next phase of AI: shifting from a data race to a trust race

Calkins paints a clear and provocative picture of where AI development stands today and where it is headed. He argues that the industry has effectively completed “phase one”—a period characterized by a race to accumulate and ingest as much data as possible, with the belief that bigger datasets would automatically yield superior AI performance. He contends that this initial phase has produced impressive milestones but has now reached a limit in terms of sustainable value, efficiency, and societal acceptance. The next phase, in his view, is not about amassing more data but about cultivating trust. He describes this new phase as a race to trust—an intentional shift toward building AI systems that users feel comfortable engaging with, that respect their privacy, and that operate under transparent, accountable governance. The transition to trust is framed as an inevitable evolution: once the data-exhausting phase peaks, the industry must pivot to a model where ethical considerations, user consent, data rights, and responsible use become the primary engines of progress.

This reframing has several implications for both practice and policy. For practitioners, the focus shifts from simply improving model accuracy or leveraging larger datasets to designing systems that people can rely on in meaningful ways. This includes implementing robust data provenance practices, ensuring consent mechanisms are clear and enforceable, and establishing fair use policies that respect intellectual property. It also entails rethinking incentive structures within organizations so that teams are rewarded not only for performance metrics but also for governance quality, user trust, and risk mitigation. In a broader sense, the shift to trust is tied to the social license to operate: it recognizes that long-term AI viability depends on public confidence, regulatory legitimacy, and the alignment of technology with human values.

From a market perspective, the transition to a trust-based paradigm could reshape competitive dynamics. Firms that can convincingly demonstrate responsible data practices, transparent disclosures, and strong privacy protections may gain competitive advantage, particularly in industries with sensitive data or heavy regulatory oversight. Conversely, organizations that cling to a data-only approach without integrating governance and consent mechanisms risk eroding trust and facing heightened regulatory scrutiny or consumer backlash. Calkins’ argument emphasizes that the coming years will reward those who can operationalize trust at scale—through governance frameworks, clear data-lineage disclosures, consent-enabled data use, and fair compensation models for data contributions. In this context, trust becomes a strategic asset: it enables closer and more sustained relationships with users, customers, and partners, unlocking deeper engagement and more personalized value without sacrificing privacy or rights.

The “phase two” narrative also invites a reexamination of how success is measured in AI. Traditional metrics such as accuracy, throughput, or model loss functions, while still important, may need to be complemented by governance-centered indicators: rates of consent uptake, transparency scores for data sources, degrees of anonymization effectiveness, and the completeness of licensing arrangements for copyrighted material. A trust-oriented framework would encourage continuous auditing, routine third-party assessments, and clear remediation pathways when issues arise. It would also encourage collaboration across stakeholders—regulators, industry groups, rights holders, and consumer advocates—to establish shared expectations and workable standards that can adapt to evolving technologies. In short, the next phase of AI is envisioned as a collective project in which safety, privacy, fairness, and user empowerment become the primary currencies of innovation, guiding both product design and strategic decision-making.

Calkins emphasizes that this shift toward trust is not a theoretical ideal but a practical imperative. He argues that trust is the true enabler of deeper data collaboration—where individuals and organizations willingly participate in AI workflows because they understand, control, and benefit from the use of their data. When users feel their personal information is handled with respect, when they see clear data provenance, and when they receive fair compensation for the use of their data or the data of those in their care, they are more likely to engage with AI-enabled services, provide more meaningful data inputs, and accept more personalized experiences. This cycle creates a virtuous loop: trusted AI becomes more useful because it has access to higher-quality inputs, while stronger governance and consent protections reinforce trust, reducing risk and opening up new business models. The vision is ambitious, but the logic is consistent with broader societal trends toward empowerment, accountability, and privacy protection. If realized, it could transform the boundaries of what is possible with AI and redefine the relationship between technology providers and the people who use their products.

Appian’s position: leveraging trustworthy AI to gain a competitive edge

As a leading provider of low-code automation solutions, Appian sits at a strategic intersection of automation, AI, and enterprise software. The company’s platform is designed to enable organizations to rapidly build and deploy AI-powered applications while maintaining rigorous control over data privacy and security. This combination—speed and governance—positions Appian to capitalize on the broader industry shift toward trustworthy AI. The four guiding principles provide a natural blueprint for how Appian intends to differentiate itself in a crowded market where customers are increasingly seeking assurances about data handling, licensing, and user rights.

Appian’s approach to responsible AI development emphasizes practical governance that can be embedded directly into platform capabilities. For enterprise customers, this translates into a reliability advantage: the ability to implement data-source disclosures, consent management, privacy-preserving techniques, and licensing compliance within the same tooling used to design, test, and deploy AI-enabled workflows. The result is not only better compliance but also more transparent and auditable AI systems that stakeholders can trust. This trust translates into business value: organizations can unlock more effective AI-driven processes, improve collaboration with data owners and rights holders, and reduce the risk of costly regulatory or reputational setbacks. In short, the proposed guidelines play to Appian’s core strengths—rapid deployment, strong data governance, and a commitment to responsible AI—strengths the company believes will become increasingly important as enterprises seek solutions that meet their ethical and governance standards.

The broader regulatory environment intensifies the relevance of Appian’s strategy. Regulators and lawmakers are scrutinizing AI more closely, with particular attention to issues such as job displacement, algorithmic bias, and the potential for misuse by bad actors. By foregrounding data provenance, consent, anonymization, and intellectual property considerations, Appian offers a concrete framework that can help its customers navigate compliance and risk management more effectively. This positions Appian not only as a technology provider but also as a governance partner—a role that can help customers implement responsible AI in a scalable and sustainable way. For stakeholders, this means that adopting Appian’s approach could mitigate regulatory exposure, enhance stakeholder trust, and support long-term adoption of AI within complex enterprise environments.

Calkins, in presenting these guidelines, has underscored a pragmatic, collaborative path forward. Although he has not publicly announced formal launch partners at this stage, he remains optimistic about the potential for broad-based participation. He described the moment as a launching point and expressed a willingness to articulate terms and invite others to join the effort. This approach reflects a strategic emphasis on coalition-building and shared standards, which could help accelerate the adoption of responsible AI practices across industries. The emphasis on collaboration also aligns with the complex, multi-stakeholder nature of data rights, licensing, and governance in the AI ecosystem, where alignment among developers, users, rights holders, and policymakers is essential for sustainable progress.

The stakes in this leadership moment are high. Calkins argues that the industry has already exhausted phase one—an era defined by data accumulation and rapid scaling without sufficient attention to governance. The next phase, he asserts, will be defined by trust: companies that can demonstrate a credible commitment to responsible AI development and robust data governance will be best positioned to thrive. This is not merely about compliance; it is about rethinking value creation in AI—from raw capability to trustworthy capability that yields better outcomes for users and organizations alike. Appian’s emphasis on privacy controls, consent management, and fair-use licensing reflects a broader industry trend toward integrating ethics and governance into the core of AI product development rather than treating them as add-ons or afterthoughts.

The road ahead: adoption, governance, and industry transformation

The path from concept to widespread adoption of trustworthy AI guidelines involves a series of practical steps, coordinated across technology teams, legal and compliance functions, and external partners. Appian’s framework implies several critical actions that organizations will need to consider as they pursue responsible AI initiatives.

First, there is a clear operational need to implement end-to-end data provenance capabilities. Corporations must invest in systems that can track and document the lineage of data used in AI models, including data sources, transformations, and the exact contexts in which data was employed. This transparency supports accountability, helps identify potential biases, and facilitates auditability by internal governance bodies and external regulators. It also enables more accurate impact analysis, allowing teams to understand how different data sources influence outcomes and to adjust inputs accordingly to improve fairness and reliability. The practical challenges include integrating disparate data systems, establishing standardized metadata schemas, and ensuring that provenance records remain up-to-date as data ecosystems evolve. Organizations that succeed will likely require cross-functional collaboration among data engineers, data stewards, and AI developers to design, implement, and continuously refine data lineage processes.
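
A simple way to picture such lineage tracking is as an append-only graph of transformation steps that can be traced back to original sources. The sketch below is illustrative only; the LineageStep and LineageGraph names are assumptions, not references to any specific lineage tool.

```python
from dataclasses import dataclass, field


@dataclass
class LineageStep:
    """One transformation applied to a dataset on its way into a model (illustrative)."""
    step_name: str
    description: str
    inputs: list[str]    # upstream dataset or step identifiers
    output: str          # identifier of the artifact this step produced


@dataclass
class LineageGraph:
    """Append-only record of how a training dataset was assembled."""
    steps: list[LineageStep] = field(default_factory=list)

    def record(self, step: LineageStep) -> None:
        self.steps.append(step)

    def trace(self, artifact: str) -> list[str]:
        """Walk backwards from an artifact to every upstream artifact and source that fed it."""
        sources: list[str] = []
        frontier = [artifact]
        while frontier:
            current = frontier.pop()
            for step in self.steps:
                if step.output == current:
                    sources.extend(step.inputs)
                    frontier.extend(step.inputs)
        return sources


graph = LineageGraph()
graph.record(LineageStep("ingest", "export CRM tickets", ["crm_export_2023"], "raw_tickets"))
graph.record(LineageStep("redact", "strip PII fields", ["raw_tickets"], "clean_tickets"))
graph.record(LineageStep("train_set", "sample and tokenize", ["clean_tickets"], "train_v1"))
print(graph.trace("train_v1"))  # ['clean_tickets', 'raw_tickets', 'crm_export_2023']
```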

Second, consent and compensation mechanisms must be embedded into both data collection and AI deployment practices. This requires clear, user-friendly interfaces for obtaining consent, along with transparent explanations of how data will be used in AI processes. In parallel, compensation models for data contributions may need to be developed, especially for contexts in which individuals or organizations provide data that materially shapes AI capabilities. From a governance standpoint, this involves contractual design, privacy impact assessments, and ongoing monitoring to ensure that consent remains aligned with evolving use cases and regulatory expectations. It also calls for governance structures that can adapt to changes in data rights regimes and licensing requirements. In enterprise settings, consent and compensation considerations must be integrated into product roadmaps, data-sharing agreements, and supplier relationships, ensuring that all partners within the data value chain operate under consistent, enforceable terms.
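
Building on the consent-record sketch shown earlier, one hedged example of embedding consent into deployment is a gate that filters out training rows lacking valid consent for the stated purpose. The CONSENT_LEDGER dictionary here is a stand-in for whatever consent-management service an organization actually uses.

```python
from datetime import date

# Illustrative consent ledger keyed by subject; in practice this data would come
# from a consent-management service rather than a hard-coded dictionary.
CONSENT_LEDGER = {
    "user-1": {"purpose": "model_training", "revoked": False, "expires": date(2026, 1, 1)},
    "user-2": {"purpose": "model_training", "revoked": True,  "expires": date(2026, 1, 1)},
    "user-3": {"purpose": "support_analytics", "revoked": False, "expires": date(2026, 1, 1)},
}


def has_valid_consent(subject_id: str, purpose: str, on: date) -> bool:
    """Consent counts only if it covers this purpose, is not revoked, and has not expired."""
    entry = CONSENT_LEDGER.get(subject_id)
    return (
        entry is not None
        and entry["purpose"] == purpose
        and not entry["revoked"]
        and on <= entry["expires"]
    )


def filter_training_rows(rows: list[dict], purpose: str, on: date) -> list[dict]:
    """Keep only rows whose subjects currently consent to this purpose."""
    return [r for r in rows if has_valid_consent(r["subject_id"], purpose, on)]


rows = [
    {"subject_id": "user-1", "text": "..."},
    {"subject_id": "user-2", "text": "..."},
    {"subject_id": "user-3", "text": "..."},
]
print(filter_training_rows(rows, "model_training", date(2025, 6, 1)))
# Only user-1's row survives: user-2 revoked consent, user-3 consented to a different purpose.
```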

Third, robust privacy protections—particularly regarding anonymization and the control of personally identifiable information—must be implemented as a core feature of AI workflows. This means evaluating anonymization techniques for effectiveness, monitoring re-identification risks, and maintaining strict access controls to protect sensitive data. It also entails adopting privacy-preserving technologies, like differential privacy or secure multiparty computation, where appropriate, to minimize risk while preserving analytical utility. Governance must enforce data minimization principles, define retention schedules, and ensure that privacy safeguards are maintained across the entire AI lifecycle—from development to deployment and ongoing operation. For organizations, this principle is a reminder that privacy is not a mere compliance box to check but a design criterion that should influence architecture decisions, data handling practices, and user experience design in AI-enabled products and services.
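
As a small worked example of one such technique, the sketch below adds Laplace noise to a count, the basic mechanism behind differential privacy for a sensitivity-one query. The epsilon value and the query are illustrative, and a production system would use a vetted differential-privacy library rather than hand-rolled noise.

```python
import random


def dp_count(true_count: int, epsilon: float) -> float:
    """Return a count with Laplace noise calibrated to sensitivity 1
    (one person's presence changes the count by at most 1)."""
    scale = 1.0 / epsilon
    # The difference of two independent exponential samples follows a Laplace(0, scale) distribution.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise


# Example: report how many users raised a billing issue, with a privacy budget of epsilon = 0.5.
print(dp_count(true_count=1284, epsilon=0.5))
```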

Fourth, fair use and licensing for copyrighted materials used in AI training and outputs must be explicitly recognized and managed. This requires proactive licensing discussions, transparent attribution where possible, and, when warranted, compensation arrangements for rights holders. The practical implications include building licensing considerations into product development cycles, maintaining auditable licensing records, and designing processes to respond quickly when licensing issues arise. It also involves engaging with rights holders and industry groups to establish reasonable, scalable models for licensing AI training data and content. For organizations deploying AI in production, the implication is that responsible AI involves not only technical safeguards but also sound business and legal practices that respect intellectual property rights. This alignment between technology and rights management is essential for long-term sustainability and for reducing the risk of IP disputes and related reputational harm.

Taken together, these adoption pathways suggest a future in which responsible AI governance is deeply integrated into product design, platform capabilities, and organizational culture. The practical reality for many enterprises will involve reformulating data strategies, governance structures, and vendor relationships to support a trust-centric AI operating model. For Appian and similar platform providers, the opportunity is to embed these principles into the fabric of the software development lifecycle, delivering out-of-the-box capabilities that simplify compliance and governance for customers while enabling faster, safer AI deployment. The resulting competitive differentiation stems not only from technical prowess but from the reputation and reliability associated with trustworthy AI practices.

Industry-wide implications: regulation, jobs, and the ethics of AI deployment

The proposals and the broader discourse around responsible AI governance have broad implications for regulators, enterprises, workers, and society at large. As AI technologies become more capable and more deeply embedded in business processes, the demand for clear standards, enforceable rules, and measurable governance grows. The questions regulators are grappling with include how to define and enforce data provenance, what constitutes fair use in AI systems, and how to balance innovation with rights protection and privacy. Calkins’ framework adds a practical, industry-tested set of guardrails that can help shape policy discussions by demonstrating how guidelines can be implemented in real-world systems, not just described in abstract terms. If policymakers adopt or adapt these principles into regulatory requirements, enterprises would benefit from a clearer, more predictable environment in which to innovate and invest.

From a societal perspective, the push toward trustworthy AI resonates with concerns about algorithmic bias, automation’s impact on employment, and the potential misuse of AI for harmful purposes. A governance framework that emphasizes transparency, consent, and rights protection can mitigate some of these risks by ensuring that AI systems are designed and deployed with attention to fairness and accountability. Yet these benefits depend on diligent implementation and ongoing oversight. The collaboration between industry leaders, regulators, and civil society becomes essential to maintaining public trust, updating governance mechanisms as technology evolves, and balancing the competing interests at stake. In this context, Appian’s approach—centered on trust, data stewardship, and rights-respecting practices—offers a constructive blueprint that can inform both corporate strategy and policy development.

Appian’s leadership in this space also has implications for customers and partners. Enterprises seeking to deploy AI at scale will increasingly demand platforms and ecosystems that demonstrate robust governance capabilities. The four guiding principles provide a concrete framework that can help organizations assess potential technology providers based on data provenance, consent mechanisms, privacy protections, and licensing practices. For partners and developers working within the Appian ecosystem, this shift creates new opportunities to differentiate through governance features, compliance support, and transparent data practices. It also raises the bar for collaboration, as organizations align with shared standards that support responsible AI across industries. The outcome is a more mature, resilient AI market in which trustworthy practices become a baseline expectation rather than an aspirational goal.

The practical path to trust: measurement, accountability, and continuous improvement

Trust in AI is not a one-time achievement but an ongoing process of measurement, accountability, and improvement. Implementing the four guiding principles requires robust governance structures, clear ownership, and ongoing performance assessments. Organizations will need to establish governance councils or data stewardship bodies responsible for monitoring data provenance, consent, privacy protections, and licensing compliance. Regular internal and external audits can help verify that disclosures are accurate, consent is properly managed, and data handling meets agreed-upon standards. Moreover, organizations should establish feedback loops with users and rights holders to gather input on governance practices and to identify opportunities for improvement. Accountability mechanisms—such as documented decision logs, incident response protocols for data misuse, and public reporting of governance metrics—are essential components of a credible trust-building program.
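
To illustrate what public reporting of governance metrics might reduce to in practice, the sketch below computes a few simple indicators, such as consent uptake and licensing completeness, from governance records. The record shapes and metric names are assumptions made for illustration, not metrics defined by the proposal.

```python
def governance_metrics(sources: list[dict], subjects: list[dict], works: list[dict]) -> dict:
    """Compute simple trust indicators from governance records (all inputs are illustrative)."""
    disclosed = sum(1 for s in sources if s.get("disclosure_record"))
    consented = sum(1 for s in subjects if s.get("valid_consent"))
    licensed = sum(1 for w in works if w.get("license_on_file"))
    return {
        "source_disclosure_rate": disclosed / len(sources) if sources else 1.0,
        "consent_uptake_rate": consented / len(subjects) if subjects else 1.0,
        "licensing_completeness": licensed / len(works) if works else 1.0,
    }


report = governance_metrics(
    sources=[{"disclosure_record": True}, {"disclosure_record": False}],
    subjects=[{"valid_consent": True}, {"valid_consent": True}, {"valid_consent": False}],
    works=[{"license_on_file": True}],
)
# Each rate is the fraction of records with the relevant safeguard in place.
print(report)
```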

Operationalizing trust also involves the development of practical use-case criteria that define when and how AI should be deployed within an organization. This includes risk assessments that consider bias, privacy, copyright, and fairness, as well as criteria for exception handling when data provenance or licensing constraints create uncertainties. The governance framework should be designed so that it can scale with the organization’s growth and with the expanding ecosystem of AI-enabled products and services. It should also be adaptable to evolving regulations and market expectations, allowing for timely updates to policies, processes, and technical controls. In this sense, trust is both a policy and a product attribute—an outcome of deliberate design choices, transparent communication, and steadfast commitment to rights protection and user empowerment.

For Appian and its enterprise customers, the path to trust will involve both internal transformation and external collaboration. Internally, organizations must align their data governance, privacy, and IP licensing practices with their AI product development efforts. Externally, they will need to engage with regulators, rights holders, and industry groups to harmonize expectations and establish scalable, interoperable standards. The payoff for those who pursue this path is a more resilient, customer-centric AI capable of delivering tangible business value while reducing risk and preserving the social license to operate. The AI landscape of the coming years will reward those who treat trust as a strategic priority rather than a compliance checkbox, recognizing that the most successful AI systems will be not only powerful but also trustworthy, transparent, and respectful of the rights and preferences of users.

Conclusion

Matt Calkins’ four guiding principles for responsible AI—disclosure of data sources, consent and compensation for private data, anonymization with permission for personally identifiable data, and consent and compensation for copyrighted information—offer a concrete blueprint for building trust in AI systems. His critique of current regulation highlights the importance of addressing data provenance and fair use as foundational governance issues, arguing that the industry must move beyond a simple race for more data toward a race to trust. The proposed framework envisions a future where trust becomes the central currency of AI, enabling more meaningful data collaboration, stronger user engagement, and sustainable value creation for enterprises.

Appian’s position as a leading provider of low-code automation positions it well to lead this shift. By integrating governance, privacy, and rights management into its platform, Appian can help customers deploy AI at scale without compromising data protection or IP rights. Broader industry adoption hinges on the willingness of other players to engage in this trust-centered approach and to participate in collaborative efforts that align incentives, standards, and practices across the AI ecosystem. While Calkins has not yet secured formal launch partners, his statement that this marks a launch moment signals a proactive push to build momentum and gather support for a more responsible AI paradigm. If these guidelines gain traction, they could shape not only corporate strategy but also policy development, driving a more transparent, equitable, and trustworthy AI future—one where the pace of innovation is balanced by principled governance and respect for user values.

The next era of AI is poised to redefine what it means for technology to serve people—how data is sourced, who benefits from its use, and how rights are protected in a world of increasingly capable machines. The emphasis on trust reflects a deeper recognition that the true potential of AI lies not merely in unprecedented performance, but in the responsible, rights-respecting deployment that generates durable value for organizations and for society as a whole. As industry leaders, regulators, and communities continue to engage with these questions, the guidance offered by Appian’s approach provides a robust, actionable pathway to realizing that potential. The industry’s outcomes in the coming years will reveal whether trust can keep pace with capability—and whether the most successful AI systems will be the ones that combine power with responsibility.
