A new wave of AI governance is taking shape as industry leaders push for responsible development that prioritizes trust, user consent, and fair use. In this evolving landscape, Appian’s CEO Matt Calkins has laid out a comprehensive framework aimed at guiding AI providers and their customers toward more transparent, privacy-conscious, and ethically grounded practices. The movement comes amid heightened concerns about data privacy, intellectual property rights, and the rapid pace of AI progress, prompting a reevaluation of how far regulators and market players must go to ensure sustainable innovation. This article delves into Calkins’ proposals, the four guiding principles he champions, and the broader implications for the enterprise AI ecosystem as trust becomes the central currency of value creation.
A Bold Challenge to Industry Practice: Calkins’ Critique and the Push for Change
Matt Calkins, cofounder and CEO of Appian, has publicly challenged the AI industry to rethink its current trajectory toward regulation, governance, and data utilization. He positions himself not as a curmudgeon trying to hinder progress but as an advocate for the maximum flourishing of AI through responsible development. In discussions with industry outlets, he underscored the urgency of aligning AI advancement with principled standards that respect data provenance, user consent, and fair use. His critique centers on what he sees as a regulatory blind spot: policymakers and industry leaders often focus on broad risk and speed while neglecting the granular facts about where data originates, how it is used, and who benefits from reusing intellectual property.
Calkins points to recent high-profile statements from global policymakers as examples of a larger problem—the tendency to discuss AI regulation without adequately addressing core issues surrounding data provenance and fair use. He argues that big technology firms have little incentive to fully disclose data origins or to honor the boundaries of fair use unless the broader ecosystem demands it and provides clear guardrails. In his view, the result is a gray zone in which major players may operate with limited transparency while smaller participants and potential partners hesitate to engage, fearing reputational or legal exposure. This perceived regulatory gap, according to Calkins, creates a friction point for the entire industry: the very entities that might otherwise drive widespread AI adoption and innovation are blocked by ambiguity and inconsistent practices. His stance is that responsible governance should not suppress ambition but should channel it through enforceable rules that protect individuals and organizations while enabling robust AI capabilities.
Calkins emphasizes a balanced approach to policy—one that does not seek to derail progress but rather to create a stable environment in which AI can mature. He argues that the path to systemic value lies in pairing technical prowess with clear, ethical guidelines that govern how data is sourced, processed, and monetized. The crux of his message is that trust is not a byproduct of AI success but a central driver of it. Trust, once established, can unlock access to more sensitive data and more nuanced user information, which in turn fuels more effective, context-aware AI applications. In this framing, regulatory discourse shifts from a binary debate about prohibition versus acceleration to a constructive dialogue about setting guardrails that protect privacy, reduce misuse, and promote accountability without stifling ingenuity. The result, he contends, is a healthier market where customers are confident in the AI tools they adopt and where providers can differentiate themselves through responsible practices rather than opacity or speed alone.
Calkins’ perspective also highlights a practical reality facing enterprises that invest in AI: the value of trust is becoming a competitive differentiator. As organizations increasingly rely on AI to automate critical operations, make strategic decisions, and personalize customer experiences, the willingness of customers and partners to engage with AI systems hinges on assurances about data handling, consent, and intellectual property rights. The implication for vendors is clear: cultivate trust through transparency and consent-driven data practices, and you gain access to richer data streams, stronger customer relationships, and more durable long-term adoption. For Appian, this translates into a strategic emphasis on responsible AI development as a core differentiator in a crowded market. Rather than competing solely on speed or the breadth of capabilities, Appian positions itself as a platform that integrates rigorous governance into the fabric of its AI-enabled solutions, thereby reducing risk for enterprises while preserving the flexibility to innovate. This framing aligns with a broader industry shift toward governance-led AI deployment, where responsible principles become a prerequisite for scale and enterprise-to-enterprise reliability.
Calkins’ remarks have also been contextualized within a broader regulatory and public scrutiny environment. Regulators, lawmakers, and the general public are increasingly attentive to how AI systems are built, tested, and deployed, especially in areas such as employment, housing, and financial services where algorithmic decisions can have pronounced real-world effects. The conversation about responsible AI is no longer theoretical; it now intersects with policy proposals, compliance requirements, and due diligence expectations that shape the roadmap for AI vendors and users alike. Against this backdrop, Appian’s four guiding principles aim to translate ethical concepts into concrete, actionable rules that businesses can implement and audit. In this sense, the company is not merely advocating for a set of best practices but proposing a governance framework designed to be integrated into product development lifecycles, contractual agreements, and risk management processes. The overarching aim is to reduce ambiguity, align incentives, and foster an ecosystem where trust accelerates adoption rather than impedes it.
This critique and the response it has provoked reflect a broader industry pivot: from a nascent phase focused on capability acceleration to a more mature phase centered on sustainable trust, accountability, and value alignment with user interests. The stakes are high. If industry participants fail to establish credible, enforceable standards, the risk is not only regulatory backlash but also a chilling effect on innovation—where fear of liability, potential misuse, and reputational harm dissuade experimentation. Conversely, a widely accepted framework for responsible AI can unlock new opportunities for collaboration, reduce the cost of risk management for enterprises, and accelerate the deployment of AI-powered solutions at scale. Calkins’ approach seeks to catalyze this transition by offering a concrete, principled path forward that enterprises can adopt with confidence while preserving competitive differentiation for providers who demonstrate commitment to ethical AI practices. The next sections elaborate the four foundational principles he proposes and explore their practical implications for developers, enterprises, and the broader AI market.
Four Guiding Principles for Responsible AI
Appian’s proposed governance framework centers on four interrelated principles designed to address core concerns around data sources, consent, privacy, and intellectual property. Each principle serves as a foundational block for building trustworthy AI systems that respect users, protect sensitive information, and acknowledge the rights of content creators. Below, each principle is unpacked in detail, with implications for how enterprises should implement them, potential operational challenges, and the benefits they offer in terms of risk reduction, trust-building, and market differentiation.
Principle 1: Disclosure of Data Sources
The first principle emphasizes transparent disclosure of where data originates. In practical terms, this requires AI developers and providers to clearly identify the datasets, data streams, and information sources that contribute to model training, fine-tuning, and ongoing inference. Disclosure is not limited to generic statements about data; it entails concrete detail on provenance, data lineage, and the licensing terms and status of each dataset. For enterprises deploying AI systems, such transparency enables more accurate assessments of data quality, potential biases, and the risk of proprietary content being used in ways that could raise legal or ethical concerns.
Implementing data-source disclosure demands robust data governance practices. Organizations must establish auditable record-keeping that traces data from collection through preprocessing, annotation, and final deployment. This traceability is essential for addressing questions about data provenance, reproducibility, and accountability. It also supports responsible disclosure to stakeholders, including customers, regulators, and internal governance bodies. The practical benefits include improved risk assessment, clearer due diligence during vendor selection, and the ability to respond more swiftly to inquiries about data origins. For enterprise customers, such transparency can reduce the perceived risk of adopting AI solutions and increase confidence that the system’s outputs are grounded in traceable, well-documented data sources.
From an operational perspective, data-source disclosure requires standardized metadata schemas, consistent labeling practices, and rigorous documentation that accompanies AI products. This implies investment in data cataloging, lineage tracking tools, and governance processes that tie data sources to training events, model versions, and performance metrics. It also necessitates cross-functional collaboration among data engineers, data stewards, legal teams, and product developers to ensure that disclosures remain accurate as data sources evolve. In practice, this principle can be challenging to implement, particularly in ecosystems with complex or proprietary datasets, but its value lies in providing a clear, auditable map of data provenance that can be consulted during risk reviews, regulatory discussions, and internal governance audits. For providers, transparent data sourcing can become a competitive differentiator, signaling reliability and accountability to enterprise buyers who are increasingly demanding greater visibility into the origins of AI capabilities.
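As a rough illustration of what such lineage records might look like in practice, the Python sketch below pairs each dataset with its source, license, preprocessing steps, and the model versions it fed. The field names and example values are hypothetical and not drawn from any specific product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class DatasetProvenance:
    """Illustrative provenance record attached to each training dataset."""
    dataset_id: str
    source: str                  # where the data came from (vendor, public corpus, internal system)
    license: str                 # licensing status, e.g. "CC-BY-4.0", "commercial", "internal-use-only"
    collected_at: datetime
    preprocessing_steps: List[str] = field(default_factory=list)
    used_in_model_versions: List[str] = field(default_factory=list)

    def record_training_use(self, model_version: str) -> None:
        """Tie this dataset to a specific training event / model version."""
        self.used_in_model_versions.append(model_version)

# A disclosure report is then simply the collection of these records.
corpus = DatasetProvenance(
    dataset_id="support-tickets-2023",
    source="internal CRM export",
    license="internal-use-only",
    collected_at=datetime(2023, 6, 1, tzinfo=timezone.utc),
    preprocessing_steps=["PII scrubbing", "deduplication"],
)
corpus.record_training_use("classifier-v2.1")
```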
Principle 2: Use of Private Data Only with Consent and Compensation
The second principle centers on the ethical and legal handling of private data. It asserts that private or sensitive data should be used only with explicit consent and, where applicable, with appropriate compensation to data subjects or rights holders. This principle places consent and fair compensation at the core of data usage agreements, ensuring that individuals retain agency over their information and are appropriately rewarded when their private data contributes to AI value generation.
Operationalizing this principle involves several components. First, consent mechanisms must be explicit, granular, and revocable, enabling individuals to understand precisely how their data will be used and to withdraw permission at any time. Second, compensation models—whether direct monetary payments, access to services, or other value exchanges—must be clearly defined and enforceable, respecting applicable privacy laws and contractual commitments. Third, organizations must implement robust data minimization and access controls to ensure that private data is used only to the extent necessary for the intended purpose and is safeguarded against unauthorized access, leakage, or misuse.
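A minimal sketch of what a granular, revocable consent record could look like follows; the fields, purposes, and compensation terms shown are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, Optional

@dataclass
class ConsentRecord:
    """Illustrative per-subject consent record: granular purposes, revocable at any time."""
    subject_id: str
    purposes: Dict[str, bool] = field(default_factory=dict)   # e.g. {"model_training": True}
    compensation_terms: Optional[str] = None                  # e.g. "service credit", "royalty share"
    revoked_at: Optional[datetime] = None

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = True

    def revoke_all(self) -> None:
        """Withdrawal must be possible at any time; downstream pipelines check this flag."""
        self.revoked_at = datetime.now(timezone.utc)

    def permits(self, purpose: str) -> bool:
        return self.revoked_at is None and self.purposes.get(purpose, False)

record = ConsentRecord(subject_id="user-831", compensation_terms="discounted subscription")
record.grant("model_training")
assert record.permits("model_training")
record.revoke_all()
assert not record.permits("model_training")
```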
Adopting this principle has broad implications for data partnerships, training workflows, and monetization strategies. For vendors, it may necessitate rethinking data licensing arrangements, establishing transparent compensation terms, and creating clear opt-in/opt-out pathways for data subjects. For enterprises, it means embedding privacy-by-design principles into product development and ensuring that data processing practices align with consent requirements and compensation agreements. While these changes can introduce complexity and potential costs, the payoff is a stronger trust bond with data subjects, reduced regulatory risk, and more predictable cross-border data usage where consent and compensation rules vary across jurisdictions. In addition, consent-centric data practices can foster more robust, consent-based data ecosystems that support personalized AI experiences without compromising individual autonomy or rights.
Principle 3: Anonymization and Permission for Personally Identifiable Data
The third principle focuses on protecting user privacy by advocating for anonymization where possible and requiring explicit permission when handling personally identifiable information (PII). Anonymization techniques reduce the risk of re-identification while preserving the usefulness of datasets for training and evaluation. When PII is necessary for a given use case, organizations must obtain informed, explicit permission from data subjects and ensure that the scope of consent reflects the intended use and potential future applications.
Implementing anonymization involves applying robust de-identification techniques, differential privacy when appropriate, and ongoing monitoring to prevent leakage of sensitive attributes. It also requires a clear policy on when and how identifiable data can be re-identified, under what safeguards, and with whom such re-identification would be permissible. For data handling practices, this principle suggests layering technical safeguards with governance controls: access restrictions, purpose limitation, data retention schedules, and auditability. The challenge lies in balancing privacy protection with the need to maintain data utility for model improvement and personalization. Organizations must continuously assess the trade-offs between privacy and performance, selecting anonymization and privacy-preserving methods that fit their specific contexts.
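As one concrete example of a privacy-preserving technique named above, the sketch below adds Laplace noise to a simple counting query, the textbook mechanism for differential privacy. The epsilon value and data are illustrative only.

```python
import random

def dp_count(values, epsilon: float = 1.0) -> float:
    """Return a differentially private count by adding Laplace noise.

    A counting query has sensitivity 1, so the noise scale is 1/epsilon.
    Smaller epsilon means stronger privacy and a noisier answer.
    """
    true_count = len(values)
    # Difference of two exponentials with rate epsilon is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages_over_65 = [a for a in [34, 71, 68, 25, 80] if a > 65]
print(dp_count(ages_over_65, epsilon=0.5))  # noisy answer near 3
```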
For enterprises, this principle translates into concrete design choices: prefer non-identifying aggregates and synthetic data for broad training needs; when real identities are necessary, ensure that permission is properly documented and that the data flow complies with consent terms. It also incentivizes the establishment of privacy-by-design cultures across product teams, where privacy risks are identified and mitigated early in the development lifecycle. The outcome is a more resilient AI ecosystem where user privacy is protected, and the risk of data breaches or misuse is significantly reduced. By standardizing anonymization and PII permissions, providers and customers can collaborate on more ambitious AI initiatives with clearer expectations, better governance, and stronger trust assurances.
Principle 4: Consent and Compensation for Copyrighted Information
The fourth principle addresses the use of copyrighted content in AI training, evaluation, and deployment. It asserts that such material should be used with explicit consent or under appropriate licensing arrangements, and that rights holders should be compensated where applicable. This principle seeks to recognize the rights of content creators and to prevent unauthorized use of protected works in model development and downstream applications.
Operational implications include establishing clear licensing terms for copyrighted data, implementing usage restrictions aligned with license provisions, and maintaining auditable records of permissions and licensing agreements. It also calls for ongoing dialogue with rights holders to negotiate fair terms that reflect the value contributed by their works to AI systems. For organizations leveraging copyrighted material to improve AI performance, this principle implies a structured approach to copyright compliance, including proper attribution where required, adherence to license scope, and the removal or retraining of models if license terms are violated.
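A minimal illustration of such auditable license records is sketched below; the works, license scopes, and compensation terms are invented for the example and would in practice come from negotiated agreements.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ContentLicense:
    """Illustrative record of permission to use a copyrighted work in training."""
    work_id: str
    rights_holder: str
    scope: str            # e.g. "training-only", "training-and-inference", "display-only"
    compensation: str     # e.g. "flat fee", "per-use royalty"
    expires: date

    def permits_training(self, today: date) -> bool:
        return today <= self.expires and "training" in self.scope

def audit_corpus(licenses: List[ContentLicense], today: date) -> List[str]:
    """Return works that must be excluded (or trigger retraining) for lack of a valid license."""
    return [lic.work_id for lic in licenses if not lic.permits_training(today)]

licenses = [
    ContentLicense("article-0042", "Example Press", "training-only", "flat fee", date(2026, 1, 1)),
    ContentLicense("photo-0917", "J. Doe", "display-only", "per-use royalty", date(2027, 1, 1)),
]
print(audit_corpus(licenses, date(2025, 6, 1)))  # ["photo-0917"]
```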
The adoption of this principle supports a sustainable content ecosystem by ensuring that creators receive recognition and compensation for the inclusion of their works in AI development. This approach can reduce legal exposure for AI developers and users while fostering a collaborative environment where content creators, data providers, and technology firms can work together to unlock AI value. For enterprises, respect for copyrights translates to more predictable licensing costs, transparent vendor agreements, and a clearer path to scalable AI deployment that aligns with industry best practices and legal requirements. The broader effect is to elevate the legitimacy of AI initiatives in the eyes of stakeholders, from creators to policymakers, by demonstrating a commitment to fair dealing, ethical content usage, and robust intellectual property protections.
These four principles collectively form a coherent framework designed to align AI innovation with governance, consent, and respect for rights. They are intended to complement existing regulatory efforts rather than replace them, offering a practical pathway for enterprises and AI providers to operationalize responsible practices. Implementing these principles requires concerted efforts across governance, product development, legal, and security functions, with continuous monitoring, auditing, and adaptation as technology and regulations evolve. The promise of this framework is not merely to reduce risk but to foster a culture of trust that expands the potential for AI to deliver meaningful, user-centered value across industries. As enterprises begin to adopt these guidelines, the industry may see a shift toward more transparent data ecosystems, more consent-driven data practices, and stronger collaboration with content creators and rights holders, ultimately supporting a healthier, more sustainable trajectory for AI advancement.
The Next Phase of AI: Trust as the New Competitive Frontier
If the industry has to choose between a data-hungry race and a trust-based evolution, Matt Calkins argues that the latter is not only more ethical but strategically superior. He describes the upcoming phase of AI development as a “race to trust,” signaling a fundamental shift in what constitutes competitive advantage. The premise is that the current phase—often labeled as phase one—has been characterized by aggressive data accumulation, broad resource gathering, and a focus on scaling capabilities as quickly as possible. That phase has, in his view, reached a saturation point where simply amassing more data yields diminishing returns and heightened risks. The next phase, phase two, is expected to prioritize trust-building measures that facilitate more meaningful engagement with users and more responsible data practices that enhance data utility without compromising privacy or rights.
Trust, in Calkins’ framing, becomes a form of valuable currency. When users and organizations have confidence that an AI system is respecting their data, honoring consent, and safeguarding intellectual property, they are more willing to share richer, more relevant information. This deeper data collaboration can unlock access to more personalized, context-aware insights, enabling AI systems to deliver higher value with greater relevance. In other words, trust expands the scope and quality of data that can be leveraged, thereby enabling more sophisticated and practical AI applications while maintaining ethical and legal guardrails. The implication for AI providers is the need to demonstrate transparent governance, robust privacy protections, and a clear commitment to user autonomy. For enterprises, trust reduces the barriers to adoption, accelerates deployment across lines of business, and enhances the overall return on investment by enabling more precise, user-centric AI outcomes.
Calkins emphasizes that the transition to trust-based AI will require a rethink of organizational incentives and technical architectures. Governance frameworks must be integrated into product roadmaps, contract terms, and risk management processes, rather than treated as afterthoughts or compliance checkboxes. This involves building systems that can account for data provenance, consent states, anonymization status, and licensing terms during model training, evaluation, and deployment. It also demands that AI providers and users collaborate on developing standardized metrics for trust, such as transparency scores, data lineage traceability, sensitivity analyses, and user consent audits. In this vision, trust becomes a measurable attribute of AI systems that customers can evaluate and compare, driving competition beyond capabilities to include governance quality, accountability, and ethical alignment.
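No standard trust metric exists yet, but as a purely hypothetical illustration, a composite transparency score could be assembled from quantities the framework already calls for, such as the share of disclosed sources, consented records, and lineage-traced data:

```python
def transparency_score(disclosed_sources: int, total_sources: int,
                       consented_records: int, total_records: int,
                       lineage_coverage: float) -> float:
    """Hypothetical composite: average the share of disclosed sources, consented
    records, and lineage-traced data into a single 0-1 score buyers could compare."""
    components = [
        disclosed_sources / max(total_sources, 1),
        consented_records / max(total_records, 1),
        lineage_coverage,
    ]
    return sum(components) / len(components)

print(round(transparency_score(18, 20, 9_500, 10_000, 0.87), 3))  # 0.907
```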
The pivot to trust also implies a recalibration of risk management. Enterprises will increasingly assess vendors not only on performance and speed but also on governance maturity, privacy protections, and IP compliance. This shift influences procurement processes, supplier risk assessments, and long-term partnerships, as organizations seek to align with AI providers that demonstrate consistent adherence to responsible practices. The market dynamics may favor companies that invest heavily in transparency, user empowerment tools, and robust privacy protections, creating a differentiator that goes beyond price or feature depth. Yet the move to trust is not without challenges. It requires overcoming legacy systems, reconciling diverse regulatory requirements across geographies, and addressing concerns from stakeholders who may resist tighter controls due to perceived friction or slower innovation cycles. Nonetheless, proponents of this approach contend that the long-term value of trust-based AI—through better user engagement, higher-quality data, and stronger risk management—far outweighs the temporary costs and complexities of implementation.
Appian’s framing of trust as a strategic asset carries implications for product architecture and platform strategy. The company’s low-code automation platform is well-positioned to support rapid deployment of AI-enabled applications while embedding governance and privacy protections into the development lifecycle. By enabling organizations to build and scale AI capabilities with built-in controls over data access, consent management, and IP compliance, Appian can help customers realize the benefits of trust without sacrificing speed or agility. This alignment between platform design and the trust-based paradigm can yield several practical advantages: faster time-to-value for AI initiatives with lower risk; more straightforward compliance with evolving regulations; and a clearer path for customers to demonstrate responsible AI usage to stakeholders, including regulators and auditors. In this sense, Appian’s approach represents a proactive strategy to capture the market shift toward trust-centric AI by offering tools and practices that operationalize the principles of responsible AI in tangible, scalable ways.
The broader industry response to this trust-based shift remains mixed, as some players embrace the principles and begin to integrate them into their product roadmaps, licensing agreements, and customer communications. Others resist, citing concerns about flexibility, licensing complexity, or the potential impact on speed and competitiveness. The tension between openness and control, innovation and governance, is likely to intensify as more organizations adopt formal trust-centric policies. The next phase of AI thus promises a more nuanced balance between innovation and responsibility, where the most successful players will be those who can demonstrate consistent trustworthiness while delivering practical, high-value AI capabilities. In this environment, trust is no longer a vague ethical ideal but a concrete, marketable attribute that can influence customers’ decisions and shape the competitive landscape. The industry’s leaders who invest early in transparent data practices, consent-based data use, privacy protections, and IP respect will likely set the standard for what responsible AI looks like in practice, potentially shaping regulatory expectations and industry norms for years to come.
Appian’s Platform and Strategic Positioning
Appian’s emphasis on responsible AI aligns closely with its core strengths as a leading provider of low-code automation solutions. The platform inherently supports rapid development, deployment, and iteration of AI-powered applications, while also enabling organizations to maintain tight control over data privacy, security, and governance. This combination positions Appian to benefit from the industry-wide pivot toward trusted AI—particularly for enterprises seeking governance-first approaches that do not sacrifice speed or scalability.
From a product perspective, Appian’s strategy centers on delivering a cohesive environment where developers, business users, and IT security teams can collaborate seamlessly to design, build, deploy, and govern AI-enabled processes. The platform’s architecture likely emphasizes modularity, data source tracing, and policy-based controls that make it easier to implement the four guiding principles described above. For instance, integrated data catalogs and lineage tracking can support Principle 1 (data-source disclosure) by automatically recording the provenance of data used in AI workloads. Consent management features can be embedded to satisfy Principle 2 (consent and compensation for data use) and Principle 3 (PII handling and anonymization), while licensing and rights management tools can help enforce Principle 4 (copyright permissions) across data sources and content. This integrated approach reduces the friction organizations face when trying to build responsible AI solutions, enabling faster adoption with built-in governance.
Appian’s market positioning as a trusted partner for enterprises seeking responsible AI is reinforced by its emphasis on user-centric governance. By prioritizing transparency, consent, and IP rights, Appian differentiates itself from competitors that may offer more aggressive data collection practices or less robust governance frameworks. This differentiation can be particularly attractive to organizations in regulated industries or in regions with stringent data protection laws, where demonstrable compliance and auditable governance are critical criteria in vendor selection. The alignment between Appian’s platform capabilities and the four guiding principles helps to create a cohesive value proposition: enterprises can deploy AI-driven processes quickly while maintaining strong governance, reducing risk, and building trust with customers, employees, and regulators.
The leadership’s vision extends beyond product features into a broader narrative about AI’s future. By calling for a shift from a data-saturated race to a trust-based trajectory, Appian signals a long-term commitment to shaping industry norms through principled practices. This stance positions the company as a thought leader and potential standard-bearer in the responsible AI movement. If Appian can demonstrate tangible outcomes—such as measurable improvements in data provenance, consent adherence, and IP compliance across customer deployments—the company could influence market expectations and regulatory dialogues, encouraging broader adoption of trust-centered guidelines. In the face of increasing regulatory scrutiny and heightened public concern about AI risk, Appian’s strategy offers a pragmatic path forward: deliver value to customers through rapid, low-code AI capabilities while embedding the governance required for responsible, sustainable innovation.
The immediate implications for customers are meaningful. Enterprises seeking to accelerate digital transformation with AI are likely to gravitate toward platforms that offer clear governance baked into the development process, enabling them to meet internal risk benchmarks, audit requirements, and regulatory obligations more efficiently. Appian’s approach provides a framework through which customers can implement modern AI solutions with a transparent data supply chain, explicit consent and compensation mechanisms, and IP-aware usage policies. Such capabilities can reduce the friction typically encountered during vendor due diligence, data privacy impact assessments, and licensing negotiations, thereby shortening timelines and lowering total cost of ownership for AI initiatives. Additionally, the emphasis on trust can bolster customer loyalty and satisfaction, as organizations recognize that their AI investments are not only technically capable but also ethically and legally sound.
Beyond the enterprise, Appian’s stance contributes to a broader industry dialogue about responsible AI governance. The guidelines serve as a blueprint that others in the ecosystem can study, adapt, and integrate into their own products and contracts. If more players adopt similar principles, the market could shift toward a common standard for AI governance, which would simplify cross-vendor collaborations, reduce risk for multi-vendor deployments, and foster a healthier, more stable environment for AI innovation. While the path to widespread adoption of such guidelines is not without obstacles—ranging from regulatory divergence to business model constraints—the potential payoff is a more trustworthy AI economy where data handling, consent, and IP rights are consistently respected across the board.
Appian’s announcements come at a moment when the AI industry faces intensified scrutiny from regulators and lawmakers who are increasingly focused on accountability, transparency, and consumer protection. In this context, a proactive, governance-forward approach could serve as a blueprint for how the industry can respond constructively to policy debates while simultaneously accelerating responsible deployment of AI technologies. The company’s leadership recognizes that achieving sustainable AI growth requires more than technical breakthroughs; it requires a governance culture that can be embedded into the fabric of AI products, partnerships, and customer engagements. If successful, Appian’s model could help catalyze a broader movement toward responsible AI that combines practical utility with ethical rigor, ultimately enabling AI to deliver its promised value while maintaining public trust.
Appian’s leadership has signaled openness to collaboration, acknowledging that launching comprehensive governance guidelines is a collaborative endeavor. While the company has not publicly announced formal launch partners yet, the initiative is framed as an invitation to the AI industry to join in shaping a shared standard for responsible AI governance. The emphasis on simple, implementable terms suggests an appeal to organizations of various sizes and stages of maturity to participate in a collective effort to elevate AI stewardship. The success of such a coalition would depend on broad industry participation, credible governance mechanisms, and the willingness of organizations to invest in the necessary infrastructure to operationalize these principles in real-world deployments. If the initiative gains momentum, it could catalyze a virtuous cycle in which trust-based governance accelerates adoption, which in turn reinforces the importance of responsible development, creating a sustainable foundation for AI’s long-term growth.
The stakes in this shift are high. As Calkins notes, the AI landscape has reached a point where phase one—characterized by aggressive data accumulation and rapid scaling—has peaked. The next phase—defined by trust, consent, and governance—could determine which platforms and providers sustain competitive advantage over the long run. Enterprises that can demonstrate reliable governance, transparent data practices, and fair IP handling are likely to win the confidence of stakeholders, including customers, regulators, and business partners. The consequences for players who fail to adapt could be significant, with potential regulatory penalties, reputational damage, and limited opportunities for large-scale, cross-industry AI deployments. In this evolving arena, Appian’s guidelines aim to become more than a theoretical proposition; they seek to establish practical standards that guide behavior, inform contracts, and shape product design across the industry.
Industry Implications, Regulation, and Adoption
As the AI sector evolves toward a trust-centric model, the regulatory and business environment is likely to respond in ways that reinforce or calibrate this shift. Policymakers are increasingly prioritizing accountability, explainability, and user rights, and they may welcome practical frameworks that operationalize responsible AI. The proposed four-principle framework offers a blueprint for how enterprises and vendors can align with emerging expectations while maintaining the agility required to remain competitive. The interplay between market demand and regulatory guidance could create a conducive ecosystem for responsible AI deployment, characterized by clearer accountability structures, auditable governance trails, and well-defined licensing and consent mechanisms.
Within this landscape, the adoption of trust-based governance is influenced by several factors. First, enterprises require scalable governance solutions that can be embedded into existing IT and security architectures, as well as into the broader business processes that leverage AI. This includes integration with data governance platforms, security information and event management systems, and risk management programs. Second, there is the need for standardization of vocabulary, metrics, and processes around data provenance, consent management, anonymization, and IP licensing. Standardization reduces ambiguity, aids interoperability, and simplifies vendor evaluation. Third, education and stakeholder alignment are critical. Business leaders, legal teams, procurement professionals, and technical staff must share an understanding of why these principles matter, how to implement them, and what outcomes to expect. Clear communication helps build trust and accelerates adoption across departments and geographies.
From a market perspective, the trust-based approach can reshape competitive dynamics in several ways. Providers that invest in governance, transparency, and IP protections may command higher trust scores and stronger customer loyalty, enabling premium pricing or differentiated value propositions. Enterprises may, in turn, seek long-term partnerships with vendors who demonstrate responsible practices and a track record of compliance and risk management. This could encourage a shift toward more collaborative, multi-vendor arrangements in which governance is standardized and portability across platforms is facilitated by shared frameworks. The ultimate beneficiaries are the end users, whose privacy, rights, and ownership concerns are addressed more directly, leading to higher confidence in AI-powered products and services.
However, achieving broad adoption of the proposed guidelines will require overcoming practical challenges. Some organizations may resist due to perceived increases in complexity, cost, and time-to-delivery, particularly in contexts where speed to market is paramount. Others may worry that stringent consent and compensation requirements could hamper data-driven innovation or complicate cross-border data flows that rely on differing regulatory regimes. To address these concerns, a gradual, phased approach could be employed, starting with pilot programs, clear success criteria, and scalable governance templates that demonstrate the tangible benefits of responsible AI practices. Collaboration with regulators, industry bodies, and customers can help refine the framework and tailor it to diverse sectors and regions while preserving core principles. The potential payoff is a more predictable, resilient, and trustworthy AI ecosystem that supports sustainable growth, reduces risk, and fosters broad-based adoption.
Ultimately, the success of trust-based AI governance hinges on practical implementation. Enterprises and vendors must translate high-level principles into day-to-day processes, product features, and contractual terms. This means embedding consent management into user interfaces, creating transparent data provenance dashboards, ensuring that anonymization and IP protections are testable and auditable, and aligning licensing agreements with actual usage patterns. It also requires ongoing governance oversight, including regular risk assessments, policy updates, and stakeholder engagement to reflect evolving technologies and societal expectations. If the industry can operationalize these elements effectively, the trust-based future of AI could become the norm rather than the exception, driving higher value, broader adoption, and a more equitable distribution of benefits across stakeholders.
The convergence of enterprise needs, regulatory expectations, and responsible innovation creates a unique opportunity for leadership in the AI space. Appian’s framework provides a concrete blueprint for how to progress from aspirational ideals to implementable safeguards. By focusing on data provenance, consent and compensation, privacy-preserving practices, and IP-respecting workflows, the industry can begin to close the gap between potential and responsibility. The path forward involves constructive dialogue, collaborative standard-setting, and rigorous execution that demonstrates the feasibility and desirability of trust-driven AI. In doing so, the AI industry can not only mitigate risks but also unlock deeper, more meaningful value for enterprises and individuals alike.
Path to Industry-wide Trust: Implementation Challenges and Opportunities
Turning principles into practice presents both challenges and opportunities for AI developers, vendors, and customers. The practical implementation of a governance framework built on transparency, consent, anonymization, and IP respect requires careful planning, cross-functional collaboration, and sustained commitment. The journey is not a mere compliance exercise; it represents a strategic shift in how organizations design, deploy, and operate AI systems.
First, implementing data-source disclosure requires robust data governance and technical capabilities to capture, store, and present provenance information. This entails building or purchasing data cataloging tools, metadata standards, and lineage tracing that can be integrated into model training and deployment workflows. It also means establishing governance roles, such as data stewards and governance boards, who oversee data lineage, licensing, and ethical considerations. The operational effort is nontrivial, but the payoff is a more auditable, transparent environment in which stakeholders can assess data quality, bias risks, and licensing terms. In practice, teams must coordinate across data engineering, legal, security, and product development to ensure that disclosures remain accurate as data sources evolve. When data provenance is clear, organizations can better justify model decisions, address regulatory inquiries, and demonstrate accountability to customers and partners.
Second, the consent-and-compensation framework for private data demands sophisticated consent-management capabilities and transparent value exchange mechanisms. This involves designing user-centric consent flows, offering meaningful choices about how data is used, and ensuring that any compensation model is fair and enforceable. It also requires clear documentation of consent status, revocation options, and data-processing boundaries. Operationally, it means embedding consent signals into data pipelines and model training processes, as well as ensuring that any data reuse beyond the original purpose remains within the scope of consent. For data subjects, this approach enhances autonomy and control over personal information, while for organizations, it reduces legal and reputational risk associated with data misuse. The complexity of cross-jurisdictional privacy laws adds another layer, necessitating adaptable, jurisdiction-aware consent mechanisms and data-handling policies that can respond to evolving regulatory landscapes.
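Building on the consent-record idea sketched earlier, the following illustrative filter shows how consent signals might be enforced inside a training pipeline, dropping any record whose subject has not granted, or has since revoked, permission for the stated purpose. The structures are assumptions made for the example.

```python
class Consent:
    """Minimal stand-in for the consent record sketched earlier."""
    def __init__(self, allowed_purposes, revoked=False):
        self.allowed_purposes = set(allowed_purposes)
        self.revoked = revoked

    def permits(self, purpose):
        return not self.revoked and purpose in self.allowed_purposes

def filter_for_training(records, consents, purpose="model_training"):
    """Keep only records whose subjects currently consent to the stated purpose;
    anything without a matching, unrevoked consent entry is dropped before training."""
    return [r for r in records
            if (c := consents.get(r["subject_id"])) is not None and c.permits(purpose)]

records = [{"subject_id": "u1", "text": "..."}, {"subject_id": "u2", "text": "..."}]
consents = {"u1": Consent({"model_training"}), "u2": Consent({"analytics"})}
print([r["subject_id"] for r in filter_for_training(records, consents)])  # ["u1"]
```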
Third, implementing anonymization and PII permissions requires selecting privacy-preserving techniques appropriate to the data and use case. This may include differential privacy, k-anonymity, or robust tokenization and masking strategies. In addition, when PII is essential, obtaining explicit permission and maintaining rigorous access controls becomes crucial. This fosters a culture of privacy-by-design and ensures that model outputs cannot be easily reverse-engineered to reveal sensitive information. It also requires ongoing monitoring to detect potential privacy risks and rapid remediation processes. The practical challenge lies in balancing data utility with privacy protections, which often involves trade-offs between model performance and privacy guarantees. However, as privacy technologies mature, organizations can achieve strong safeguards without sacrificing meaningful AI capabilities.
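As a small example of the masking and tokenization strategies mentioned here, the sketch below hashes identifiers into stable, non-reversible tokens and redacts email addresses from free text. The salt handling and regular expression are simplified for illustration; a production system would manage secrets and PII patterns far more carefully.

```python
import hashlib
import re

SECRET_SALT = "replace-with-a-vaulted-secret"  # illustrative; keep real salts out of source code

def tokenize(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token so records can still be joined."""
    return hashlib.sha256((SECRET_SALT + value).encode()).hexdigest()[:12]

def mask_email(text: str) -> str:
    """Redact email addresses that appear in free text."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)

record = {"customer_id": "C-10482", "note": "Reached out to jane.doe@example.com about renewal."}
safe = {"customer_id": tokenize(record["customer_id"]), "note": mask_email(record["note"])}
print(safe)
```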
Fourth, respecting copyrighted information demands clear licensing agreements and transparent usage terms. This includes establishing processes to obtain necessary permissions from rights holders and to compensate them fairly when their content contributes to AI training or inference. It also requires institutions to maintain robust documentation to demonstrate compliance with licensing terms, enabling swift resolution of disputes and reducing the likelihood of infringement. Operationally, this means integrating rights management into procurement and product development cycles, aligning licensing terms with deployment scenarios, and ensuring that content creators have visibility into how their works are used. By formalizing these relationships, organizations can reduce litigation risk, strengthen partnerships with creators, and foster a more sustainable content ecosystem that supports innovation while respecting intellectual property rights.
Across these dimensions, the path to trust involves building and sustaining trust through measurable, auditable outcomes. Enterprises must demonstrate that their AI systems operate within the established governance framework, that data handling practices align with consent and IP policies, and that there is accountability for any deviations or misuses. Providers must be prepared to prove compliance through transparent reporting, independent audits, and robust governance controls embedded within product design. The cultural shift required is substantial: it demands that organizations prioritize governance and ethics as core business competencies, not as afterthoughts or compliance checklists. Yet the potential rewards are equally significant. A trust-centered AI ecosystem can unlock new forms of collaboration, drive broader and deeper data sharing under safe and fair terms, and create a foundation for scalable AI adoption that benefits a wide range of stakeholders.
The industry can also expect a reconfiguration of partnerships and ecosystems as trust becomes central to value creation. Companies may seek to align with vendors who demonstrate a track record of responsible AI practices, leading to more standardized procurement criteria and shared governance frameworks. This, in turn, could foster a landscape where interoperability and portability of AI solutions improve, reducing vendor lock-in and enabling enterprises to mix and match components while maintaining consistent governance. At the same time, concerns about the scalability of governance efforts, the cost of implementing robust privacy safeguards, and the complexity of licensing negotiations will need to be addressed. The market will likely respond with a combination of new tooling, standardized contracts, and collaborative industry groups that help streamline the adoption of responsible AI practices across sectors.
An important dimension of this transition is education and communication. Stakeholders—including executives, engineers, legal teams, and policy makers—must understand not only what the principles require but also why they matter. Clear, accessible explanations of data provenance, consent mechanisms, privacy protections, and IP rights can help demystify governance and empower teams to implement responsible AI more effectively. This also involves developing intuitive dashboards, audit trails, and reporting capabilities that executives can interpret to assess risk, measure performance, and communicate progress to boards and regulators. By investing in education and transparency, the industry can create a more informed ecosystem where responsible practices are valued and reinforced by evidence-based outcomes.
Ultimately, the adoption of these four principles is not an end in itself but a means to unlock sustained AI value. When organizations commit to transparency, informed consent, privacy protections, and IP respect, they create trust with users, partners, and regulators. This trust translates into deeper engagement, more robust data collaboration, and a more stable platform for innovation. As the industry navigates the challenges of rapid AI advancement, the principles offer a practical, scalable path to responsible growth that can withstand evolving legal standards, societal expectations, and competitive pressures. The question remains whether the broader AI community will embrace this vision quickly enough to shape a new era of trustworthy, human-centered AI—and whether leading platforms like Appian will catalyze and sustain this transformation through their products, partnerships, and governance commitments.
Conclusion
In a moment when AI promises transformative gains across industries, the emphasis on responsible development, data ethics, and IP rights is increasingly pivotal. Matt Calkins’ call for a trust-centered AI paradigm captures a pragmatic and principled approach to governance, anchored by four concrete principles: disclose data sources, use private data only with consent and compensation, anonymize data and require permission for personally identifiable information, and obtain consent and compensation for copyrighted works. The next phase of AI, in his view, is a race not to harvest the most data but to cultivate the most trust. This reframing has profound implications for how AI is built, deployed, and regulated, and it offers a pathway for enterprises to unlock deeper value without compromising privacy or rights.
Appian’s strategy aligns closely with this vision. By integrating responsible AI governance into a low-code platform designed for rapid deployment, Appian seeks to empower organizations to build scalable, compliant, and trustworthy AI solutions. The potential impact extends beyond individual deployments to industry standards and regulatory expectations, as the proposed framework provides a tangible, implementable blueprint for responsible AI that others can study, adapt, and adopt. As the AI landscape continues to evolve, the winners are likely to be those who demonstrate not only technical prowess but also credible governance, transparent data practices, and a commitment to fair treatment of data and content creators. The path forward will require collaboration, continuous iteration, and a shared determination to balance innovation with accountability.
To realize this vision, stakeholders across the AI ecosystem must engage in constructive dialogue, invest in governance-enabled architectures, and embrace processes that protect user privacy, uphold rights, and foster trust. The journey toward trust-based AI is neither trivial nor instantaneous, but it offers a durable, scalable foundation for AI to deliver meaningful value—safeguarding society while enabling enterprises to achieve smarter, more responsible outcomes. If the industry embraces these principles and translates them into everyday practices, the AI revolution can mature into a sustainable, inclusive, and beneficial paradigm for all.