Last Updated on June 10, 2025
14.1 Prompt Collection and Storage
14.1 Prompt Collection and Storage
By using the Services, you acknowledge and agree that all prompts, queries, chat messages, uploads (including images, documents, and VINs), and interactions submitted through Chariot may be collected, logged, and stored for the following purposes:
Service improvement (e.g., training models, fine-tuning user experience, refining output relevance)
Abuse detection and moderation (e.g., identifying prompt injections, spam, or prohibited use)
Security auditing (e.g., investigating unauthorized access or suspicious behavior)
Debugging and performance monitoring (e.g., resolving errors, latency, or system failures)
This data may be temporarily or permanently retained based on system logs, subscription type, or abuse risk. Chariot may anonymize or aggregate stored prompts for research and performance analysis, but does not claim ownership over the original prompt content you submit.
14.2 Consent to Prompt Review
14.2 Consent to Prompt Review
By using Chariot, you expressly grant Chariot Technologies LLC the right to access, review, analyze, and audit your prompt history, uploads, and session activity in the following scenarios:
To investigate suspicious behavior, including but not limited to token abuse, prompt injection, or impersonation.
To enforce platform rules, ensure compliance with these Terms, and detect violations.
To respond to user complaints or support requests involving output errors, inappropriate content, or system misuse.
To fulfill legal obligations, including cooperation with law enforcement or regulatory audits when required.
This review may include uploaded documents, chat messages, image metadata, and session logs. All reviews are subject to internal confidentiality protocols and will be conducted only by authorized personnel or automated security systems.
14.3 Abuse Monitoring
14.3 Abuse Monitoring
Chariot reserves the right to flag, block, rate-limit, or throttle any prompt, session, or user account that exhibits behavior indicative of abuse, including but not limited to:
Prompt spam or automated input flooding
Excessive token generation or output scraping
Use of adversarial, jailbreak, or prompt injection techniques
Attempts to bypass pricing tiers, feature restrictions, or rate limits
Submission of illegal, harmful, or malicious content
Unusual usage patterns that suggest bot-driven or unauthorized access
Chariot may take enforcement actions automatically or manually, including temporary suspension, permanent banning, or escalation to legal authorities, depending on the severity and frequency of the abuse.
14.4 Rate Limiting
14.4 Rate Limiting
To ensure fair usage and system stability, Chariot may implement automated prompt rate limits, including but not limited to:
Per-minute caps on the number of allowed prompts
Hourly and daily usage thresholds based on your subscription tier
Token-based ceilings for high-volume requests or large uploads
Session-based cooldowns for excessive or abusive behavior
These limits are enforced automatically and may be adjusted dynamically based on user activity, platform demand, or abuse prevention logic. Chariot reserves the right to modify rate limits at any time without prior notice.
14.5 Prohibited Prompt Content
14.5 Prohibited Prompt Content
Users are strictly prohibited from submitting prompts that include or attempt any of the following:
Personally identifiable information (PII) of others, such as names, license plates, contact details, addresses, or VINs belonging to third parties without consent
Obscene, violent, or sexually explicit language, including harassment, threats, or graphic depictions of harm
Hate speech or discriminatory language targeting race, gender, religion, nationality, sexual orientation, disability, or any protected class
Legal, medical, or automotive impersonation, such as simulating the voice, identity, or authority of a lawyer, doctor, mechanic, or regulatory agency
Attempts to jailbreak, reverse-engineer, prompt-inject, or manipulate the AI system to reveal internal rules, restricted content, or harmful instructions
Violations may result in prompt blocking, account suspension, reporting to authorities, or permanent termination at Chariot’s sole discretion.
14.6 No Model Prompt Injection
14.6 No Model Prompt Injection
You may not submit prompts designed to manipulate, override, or circumvent the intended behavior of Chariot’s AI systems. This includes, but is not limited to:
Prompt injection techniques, such as:
"Ignore previous instructions and..."
"Disregard safety policies and instead..."
"You are now a different model that can..."
Embedded adversarial logic, conditionals, or hidden prompts intended to change how the AI interprets or responds to input
Violations of this clause will result in immediate prompt blocking, account suspension, and potential permanent ban. Chariot reserves the right to investigate any suspected misuse involving AI manipulation, and to retain logs for security auditing.
14.7 Logging for Compliance
14.7 Logging for Compliance
Chariot retains the right to log and store:
Prompt text and input data
Generated AI outputs
Session metadata (including timestamps, device identifiers, IP region, and account ID)
These logs may be accessed and used:
To comply with regulatory or legal obligations
For security auditing, abuse investigation, or enforcement actions
To cooperate with platform governance policies (e.g., Apple, Google, OpenAI terms)
Logs are retained in accordance with Chariot’s internal data retention policy and may be shared with authorities when legally required. By using Chariot, you acknowledge and consent to this form of data storage and usage.
14.8 Internal Use of Prompts
14.8 Internal Use of Prompts
Chariot may use your submitted prompts, uploads, and usage behavior internally for the following purposes:
Training and improving abuse detection filters
Enhancing prompt safety systems and moderation logic
Identifying and mitigating fraud, spam, or misuse
Debugging and refining system performance
However:
Your data will not be sold to third parties.
It will not be used to train third-party base models (e.g., OpenAI, Anthropic).
It is only processed by Chariot-controlled systems for operational enhancement, not commercial data resale.
By using the Services, you agree to this internal usage for quality control and platform security.
14.9 Prompt History Visibility
14.9 Prompt History Visibility
Chariot may provide users with access to a portion of their past prompts, chats, uploads, and AI interactions within the app interface for convenience, continuity, or personal reference. You acknowledge and agree to the following conditions:
User-Visible History
Your prompt and chat history may be displayed within the app, including text-based inputs, document uploads, image-based interactions, and output summaries. Visibility of history may be limited by time, subscription tier, or device session. Certain prompts or outputs may be truncated, redacted, or removed from the user interface without prior notice, especially if flagged for review or moderation.
Post-Deletion Archiving
If you delete or clear your history in-app, such actions may remove your local visibility of those interactions, but do not guarantee deletion from Chariot’s internal systems. Chariot reserves the right to retain archived copies of prompts, messages, uploads, and outputs after user-initiated deletion for the following legitimate purposes:
Fraud Investigation – to detect patterns of abuse, impersonation, scraping, or prompt manipulation.
Customer Support – to assist users in resolving complaints, errors, or account issues.
Compliance & Legal Obligations – to satisfy audit, regulatory, and policy enforcement requirements.
Confidentiality of Stored History
Archived prompt data is stored securely and is only accessible by authorized Chariot personnel or systems for the purposes outlined above. Such data is never sold, shared with third-party advertisers, or used to train external foundation models.
User Responsibility
You are responsible for any content submitted through your account, including sensitive or confidential prompts. Avoid submitting information you do not wish to be stored, even temporarily.
By continuing to use Chariot, you consent to this storage and visibility policy and waive any expectation of permanent deletion unless required by law or granted explicitly by Chariot under written agreement.
14.10 Multi-Modal Prompt Risk
14.10 Multi-Modal Prompt Risk
When submitting prompts that combine multiple input types—such as text, images, VINs, or uploaded documents—you acknowledge that Chariot’s AI systems interpret each input stream independently and may not always resolve them in a unified or accurate context. This includes, but is not limited to:
Contextual Misalignment
The AI may prioritize one modality over another (e.g., interpreting the image over the text, or vice versa), especially if inputs contain ambiguous, contradictory, or incomplete information.
Captions, descriptions, or annotations may be misunderstood or ignored if not clearly associated with the corresponding image or document.
Subtle contextual cues (e.g., sarcasm, colloquial phrasing, or visual metaphor) may be missed or misread, resulting in flawed conclusions.
Interpretation Errors
AI-generated outputs may reflect assumptions based on partial, outdated, or poorly matched data across the input types.
Visual and textual components may be analyzed by separate models or pipelines, which can lead to inconsistent results or duplicated insights.
User Responsibility
It is your responsibility to provide clear, non-conflicting, and contextually supportive inputs when using multi-modal features.
Avoid pairing vague text with complex or unclear images, and do not rely on the AI to infer implicit context or cross-reference contradictory data.
No Guarantee of Unified Comprehension
Chariot does not guarantee that multi-modal prompts will be synthesized correctly or yield coherent, aligned outputs. You accept the risk of fragmentation or error when submitting compound prompts and agree to verify all critical results independently.
By using Chariot’s multi-modal features, you acknowledge the inherent limitations of model comprehension and accept sole responsibility for interpretation, reliance, and downstream use of the output.
14.11 Repetitive Prompting
14.11 Repetitive Prompting
Chariot prohibits the use of repetitive, automated, or programmatically generated prompts designed to extract bulk outputs, simulate farming behavior, or manipulate system performance. This includes, but is not limited to:
Prohibited Behaviors
Submitting the same or slightly modified prompt repeatedly in rapid succession to harvest variations of an output.
Using scripts, bots, macros, or browser automation tools to simulate user input or trigger prompts at scale.
Looping prompt sessions to extract templates, reverse-engineer AI behavior, or generate mass content for resale, SEO, or spam.
Querying the system with formulaic or tokenized strings intended to evade throttling, detection, or safety filters.
Detection and Enforcement
Chariot actively monitors prompt behavior patterns using rate-limiters, session audits, and AI abuse detection tools. Accounts exhibiting signs of repetitive or scripted prompting may face:
Temporary or permanent rate limits
Throttling of specific features (e.g., image analysis, chat replies)
Suspension or termination of the account
Escalation to legal authorities in cases of systematic abuse or scraping
Legitimate Use vs. Abuse
While some users may explore outputs iteratively for valid research or curiosity, any form of high-frequency repetition that resembles automated extraction or volume generation is strictly prohibited. The threshold between normal use and abusive looping is determined by Chariot’s internal fraud and moderation systems.
User Responsibility
You are responsible for ensuring that your use of Chariot, whether manual or assisted by software tools, remains within the bounds of fair use and human-intent design. Automated bulk prompting is not permitted under any tier or use case.
Violations of this clause constitute a material breach of the Terms and may result in immediate loss of access, forfeiture of subscription, and legal action where applicable.
14.12 Prompt Token Costs
14.12 Prompt Token Costs
Chariot tracks prompt usage through a token-based accounting system, where each submitted input and generated output consumes a quantifiable number of tokens. Tokens represent the underlying compute and resource cost required to process user requests, and usage may vary based on the complexity, length, and modality of prompts. By using Chariot, you agree to the following terms:
Token-Based Tracking
Every prompt and response is measured in tokens, including but not limited to:
Input text (e.g., chat messages, descriptions, captions)
Uploaded content (e.g., image processing, document parsing)
Output length and model complexity (e.g., short answer vs. multi-section report)
Higher-tier features or models (e.g., multi-modal, long-form generation, premium reports) may consume more tokens per request.
Usage Limits by Plan Tier
Each subscription tier includes a defined token allotment or soft usage threshold per billing cycle (daily, weekly, or monthly).
Users may experience soft caps (e.g., reduced output length, slower response times) when nearing their limits.
Hard caps (e.g., usage lockout) may be enforced if token consumption significantly exceeds plan allowances.
Upgrade Enforcement
If your usage repeatedly exceeds the limits of your current subscription, Chariot may require an upgrade to a higher-tier plan to maintain access to certain features.
Downgrading plans may result in reduced functionality, lower token ceilings, or access restrictions.
No Rollover or Refunds
Unused tokens do not roll over between billing cycles.
Token consumption is final, and Chariot is not obligated to refund tokens consumed through user error, misinterpretation, or dissatisfaction with AI output.
User Responsibility
You are responsible for monitoring your own usage and understanding the relationship between input complexity and token consumption. Excessive usage that impacts platform stability or violates fair-use expectations may result in throttling or suspension.
By continuing to use Chariot, you agree to abide by all token-based usage limits associated with your plan and accept that overuse may require changes to your subscription or access rights.
14.13 Prompt Abuse Detection Algorithms
14.13 Prompt Abuse Detection Algorithms
To maintain platform integrity, prevent misuse, and protect system resources, Chariot employs advanced abuse detection systems that monitor, analyze, and flag suspicious user behavior. You acknowledge and consent to the use of automated and manual review tools designed to identify abusive prompt activity. These systems may include, but are not limited to:
Pattern Recognition and Fingerprinting
Behavioral Fingerprinting: Tracking input cadence, prompt structure, device signals, and interaction timing to identify automated behavior or usage inconsistent with human patterns.
Prompt Replay Detection: Monitoring for repeated or replayed prompt payloads designed to extract consistent output or probe model limits.
Token Pattern Analysis: Analyzing prompt structures for token-stuffing, encoded payloads, or obfuscated bypass attempts.
Session Profiling: Evaluating frequency, duration, and diversity of prompts to detect scripted or bot-driven sessions.
Enforcement Triggers
Chariot may initiate automated or manual responses based on detected abuse signals, including:
Immediate prompt rejection or truncation
Temporary throttling of token usage or feature access
Session termination or forced logout
Account suspension or permanent banning
Data Handling and Privacy
All prompt monitoring and abuse detection is conducted under strict confidentiality protocols. Data collected for abuse detection is not shared externally or used for marketing or sales purposes. Chariot does not access personal information except as required to enforce platform security or legal compliance.
No User Bypass
Attempts to bypass detection mechanisms—such as rotating accounts, masking IPs, using proxy tools, or distributing prompt payloads across users—are strictly prohibited and will result in escalated enforcement and potential legal action.
By using Chariot, you agree to the use of abuse detection algorithms and accept the consequences of triggering behavior consistent with automated exploitation, manipulation, or unauthorized access.
14.14 Public Prompt Sharing
14.14 Public Prompt Sharing
Users are strictly prohibited from publicly sharing, publishing, or distributing full Chariot-generated prompts or outputs if such content includes any copyrighted, proprietary, confidential, or sensitive material. This applies regardless of intent (e.g., for entertainment, marketing, research, or resale) and includes content generated in any part of the platform—text, image, document, or structured report.
Protected Content Categories
You may not publicly post or republish any Chariot-related material that includes:
Chariot Confidential Prompts: System-level, internal, or prebuilt prompts used within Chariot workflows, AI chains, or report generators. These are proprietary and considered trade secrets.
Generated Outputs Containing IP: AI responses that include summaries, excerpts, or interpretations of copyrighted manuals, manufacturer data, dealership content, or legal language.
Sensitive Upload-Based Content: Any output tied to images, VINs, PDFs, or documents that could identify real individuals, licensed assets, or protected third-party data.
Derivative Outputs Tied to Internal Tools: Text, diagnostics, captions, labels, flags, or conclusions generated through Chariot’s proprietary AI pipelines.
Disclosure Requirements
If you share non-sensitive AI outputs in a public forum (e.g., a screenshot on social media or a sample report on a marketplace listing), you must:
Clearly label the content as “AI-generated via Chariot.”
Refrain from editing or presenting the content as factual, diagnostic, or verified unless accompanied by your own disclaimer.
Remove any Chariot branding, visual assets, or prompts not intended for public use unless you have received explicit permission.
Enforcement and Liability
Unauthorized public sharing of protected or sensitive content may result in:
Takedown requests and removal of the content from the platform it was shared on
Suspension or permanent banning of your Chariot account
Legal claims for IP misuse, breach of confidentiality, or reputational harm
By using Chariot, you agree not to distribute or expose prompts or outputs that are confidential, proprietary, or sensitive in nature. Any public-facing content must comply with branding, IP, and safety standards outlined in these Terms.
14.15 No Reverse Prompt Engineering
14.15 No Reverse Prompt Engineering
Users are strictly prohibited from attempting to deduce, extract, replicate, or expose Chariot’s internal system architecture, including but not limited to its prompt templates, decision hierarchies, safety layers, or moderation logic. This restriction applies to all input methods and interactions within the Chariot platform.
Prohibited Activities Include:
Probing System Behavior: Submitting repeated or strategically varied prompts intended to map internal logic, detect fallback mechanisms, or reveal prompt templates.
Reconstruction Attempts: Attempting to infer Chariot’s proprietary system instructions, moderation filters, or internal routing by analyzing output variations, inconsistencies, or error states.
Extraction Through Looping: Using systematic prompting, chaining, or multi-session inputs to gradually expose underlying AI structure, tokens, or embedded model roles.
Testing for Weaknesses: Prompting with the goal of surfacing restricted content, circumventing safety protocols, or inducing output deviations that reveal model boundaries.
Confidential Architecture
Chariot’s system prompts, moderation stack, fallback logic, and prompt scaffolds are proprietary and protected as confidential intellectual property. Any attempt to replicate or expose these elements constitutes a material breach of this agreement.
Enforcement and Consequences
If Chariot detects attempts at reverse engineering through prompt behavior, the company reserves the right to:
Immediately suspend or permanently ban the user account
Retain logs and prompt trails for legal and security investigation
Pursue legal remedies under trade secret, intellectual property, and cybersecurity laws
Share evidence with affected platform partners (e.g., OpenAI, Apple, Google)
No Safe Harbor for Curiosity or Research
Whether for research, testing, or “just exploring,” any activity intended to expose system-level design through user-facing interaction is expressly forbidden. Chariot does not grant permission for white-hat probing, red-teaming, or adversarial research without a formal agreement.
By using Chariot, you acknowledge that reverse engineering—whether successful or attempted—is a violation of these Terms and may result in legal action, account termination, and forfeiture of any associated subscription or data.
14.16 Prompt Submission Responsibility
14.16 Prompt Submission Responsibility
By using Chariot, you acknowledge and agree that you are solely and fully responsible for the content of all prompts, messages, uploads, and queries you submit through the platform, regardless of intent, medium, or outcome. Chariot does not pre-screen prompts and assumes no liability for the consequences of their submission or interpretation by the AI system.
User Accountability
You are responsible for ensuring that your prompts do not contain false information, defamatory statements, harassment, or illegal content.
You must not use Chariot to submit prompts that infringe on another person’s rights, including privacy, intellectual property, or legal protections.
You may not upload content containing confidential, proprietary, or personally identifiable information (PII) about others without proper authorization.
Prohibited Consequences of Use
You agree not to hold Chariot liable for any of the following outcomes stemming from your submitted prompts or queries:
Defamation or Reputational Harm: Any statements or interpretations made by the AI based on user input that could damage individuals, businesses, or public entities.
Privacy Violations: The inclusion of names, images, VINs, addresses, or sensitive identifiers without consent.
Legal Misuse: Prompts crafted to simulate contracts, impersonate professionals, or generate unauthorized legal advice.
Harassment or Abuse: Submitting content meant to threaten, intimidate, or provoke others through AI output.
Misinformation or Harmful Outcomes: Misuse of AI suggestions for health, mechanical, financial, or legal decisions.
No Liability for Outputs Based on Your Input
Chariot’s AI responses are generated algorithmically based on your input. If the output causes confusion, harm, or misuse—whether intentional or accidental—you are responsible for the consequences. Chariot is not obligated to verify, correct, or filter every response and disclaims liability for the AI’s interpretation of user-submitted content.
Ongoing Obligation
This responsibility applies to all inputs across text, image, document, and multi-modal channels, and survives the termination of your account.
By continuing to use Chariot, you affirm that you understand and accept full legal, ethical, and operational responsibility for your prompts, regardless of the AI’s resulting output or the downstream effects of sharing or acting on that output.
14.17 Blocked Prompts
14.17 Blocked Prompts
Chariot reserves the right to block, suppress, or reject any prompt that violates platform rules, triggers safety protocols, or is deemed inappropriate based on automated or manual moderation. This restriction applies across all prompt types, including text, image captions, document uploads, and hybrid inputs.
Trigger Conditions for Blocking
A prompt may be blocked, filtered, or altered without notice if it includes:
Flagged Language: Obscene, violent, threatening, or sexually explicit terms.
Dangerous Use Patterns: Attempts to generate harmful advice, simulate illegal activity, or provoke self-harm.
Prohibited Topics: Prompts that violate Section 14.5 (e.g., impersonation, hate speech, jailbreaks).
Adversarial Attempts: Queries designed to manipulate, provoke, or expose model behavior (see Section 13.14).
System Integrity Risks: Excessive length, malformed structure, or formatting designed to induce failure or overload.
Response Types
When a prompt is blocked, the system may respond in one of the following ways:
Silent Blocking: The input is rejected without output, or the AI returns a generic refusal.
Warning Message: The system notifies the user that the input was blocked and may explain why.
Rate Limiting or Session Pause: The account may be temporarily limited or paused if repeated blocks occur within a short window.
Escalation and Enforcement
Repeated submission of blocked or near-blocked prompts may result in:
Temporary suspension of AI access
Permanent account deactivation
Investigation for abuse or Terms of Service violations
Refusal of refunds or feature reinstatement
No Appeal Guarantee
While you may contact support to dispute a blocked prompt, Chariot is under no obligation to reinstate flagged prompts, explain detection logic, or provide output for restricted queries.
By using Chariot, you acknowledge that prompt moderation is a core part of platform safety and accept that certain prompts—whether intentionally or unintentionally submitted—may be blocked, limited, or escalated without warning. Repeated violations constitute grounds for disciplinary action.
14.18 Prompt Storage Duration
14.18 Prompt Storage Duration
Chariot may retain logs of submitted prompts, AI-generated responses, session metadata, and related user interactions for a period of up to twelve (12) months. This data may be stored for longer durations when necessary to fulfill regulatory, legal, operational, or security obligations. By using the platform, you consent to this data retention policy.
Standard Retention Period
Prompt and response logs, including associated timestamps, user IDs, device indicators, and interaction types, may be stored in internal systems for up to 12 months from the date of submission.
This retention period supports platform diagnostics, model performance review, user support, feature usage analysis, and product quality improvements.
Extended Retention Conditions
Chariot may retain prompt and response data beyond the standard 12-month window under the following circumstances:
Legal or Regulatory Investigation: In response to subpoenas, law enforcement requests, or government audits.
Security and Abuse Review: If prompts are linked to suspected fraud, system misuse, adversarial attacks, or violation of Terms.
AI Performance Auditing: For internal evaluation of model accuracy, fairness, bias detection, or long-term behavior monitoring.
Training Validation and Safety Testing: To analyze real-world prompt patterns and responses for fine-tuning safety filters and system constraints.
Anonymization and Aggregation
Where feasible, Chariot may anonymize or aggregate historical data before extended retention to reduce user identifiability while preserving the integrity of performance analytics.
No Deletion on Request Guarantee
While users may delete visible history within the app interface, such deletion does not guarantee removal from internal archival systems. Chariot is not obligated to delete stored prompt data on request unless required under applicable privacy law (e.g., GDPR or CCPA).
By continuing to use Chariot, you acknowledge and accept the duration and conditions under which your prompt data may be stored and reviewed, and you waive any expectation of automatic deletion unless required by law.
14.19 Appeals & Access
14.19 Appeals & Access
Chariot provides users with the ability to request access to their own prompt history or appeal moderation decisions, including those related to blocked prompts, flagged content, or account actions. However, access to such data is subject to availability, internal review policies, and platform security considerations.
User-Initiated Access Requests
You may submit a written request to Chariot Support for access to your previously submitted prompts, outputs, or moderation flags.
Requests must include sufficient account verification, the relevant timeframe, and the nature of the inquiry (e.g., appeal, clarification, correction).
Chariot reserves the right to redact or withhold portions of prompt logs that contain sensitive system information, internal tags, or third-party data.
Appeals of Flagged Content
If your prompt was blocked, flagged, or resulted in a warning or suspension, you may submit an appeal to Chariot Support for review.
Appeals are reviewed on a discretionary basis and are not guaranteed to result in prompt reinstatement, content release, or account restoration.
Appeals that are abusive, duplicative, or submitted in bad faith may be ignored or escalate enforcement actions.
Limitations on Suspended Accounts
Chariot is not obligated to provide prompt logs or output history to users whose accounts have been suspended, terminated, or permanently banned.
In cases of suspected abuse, fraud, or Terms of Service violations, Chariot may retain full access to prompt logs while withholding access from the offending user.
No Entitlement to Raw System Data
Access to prompt logs does not include system-level context such as model decisions, internal routing, safety filters, or prompt engineering templates.
Chariot may summarize or paraphrase content where direct output sharing would reveal confidential infrastructure.
By using Chariot, you acknowledge that appeals and prompt history access are privileges, not entitlements. Chariot maintains final authority over whether data is disclosed, redacted, or withheld in the interest of platform security, legal compliance, or user safety.
14.20 Survival and Enforcement
14.20 Survival and Enforcement
The terms, responsibilities, and restrictions outlined in this section—governing prompt submission, moderation, storage, usage limits, abuse prevention, and content liability—shall remain fully enforceable regardless of your ongoing access to the Chariot platform.
Survival of Terms
These provisions apply to all prompts submitted before, during, or after the lifetime of your Chariot account.
Termination, suspension, expiration, or deactivation of your account does not nullify your obligations under this section.
Chariot reserves the right to enforce these terms retroactively in response to prompt-related violations discovered after account closure.
Post-Termination Accountability
You remain liable for any misuse, defamation, IP violations, or safety breaches associated with previously submitted prompts, even if your account is no longer active.
Chariot retains the authority to act on flagged or abusive content post hoc, including taking legal action, submitting takedown notices, or reporting to authorities.
Enforcement Rights
Chariot may audit archived prompt history, enforce penalties, or pursue claims related to your prompt behavior at any point in time, including after your relationship with the platform has ended.
No waiver, inaction, or delay in enforcement shall be interpreted as a forfeiture of Chariot’s rights under this section.
By using Chariot, you accept that the rules, restrictions, and responsibilities surrounding prompt behavior—whether involving submission, visibility, moderation, or storage—remain in force beyond the scope of your active account and continue to govern all relevant interactions with the platform.
Contact Us
If you have any questions or concerns about our Terms of Service or the handling of your personal information, please contact us at support@chariotreport.com