Responsible AI

Last updated: February 2026

At GroupMap, we believe artificial intelligence should enhance collaboration, decision-making, and group facilitation while keeping people, ethics, and trust at the forefront.

GroupMap does not use your data to build or train AI models.

AI capabilities

GroupMap integrates generative AI to help facilitators organize, analyze, and act on session content more effectively.

| Feature | Provider | Model |
| --- | --- | --- |
| AI-suggested grouping | AWS Bedrock | Anthropic Claude Sonnet 4.x |
| AI assistant chat | AWS Bedrock | Anthropic Claude Sonnet 4.x |

AI-suggested grouping

Analyzes submitted ideas and automatically suggests thematic groupings based on underlying patterns and systemic themes rather than superficial similarities. This helps facilitators quickly organize large volumes of input into meaningful categories.

AI assistant chat

A conversational AI assistant available to facilitators that can:

  • Summarize session content
  • Identify themes and patterns
  • Suggest actions and next steps
  • Generate discussion prompts
  • Provide meeting recaps

The AI assistant operates in **read-only mode** and cannot modify map content directly.

Governance principles

GroupMap is committed to AI that is fair, transparent, and accountable. We strive to:

  • Avoid bias and discrimination by using neutral, well-tested models
  • Use data ethically with appropriate consent and transparency
  • Maintain human oversight by keeping facilitators in control of all decisions
  • Protect privacy by processing data in compliance with regional regulations

Safety measures

AWS Bedrock Guardrails

GroupMap employs AWS Bedrock Guardrails to monitor inputs and moderate outputs. These guardrails help prevent generation of:

  • Harmful or inappropriate content
  • Sensitive personal information (PII)
  • Content violating topic and word policies
  • Outputs failing contextual grounding checks

When guardrails intervene, the system returns a safe response rather than potentially problematic content.
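To illustrate the "safe response" pattern described above (this is a sketch, not GroupMap's actual implementation): the AWS Bedrock Converse API reports a guardrail intervention via the `stopReason` field in its response, and a caller can substitute fallback text whenever that value appears. The function name and fallback wording here are hypothetical.

```python
# Sketch of handling a guardrail intervention in an AWS Bedrock
# Converse API response. The response shape mirrors the documented
# bedrock-runtime `converse` output; the fallback text and the
# function name are illustrative only.

SAFE_FALLBACK = "Sorry, I can't help with that request."  # hypothetical wording

def extract_reply(response: dict) -> str:
    """Return the model's text, or a safe fallback if guardrails intervened."""
    if response.get("stopReason") == "guardrail_intervened":
        return SAFE_FALLBACK
    # Normal path: concatenate the text blocks from the assistant message.
    blocks = response["output"]["message"]["content"]
    return "".join(b["text"] for b in blocks if "text" in b)

# Example responses (shapes follow the Converse API):
blocked = {"stopReason": "guardrail_intervened",
           "output": {"message": {"content": []}}}
ok = {"stopReason": "end_turn",
      "output": {"message": {"content": [{"text": "Here are the key themes."}]}}}

print(extract_reply(blocked))  # the safe fallback
print(extract_reply(ok))       # the model's text
```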

Content moderation

All AI interactions are subject to content-policy filtering, including:

  • Topic policy enforcement
  • Content filters (harmful content detection)
  • Sensitive information detection
  • Word policy enforcement

Data handling

No training on your data

GroupMap does not use your session data, ideas, or conversations to train or fine-tune AI models. All data processing occurs through standard API calls to AWS Bedrock, subject to AWS's data processing terms, which likewise prohibit using customer data for model training.

Regional data processing

GroupMap maintains data residency compliance through region-specific deployments:

| Region | AI Processing Location |
| --- | --- |
| US | US East (Virginia), US East (Ohio), US West (Oregon) |
| EU | Europe (Frankfurt), Europe (Ireland), Europe (Paris) |

Data from EU customers is processed exclusively in EU regions.
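The region-pinning rule above amounts to a simple lookup: a customer's home region determines the only AWS regions eligible to process their data. The sketch below is illustrative (the function name and data structure are assumptions, not GroupMap internals); the AWS region codes are the standard codes for the locations in the table.

```python
# Illustrative sketch of region-pinned AI processing: an EU customer's
# requests are only ever routed to EU Bedrock regions. The mapping
# mirrors the table above; names are hypothetical.

BEDROCK_REGIONS = {
    "US": ["us-east-1", "us-east-2", "us-west-2"],    # Virginia, Ohio, Oregon
    "EU": ["eu-central-1", "eu-west-1", "eu-west-3"], # Frankfurt, Ireland, Paris
}

def processing_regions(customer_region: str) -> list[str]:
    """Return the only AWS regions allowed to process this customer's data."""
    return BEDROCK_REGIONS[customer_region]

# EU data never leaves EU regions:
print(all(r.startswith("eu-") for r in processing_regions("EU")))
```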

Data retention

AI invocation data is stored for operational purposes including:

  • Response caching (30-minute window)
  • Analytics and quality monitoring
  • Feedback collection and improvement
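The 30-minute caching window mentioned above can be sketched as a time-to-live (TTL) cache: a stored response is served only while it is younger than the window, then evicted. This is a minimal in-process illustration under that one assumption; a production deployment would more likely use a shared store with native TTL support, and none of the names below are GroupMap's actual code.

```python
import time

# Minimal sketch of a 30-minute response cache for AI invocations.
# Only the expiry window comes from the text above; everything else
# (class name, in-memory dict) is illustrative.

CACHE_TTL_SECONDS = 30 * 60  # the 30-minute window described above

class ResponseCache:
    def __init__(self, ttl: float = CACHE_TTL_SECONDS, clock=time.monotonic):
        self._ttl = ttl
        self._clock = clock   # injectable clock, handy for testing
        self._store = {}      # prompt -> (response, stored_at)

    def put(self, prompt: str, response: str) -> None:
        self._store[prompt] = (response, self._clock())

    def get(self, prompt: str):
        entry = self._store.get(prompt)
        if entry is None:
            return None
        response, stored_at = entry
        if self._clock() - stored_at > self._ttl:
            del self._store[prompt]  # expired: evict and report a miss
            return None
        return response
```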

User control

Facilitator controls

  • AI Assistant: Available only to facilitators (not participants)
  • On-demand usage: AI features are triggered explicitly by user action
  • Read-only access: AI cannot modify session content directly

Feedback

Users can provide feedback on AI responses to help improve quality. If you encounter problematic AI-generated content, please contact info@groupmap.com.

Observability

GroupMap uses Langfuse for AI observability, tracking:

  • Response quality metrics
  • Usage patterns
  • User feedback
  • System health

This helps us continuously improve AI feature quality and reliability.

Questions?

If you have questions about GroupMap’s AI features or this policy, please contact us at info@groupmap.com.