Switching from Cognigy: Migration checklist and risk mitigation
Switching from Cognigy can take 8-16 weeks for data export, logic rebuild, and compliance validation. Use this migration checklist.

TL;DR: Leaving Cognigy is not a software swap. It's a structured data extraction, logic rebuild, and compliance re-validation project that can take 8-16 weeks for a full enterprise migration (distinct from standard GetVocal deployment, which runs 4-8 weeks for core use cases). Your NLU training data (intents, entities, lexicons) exports cleanly as CSV or JSON. Your flow logic does not port automatically and must be rebuilt from business requirements. The strategic opportunity is to stop replicating complex node logic and start building around auditable decision boundaries, which positions you for EU AI Act compliance before the August 2026 enforcement deadline. You can use this checklist to move fast without losing service quality.
You'll need more than just a contract cancellation to leave Cognigy. Your migration demands a forensic approach to data export and a strategic shift in how you architect conversation logic. Cognigy is a low-code development platform built for developers who want flexibility. If your team has spent three years building flows that only one developer fully understands, we built this guide as your exit map.
#Why enterprise CX leaders are moving off Cognigy
Developer-centric low-code platforms don't fail because they stop working. They fail because they create an operational dependency that only compounds over time. Every change to a billing inquiry flow, every policy update, every new escalation path requires a developer ticket.
On Cognigy's low-code platform, operations teams are responsible for designing, building, and testing every agent flow from scratch, then maintaining those flows as policy changes force updates and regression testing across dependent paths. That maintenance burden compounds as deployments grow, and decision logic distributed across hundreds of nodes creates documentation gaps that can complicate compliance audits.
Compliance is one migration driver, but operations teams in telecom, banking, insurance, healthcare, retail/ecommerce, and hospitality/tourism are moving for the same reason: they need changes to go live in days, not developer sprints.
EU AI Act Article 13 requires that high-risk AI systems operate with sufficient transparency so deployers can interpret outputs and use them appropriately. Article 14 mandates that high-risk systems are designed for effective human oversight. A complex node-based flow chart with embedded CognigyScript does not make it easy to answer an auditor's question: "Show me every decision the AI made in this conversation and why." The EU AI Act transparency framework treats this as a core requirement, not a nice-to-have.
The market is responding. Operations leaders are moving from "building bots" to managing hybrid workforces where AI and human agents collaborate in real time, with full auditability built in from day one.
#The Cognigy migration hub: Planning your exit strategy
Before you export a single file, you need an honest assessment of what you're working with. The scope of your Cognigy footprint determines how long migration takes and what it costs.
#Assessing your current Cognigy footprint (v3 vs v4)
The version of Cognigy you're running affects what you can export and how. Resources that typically migrate between versions include Flows, Flow Intents, Lexicons, Playbooks, NLU Connectors, and Endpoints. Components that commonly require manual rebuilding include Custom Modules (now Extensions in v4), original Connections, and Secrets.
If you're running Cognigy v3, you'll need an intermediate migration step before moving to a third platform. The v3-to-v4 CLI tool processes v3 exports and imports them via API. Skipping this step creates data loss risk. Audit your version first, catalog your Custom Modules, and flag anything that requires manual rebuilding before you begin.
A practical pre-migration audit covers four areas:
- Flows: Count active flows and map which ones touch live customer interactions vs. internal testing environments.
- Integrations: List every API connection, webhook, and external system your flows call (CRM, billing, ticketing, knowledge base).
- Locales: Document all languages and regional variants configured in your NLU models.
- Custom Modules: Identify every Custom Module your flows depend on, since these require complete rebuilding and cannot be ported.
#Calculating the true cost of migration (TCO)
The sunk cost of three years of Cognigy development is real, but continuing to maintain a platform that blocks operational agility carries its own cost.
Build your TCO comparison across these line items:
| Cost category | Cognigy (stay) | Migration (one-time) |
|---|---|---|
| Developer dependency for flow changes | High (ongoing) | None post-migration |
| Compliance audit preparation | High manual effort | Automated via Context Graph |
| EU AI Act readiness work | Compliance positioning varies | Native to new architecture |
| Data extraction and transformation | N/A | Analyst time, weeks 1-2 |
| NLU retraining and validation | N/A | Ops time, weeks 2-4 |
| Integration rebuild | N/A | Dev time, weeks 3-6 |
| Parallel run infrastructure | N/A | Overlap costs, 2-4 weeks |
Operations teams that have lived through low-code deployments report the same pattern: governance gaps and technical debt in custom-built environments create ongoing drag that compounds over time, often outweighing the one-time cost of migration.
#Step-by-step: How to export Cognigy data for migration
You can export your NLU intelligence cleanly. Your flow logic stays locked in Cognigy's proprietary format. Focus your export efforts accordingly.
#Exporting conversation logs and NLU models
NLU model export (intents and entities):
Cognigy supports CSV and JSON export for intents directly from the Flow editor. Navigate to NLU > Intents in the Flow editor, or use the Cognigy API. The CSV layout has three columns and no header row: the intent name (`intentName`), the row type (e.g., `exampleSentence`, `defaultReply`, `confirmationSentence`), and the text value itself. The JSON format provides a more comprehensive spec, including intent names, example sentences, rules, tags, and disambiguation data. Keep your operating system's regional settings consistent throughout the export process: locale settings determine the CSV delimiter, and a mismatch can break the file on import.
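As a sanity check before re-import, that header-less three-column layout can be validated with a few lines of Python. This is an illustrative sketch under the format assumptions described above; adjust the delimiter to match your locale settings.

```python
import csv
import io

def validate_intent_rows(csv_text, delimiter=","):
    """Check that every line of an intent CSV has exactly three
    columns (intent name, row type, text value). The export has no
    header row, so every line is treated as data."""
    rows = list(csv.reader(io.StringIO(csv_text), delimiter=delimiter))
    # Collect (line number, column count) for any malformed row.
    problems = [(i, len(row)) for i, row in enumerate(rows, start=1)
                if len(row) != 3]
    return rows, problems

sample = (
    "check_balance,exampleSentence,What is my account balance?\n"
    "check_balance,exampleSentence,How much money do I have\n"
    "dispute_charge,defaultReply,I can help you dispute that charge.\n"
)
rows, problems = validate_intent_rows(sample)
```

Run this against each exported file before import; a non-empty `problems` list usually means a delimiter mismatch from the locale issue noted above.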
Conversation log export (analytics):
Cognigy's OData v4 analytics endpoint provides raw data access via GET requests. The eight collections available are Analytics (formerly Inputs), Conversations (formerly ChatHistory), Steps, ExecutedSteps, Goals, GoalEvents, LiveAgentEscalations, and Sessions. Collection names vary by endpoint version. If your dashboard shows different names, check your Cognigy API documentation for the version you're running. Conversations contains every input and output from the bot, user, and Human Agent, making it the primary source for transcript reconstruction.
Access the endpoint via the OData URL and pipe it into your analytics platform. Data is logged per session and saved after session close (this applies to the Analytics/Inputs collection; the Conversations collection writes immediately per message), and queryable for historical analysis. Activate the "Collect Analytics" setting in your Endpoint configuration before exporting if it isn't already enabled.
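A minimal sketch of building a query URL for one collection. The base URL and the `apikey` parameter name here are assumptions for illustration; the paging options `$top` and `$skip` are standard OData. Check the API documentation for your Cognigy version before relying on exact parameter names.

```python
from urllib.parse import urlencode

def build_odata_url(base_url, api_key, collection, top=1000, skip=0, odata_filter=None):
    """Assemble an OData v4 query URL for an analytics collection,
    using $top/$skip for paging through large historical exports."""
    params = {"apikey": api_key, "$top": top, "$skip": skip}
    if odata_filter:
        params["$filter"] = odata_filter
    return f"{base_url}/{collection}?{urlencode(params)}"

# Hypothetical endpoint URL and key -- substitute your own environment's values.
url = build_odata_url(
    "https://odata-example.invalid/v4",
    "YOUR_API_KEY",
    "Conversations",
    top=500,
)
```

Looping over increasing `skip` values until an empty page comes back is a simple way to pull a full historical archive before decommissioning.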
#Handling Lexicons and Packages during transfer
Lexicons are Cognigy's structured vocabulary sets for recognizing specific keyphrases (product names, zip codes, account types). You export them from the Lexicon editor using the export button in the upper-right corner, which generates a CSV file in the format: `Keyphrase, tags, synonyms, data`.
You've invested months tuning domain-specific vocabulary in these Lexicons. Export them before touching anything else. They transfer cleanly and form the foundation for retraining on the new platform.
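A sketch of parsing that four-column layout into structured records. The assumption that multi-value cells (tags, synonyms) are pipe-separated inside a single cell is ours for illustration; inspect a real export and adjust before trusting the output.

```python
import csv
import io

def parse_lexicon_csv(csv_text):
    """Parse a Keyphrase, tags, synonyms, data layout into dicts.
    Multi-value cells are assumed pipe-separated (an assumption --
    verify against your actual export)."""
    entries = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row:
            continue
        # Pad short rows so every entry has all four fields.
        keyphrase, tags, synonyms, data = (row + ["", "", "", ""])[:4]
        entries.append({
            "keyphrase": keyphrase,
            "tags": [t for t in tags.split("|") if t],
            "synonyms": [s for s in synonyms.split("|") if s],
            "data": data,
        })
    return entries

sample = "premium_plan,product|plan,premium|gold plan,plan_id=42\n"
entries = parse_lexicon_csv(sample)
```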
Packages are Cognigy's bundled resource exports that ZIP together Flows, Lexicons, NLU Connectors, Endpoints, and Knowledge Stores. A Package export gives you a structured archive of your entire project. Packages export as ZIP archives containing the configuration for each included resource. These files are the raw material for your migration, not a magic import button. GetVocal's Agent Builder converts these structures into Context Graph decision boundaries without requiring you to rebuild logic node-by-node.
Step-by-step Package export:
- Open your Cognigy project and navigate to the Resources panel.
- Select the resources you want to bundle (Flows, Lexicons, Connectors).
- Use the Package export function to generate the ZIP file.
- Store the archive in a secure location with access controls, since this file contains your conversation logic and training data.
- Extract and document the JSON structure before beginning import attempts on the new platform.
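The extraction-and-documentation step can be scripted. This sketch builds a manifest mapping each JSON file in the archive to its top-level keys; the in-memory sample archive is illustrative only, and a real package's internal layout may differ.

```python
import io
import json
import zipfile

def inventory_package(archive_file):
    """Map every JSON file in a package ZIP to the sorted list of its
    top-level keys, giving a quick manifest to document before any
    import attempt."""
    manifest = {}
    with zipfile.ZipFile(archive_file) as archive:
        for name in archive.namelist():
            if not name.endswith(".json"):
                continue
            with archive.open(name) as fh:
                try:
                    payload = json.load(fh)
                except json.JSONDecodeError:
                    manifest[name] = ["<invalid JSON>"]
                    continue
                manifest[name] = (sorted(payload) if isinstance(payload, dict)
                                  else [type(payload).__name__])
    return manifest

# Tiny in-memory sample archive for demonstration; real exports hold
# many more files with a vendor-defined layout.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("flows/main_flow.json", json.dumps({"name": "main", "nodes": []}))
    z.writestr("lexicons/products.json", json.dumps({"name": "products", "entries": []}))
manifest = inventory_package(buf)
```

Committing the manifest (not the archive itself) to your migration runbook gives the rebuild team a map of what exists without circulating the sensitive ZIP.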
#Addressing data sovereignty and GDPR retention
Your exported files (conversation logs, NLU training sets, customer interaction records) contain personal data under GDPR. Apply your existing retention schedule, mask personal identifiers before using logs as training data on the new platform, and implement proper data transfer safeguards (adequacy decisions or Standard Contractual Clauses) if storing data outside the EU. Document the export date, scope, and storage location for your DPA audit trail. Under the EU AI Act, data provenance documentation requirements apply differently depending on your system classification: high-risk AI systems face data governance obligations covering the source and characteristics of training data, while general-purpose AI models carry separate training data transparency requirements. Document your exported data accordingly and confirm your classification with legal counsel before deployment.
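A deliberately simplistic sketch of the masking step, to show its shape in a pipeline. The regexes below are illustrative only; production PII masking needs a vetted detection approach covering far more identifier types than these three.

```python
import re

# Illustrative patterns only -- NOT sufficient for production masking.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def mask_pii(text):
    """Replace matched identifiers with typed placeholders before
    exported logs are reused as training data."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

masked = mask_pii("Contact me at jane.doe@example.com or +49 30 1234567.")
```

Typed placeholders (`<EMAIL>`, `<PHONE>`) keep the utterance useful for intent retraining while removing the identifier itself.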
#Executing the switch: Integration and architecture
#Migrating Voice Gateway and Webchat endpoints
Voice (SIP trunk) repointing:
Changing your voice gateway means updating the SIP trunk (Session Initiation Protocol) destination. Your telephony platform (including Genesys Cloud, Five9, NICE CXone, and others) routes calls to a SIP URI, so you'll update that URI to point at GetVocal's voice gateway rather than Cognigy's. Do not cut over SIP trunks in a single step. Configure the new destination as a secondary route and test with a non-production number before touching live traffic.
For your technical implementation team: Cognigy v4.75.0 and later require the URLToken parameter for Socket.IO and Webchat Endpoints. If your current webchat widget uses an older Cognigy client library, update your implementation documentation before decommissioning. The new platform's webchat will require its own script tag replacement across all customer-facing pages.
**SIP trunk migration** follows this sequence:
- Build new platform infrastructure and obtain the new SIP URI.
- Create a test inbound trunk on GetVocal.
- Validate call routing with internal test numbers.
- Run parallel call flows alongside Cognigy for your parallel run window (typically three to four weeks, matching the timeline's parallel run phase) before cutting over.
- Update your production SIP trunk destination.
- Monitor call completion rates for 48 hours post-cutover.
#Rebuilding logic: From flow charts to Context Graph
This is where most migrations stall. Teams try to replicate their Cognigy flow node-for-node in the new platform and spend weeks on a migration that should take days.
Don't try to replicate your Cognigy flow node-for-node. Each node is a technical implementation step: an API call, a variable assignment, a condition check, a CognigyScript block. Flow logic in Cognigy is brittle and difficult to manage programmatically even within the platform itself, which is precisely what makes it so opaque to compliance auditors.
We built GetVocal's Context Graph to work differently. Instead of technical nodes, you model business-level decision points: "Is the customer authenticated?", "Does this request require supervisor approval?", "Has the customer attempted this action in the last 30 days?". Each decision boundary in the Context Graph abstracts the implementation details while keeping business logic fully visible and auditable for any compliance review.
The Context Graph combines deterministic conversational governance and generative AI as equal components: deterministic logic defines every decision boundary and escalation path with full auditability, while generative AI handles natural language understanding, response generation, and edge case reasoning that rigid rule trees can't manage.
Don't bring your flow diagram into GetVocal as a blueprint. Bring your business rules document. Map your decision boundaries first, then configure the logic behind each boundary. The platform converts your business processes into a graph that transparently breaks interactions into interconnected, measurable steps where you define what gets handled by AI and what escalates to human agents.
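To make the contrast concrete, here is a hypothetical sketch (not GetVocal's actual Context Graph API) of decision boundaries as a small graph: each boundary is a business predicate plus the next step for each outcome, and walking the graph leaves an auditable trail.

```python
def route(context, boundaries, start):
    """Walk a boundary graph: each named boundary holds a predicate
    and the next step for its True/False outcome. The returned trail
    records every decision taken, which is what an auditor asks for."""
    trail, node = [], start
    while node in boundaries:
        predicate, on_true, on_false = boundaries[node]
        outcome = predicate(context)
        trail.append((node, outcome))
        node = on_true if outcome else on_false
    return node, trail  # terminal action plus full decision trail

# Hypothetical billing-dispute boundaries for illustration.
boundaries = {
    "authenticated?": (lambda c: c.get("authenticated", False),
                       "dispute_charge?", "escalate_to_agent"),
    "dispute_charge?": (lambda c: c.get("request") == "dispute",
                        "needs_approval?", "self_service"),
    "needs_approval?": (lambda c: c.get("amount", 0) > 500,
                        "escalate_to_agent", "ai_handles_dispute"),
}

terminal, trail = route(
    {"authenticated": True, "request": "dispute", "amount": 120},
    boundaries, "authenticated?",
)
```

Three boundaries here stand in for what would be dozens of technical nodes in a flow chart, and the trail answers "show me every decision and why" directly.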
#Risk mitigation: Running parallel systems
Don't cut over your entire contact center at once; a big-bang switch creates unacceptable downtime risk. A parallel run routes a percentage of live traffic to GetVocal while Cognigy remains the fallback for the remainder.
Suggested parallel run progression:
- Early phase: Start with a small percentage of traffic on your lowest-risk use case (password reset, balance inquiry). Monitor deflection rate, CSAT, escalation rate, and compliance incidents daily and compare against your Cognigy baseline on identical use cases.
- Mid phase: Once KPIs consistently match your baseline, increase traffic gradually. If they diverge, pause and investigate before expanding.
- Late phase: Increase to the majority of validated use cases. Begin cutover planning for remaining flows.
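One common way to implement the percentage split is deterministic hash-based bucketing, sketched below. This is a generic pattern rather than a prescribed GetVocal or Cognigy mechanism; your CCaaS may offer native traffic splitting instead.

```python
import hashlib

def route_session(session_id, new_platform_share):
    """Deterministically assign a session to 'new' or 'legacy' by
    hashing its id, so a returning customer session lands on the same
    platform throughout the parallel run."""
    bucket = int(hashlib.sha256(session_id.encode()).hexdigest(), 16) % 100
    return "new" if bucket < new_platform_share * 100 else "legacy"

# At a 10% share, roughly one session in ten lands on the new platform.
assignments = [route_session(f"session-{i}", 0.10) for i in range(1000)]
new_share = assignments.count("new") / len(assignments)
```

Hash-based routing beats random sampling here because it is reproducible: the same session id always resolves to the same side, which keeps your KPI comparison clean.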
The Control Center's Supervisor View surfaces live conversations, flags escalations, and gives your supervisors the tools to intervene without disrupting the customer interaction. During a parallel run, this visibility is critical: you need to see exactly how the new platform handles edge cases before you commit full traffic. Track escalation reasons, not just escalation volume. The pattern of why customers reach humans tells you whether your decision boundaries are calibrated correctly, and that's the data you feed back into the agent stress testing process before expanding traffic.
Three conditions that should pause a parallel run immediately:
- Deflection rate drops materially below your Cognigy baseline on comparable use cases.
- CSAT score falls below your pre-migration minimum threshold on new-platform interactions.
- A compliance incident occurs (AI contradicts policy, personal data handling error, unauthorized disclosure).
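Those three conditions are straightforward to encode as an automated daily check. The threshold values in this sketch (for example the 10% deflection tolerance) are illustrative; set your own.

```python
def should_pause(metrics, baseline, csat_floor, deflection_tolerance=0.90):
    """Return the list of pause conditions tripped by today's
    parallel-run metrics; an empty list means continue."""
    reasons = []
    # Condition 1: deflection materially below the Cognigy baseline.
    if metrics["deflection_rate"] < baseline["deflection_rate"] * deflection_tolerance:
        reasons.append("deflection materially below Cognigy baseline")
    # Condition 2: CSAT under the pre-migration minimum threshold.
    if metrics["csat"] < csat_floor:
        reasons.append("CSAT below pre-migration minimum")
    # Condition 3: any compliance incident pauses the run outright.
    if metrics.get("compliance_incidents", 0) > 0:
        reasons.append("compliance incident recorded")
    return reasons

tripped = should_pause(
    {"deflection_rate": 0.41, "csat": 4.3, "compliance_incidents": 0},
    baseline={"deflection_rate": 0.50},
    csat_floor=4.0,
)
```

Wiring a check like this into your daily KPI report removes the judgment call from the pause decision during the run itself.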
#Top Cognigy alternatives for European enterprises
The migration effort is real. Choosing the right destination platform determines whether you rebuild once or twice.
| Criterion | Cognigy | Parloa | Genesys native bots | GetVocal |
|---|---|---|---|---|
| EU AI Act readiness | Compliance positioning varies | Developer-led build | Limited governance | Native (Articles 13/14/50) |
| Human-in-the-loop | Configurable handover via contact center connectors | Automated escalation with developer configuration | AI-assisted routing with escalation detection | Active two-way collaboration |
| Operator control | Low-code with developer tooling for advanced use cases | Developer-managed | GUI-based configuration | Ops-managed Control Center |
| NLU complexity supported | High (with dev support) | Medium | Low-medium | High (transactional + complex) |
| Integration approach | API build-out | API build-out | Genesys-native + select third-party bots via Bot Connector | Pre-built connectors + API |
| Reported time to first agent | 8-16 weeks | 8 weeks–6 months (per vendor FAQ) | Not publicly specified | 4-8 weeks core use case (company-reported) |
| Audit trail for AI decisions | Limited | Limited | Platform-native audit logging | Full decision path logging |
Cognigy delivers genuine value if you need infinite customization and have a dedicated bot development team. Parloa follows a similar developer-led approach. Genesys native bots handle FAQ and standard conversational use cases.
We built GetVocal to address the space between simple FAQ deflection and genuinely complex CX: billing disputes, eligibility checks, multi-step service requests that require data from three systems and a conditional approval step. The Context Graph's transparent decision logic and the Control Center's real-time human collaboration model create a meaningful operational difference in this territory. For a detailed head-to-head on human-in-the-loop governance, the PolyAI vs. GetVocal comparison covers similar considerations that apply here.
#How GetVocal simplifies the migration path
Migration complexity comes from two sources: data portability and logic rebuild. GetVocal's architecture reduces friction on both.
NLU data import via Agent Builder: Our Agent Builder accepts standard NLU export formats (CSV/JSON) from Cognigy directly. You don't start from scratch. Your intent definitions and example sentences load into the Agent Builder as the starting point for retraining, and the graph-based interface lets your ops team shape those intents into decision boundaries without writing code.
Control Center monitoring from day one: The Control Center gives operators and supervisors the visibility to run AI-assisted customer conversations with confidence. The Operator View is where your team builds and manages the AI's decision logic directly, setting the boundaries of autonomous AI behavior before any customer interaction takes place. The Supervisor View surfaces live interactions in real time, with the tools to step in, redirect, or take over without disrupting the conversation. Every conversation on the new platform is visible, auditable, and correctable.
Supervisors can validate a decision and return control to the AI, approve an escalation path, or take over the conversation entirely, matching the interaction to its actual complexity rather than forcing a binary handoff.
**Migration timeline: 8-16 weeks for full enterprise migration**
Context Graph vs. Cognigy flow nodes, the migration shortcut: Rather than translating each Cognigy node, map your top 10 use cases by volume and identify the decision boundary at the center of each. For a billing inquiry, the boundary is typically: "Does the customer want to dispute a charge, check a balance, or update payment method?" Each branch then maps to a Context Graph path. One boundary replaces a page of nodes, and every path is transparent to your compliance team.
| Step | Typical duration | Key activities |
|---|---|---|
| Step 1: Discovery and data export | 1-2 weeks | Footprint audit, export NLU/lexicons/logs, compliance documentation |
| Step 2: Context Graph build | 3-4 weeks | Map decision boundaries, configure agent logic, import NLU data |
| Step 3: Integration and testing | 3-4 weeks | API connections (CRM, telephony, ticketing), UAT, performance testing |
| Step 4: Parallel run | 3-4 weeks | Phased traffic routing, KPI validation against Cognigy baseline |
| Step 5: Cutover and stabilization | 1-2 weeks | Full traffic migration, Cognigy decommission, ongoing monitoring |
Glovo's deployment demonstrates what's achievable: the team delivered its first agent within one week and scaled to 80 agents in under 12 weeks, achieving a 5x increase in uptime and a 35% increase in deflection rate (company-reported). That result included integration work, Context Graph creation, agent training, and phased rollout.
Ready to plan your migration? Request a Migration Architecture Review from GetVocal's solutions team. The review covers your current Cognigy footprint, integration feasibility with your CCaaS and CRM, and a realistic implementation timeline. No commitment required.
For operations leaders who've been through the Sierra AI migration process, the Cognigy migration follows a similar pattern with the added complexity of the NLU export step and the flow-to-Context-Graph translation work.
#Frequently asked questions about leaving Cognigy
Can I export my entire Cognigy flow logic?
No, flow logic is proprietary to the Cognigy platform. You can export NLU models and Lexicons as JSON/CSV, but you'll need to rebuild the flow node structure (CognigyScript, conditions, API calls) in GetVocal from your business requirements, not from the exported flow diagram.
How long does a typical migration take?
A full enterprise migration can take 8 to 16 weeks, depending on the number of active flows, integration complexity, and how many locales your NLU models cover. Core use cases with pre-built integrations can be live within 4-8 weeks.
Does GetVocal support Cognigy's NLU data formats?
Yes. GetVocal's Agent Builder is designed to ingest standard NLU export formats (CSV/JSON) to retrain agents using your existing intent definitions and example sentences, so you're not starting from zero.
Can I migrate just some flows and keep Cognigy running for others?
Yes. A parallel run strategy lets you migrate use cases incrementally while Cognigy remains active for flows you haven't yet rebuilt. Most operations teams migrate their highest-volume, lowest-complexity flows first to validate the new platform before tackling complex transactional use cases. See the agent stress testing metrics guide for the KPIs to monitor during this phase.
How does the new platform handle EU AI Act compliance post-migration?
Article 14 oversight requirements are addressed by the Control Center's active governance layer. Every AI decision generates a log showing conversation flow, data accessed, logic applied, and escalation trigger. These records are available for audit without manual extraction. For mid-market contact centers specifically, the alternatives comparison for mid-market covers the compliance architecture in more detail.
#Key migration terminology
NLU (Natural Language Understanding): The component of a conversational AI system that interprets user intent from text or voice input. NLU models contain intents, entities, and example training sentences that are fully exportable from Cognigy as CSV or JSON.
Intent: A defined category of user goal (e.g., "check_balance," "dispute_charge"). Cognigy stores intents with associated example sentences that train the NLU model.
Lexicon: Cognigy's term for a structured vocabulary set containing key phrases, synonyms, and associated data. Equivalent to an entity dictionary in other platforms. Exports as CSV.
Flow Node: A single unit of logic within a Cognigy flow (API call, condition, variable assignment, CognigyScript block). Not portable between platforms. Must be rebuilt from business requirements.
CognigyScript: A JavaScript superset built into the platform for custom logic within nodes. Cannot be exported or reused in any other platform.
Context Graph: GetVocal's protocol-driven architecture that models business-level decision points as a transparent, auditable graph. One decision boundary in a Context Graph replaces multiple Cognigy flow nodes, with full decision path visibility for compliance audit.
SIP trunk: The voice protocol connection between your telephony platform (Genesys, Five9, NICE) and the conversational AI platform. Repointing a SIP trunk redirects inbound voice traffic from Cognigy to the new platform.
Parallel run: A migration risk mitigation strategy where a defined percentage of live traffic routes to the new platform while the legacy system handles the remainder, allowing KPI validation before full cutover.
OData endpoint: Cognigy's analytics data access protocol. Provides raw conversation data (inputs, chat history, steps) via GET requests for export to analytics platforms before decommissioning.