Frequently Asked Questions
Everything you need to know about our AI-powered data transformation tools
About Visual Data Transformation
Core concepts behind our data transformation tool
What is Visual Data Transformation?
Visual Data Transformation is a modern approach to data mapping that combines drag-and-drop simplicity with powerful custom logic capabilities. DataFlowMapper pioneered this approach to make complex data transformations accessible without coding, allowing technical teams to create sophisticated mappings visually.
How does DataFlowMapper compare to traditional ETL tools?
Unlike complex ETL platforms that can be overwhelming and require coding knowledge, or basic mapping tools that lack flexibility, DataFlowMapper offers a perfect balance of power and simplicity. Traditional ETL tools often require specialized knowledge and significant setup time, while DataFlowMapper provides immediate productivity with its visual interface and Custom Logic Builder, making it ideal for teams handling CSV, Excel, and JSON transformations.
Who is DataFlowMapper designed for?
DataFlowMapper is specifically designed for technical teams and data specialists who work with CSV, Excel, and JSON data. It's particularly valuable for companies with dedicated teams handling client data onboarding, migration, and conversion, where data needs to be transformed with conditional logic applied. Our tool bridges the gap between expensive enterprise ETL solutions and basic data cleaning tools.
AI Capabilities
How our AI-powered features accelerate your workflow
How does DataFlowMapper's AI mapping suggestion work?
Our AI mapping suggestion feature analyzes your source and destination fields to intelligently recommend the most logical field mappings. The system examines field names, data patterns, and common mapping conventions to suggest appropriate connections. You maintain full control by approving or declining each suggestion, allowing you to leverage AI assistance while ensuring accuracy. This feature significantly reduces the time spent on initial mapping setup, especially for files with numerous fields.
What is the 'Map All' feature and how does it save time?
The 'Map All' feature is an advanced AI data mapping tool that transforms your plain English requirements into a complete mapping configuration. Simply describe your overall mapping requirements, and our AI will analyze your source and destination structures to create appropriate field mappings and determine which fields need custom logic or validations. This powerful feature can reduce hours of manual mapping work to minutes, while still giving you full control to review and adjust the results before finalizing.
How does AI Logic Assist help with complex transformations?
AI Logic Assist allows you to describe complex transformations in plain English, and the system automatically generates the appropriate Python code that integrates with our visual logic builder. For example, you might type 'Convert date from MM/DD/YYYY to YYYY-MM-DD and validate it's not in the future,' and the AI will create the necessary logic. This feature bridges the gap between no-code and code-based approaches, making advanced transformations accessible to all users regardless of their programming expertise.
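For the date example above, the generated logic might resemble the following sketch. This is illustrative only, not DataFlowMapper's actual output; the function name and error handling are assumptions:

```python
from datetime import datetime

def transform_date(value):
    # Parse MM/DD/YYYY, reject dates in the future, emit YYYY-MM-DD
    parsed = datetime.strptime(value, "%m/%d/%Y")
    if parsed > datetime.now():
        raise ValueError(f"date {value} is in the future")
    return parsed.strftime("%Y-%m-%d")
```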
How accurate are the AI-generated mappings and transformations?
Our AI capabilities are designed to provide high-quality starting points that you can review and refine. The accuracy depends on several factors including the clarity of field names, the complexity of the transformation, and the specificity of your requirements. The system excels at standard transformations and recognizing common patterns, achieving over 90% accuracy in many scenarios. However, we always ensure you maintain control by reviewing suggestions before applying them, especially for business-critical transformations that may require domain-specific knowledge. In the event a transformation's logic is too complex to visualize in the no-code logic builder, you can still view it in the Manual tab, and as long as it's valid Python, it will work during transformation.
Features & Technical Capabilities
Detailed information about our data transformation tools
What types of data transformations can I perform?
DataFlowMapper supports comprehensive data transformations through our visual interface, including: 1) String operations (UPPER, LOWER, TRIM, REPLACE, CONCAT, etc.), 2) Mathematical functions (SUM, MULTIPLY, DIVIDE, ROUND, etc.), 3) Date formatting and manipulations, 4) Type conversions and data cleaning, 5) List operations, and 6) Custom Python snippets for power users who need extra flexibility. Our tools for data transformation are designed to handle both simple mappings and complex business logic, making them suitable for a wide range of onboarding scenarios.
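A custom Python snippet combining several of the operation types above (string cleanup, type conversion, rounding) might look like this sketch. The field names and row shape are hypothetical, not DataFlowMapper's actual snippet API:

```python
def clean_row(row):
    # String operations: trim whitespace and normalize case
    name = row["customer_name"].strip().upper()
    # Type conversion + math: turn "$1,234.50" into a rounded float
    amount = round(float(row["amount"].replace("$", "").replace(",", "")), 2)
    return {"customer_name": name, "amount": amount}
```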
How does the Custom Logic Builder work?
The Custom Logic Builder features an intuitive multi-tab interface that lets you: 1) Define variables and create conditional logic using a visual IF/THEN builder, 2) Use AND/OR logic with support for multiple conditions and nesting (one level deep), 3) Apply pre-built functions for common transformations, 4) Preview transformations in real-time, and 5) Use the Manual tab to write Python snippets for advanced transformations. Each transformation is applied row by row to your data, giving you precise control over how each field is processed.
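As a mental model, a visual IF/THEN rule with AND/OR conditions corresponds to ordinary branching logic applied row by row. The sketch below shows the equivalent plain Python; the field names and tier values are illustrative, not part of DataFlowMapper:

```python
def assign_tier(row):
    # IF total_spend > 10000 AND years_active >= 3 THEN "GOLD"
    if row["total_spend"] > 10000 and row["years_active"] >= 3:
        return "GOLD"
    # ELSE IF total_spend > 1000 OR referrals > 5 THEN "SILVER"
    elif row["total_spend"] > 1000 or row["referrals"] > 5:
        return "SILVER"
    # ELSE "STANDARD"
    return "STANDARD"
```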
What file formats and sizes are supported?
DataFlowMapper supports CSV, Excel (xlsx, xls), and JSON formats with drag-and-drop file upload capabilities. The platform works optimally with files containing hundreds of thousands of rows, and all formats are fully supported for both input and output. We currently have a 200 MB limit per file, which accommodates most onboarding and data import scenarios.
How does DataFlowMapper handle nested JSON?
DataFlowMapper features a specialized syntax for working with nested JSON structures. Using our dot notation with array indexing (e.g., 'variable[*].field1'), you can easily reference and map nested elements within JSON data. This powerful capability allows you to transform flat tabular structures into complex nested JSON, and vice versa. The system recursively expands JSON objects, enabling you to reference specific indices or iterate through arrays, making complex JSON transformations straightforward even for users without extensive JSON experience. The translation and abstraction are handled by the system: upload a source JSON file and a template for the destination JSON file, and the system will extract each field with the proper syntax, which you can then reference.
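The recursive expansion described above can be pictured with the following sketch, which flattens nested JSON into dot-notation keys. It uses numeric indices (e.g., 'orders[0].id') rather than the [*] wildcard, and is a simplified illustration rather than DataFlowMapper's internal implementation:

```python
def flatten(obj, prefix=""):
    # Recursively expand nested dicts/lists into dot-notation keys,
    # using [i] indexing for list elements
    flat = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            flat.update(flatten(value, f"{prefix}.{key}" if prefix else key))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            flat.update(flatten(value, f"{prefix}[{i}]"))
    else:
        flat[prefix] = obj
    return flat
```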
What validation capabilities does DataFlowMapper offer?
DataFlowMapper includes a robust validation system that uses the same visual logic builder interface as our transformations. You can create sophisticated validation rules that check for data integrity, business logic compliance, format requirements, and more. When validation fails, the system provides detailed error messages configured by the user for each affected cell, highlighting issues in the data table view. This allows technical teams to quickly identify and address data quality issues before proceeding with downstream system integration.
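Conceptually, a validation rule evaluates each row and, on failure, returns the error message you configured for the affected cell. The sketch below illustrates the pattern with a hypothetical email check; the field name, rule, and return convention are assumptions, not DataFlowMapper's actual validation API:

```python
def validate_email(row):
    # Returns None when valid, or the user-configured error
    # message that gets attached to the affected cell
    value = row.get("email", "")
    if "@" not in value or value.startswith("@") or value.endswith("@"):
        return "Email must include a local part and a domain"
    return None
```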
Advanced Integration Features
Connect with external systems and enhance your data workflows
How do the API connectivity features work?
DataFlowMapper's API connectivity allows you to both pull data from and push data to external APIs. The system supports various API types with configurable headers, parameters, JSON or form data bodies, multiple authentication methods, and automatic pagination detection. After transforming and validating your data, you can push it to an API endpoint with a single click. This feature is particularly valuable for teams that need to integrate with client systems, third-party services, and destination applications as part of their data transformation workflow.
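A push to an API endpoint boils down to an authenticated POST carrying the transformed rows. The sketch below builds (but does not send) such a request; the endpoint URL, bearer-token auth, and 'records' payload shape are all illustrative assumptions:

```python
import json
import urllib.request

def build_push_request(rows, endpoint, token):
    # Build the POST that would deliver transformed rows, with a
    # JSON body and a hypothetical bearer-token Authorization header
    body = json.dumps({"records": rows}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
```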
What database integration capabilities are available?
Our database integration supports connections to PostgreSQL, MySQL, SQL Server, and other major database systems. You can write queries to pull data or call stored procedures, and configure destinations for insert, update, or stored procedure operations. The system automatically maps columns by name or column order with intelligent key matching for update functions. After transformation and validation, you can preview the exact SQL that will be executed before committing changes, giving you complete confidence in your database operations.
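The pre-commit SQL preview mentioned above can be understood as rendering the statement before any values are bound. The sketch below shows the idea for an insert with parameter placeholders; the table and column names are hypothetical, and this is not DataFlowMapper's actual preview code:

```python
def preview_insert_sql(table, row):
    # Render the parameterized INSERT that would be executed,
    # mapping columns by name from the transformed row
    columns = ", ".join(row.keys())
    placeholders = ", ".join(["%s"] * len(row))
    return f"INSERT INTO {table} ({columns}) VALUES ({placeholders})"
```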
What is the Remote Lookup function and how can it be used?
The Remote Lookup function acts as a powerful XLOOKUP equivalent that works with your connected API or database sources. During transformations, you can use this function to pull reference data or validate against external systems. The system will make the API call and flatten the response to a tabular format to be referenced. For example, you might use Remote Lookup to check if a customer ID exists in your CRM, retrieve current product pricing from your database, or validate that a transaction code is valid according to business rules. This feature is essential for teams that need to enrich or validate their transformations with data from external systems.
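Once the response is flattened to a tabular format, the lookup itself behaves like XLOOKUP: scan for a matching key and return a value from another column. The sketch below models that behavior over a list of flattened rows; the CRM field names are illustrative assumptions:

```python
def remote_lookup(rows, lookup_value, lookup_field, return_field, default=None):
    # XLOOKUP-style scan: return return_field from the first row
    # whose lookup_field matches, else a default
    for row in rows:
        if row.get(lookup_field) == lookup_value:
            return row.get(return_field)
    return default
```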
Implementation & Usage
Practical information for using DataFlowMapper
How does the visual mapping interface work?
Our visual interface lets you map fields through drag-and-drop actions. You can either create direct field-to-field mappings or use the Custom Logic Builder for complex transformations. The interface shows source fields, destination fields, and a real-time preview of your transformation logic. Each field can have either a direct mapping or custom logic, but not both. Within the Custom Logic Builder, you can input a row on the Return Results tab to test what the transformation will return for that row, helping you validate your mapping on the fly.
Can I save and reuse my data transformation mappings?
Yes! Your mapping files are downloaded as CSV files, which define your source fields, destination fields, and transformation logic. These mapping files can be saved locally and reused for future transformations. Simply upload your saved mapping file when you need to perform the same transformation again or if you'd like to edit it. This feature is particularly valuable for implementation teams that perform similar transformations repeatedly or need to maintain transformation templates for different client scenarios. The mapping file also serves as documentation for how data is transformed, making it easier to share knowledge within your organization.
How can I use DataFlowMapper in my onboarding workflow?
DataFlowMapper integrates seamlessly into onboarding workflows by providing a dedicated environment for handling data transformations. A typical workflow involves: 1) Uploading source data, 2) Creating or selecting a mapping template, 3) Configuring transformations using our visual tools, 4) Validating the results, and 5) Downloading the transformed data or pushing it directly to a destination system. The platform's flexibility allows it to fit into various data onboarding processes, whether you're handling one-time migrations, ad-hoc uploads, or recurring data transformations.
Security & Data Handling
How we protect your sensitive information
How does DataFlowMapper handle data security?
DataFlowMapper prioritizes data security by never storing your transformation data. All transformations are processed on our servers, hosted by a third-party service. Basic user information is stored in our database for authentication, and all credentials are encrypted at rest and in transit. Your data is processed in memory and immediately discarded after the transformation is complete. Payments and authorization are handled by trusted third-party services with industry-standard security practices. This approach ensures that sensitive client data remains protected throughout the transformation process.
Is my data stored on DataFlowMapper servers?
No, your transformation data is never stored persistently on our servers. DataFlowMapper processes your data in memory during the transformation operation, and it's immediately discarded once the operation is complete and results are delivered to you. This approach minimizes security risks and ensures that your sensitive client data doesn't remain on external systems. The only information we retain is your user account details and API/DB credentials if you choose to save them.