
Data Engineer

Remote, USA | Full-time | Posted 2025-07-27

CUBE is a global RegTech business defining and implementing the gold standard of regulatory intelligence for the financial services industry. We deliver our services through intuitive SaaS solutions, powered by AI, to simplify the complex and ever-changing world of compliance for our clients.

Why us?

CUBE is a globally recognized brand at the forefront of Regulatory Technology. Our industry-leading SaaS solutions are trusted by the world’s top financial institutions.

In 2024, we achieved over 50% growth, both organically and through two strategic acquisitions. We’re a fast-paced, high-performing team that thrives on pushing boundaries, continuously evolving our products, services, and operations. At CUBE, we don’t just keep up; we stay ahead.

We believe our future is built by bold, ambitious individuals who are driven to make a real difference. Our “make it happen” culture empowers you to take ownership of your career and accelerate your personal and professional development from day one.

With over 700 CUBERs across 19 countries spanning EMEA, the Americas, and APAC, we operate as one team with a shared mission to transform regulatory compliance. Diversity, collaboration, and purpose are the heartbeat of our success.

We were among the first to harness the power of AI in regulatory intelligence, and we continue to lead with our cutting-edge technology. At CUBE, you will work alongside some of the brightest minds in AI research and engineering to develop impactful solutions that are reshaping the world of regulatory compliance.

Role Mission

As a Data Engineer, your mission is to architect, build, and optimize scalable and secure data pipelines and infrastructure to support advanced analytics, business intelligence, and data science initiatives. Leveraging technologies like Microsoft Fabric, Apache Spark, Python, and SSIS, you will be instrumental in transforming raw data into actionable insights that drive business performance.


Key Responsibilities
  • Design & Development: Build robust ETL pipelines and scalable data solutions using Microsoft Fabric (Data Engineering, Data Factory, OneLake), Python, Apache Spark, and SSIS.

  • Data Integration: Develop reliable data integration frameworks that consolidate structured and unstructured data from various sources, ensuring high-quality and consistent datasets.

  • Data Processing & Transformation: Create and optimize data transformation logic using Python, Spark SQL, and PySpark to support complex analytical workloads (an illustrative sketch follows this list).

  • Infrastructure Optimization: Monitor, troubleshoot, and enhance the performance, scalability, and resilience of data pipelines and infrastructure.

  • Collaboration: Work closely with data scientists, analysts, and business stakeholders to gather data requirements and deliver efficient and secure data solutions.

  • Data Modeling: Design data models (relational and dimensional) that support operational processes and business reporting needs.

  • Governance & Compliance: Implement and uphold data governance, quality, and security standards across all systems and processes.

  • Documentation & Mentoring: Maintain thorough documentation of data architecture, pipelines, and workflows. Mentor junior data engineers and contribute to knowledge-sharing across the team.
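
To give candidates a concrete feel for the day-to-day work described above, the snippet below is a minimal, hypothetical PySpark transformation. The dataset, column names, and lakehouse paths are illustrative assumptions for this posting, not CUBE’s production code.

  # Minimal, hypothetical PySpark transformation; names and paths are illustrative only.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("regulatory-feed-etl").getOrCreate()

  # Read a raw (assumed) CSV feed of regulatory publications.
  raw = (
      spark.read
      .option("header", True)
      .csv("/lakehouse/raw/regulatory_feed.csv")  # hypothetical path
  )

  # Basic cleansing: normalise dates and regulator names, drop duplicates and null dates.
  cleaned = (
      raw
      .withColumn("published_date", F.to_date("published_date", "yyyy-MM-dd"))
      .withColumn("regulator", F.upper(F.trim("regulator")))
      .dropDuplicates(["document_id"])
      .filter(F.col("published_date").isNotNull())
  )

  # Simple aggregate to support downstream reporting.
  summary = cleaned.groupBy("regulator").agg(F.count("*").alias("document_count"))

  # Write curated outputs in Parquet for analytics consumers.
  cleaned.write.mode("overwrite").parquet("/lakehouse/curated/regulatory_feed/")
  summary.write.mode("overwrite").parquet("/lakehouse/curated/regulator_summary/")

In practice, a pipeline like this would typically be orchestrated through Data Factory or a Fabric Data Engineering workload and monitored as part of the infrastructure-optimization responsibilities above.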

Required Skills & Qualifications
  • Education: Bachelor’s degree in Computer Science, Engineering, or a related discipline (Master’s degree preferred).

  • Experience: Minimum 5 years in data engineering or related roles.

  • Core Technologies:

    • Expertise in Microsoft Fabric ecosystem (Data Factory, OneLake, Azure Synapse).

    • Strong programming experience with Python for data manipulation and scripting.

    • Proven experience in developing and maintaining ETL workflows using SSIS.

    • Solid experience with Apache Spark (Spark SQL, PySpark).

  • Database Proficiency:

    • Strong SQL skills with hands-on experience in SQL Server.

    • Exposure to both SQL and NoSQL databases.

  • DevOps & Version Control: Experience with Git and CI/CD tools and practices.

  • Data Modeling: Proficiency in designing relational and dimensional data models.

  • Governance: Understanding of data governance, privacy, and security principles.

  • Soft Skills: Strong analytical thinking, problem-solving ability, and effective communication in cross-functional teams.

Preferred Qualifications
  • Experience with real-time streaming technologies (e.g., Kafka).

  • Familiarity with data visualization tools such as Power BI.

  • Hands-on experience with containerization (Docker, Kubernetes).

  • Cloud platform experience, especially with Azure.

  • Microsoft/Azure certifications in data engineering or analytics.

  • Experience working on ML data pipelines or in support of data science teams.

Interested?

If you are passionate about leveraging technology to transform regulatory compliance and meet the qualifications outlined above, we invite you to apply. Please submit your resume detailing your relevant experience and interest in CUBE.

CUBE is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
