Senior Data Engineer

Lahore, Punjab, Pakistan
Full Time
ACE-Senior Data Engineer
Mid Level
About Us:

We are a dynamic and fast-growing financial technology group headquartered in Manchester, United Kingdom, with an established presence across the UK, Europe, Canada, Australia, Pakistan, and Bangladesh. Our group, encompassing ACE Money Transfer and ACE Union, operates across two regulated verticals: Electronic Money Institution (EMI) services and Cross-Border Remittances.

Through our digital platforms, including mobile apps, web portals, and APIs, we deliver a comprehensive range of regulated financial services. These include digital payments, e-wallets, electronic money issuance, prepaid cards, mobile top-ups, bill payments, and international money transfers, designed to serve both individual and business customers in over 100 countries. By leveraging cutting-edge technologies such as AI for fraud detection, automation for seamless transactions, and partnerships with global leaders like Mastercard, we provide secure, user-friendly, and accessible financial solutions that adapt to the evolving needs of a global customer base.

As we scale our EMI and remittance operations amid 2025's fintech landscape—marked by real-time payments, embedded finance, blockchain integration, and sustainability initiatives—we are building a leadership team to drive innovation, regulatory compliance, and excellence across our dual-regulated ecosystem.

Job Overview:
This role is focused on designing and delivering scalable, secure, and high-performance data architectures to support the organization’s expanding analytical and operational needs. The Senior Data Engineer will work closely with cross-functional teams to understand complex data requirements, define and implement robust data transformation rules, and lead the development and optimization of end-to-end data pipelines and data warehouse infrastructure.
The ideal candidate will have hands-on experience in API-based data integrations with third-party platforms, as well as a strong grasp of data governance principles. This includes implementing policies to ensure data quality, privacy, security, and compliance with regulatory standards. The role also involves optimizing workflows for cost efficiency and performance, while providing technical leadership to engineering and analytics teams in executing complex, data-driven initiatives.

Key Responsibilities:
  • Architect Scalable Data Systems: Design robust data infrastructures capable of supporting increasing data volumes and complexity.
  • Design Data Pipelines: Develop efficient data ingestion pipelines and ETL processes that handle both structured and unstructured data effectively.
  • Data Warehouse Management: Manage and optimize the data warehouse, ensuring efficient data storage, retrieval, and transformation. Implement best practices for organizing and managing raw, processed, and curated data, with scalability and future growth in mind.
  • Optimize Data Modeling: Write, optimize, and maintain complex SQL queries and Python scripts to ensure efficient data processing.
  • Streamline Data Workflows: Enhance data storage, processing, and access methods to improve efficiency, strengthen data observability, and reduce operational costs through targeted infrastructure optimizations.
  • Governance & Quality: Develop and maintain data quality checks to ensure data is accurate, consistent, and compliant with regulatory requirements.
  • Develop Integration Procedures: Design and implement processes to integrate data warehouse solutions into operational IT environments.
  • Drive Technical Leadership: Guide engineering and analytics teams in the execution of data-driven initiatives and innovation.
  • Research & Development: Keep up with the latest technological trends and identify innovative solutions to address customer challenges and company priorities.

Skills and Qualifications:
  • Bachelor’s or Master’s degree in Engineering, Computer Science, or a related field, or equivalent professional experience.
  • Minimum of 4 years of hands-on experience in a Data Engineering or Data Architecture role.
  • Proven experience with cloud platforms such as AWS, Azure, or GCP, with practical knowledge of their core data and infrastructure services.
  • Proficient in using ETL tools and frameworks such as AWS Glue, Apache Airflow, Apache Spark, Kafka, Informatica, DataStage, or Talend for integrating data from multiple sources.
  • Demonstrated expertise in setting up and managing scalable, high-performance data warehouse environments.
  • Strong understanding of dimensional data modeling and mapping techniques, including Star, Snowflake, and Galaxy schemas.
  • Solid understanding of data privacy regulations, including GDPR, and experience applying best practices for data security, compliance, and governance.
  • Working knowledge of data observability principles and full data lifecycle management.
  • Extensive experience in designing and developing ETL architectures to support large-scale, distributed data processing.
  • Deep knowledge of relational database concepts and advanced proficiency in SQL and Python for data manipulation and transformation.
  • Experience working in distributed environments, including clustering, partitioning, and sharding strategies.
  • Familiarity with building semantic models and preparing reporting and mapping documentation.
  • Strong analytical and problem-solving skills, with a logical and solution-oriented mindset.
  • Prior experience in the financial services sector is a plus.