Data Engineer – Microsoft Fabric & Power BI

Qualification: BE/BS/MTech/MS or equivalent work experience
Experience: 5+ Years
Notice: Immediate to 15 days
Timings: 04:30 PM IST to 12:30 AM IST
Location: Hyderabad (Work from office)

Job Description:

We are looking for a Data Engineer with experience in Microsoft Fabric and Power BI to design, build, and maintain modern data solutions. This role involves creating automated data pipelines, developing data models, and supporting reporting and analytics across domains such as Student, Finance, HR, Learning, and Advising.

Responsibilities

  • Design and develop Fabric Pipelines for automated ingestion from APIs and files into OneLake.
  • Implement medallion architecture (Bronze → Silver → Gold) using Fabric Lakehouses and Warehouses, applying Delta Lake best practices, partitioning, and incremental load patterns.
  • Build dimensional (star schema) models in the Gold layer aligned to Student, Finance, HR, Learning, and Advising domains.
  • Develop and maintain the master orchestration framework, including dependency management, retry logic, error handling, and scheduled execution windows.
  • Implement data quality checks, validation rules, and reconciliation logic across layers; support certification of enterprise KPIs.
  • Configure Row-Level Security (RLS) and Object-Level Security (OLS) in collaboration with the BI and security teams.
  • Integrate with Microsoft Purview for metadata capture, lineage, classification, and business glossary alignment.
  • Support deployment pipelines across Dev → Test → Prod Fabric workspaces, including CI/CD via Git integration.
  • Collaborate with NWTC technical data stewards during sprint ceremonies, UAT, and hypercare.
  • Produce technical documentation: source-to-target mappings, pipeline runbooks, and operational handover materials.

Must-Have Skills

  • 2+ years of experience working with Microsoft Fabric, Azure Synapse, or Azure Data Factory
  • Practical experience building solutions with OneLake, Lakehouse, Delta Lake, and Fabric Pipelines or Dataflows Gen2
  • Strong proficiency in SQL and PySpark / Spark SQL
  • Proven experience implementing medallion architecture and dimensional data modeling

Apply for this position
