Data Engineer

Pune

Full Time

1 position

3 to 5 years

    About the Role

    We are seeking a skilled Data Engineer to join our growing Data team and help build the backbone of our data ecosystem. You will design, construct, and maintain scalable data pipelines and robust data architectures (such as data lakes and warehouses) that efficiently collect, process, and transform large volumes of raw data from disparate sources. This role is crucial for delivering high-quality, accessible data that fuels our business intelligence dashboards, advanced analytics, and machine learning initiatives, enabling data-driven decisions across the organization. If you are passionate about data infrastructure, solving complex data challenges, and enabling data science, we want to hear from you.

    Key Responsibilities

    • Develop and maintain robust, scalable data pipelines for batch and real-time data.
    • Design, build, and optimize data warehouses and data lakes.
    • Implement ETL/ELT processes to transform raw data into usable formats (see the PySpark sketch after this list).
    • Ensure data quality, integrity, security, and governance.
    • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.
    • Automate data processes and build data models. 
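
    As a rough illustration of the batch ETL work described above, the following is a minimal PySpark sketch of an extract-transform-load step. The storage paths, container names, and column names are hypothetical placeholders, not details from this posting.

```python
# Minimal PySpark ETL sketch (illustrative only; paths, containers, and columns
# are hypothetical placeholders).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch_etl").getOrCreate()

# Extract: read raw CSV files landed in the data lake.
raw = (
    spark.read
    .option("header", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

# Transform: deduplicate, enforce types, and drop invalid rows.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet to the curated zone for downstream analytics.
(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@examplelake.dfs.core.windows.net/orders/")
)
```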

    Skills & Experience: Key Requirements

    • Minimum 2 years of end-to-end data engineering development experience; total professional experience matters less.
    • 2 to 3 years of experience working with data engineering/ETL tools such as Azure Data Factory (ADF), SSIS, or AWS Glue.
    • Microsoft Certified: Azure Data Engineer Associate (DP-203).
    • Hands-on experience with Azure Functions and Python for serverless code execution.
    • Experience handling large data sets using ADF and PySpark/Scala code.
    • Extensive hands-on experience implementing data migration and data processing using Azure services: ADLS, Azure Data Factory, Databricks, Azure Functions, Synapse/DW, Azure SQL DB, Event Hub, and Azure Stream Analytics.
    • Experience working with different source/target systems such as Oracle Database, SQL Server Database, Blob Storage, and Azure Data Lake Storage using Azure Data Factory.
    • Experience working with different Azure services for storage and data transfer.
    • Working experience with pipelines and Data Flows within ADF.
    • Ability to read data from source systems using APIs/web services.
    • Experience using APIs to write data to target systems (see the Azure Functions sketch after this list).
    • Experience performing the data transformations required during an ETL process.
    • Experience with data cleanup, data cleansing, and optimization exercises.
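
    To make the API-ingestion requirements above concrete, here is a minimal sketch of a timer-triggered Azure Function (Python v2 programming model) that reads from a REST API and lands the raw payload in Blob Storage. The API URL, schedule, container name, and app-setting name are hypothetical placeholders rather than details from this posting.

```python
# Minimal sketch: timer-triggered Azure Function (Python v2 model) that pulls
# records from a (hypothetical) REST API and lands them in Blob Storage for a
# downstream ADF or Databricks pipeline to process.
import json
import os
from datetime import datetime, timezone

import azure.functions as func
import requests
from azure.storage.blob import BlobServiceClient

app = func.FunctionApp()


@app.timer_trigger(schedule="0 0 * * * *", arg_name="timer", run_on_startup=False)
def ingest_orders(timer: func.TimerRequest) -> None:
    # Extract: call the source REST API (URL is a placeholder).
    response = requests.get("https://api.example.com/v1/orders", timeout=30)
    response.raise_for_status()
    records = response.json()

    # Load: write the raw JSON payload to Blob Storage, keyed by ingestion time.
    blob_service = BlobServiceClient.from_connection_string(
        os.environ["RAW_STORAGE_CONNECTION_STRING"]  # hypothetical app setting
    )
    blob_name = datetime.now(timezone.utc).strftime("orders/%Y/%m/%d/%H%M%S.json")
    blob_service.get_blob_client(container="raw", blob=blob_name).upload_blob(
        json.dumps(records), overwrite=True
    )
```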

    Nice to Have

    • Experience working with unstructured data sets in Azure.
    • Experience with analytics tools such as Power BI and Azure Analysis Services.
    • Experience with private and public cloud architectures. 

    Tech Stack

    • Azure Data Factory, Python
    • Azure Function Apps, Azure Databricks
    • Minimum 3 full-cycle data engineering implementations
    • Certification: Microsoft Certified: Azure Data Engineer Associate (DP-203)

    Qualifications

    • M.Tech / B.E. / B.Tech (Computer Science, Information Systems, IT) / MCA / MCS
    • Excellent written and verbal communication skills.