3.0 - 7.0 years
10 - 20 Lacs
Kochi
Hybrid
Skills and attributes for success:
- 3 to 7 years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions
- Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, etc.
- Hands-on programming experience in Python/PySpark (a short ingestion sketch follows this listing)
- Good knowledge of DWH concepts and implementation knowledge of Snowflake
- Well versed in DevOps and CI/CD deployments
- Hands-on experience in SQL and procedural SQL languages
- Strong analytical skills and enjoyment of solving complex technical problems

Please apply via the link below for the further interview process: https://careers.ey.com/job-invite/1537161/
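To make the ingestion-pipeline expectation concrete, here is a minimal PySpark sketch of the kind of work described: reading raw CSV from ADLS Gen2 and landing it as a Delta table. The storage account, container, paths, and table names are illustrative assumptions, and writing Delta assumes a Databricks (or Delta-enabled Spark) environment.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-ingest-sketch").getOrCreate()

# Placeholder ADLS Gen2 path; the account and container names are assumptions.
raw_path = "abfss://raw@examplestore.dfs.core.windows.net/sales/2024/*.csv"

# Read raw CSVs; schema inference keeps the sketch short (use explicit schemas in production).
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv(raw_path))

# Light cleansing plus an ingestion timestamp for lineage.
cleaned = df.dropDuplicates().withColumn("ingested_at", F.current_timestamp())

# Land the data as a Delta table for downstream Synapse/Snowflake consumption.
cleaned.write.format("delta").mode("append").saveAsTable("bronze.sales_raw")
```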
Posted 2 months ago
8.0 - 13.0 years
8 - 18 Lacs
Pune
Remote
Data Engineer with good experience in Azure data engineering: PySpark, Python, Azure Databricks, Azure Data Factory, and SQL. Note: Hyderabad and Bangalore candidates will not be considered.
Posted 2 months ago
7.0 - 12.0 years
18 - 30 Lacs
Chennai
Hybrid
Hi, we have a vacancy for a Senior Data Engineer. We are seeking an experienced Senior Data Engineer to join our dynamic team; the ideal candidate will design and implement the data engineering framework.

Responsibilities:
- Strong skills in BigQuery, GCP Cloud Data Fusion (for ETL/ELT), and Power BI, including Power BI reporting
- Strong skills in building data pipelines
- Design and implement the data engineering framework and data pipelines using Databricks and Azure Data Factory
- Document the high-level design components of the Databricks data pipeline framework
- Evaluate and document the current dependencies on the existing DEI toolset and agree a migration plan
- Lead the design and implementation of an MVP Databricks framework
- Document and agree an aligned set of standards to support the implementation of a candidate pipeline under the new framework
- Support integrating a test automation approach into the Databricks framework, in conjunction with the test engineering function, to support CI/CD and automated testing
- Support the development team's capability building by establishing an L&D and knowledge-transition approach
- Support the implementation of data pipelines against the new framework in line with the agreed migration plan
- Ensure data quality management, including profiling, cleansing, and deduplication, to support the build of data products for clients

Skill set:
- Experience working in Azure Cloud using Azure SQL, Azure Databricks, Azure Data Lake, Delta Lake, and Azure DevOps
- Proficient Python, PySpark, and SQL coding skills
- Data profiling and data modelling experience on large data transformation projects creating data products and data pipelines
- Creating data management frameworks and data pipelines that are metadata- and business-rules-driven using Databricks (a sketch follows this listing)
- Experience reviewing datasets for data products in terms of data quality management and populating data schemas set by data modellers
- Experience with data profiling, data quality management, and data cleansing tools

Immediate joining or short notice is required. Please call Varsha at 7200847046 for more information.
Thanks, Varsha (7200847046)
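The metadata- and business-rules-driven framework mentioned above is often implemented by looping over a control table that defines each pipeline. A minimal sketch, assuming a hypothetical `config.pipeline_control` table with `source_format`, `source_path`, `quality_rule`, `write_mode`, and `target_table` columns:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-sketch").getOrCreate()

# Hypothetical control table: one row per source feed and its target.
for row in spark.table("config.pipeline_control").collect():
    # Each metadata row drives one load: format, path, rule, mode, target.
    src = spark.read.format(row["source_format"]).load(row["source_path"])

    # The business rule is stored as a SQL expression in metadata,
    # e.g. "amount > 0 AND customer_id IS NOT NULL".
    valid = src.filter(row["quality_rule"])

    valid.write.format("delta").mode(row["write_mode"]).saveAsTable(row["target_table"])
```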
Posted 2 months ago
7.0 - 9.0 years
14 - 15 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
7+ years of Azure data engineering experience, with proficiency in SQL and at least one programming language (e.g., Python) for data manipulation and scripting:
- Strong experience with PySpark, ADF, Databricks, Data Lake, and SQL
- Preferable: experience with MS Fabric
- Proficiency in data warehousing concepts, methodologies, and implementation
- Strong knowledge of Azure Synapse and Azure Databricks
- Hands-on experience with data warehouse platforms and ETL tools (e.g., Apache Spark)
- Deep understanding of data modelling principles, data integration techniques, and data governance best practices
- Preferable: experience with Power BI; domain knowledge of Finance, Procurement, or Human Capital

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote.
Posted 2 months ago
8.0 - 12.0 years
15 - 22 Lacs
Pune, Bengaluru
Work from Office
Job Title: Senior Data Engineer
Company: NAM Info Private Limited
Location: Bangalore
Experience: 6-8 years

Responsibilities:
- Develop and optimize data pipelines using Azure Databricks and PySpark
- Write SQL/advanced SQL queries for data transformation and analysis
- Manage data workflows with Azure Data Factory and Azure Data Lake
- Collaborate with teams to ensure high-quality, efficient data solutions

Required skills:
- 6-8 years of experience in Azure Databricks and PySpark
- Advanced SQL query skills
- Experience with Azure cloud services, ETL processes, and data optimization

Please send profiles for this role to narasimha@nam-it.com.
Posted 2 months ago
5.0 - 10.0 years
7 - 12 Lacs
Ahmedabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Analytics Services
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: BE

Summary: As an Application Lead for Packaged Application Development, you will be responsible for designing, building, and configuring applications using Microsoft Azure Analytics Services. Your typical day will involve leading the effort to deliver high-quality applications, acting as the primary point of contact for the project team, and ensuring timely delivery of project milestones.

Roles & Responsibilities:
- Lead the effort to design, build, and configure applications using Microsoft Azure Analytics Services
- Act as the primary point of contact for the project team, ensuring timely delivery of project milestones
- Collaborate with cross-functional teams to ensure the successful delivery of high-quality applications
- Provide technical guidance and mentorship to team members, ensuring adherence to best practices and standards

Professional & Technical Skills:
- Must-have skills: strong experience with Microsoft Azure Analytics Services; Databricks and PySpark skills
- Good-to-have skills: experience with other Azure services such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics
- Experience in designing, building, and configuring applications using Microsoft Azure Analytics Services
- Strong understanding of data warehousing concepts and best practices
- Experience with ETL processes and tools such as SSIS or Azure Data Factory
- Experience with SQL and NoSQL databases
- Experience with Agile development methodologies

Additional Information: The candidate should have a minimum of 5 years of experience in Microsoft Azure Analytics Services, a strong educational background in computer science or a related field, and a proven track record of delivering high-quality applications. This position is based at our Bengaluru office.

Qualifications: BE
Posted 2 months ago
3.0 - 6.0 years
5 - 15 Lacs
Kochi, Thiruvananthapuram
Hybrid
Hiring for Azure Data Engineer in Kochi
Experience: 3 to 6 years
Location: Kochi

JD:
- Overall 3+ years of IT experience with 2+ years of relevant experience in Azure Data Factory (ADF); good hands-on exposure to the latest ADF version
- Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse)
- Project experience with Azure Data Lake / Blob (for storage)
- Basic understanding of Batch Account configuration and the various control options
- Sound knowledge of Databricks and Logic Apps
- Able to coordinate independently with business stakeholders, understand the business requirements, and implement them using ADF

Interested candidates, please share your updated resume with the details below to Smita.Dattu.Sarwade@gds.ey.com:
- Total experience
- Relevant experience
- Current location
- Preferred location
- Current CTC
- Expected CTC
- Notice period
Posted 2 months ago
4.0 - 7.0 years
7 - 12 Lacs
Gurugram
Hybrid
Role & responsibilities:
- Design and build effective solutions using the primary key skills required for the profile
- Support the Enterprise Data Environment team, particularly for data quality and production support
- Collaborate on a data migration strategy for existing systems that need to migrate to a next-generation Cloud/AWS application software platform
- Collaborate with teams as a key contributor of data architecture directives and documentation, including data models, technology roadmaps, standards, guidelines, and best practices
- Focus on data quality throughout the ETL and data pipelines, driving improvements to data management processes, data storage, and data security to meet the needs of business customers

Preferred candidate profile:
- Education: Bachelor's; field of study: Information Technology
- 4+ years of total experience in the IT industry as a developer/senior developer/data engineer
- 3+ years of experience working extensively with Azure services such as Azure Data Factory, Azure Synapse, and Azure Data Lake
- 3+ years of experience working extensively with Azure SQL and MS SQL Server, with good exposure to writing complex SQL queries
- 1+ years of experience working with the production support operations team as a production support engineer
- Good knowledge of and exposure to important SQL concepts such as query optimization, data modelling, and data governance
- Working knowledge of the CI/CD process using Azure DevOps and Azure Logic Apps
- Very good written and verbal communication skills

Perks and benefits:
- Transportation services: convenient and reliable commute options to ensure a hassle-free journey to and from work
- Meal facilities: nutritious and delicious meals provided to keep you energized throughout the day
- Career growth opportunities: clear pathways for professional development and advancement within the organization
- Captive unit advantage: work in a stable, secure environment with long-term projects and consistent workflow
- Continuous learning: access to training programs, workshops, and resources to support your personal and professional growth

Link to apply: https://encore.wd1.myworkdayjobs.com/externalnew/job/Gurgaon---Candor-Tech-Space-IT---ITES-SEZ/Senior-Data-Engineer_HR-18537
Or share your CV at Anjali.panchwan@mcmcg.com
Posted 2 months ago
6.0 - 11.0 years
20 - 25 Lacs
Pune
Hybrid
- Proficient in Power BI and related technologies, including MSFT Fabric, Azure SQL Database, Azure Synapse, Databricks, and other visualization tools
- Hands-on experience with Power BI, machine learning, and AI services in Azure
- Expertise in DAX
Posted 2 months ago
4.0 - 8.0 years
4 - 8 Lacs
Gurugram
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses.

Your role:
- Proficiency in MS Fabric, Azure Data Factory, Azure Synapse Analytics, and Azure Databricks
- Extensive knowledge of MS Fabric components: Lakehouses, OneLake, Data Pipelines, Real-Time Analytics, Power BI integration, and the semantic model
- Integrate Fabric capabilities for seamless data flow, governance, and collaboration across teams
- Strong understanding of Delta Lake, Parquet, and distributed data systems
- Strong programming skills in Python, PySpark, Scala, or Spark SQL/T-SQL for data transformations

Your profile:
- Strong experience in the implementation and management of a lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL)
- Proficiency in data integration techniques, ETL processes, and data pipeline architectures
- Understanding of machine learning algorithms, AI/ML frameworks (e.g., TensorFlow, PyTorch), and Power BI is an added advantage
- MS Fabric and PySpark are a must

What you will love about working here:
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini:
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud, and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 2 months ago
10.0 - 15.0 years
12 - 17 Lacs
Chennai
Work from Office
Job Purpose:
We are looking for a Senior Data Engineer with extensive experience in developing ETL processes using PySpark notebooks and Microsoft Fabric, and in supporting existing legacy SQL Server environments. The ideal candidate will possess a strong background in Spark-based development, demonstrate high proficiency in SQL, and be comfortable working independently, collaboratively within a team, or leading other developers when required, coupled with strong communication skills.

Requirements:
The ideal candidate will have experience with Azure Data Services (including Azure Data Factory, Azure Synapse, or similar tools), experience creating DAGs, implementing activities, and running Apache Airflow, and familiarity with DevOps practices, CI/CD pipelines, and Azure DevOps.

Key Responsibilities:
- Design, develop, and maintain ETL notebook orchestration pipelines using PySpark and Microsoft Fabric
- Work with Apache Delta Lake tables, Change Data Feed (CDF), lakehouses, and custom libraries (a short CDF sketch follows this listing)
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions
- Migrate and integrate data from legacy SQL Server environments into modern data platforms
- Optimize data pipelines and workflows for scalability, efficiency, and reliability
- Provide technical leadership and mentorship to junior developers and other team members
- Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability
- Debug code, break it down into testable components, identify issues, and resolve them
- Develop, maintain, and enforce data engineering best practices, coding standards, and documentation
- Conduct code reviews and provide constructive feedback to improve team productivity and code quality
- Support data-driven decision-making processes by ensuring data integrity, availability, and consistency across different platforms

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field
- 10+ years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools
- Proficiency in SQL, with extensive experience in complex queries, performance tuning, and data modeling
- Experience with Microsoft Fabric or similar cloud-based data integration platforms is a plus
- Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing
- Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage
- Experience working with both structured and unstructured data sources
- Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues
- Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools
- Experience creating DAGs, implementing activities, and running Apache Airflow
- Familiarity with DevOps practices, CI/CD pipelines, and Azure DevOps
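Delta Change Data Feed, called out above, lets downstream jobs consume only the rows that changed between table versions. A minimal sketch assuming a Databricks or Fabric Spark environment; the table name and version numbers are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdf-sketch").getOrCreate()

# CDF must be enabled on the source Delta table before changes are captured.
spark.sql("""
    ALTER TABLE silver.orders
    SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
""")

# Read only the rows that changed between two commits (placeholder versions).
changes = (spark.read
           .format("delta")
           .option("readChangeFeed", "true")
           .option("startingVersion", 5)
           .option("endingVersion", 10)
           .table("silver.orders"))

# _change_type marks inserts, update pre/post images, and deletes.
changes.groupBy("_change_type").count().show()
```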
Posted 2 months ago
6.0 - 9.0 years
5 - 14 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities:
- Databricks skill set with PySpark and SQL
- Strong proficiency in PySpark and SQL
- Understanding of data warehousing concepts
- ETL processes / data pipeline building with ADB/ADF
- Experience with the Azure cloud platform; knowledge of data manipulation techniques
- Experience working with business teams to convert requirements into technical stories for migration
- Leading technical discussions and implementing the solution
- Experience with multi-tenant architecture, having delivered projects on the Databricks + Azure combination
- Exposure to Unity Catalog is useful
Posted 2 months ago
10.0 - 15.0 years
30 - 36 Lacs
Thiruvananthapuram
Work from Office
* Manage Azure data infrastructure using DevOps practices.
* Ensure security compliance through automation and collaboration.
* Develop IaC (infrastructure-as-code) tools for efficient data management.
Immediate joiners preferred.
Posted 2 months ago
10.0 - 17.0 years
9 - 19 Lacs
Bengaluru
Remote
Azure Data Engineer
Skills required: Azure data engineering, Big Data, Hadoop
Develop and maintain data pipelines using Azure services such as Data Factory, PySpark, Synapse, Databricks (ADB), Spark/Scala, etc.
Posted 2 months ago
2.0 - 5.0 years
8 - 12 Lacs
Chennai
Work from Office
Role: Data Scientist
Experience: 2-5 years
Qualification: B.Tech/BE
Location: Chennai
Employment: Full-time

Responsibilities:
- Engage with clients to understand business problems and come up with approaches to solve them
- Good communication skills, both verbal and written, to understand data needs and report results
- Create dashboards and presentations that tell compelling stories about customers and their business
- Stay up to date with the latest technology, techniques, and methods in AI and ML

An ideal candidate would have the below skill sets:
- Good understanding of statistical and data mining techniques
- Hands-on experience in building machine learning models in Python
- Ability to collect, clean, and engineer large amounts of data using SQL techniques
- Data analysis and visualization skills in Power BI or Tableau to bring out insights from data
- Good understanding of GPTs/LLMs and their use in the field
- Experience working in at least one of the preferred cloud platforms: Azure, AWS, or GCP
Posted 2 months ago
8.0 - 12.0 years
0 Lacs
Bengaluru
Work from Office
We are looking for a Microsoft BI & Data Warehouse Lead to design, develop, and maintain robust data warehouse and ETL solutions using the Microsoft technology stack. The ideal candidate will have extensive expertise in SQL Server development and Azure Data Factory (ADF). Benefits: health insurance, provident fund.
Posted 2 months ago
8.0 - 13.0 years
16 - 31 Lacs
Pune, Chennai, Bengaluru
Hybrid
No. of years' experience: minimum 8+ years (8 to 10 years); ADF experience of at least 5 years is a must-have.

Detailed job description / skill set:
- Experience with the MS Azure platform and strong knowledge of Azure Data Factory and Azure Synapse
- Fluency in SQL, with software development experience

Mandatory skills: Azure Data Factory, Azure Synapse, strong SQL
Posted 2 months ago
5.0 - 6.0 years
12 - 18 Lacs
Indore, Hyderabad, Pune
Hybrid
- Minimum 5+ years of experience, with good work experience in the banking domain
- Strong experience in Azure Databricks and PySpark; both skills are mandatory
Posted 2 months ago
10.0 - 16.0 years
27 - 37 Lacs
Hyderabad
Work from Office
Data Architect - Microsoft Fabric, Snowflake & Modern Data Platforms
Location: Hyderabad
Employment Type: Full-Time

Position Overview:
We are seeking a seasoned Data Architect with strong consulting experience to lead the design and delivery of modern data solutions across global clients. This role emphasizes hands-on architecture and engineering using Microsoft Fabric and Snowflake, while also contributing to internal capability development and practice growth. The ideal candidate will bring deep expertise in data modeling, modern data architecture, and data engineering, with a passion for innovation and client impact.

Key Responsibilities:

Client Delivery & Architecture (75%)
- Serve as the lead architect for client engagements, designing scalable, secure, and high-performance data solutions using Microsoft Fabric and Snowflake
- Apply modern data architecture principles, including data lakehouse, ELT/ETL pipelines, and real-time streaming
- Collaborate with cross-functional teams (data engineers, analysts, architects) to deliver end-to-end solutions
- Translate business requirements into technical strategies with measurable outcomes
- Ensure best practices in data governance, quality, and security are embedded in all solutions
- Deliver scalable data modeling solutions for various use cases, leveraging a modern data platform

Practice & Capability Development (25%)
- Contribute to the development of reusable assets, accelerators, and reference architectures
- Support internal knowledge sharing and mentoring across the India-based consulting team
- Stay current with emerging trends in data platforms, AI/ML integration, and cloud-native architectures
- Collaborate with global teams to align on delivery standards and innovation initiatives

Qualifications:
- 10+ years of experience in data architecture and engineering, preferably in a consulting environment
- Proven experience with the Microsoft Fabric and Snowflake platforms
- Strong skills in data modeling, data pipeline development, and performance optimization
- Familiarity with Azure Synapse, Azure Data Factory, Power BI, and related Azure services
- Excellent communication and stakeholder management skills
- Experience working with global delivery teams and agile methodologies

Preferred Certifications:
- SnowPro Core Certification (preferred but not required)
- Microsoft Certified: Fabric Analytics Engineer Associate
- Microsoft Certified: Azure Solutions Architect Expert
Posted 2 months ago
1.0 - 5.0 years
5 - 6 Lacs
Bengaluru
Hybrid
Role & responsibilities:
- Ensure the consistency of the data by controlling the definitions and adapting to changes in the business and its environment
- Work collaboratively with FinOps and IT teams, as well as functional departments, to support business projects
- Develop, visualize, and maintain regular or ad-hoc operational and financial reporting, related data models, and automations
- Bring existing Excel reporting into Power BI
- Uphold a strict branding style and a high visual standard
- Help create business metrics and KPIs

Preferred candidate profile:
- Working knowledge of Power BI
- Working knowledge of the SQL programming language
- Working knowledge of Excel (Excel Power Pivot, Excel Power Query)
- Working knowledge of Fabric, Azure Synapse, or Power Automate will be considered an advantage
- Willingness to learn new systems and tools
Posted 2 months ago
3.0 - 5.0 years
10 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Technical requirements:
- 3 to 6 years of experience with IT and Azure data engineering technologies
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services
- Working experience in Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON and Parquet
- Experience in creating ADF pipelines to source and process data sets
- Experience in creating Databricks notebooks to cleanse, transform, and enrich data sets (see the sketch after this listing)
- Development experience in the orchestration of pipelines
- Good understanding of SQL, databases, and data warehouse systems, preferably Teradata
- Experience in deployment and monitoring techniques
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources
- Experience in handling operations/integration with a source repository
- Good knowledge of data warehouse concepts and data warehouse modelling
- Working knowledge of ServiceNow (SNOW), including resolving incidents, handling change requests/service requests, and reporting on metrics to provide insights
- Collaborate with the project team to understand tasks, model tables using data warehouse best practices, and develop data pipelines to ensure the efficient delivery of data

Non-technical requirements:
- Work with project leaders to model tables using data warehouse best practices and develop data pipelines to ensure the efficient delivery of data
- Think and work agile, from estimation to development, including testing, continuous integration, and deployment
- Manage numerous project tasks concurrently and strategically, prioritizing when necessary
- Proven ability to work as part of a virtual team of technical consultants working from different locations (including onsite) around project delivery goals

Technologies:
- Azure Data Factory
- Azure Databricks
- Azure Synapse
- PySpark/SQL
- ADLS, Blob
- Azure DevOps with CI/CD implementation

Nice-to-have skill sets:
- Business intelligence tools (preferably Power BI)
- DP-203 certified

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
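As a concrete picture of the "cleanse, transform, and enrich" notebook work referenced above, here is a minimal PySpark pass from landed JSON to curated Parquet. The paths, container names, and columns are illustrative assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("json-to-parquet-sketch").getOrCreate()

# JSON landed by an ADF copy activity; the lake path is a placeholder.
orders = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/orders/")

# Cleanse and enrich: drop rows missing the key, normalize dates, add an audit column.
curated = (orders
           .dropna(subset=["order_id"])
           .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
           .withColumn("processed_at", F.current_timestamp()))

# Write curated Parquet, partitioned by date for efficient downstream queries.
(curated.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```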
Posted 2 months ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Gurugram
Work from Office
We Are Hiring! Sr. Azure Data Engineer at GSPANN Technologies (5+ years of experience)
Location: Pune, Hyderabad, Gurgaon, Noida

Key skills & experience:
- Azure Synapse Analytics
- Azure Data Factory (ADF)
- PySpark
- Databricks
- Expertise in developing and maintaining stored procedures
- Proven experience in designing and implementing scalable data solutions in Azure

Preferred qualifications:
- Minimum 6 years of hands-on experience working with Azure Data Services
- Strong analytical and problem-solving skills
- Excellent communication skills, both verbal and written
- Ability to collaborate effectively in a fast-paced, cross-functional environment

Immediate joiners only: we are looking for professionals who can join immediately and contribute to dynamic projects.

Application process: if you are ready to take the next step in your career and be a part of a leading IT services company, please send your updated CV to heena.ruchwani@gspann.com. Join GSPANN Technologies and accelerate your career with exciting opportunities in data engineering!
Posted 2 months ago
9.0 - 14.0 years
27 - 40 Lacs
Hyderabad
Remote
Experience required: 8+ years
Mode of work: Remote
Skills required: Azure Databricks, Azure Data Factory, PySpark, Python, SQL, Spark
Notice period: immediate joiners; permanent/contract role (can join within June)

Responsibilities:
- Design, develop, and maintain scalable and robust data solutions in the cloud using Apache Spark and Databricks
- Gather and analyse data requirements from business stakeholders and identify opportunities for data-driven insights
- Build and optimize data pipelines for data ingestion, processing, and integration using Spark and Databricks
- Ensure data quality, integrity, and security throughout all stages of the data lifecycle
- Collaborate with cross-functional teams to design and implement data models, schemas, and storage solutions
- Optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features (a tuning sketch follows this listing)
- Provide technical guidance and expertise to junior data engineers and developers
- Stay up to date with emerging trends and technologies in cloud computing, big data, and data engineering
- Contribute to the continuous improvement of data engineering processes, tools, and best practices

Requirements:
- Bachelor's or master's degree in computer science, engineering, or a related field
- 10+ years of experience as a data engineer, with a focus on building cloud-based data solutions
- Strong experience with cloud platforms such as Azure or AWS
- Proficiency in Apache Spark and Databricks for large-scale data processing and analytics
- Experience in designing and implementing data processing pipelines using Spark and Databricks
- Strong knowledge of SQL and experience with relational and NoSQL databases
- Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services
- Good understanding of data modelling and schema design principles
- Experience with data governance and compliance frameworks
- Excellent problem-solving and troubleshooting skills
- Strong communication and collaboration skills to work effectively in a cross-functional team

Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
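On the Spark-tuning responsibility above, a small sketch of common levers: shuffle-partition sizing, adaptive query execution, and broadcast joins. The configuration values and table names are illustrative assumptions, not recommendations:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

# Illustrative knobs only; real values depend on cluster size and data volume.
spark = (SparkSession.builder
         .appName("spark-tuning-sketch")
         .config("spark.sql.shuffle.partitions", "200")  # match shuffle width to data size
         .config("spark.sql.adaptive.enabled", "true")   # let AQE coalesce small partitions
         .getOrCreate())

events = spark.table("silver.events")          # large fact table (placeholder)
countries = spark.table("silver.dim_country")  # small dimension table (placeholder)

# Broadcasting the small side avoids a full shuffle join on the large table.
joined = events.join(broadcast(countries), "country_code")

joined.explain()  # confirm the plan chose BroadcastHashJoin
```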
Posted 2 months ago
5.0 - 8.0 years
15 - 20 Lacs
Mohali, Pune
Hybrid
In this role, your responsibilities will be:
- Develop, test, and maintain high-quality software products using cutting-edge technologies and best programming practices
- Design, develop, and maintain scalable and high-performance data pipelines using Azure Data Factory, Azure Synapse Analytics, and other Azure services
- Develop and maintain ETL/ELT processes to ensure the quality, consistency, and accuracy of data
- Implement data integration solutions across on-premises, hybrid, and cloud environments
- Ensure the security, availability, and performance of enterprise data platforms
- Work with relational (SQL Server, Azure SQL) and non-relational databases (Azure Cosmos DB, etc.)
- Build, test, and deploy data solutions using Azure DevOps, version control, and CI/CD pipelines
- Develop and enforce data governance policies and practices to ensure data integrity and security
- Perform code reviews and ensure the quality of deliverables from other developers meets the standards
- Collaborate with multi-functional teams, including designers, developers, and quality assurance engineers, to build, refine, and enhance software products
- Optimize and troubleshoot large datasets using Python and Azure cloud-native technologies
- Create and maintain comprehensive technical documentation
Posted 2 months ago
6.0 - 8.0 years
32 - 37 Lacs
Pune
Work from Office
Job Title: AFC Transaction Monitoring - Senior Engineer, VP
Location: Pune, India

Role Description:
You will be joining the Anti-Financial Crime (AFC) Technology team and will work as part of a multi-skilled agile squad, specializing in designing, developing, and testing engineering solutions, as well as troubleshooting and resolving technical issues, to enable the Transaction Monitoring (TM) systems to identify money laundering or terrorism financing. You will have the opportunity to work on challenging problems with large, complex datasets and play a crucial role in managing and optimizing the data flows within Transaction Monitoring. You will work across Cloud and Big Data technologies, optimizing the performance of existing data pipelines as well as designing and creating new ETL frameworks and solutions, building high-performance systems to process large volumes of data using the latest technologies.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.

You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for ages 35 and above

Your key responsibilities:
As a Vice President, your role will include management and leadership responsibilities, such as:
- Leading by example, by creating efficient ETL workflows to extract data from multiple sources, transform it according to business requirements, and load it into the TM systems
- Implementing data validation and cleansing techniques to maintain high data quality, and detective controls to ensure the integrity and completeness of data prepared through our data pipelines (a sketch of such a control follows this listing)
- Working closely with other developers and architects to design and implement solutions that meet business needs while ensuring that solutions are scalable, supportable, and sustainable
- Ensuring that all engineering work complies with industry and DB standards, regulations, and best practices

Your skills and experience:
- Good analytical problem-solving capabilities with excellent communication skills, written and oral, enabling the authoring of documents that will support a technical team in performing development work
- Experience in Google Cloud Platform is preferred, but other cloud solutions such as AWS would be considered
- 5+ years of experience in Oracle, Control-M, Linux, and Agile methodology, and prior experience of working in an environment using internally engineered components (database, operating system, etc.)
- 5+ years of experience in Hadoop, Hive, Oracle, Control-M, and Java development is required, while experience in OpenShift and PySpark is preferred
- Strong understanding of designing and delivering complex ETL pipelines in a regulatory space

How we'll support you:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
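A minimal sketch of the kind of detective control described above: simple completeness and null-rate checks on a batch before it is loaded downstream. The table, column, and threshold are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-control-sketch").getOrCreate()

txns = spark.table("tm.transactions_staging")  # hypothetical staging table

# Detective controls: verify the batch is non-empty and key fields are populated.
total = txns.count()
null_accounts = txns.filter(F.col("account_id").isNull()).count()

checks = {
    "non_empty_batch": total > 0,
    "account_id_null_rate_below_1pct": (null_accounts / max(total, 1)) < 0.01,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Fail fast so incomplete or low-quality data never reaches the TM systems.
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```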
Posted 2 months ago