2.0 - 5.0 years
5 - 8 Lacs
Ahmedabad
Work from Office
Roles and Responsibilities:
Develop and implement a reusable architecture of data pipelines to make data available for various purposes, including Machine Learning (ML), analytics, and reporting.
Work collaboratively as part of a team, engaging with system architects, data scientists, and the business in a healthcare context.
Define the hardware, tools, and software needed to enable the reusable framework for data sharing and ML model productionization.
Work comfortably with structured and unstructured data in a variety of programming languages such as SQL, R, Python, and Java.
Understand distributed programming and advise data scientists on how to structure program code for maximum efficiency.
Build data solutions that leverage controls to ensure privacy, security, compliance, and data quality.
Understand metadata management systems and orchestration architecture when designing ML/AI pipelines.
Maintain a deep understanding of cutting-edge cloud technology and frameworks that enable data science.
Integrate systems between Business Intelligence platforms and source transactional systems.
Improve the overall production landscape as required.
Define strategies with data scientists to monitor models post-production.
Write unit tests and participate in code reviews.
Posted 1 week ago
18.0 - 25.0 years
60 - 100 Lacs
Gurugram
Hybrid
Our client is a global aviation powerhouse and one of the most recognized names in the airline industry. As a marquee player in the aviation space, the company is undergoing a bold digital transformation, reimagining its operations, customer experience, and business strategy through the lens of data, AI, and cloud innovation. From predictive maintenance and personalized travel experiences to dynamic pricing and intelligent crew scheduling, data is at the core of every strategic initiative. We are seeking a senior technology leader to spearhead the company's Data and AI Engineering function, leading a high-impact global team that supports critical digital and operational functions across the enterprise. This role will provide technology and team leadership to a cross-functional group of engineers and data scientists, partnering with global stakeholders across Digital Technology, Operations, Commercial, and corporate functions. The Job: Provide leadership for high-impact data and machine learning engineering programs across global business functions. Drive development and management of scalable data platforms, pipelines, and AI/ML solutions supporting key enterprise priorities. Guide cross-functional teams in requirement gathering, architecture design, deployment, and change management. Manage a growing team of data and AI professionals and foster a culture of innovation and excellence. Represent the Data & AI Engineering team in global forums and leadership presentations. Your Profile: A Bachelor's degree in Computer Science, Engineering, or related fields (Master's preferred). 18+ years of overall experience and 10+ years of digital and data technology experience, with at least 5 years in cloud-based data or AI systems. Expertise in data engineering, database management, and the machine learning lifecycle with tools such as SQL, Python, Redshift, AWS SageMaker, and LLMs. Deep understanding of cloud-native data platforms and modern ML/GenAI applications. Demonstrated ability to manage large-scale technical teams and mentor top talent. Experience working with global teams and cross-cultural collaboration. Strong communication and stakeholder management skills. Passion for leveraging data and AI to solve complex business challenges and deliver measurable impact. This role is an ideal opportunity to be part of a global transformation journey for one of the world's most reputed service companies, one that relies on data as the core of its excellence, and to work with best-in-class technologies and digital platforms across AI, cloud, and enterprise data.
Posted 1 week ago
2.0 - 6.0 years
4 - 8 Lacs
Faridabad
Work from Office
Job Summary We are looking for a highly skilled Data Engineer / Data Modeler with strong experience in Snowflake, DBT, and GCP to support our data infrastructure and modeling initiatives. The ideal candidate should possess excellent SQL skills, hands-on experience with Erwin Data Modeler, and a strong background in modern data architectures and data modeling techniques. Key Responsibilities Design and implement scalable data models using Snowflake and Erwin Data Modeler. Create, maintain, and enhance data pipelines using DBT and GCP (BigQuery, Cloud Storage, Dataflow). Perform reverse engineering on existing systems (e.g., Sailfish/DDMS) using DBeaver or similar tools to understand and rebuild data models. Develop efficient SQL queries and stored procedures for data transformation, quality, and validation. Collaborate with business analysts and stakeholders to gather data requirements and convert them into physical and logical models. Ensure performance tuning, security, and optimization of the Snowflake data warehouse. Document metadata, data lineage, and business logic behind data structures and flows. Participate in code reviews, enforce coding standards, and provide best practices for data modeling and governance. Must-Have Skills Snowflake architecture, schema design, and data warehouse experience. DBT (Data Build Tool) for data transformation and pipeline development. Strong expertise in SQL (query optimization, complex joins, window functions, etc.). Hands-on experience with Erwin Data Modeler (logical and physical modeling). Experience with GCP (BigQuery, Cloud Composer, Cloud Storage). Experience in reverse engineering legacy systems like Sailfish or DDMS using DBeaver. Good To Have Experience with CI/CD tools and DevOps for data environments. Familiarity with data governance, security, and privacy practices. Exposure to Agile methodologies and working in distributed teams. Knowledge of Python for data engineering tasks and orchestration scripts. Soft Skills Excellent problem-solving and analytical skills. Strong communication and stakeholder management. Self-driven with the ability to work independently in a remote setup. Skills: gcp,erwin,dbt,sql,data modeling,dbeaver,bigquery,query optimization,dataflow,cloud storage,snowflake,erwin data modeler,data pipelines,data transformation,datamodeler
Posted 1 week ago
3.0 - 5.0 years
19 - 25 Lacs
Bengaluru
Work from Office
Role & Responsibilities / Job Description:
Strong in programming languages like Python and Java.
Must have hands-on experience with at least one cloud platform (GCP preferred).
Must have: experience working with Docker.
Must have: experience managing environments (e.g., venv, pip, poetry).
Must have: experience with orchestrators like Vertex AI Pipelines, Airflow, etc. (an illustrative Airflow sketch follows below).
Must have: data engineering and feature engineering techniques.
Proficient in either Apache Spark, Apache Beam, or Apache Flink.
Must have: advanced SQL knowledge.
Must be aware of streaming concepts like windowing, late arrival, triggers, etc.
Should have hands-on experience with distributed computing.
Should have working experience in data architecture design.
Should be aware of storage and compute options and when to choose what.
Should have a good understanding of cluster optimisation and pipeline optimisation strategies.
Should have exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integration of API-based data sources).
Should have a business mindset to understand data and how it will be used for BI and analytics purposes.
Should have working experience with CI/CD pipelines, deployment methodologies, and Infrastructure as Code (e.g., Terraform).
Good to have: hands-on experience with Kubernetes.
Good to have: experience with a vector database like Qdrant.
Experience working with GCP tools such as:
Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases
Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices
Scheduling: Cloud Composer, Airflow
Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink
CI/CD: Bitbucket + Jenkins / GitLab; Infrastructure as Code: Terraform
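As an illustration of the orchestration skills listed above, here is a minimal sketch of an Airflow DAG chaining an ingestion task and a transformation task. The DAG name, schedule, and task callables are hypothetical examples, not part of the role description.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_from_api():
    # Placeholder: pull records from an API-based source and land them in cloud storage.
    print("ingesting raw data")


def transform_to_bigquery():
    # Placeholder: clean the landed files and load them into a BigQuery table.
    print("transforming and loading data")


with DAG(
    dag_id="example_daily_ingestion",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_from_api)
    transform = PythonOperator(task_id="transform", python_callable=transform_to_bigquery)

    ingest >> transform  # run the transformation only after ingestion succeeds
```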
Posted 1 week ago
4.0 - 7.0 years
25 - 27 Lacs
Bengaluru
Remote
4+ years of experience as a Data Engineer/Scientist, with hands-on experience in data warehousing, data ingestion, data processing, and data lakes. Must have strong development experience using Python and SQL, and an understanding of data orchestration tools like Airflow. Required candidate profile: experience with data extraction techniques (CDC, batch-based, Debezium, Kafka Connect, AWS DMS), queuing/messaging systems (SQS, RabbitMQ, Kinesis), and AWS data/ML services (AWS Glue, MWAA, Athena, Redshift).
Posted 1 week ago
3.0 - 6.0 years
12 - 22 Lacs
Noida
Work from Office
About CloudKeeper CloudKeeper is a cloud cost optimization partner that combines the power of group buying & commitments management, expert cloud consulting & support, and an enhanced visibility & analytics platform to reduce cloud cost & help businesses maximize the value from AWS, Microsoft Azure, & Google Cloud. A certified AWS Premier Partner, Azure Technology Consulting Partner, Google Cloud Partner, and FinOps Foundation Premier Member, CloudKeeper has helped 400+ global companies save an average of 20% on their cloud bills, modernize their cloud set-up, and maximize value, all while maintaining flexibility and avoiding any long-term commitments or costs. CloudKeeper was hived off from TO THE NEW, a digital technology services company with 2,500+ employees and an 8-time GPTW winner. Position Overview: We are looking for an experienced and driven Data Engineer to join our team. The ideal candidate will have a strong foundation in big data technologies, particularly Spark, and a basic understanding of Scala to design and implement efficient data pipelines. As a Data Engineer at CloudKeeper, you will be responsible for building and maintaining robust data infrastructure, integrating large datasets, and ensuring seamless data flow for analytical and operational purposes. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes to collect, process, and store data from various sources. Work with Apache Spark to process large datasets in a distributed environment, ensuring optimal performance and scalability. Develop and optimize Spark jobs and data transformations using Scala for large-scale data processing. Collaborate with data analysts and other stakeholders to ensure data pipelines meet business and technical requirements. Integrate data from different sources (databases, APIs, cloud storage, etc.) into a unified data platform. Ensure data quality, consistency, and accuracy by building robust data validation and cleansing mechanisms. Use cloud platforms (AWS, Azure, or GCP) to deploy and manage data processing and storage solutions. Automate data workflows and tasks using appropriate tools and frameworks. Monitor and troubleshoot data pipeline performance, optimizing for efficiency and cost-effectiveness. Implement data security best practices, ensuring data privacy and compliance with industry standards. Required Qualifications: 4-6 years of experience as a Data Engineer or in an equivalent role. Strong experience working with Apache Spark with Scala for distributed data processing and big data handling. Basic knowledge of Python and its application in Spark for writing efficient data transformations and processing jobs. Proficiency in SQL for querying and manipulating large datasets. Experience with cloud data platforms, preferably AWS (e.g., S3, EC2, EMR, Redshift) or other cloud-based solutions. Strong knowledge of data modeling, ETL processes, and data pipeline orchestration. Familiarity with containerization (Docker) and cloud-native tools for deploying data solutions. Knowledge of data warehousing concepts and experience with tools like AWS Redshift, Google BigQuery, or Snowflake is a plus. Experience with version control systems such as Git. Strong problem-solving abilities and a proactive approach to resolving technical challenges. Excellent communication skills and the ability to work collaboratively within cross-functional teams.
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Ahmedabad
Work from Office
Roles and Responsibilities: Collaborate with stakeholders to understand business requirements and data needs. Translate business requirements into scalable and efficient data engineering solutions. Design, develop, and maintain data pipelines using AWS serverless technologies. Implement data modeling techniques to optimize data storage and retrieval processes. Develop and deploy data processing and transformation frameworks for real-time and batch processing. Ensure data pipelines are scalable, reliable, and performant for large-scale data sizes. Implement data documentation and observability tools and practices to monitor...
Posted 1 week ago
6.0 - 8.0 years
15 - 22 Lacs
Mumbai
Work from Office
Strong Python programming skills, with expertise in the pandas, lxml, ElementTree, smtplib, and logging libraries and in file I/O operations. Basic understanding of XML structures and the ability to extract key parent and child XML tag elements from an XML data structure. Required candidate profile: Java Spring Boot API microservices - 8+ years of experience; SQL - 5+ years of experience; Azure - 3+ years of experience.
Posted 1 week ago
3.0 - 5.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Position summary: We are seeking a Senior Software Development Engineer – Data Engineering with 3-5 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions. Key Responsibilities: Work with cloud-based data solutions (Azure, AWS, GCP). Implement data modeling and warehousing solutions. Developing and maintaining data pipelines for efficient data extraction, transformation, and loading (ETL) processes. Designing and optimizing data storage solutions, including data warehouses and data lakes. Ensuring data quality and integrity through data validation, cleansing, and error handling. Collaborating with data analysts, data architects, and software engineers to understand data requirements and deliver relevant data sets (e.g., for business intelligence). Implementing data security measures and access controls to protect sensitive information. Monitor and troubleshoot issues in data pipelines, notebooks, and SQL queries to ensure seamless data processing. Develop and maintain Power BI dashboards and reports. Work with DAX and Power Query to manipulate and transform data. Basic Qualifications Bachelor’s or master’s degree in computer science or data science 3-5 years of experience in data engineering, big data processing, and cloud-based data platforms. Proficient in SQL, Python, or Scala for data manipulation and processing. Proficient in developing data pipelines using Azure Synapse, Azure Data Factory, Microsoft Fabric. Experience with Apache Spark, Databricks and Snowflake is highly beneficial for handling big data and cloud-based analytics solutions. Preferred Qualifications Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub). Experience in BI and analytics tools (Tableau, Power BI, Looker). Familiarity with data observability tools (Monte Carlo, Great Expectations). Contributions to open-source data engineering projects.
Posted 1 week ago
2.0 - 5.0 years
6 - 13 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Software Developer II: Oracle Data Integrator (ODI)
Locations: Bangalore, Hyderabad, Chennai, Mumbai, Pune, Gurgaon, Kolkata | Experience: 2-5 years
About HashedIn: We are software engineers who solve business problems with a product mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture.
Why should you join us? With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic, fun culture of inclusion, collaboration, and high performance, HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level. So, what impact will you make? Visit us @ https://hashedin.com
Job Title: Software Developer II: Oracle Data Integrator (ODI)
Overview of the role: We are looking for an experienced Oracle Data Integrator (ODI) and Oracle Analytics Cloud (OAC) Consultant to join our dynamic team. You will be responsible for designing, implementing, and optimizing cutting-edge data integration and analytics solutions. Your contributions will be pivotal in enhancing data-driven decision-making and delivering actionable insights across the organization.
Key Responsibilities:
Develop robust data integration solutions using Oracle Data Integrator (ODI).
Create, optimize, and maintain ETL/ELT workflows and processes.
Configure and manage Oracle Analytics Cloud (OAC) to provide interactive dashboards and advanced analytics.
Integrate and transform data from various sources to generate meaningful insights using OAC.
Monitor and troubleshoot data pipelines and analytics solutions to ensure optimal performance.
Ensure data quality, accuracy, and integrity across integration and reporting systems.
Provide training and support to end users for OAC and ODI solutions.
Analyze, design, develop, fix, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging of applications.
Technical Skills:
Expertise in ODI components such as Topology, Designer, Operator, and Agent.
Experience in Java and WebLogic development.
Proficiency in developing OAC dashboards, reports, and KPIs.
Strong knowledge of SQL and PL/SQL for advanced data manipulation.
Familiarity with Oracle databases and Oracle Cloud Infrastructure (OCI).
Experience in data modeling and designing data warehouses.
Strong analytical and problem-solving abilities.
Excellent communication and client-facing skills.
Hands-on, end-to-end DWH implementation experience using ODI.
Experience in developing ETL processes (ETL control tables, error logging, auditing, data quality, etc.) and ability to implement reusability, parameterization, workflow design, etc.
Expertise in the Oracle ODI toolset and Oracle PL/SQL; knowledge of the ODI master and work repositories.
Knowledge of data modelling and ETL design.
Setting up topology, building objects in Designer, monitoring Operator, different types of KMs, Agents, etc.
Packaging components and database operations (aggregate, pivot, union, etc.) using ODI mappings, error handling, automation using ODI, load plans, and migration of objects.
Design and develop complex mappings, process flows, and ETL scripts.
Experience in performance tuning of mappings.
Ability to design ETL unit test cases and debug ETL mappings.
Expertise in developing load plans and scheduling jobs.
Ability to design data quality and reconciliation frameworks using ODI.
Integrate ODI with multiple sources/targets.
Experience in error recycling/management using ODI and PL/SQL.
Expertise in database development (SQL/PL/SQL) for PL/SQL-based applications.
Experience creating PL/SQL packages, procedures, functions, triggers, views, materialized views, and exception handling for retrieving, manipulating, checking, and migrating complex datasets in Oracle.
Experience in data migration using SQL*Loader and import/export.
Experience in SQL tuning and optimization using explain plans and SQL trace files.
Strong knowledge of ELT/ETL concepts, design, and coding.
Partitioning and indexing strategy for optimal performance.
Experience interacting with customers to understand business requirement documents and translate them into ETL specifications and high- and low-level design documents.
Ability to work with minimal guidance or supervision in a time-critical environment.
Experience:
4-6 years of overall industry experience.
3+ years of experience with Oracle Data Integrator (ODI) in data integration projects.
2+ years of hands-on experience with Oracle Analytics Cloud (OAC).
Preferred Skills:
Knowledge of Oracle Autonomous Data Warehouse (ADW) and Oracle Integration Cloud (OIC).
Familiarity with other analytics tools like Tableau or Power BI.
Experience with scripting languages such as Python or shell scripting.
Understanding of data governance and security best practices.
Educational Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
Posted 1 week ago
6.0 - 10.0 years
8 - 12 Lacs
Chennai, Bengaluru
Work from Office
Key Responsibilities: Build scalable ETL pipelines and implement robust data solutions in Azure. Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults. Design and maintain secure and efficient data lake architecture. Work with stakeholders to gather data requirements and translate them into technical specs. Implement CI/CD pipelines for seamless data deployment using Azure DevOps. Monitor data quality, performance bottlenecks, and scalability issues. Write clean, organized, reusable PySpark code in an Agile environment. Document pipelines, architectures, and best practices for reuse. Must-Have Skills - Experience: 6+ years in Data Engineering. Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults. Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance, Agile, SDLC, Containerization (Docker), Clean coding practices. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 1 week ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Senior Data Engineer (Remote, Contract 6 Months) Databricks, ADF, and PySpark. We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions. #KeyResponsibilities Build scalable ETL pipelines and implement robust data solutions in Azure. Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults. Design and maintain secure and efficient data lake architecture. Work with stakeholders to gather data requirements and translate them into technical specs. Implement CI/CD pipelines for seamless data deployment using Azure DevOps. Monitor data quality, performance bottlenecks, and scalability issues. Write clean, organized, reusable PySpark code in an Agile environment. Document pipelines, architectures, and best practices for reuse. #MustHaveSkills Experience: 6+ years in Data Engineering Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance Agile, SDLC, Containerization (Docker), Clean coding practices #GoodToHaveSkills Event Hubs, Logic Apps Power BI Strong logic building and competitive programming background #ContractDetails Role: Senior Data Engineer Mode: Remote Duration: 6 Months Locations : Mumbai, Delhi / NCR, Bengaluru , Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
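For context on the PySpark work described above, here is a minimal sketch of a Databricks-style transformation that reads raw files from ADLS Gen2 and writes a curated Delta table. The storage account, container paths, and column names are hypothetical; in practice such a job would be parameterised and triggered from ADF.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example_curation").getOrCreate()

# Hypothetical ADLS Gen2 paths; real pipelines would pass these in from ADF.
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/"
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/"

raw_df = (
    spark.read
    .option("header", "true")
    .csv(raw_path)
)

curated_df = (
    raw_df
    .dropDuplicates(["order_id"])                       # basic data-quality step
    .withColumn("order_date", F.to_date("order_date"))  # normalise the date column
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Delta format is available by default on Databricks clusters.
curated_df.write.format("delta").mode("overwrite").save(curated_path)
```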
Posted 1 week ago
4.0 - 9.0 years
6 - 11 Lacs
Mumbai, Maharastra
Work from Office
S&P Global Ratings is looking for a solid Java/Angular full-stack engineering technologist and individual contributor to join the Ingestion Pipelines Engineering team within the Data Services group, a team of data and technology professionals who define and execute the strategic data roadmap for S&P Global Ratings. The successful candidate will participate in the design and build of S&P Ratings' cutting-edge ingestion pipeline solutions. The Team: You will be an expert contributor and part of the Rating Organization's Data Services Product Engineering Team. This team, which has broad and expert knowledge of the Ratings organization's critical data domains, technology stacks, and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings' next-gen analytics platform. Responsibilities and Impact: Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform. Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving. Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals. Manage and improve existing software solutions, ensuring high performance and scalability. Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes. Produce comprehensive technical design documents and conduct technical walkthroughs. What We're Looking For - Basic Required Qualifications: Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent or higher is required. Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development. Experience designing/developing enterprise products, modern tech stacks, and data platforms. 4+ years of hands-on experience contributing to application architecture and designs, proven software/enterprise integration design patterns, and full-stack knowledge including modern distributed front-end and back-end technology stacks. 4+ years of full-stack development experience in modern web development technologies, Java/J2EE, UI frameworks like Angular and React, SQL, Oracle, and NoSQL databases like MongoDB. Experience designing transactional/data warehouse/data lake systems and data integrations with the big data ecosystem leveraging AWS cloud technologies. Thorough understanding of distributed computing. Passionate, smart, and articulate developer. Quality-first mindset with a strong background and experience in developing products for a global audience at scale. Excellent analytical thinking, interpersonal, oral, and written communication skills, with a strong ability to influence both IT and business partners. Superior knowledge of system architecture, object-oriented design, and design patterns. Good work ethic, self-starter, and results-oriented. Excellent communication skills are essential, with strong verbal and writing proficiencies. Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus. Additional Preferred Qualifications: Experience working with AWS. Experience with the SAFe Agile Framework. Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
Hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design principles. Ability to prioritize and manage work to critical project timelines in a fast-paced environment. Excellent analytical and communication skills are essential, with strong verbal and writing proficiencies. Ability to train and mentor.
Posted 1 week ago
6.0 - 11.0 years
18 - 33 Lacs
Pune, Bengaluru
Work from Office
Role: Data Engineer. Experience: 6-8 years. Relevant experience in data engineering: 6+ years. Notice period: immediate joiners only. Job location: Pune and Bangalore. Key Responsibilities - Mandatory skills: strong PySpark (programming) and Databricks. Technical and professional skills: We are looking for a flexible, fast-learning, technically strong Data Engineer. Expertise is required in the following fields: Proficient in cloud services (Azure). Architect and implement ETL and data movement solutions. Design and implement data solutions using the medallion architecture, ensuring effective organization and flow of data through the bronze, silver, and gold layers (a brief PySpark sketch of this flow follows below). Optimize data storage and processing strategies to enhance performance and data accessibility across various stages of the medallion architecture. Collaborate with data engineers and analysts to define data access patterns and establish efficient data pipelines. Develop and oversee data flow strategies to ensure seamless data movement and transformation across different environments and stages of the data lifecycle. Migrate data from traditional database systems to the cloud environment. Strong hands-on experience working with streaming datasets. Build complex notebooks in Databricks to achieve business transformations. Hands-on expertise in data refinement using PySpark and Spark SQL. Familiarity with building datasets using Scala. Familiarity with tools such as Jira and GitHub. Experience leading agile scrum, sprint planning, and review sessions. Good communication and interpersonal skills. Reach us: If you are interested in this position and meet the above qualifications, please reach out to me directly at swati@cielhr.com and share your updated resume highlighting your relevant experience.
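To make the medallion flow mentioned above concrete, here is a hedged PySpark sketch of a bronze-to-silver refinement step as it might look in a Databricks notebook. The table names and cleansing rules are illustrative assumptions only, not part of the posting.

```python
from pyspark.sql import functions as F

# Assumes a Databricks notebook, where `spark` is already provided by the runtime.
bronze_df = spark.read.table("bronze.events_raw")       # hypothetical bronze table

silver_df = (
    bronze_df
    .filter(F.col("event_id").isNotNull())              # drop malformed records
    .withColumn("event_ts", F.to_timestamp("event_ts")) # standardise timestamps
    .dropDuplicates(["event_id"])                       # deduplicate on the business key
)

# Write the refined silver layer; downstream gold tables would aggregate from here.
(
    silver_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("silver.events_clean")                 # hypothetical silver table
)
```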
Posted 1 week ago
5.0 - 10.0 years
18 - 25 Lacs
Bengaluru
Remote
Job Title: Data Engineer - ETL & Spatial Data Expert
Locations: Bengaluru / Gurugram / Nagpur / Remote
Department: Data Engineering / GIS / ETL
Experience: As per requirement (CTC capped at 3.5x of experience in years)
Notice Period: Max 30 days
Role Overview: We are looking for a detail-oriented and technically proficient Data Engineer with strong experience in FME, spatial data handling, and ETL pipelines. The role involves building, transforming, validating, and automating complex geospatial datasets and dashboards to support operational and analytical needs. Candidates will work closely with internal teams, local authorities (LA), and HMLR specs.
Key Responsibilities:
1. Data Integration & Transformation: Build ETL pipelines using FME to ingest and transform data from Idox/CCF systems. Create Custom Transformers in FME to apply reusable business rules. Use Python (standalone or within FME) for custom transformations, date parsing, and validations. Conduct data profiling to assess completeness, consistency, and accuracy.
2. Spatial Data Handling: Manage and query spatial datasets using PostgreSQL/PostGIS. Handle spatial formats like GeoPackage, GML, GeoJSON, and Shapefiles. Fix geometry issues like overlaps or invalid polygons using FME or SQL (see the sketch after this listing). Ensure proper coordinate system alignment (e.g., EPSG:27700).
3. Automation & Workflow Orchestration: Use FME Server/FME Cloud to automate and monitor ETL workflows. Schedule batch processes via CI/CD, cron, or Python. Implement audit trails and logs for all data processes and rule applications.
4. Dashboard & Reporting Integration: Write SQL views and aggregations to support dashboard visualizations. Optionally integrate with Power BI, Grafana, or Superset. Maintain metadata tagging for each data batch.
5. Collaboration & Communication: Interpret validation reports and collaborate with Analysts/Ops teams. Translate business rules into FME logic or SQL queries. Map data to LA/HMLR schemas accurately.
Preferred Tools & Technologies:
ETL: FME (Safe Software), Talend (optional), Python
Spatial DB: PostGIS, Oracle Spatial
GIS Tools: QGIS, ArcGIS
Scripting: Python, SQL
Validation: FME Testers, AttributeValidator, SQL views
Formats: CSV, JSON, GPKG, XML, Shapefiles
Collaboration: Jira, Confluence, Git
Ideal Candidate Profile: Strong hands-on experience with FME workflows and spatial data transformation. Proficient in scripting using Python and working with PostGIS. Demonstrated ability to build scalable data automation pipelines. Effective communicator capable of converting requirements into technical logic. Past experience with LA or HMLR data specifications is a plus.
Required Qualifications: B.E./B.Tech. (Computer Science, IT, or ECE), B.Sc. (IT/CS), or full-time MCA.
Strict Screening Criteria: No employment gaps over 4 months. Do not consider candidates from Jawaharlal Nehru University. Exclude profiles from Hyderabad or Andhra Pradesh (education or employment). Reject profiles with BCA, B.Com, Diploma, or open university backgrounds. Projects must detail technical tools/skills used clearly. Max CTC is 3.5x of total years of experience. No flexibility on notice period or compensation. No candidates from Noida for the Gurugram location.
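As an illustration of the geometry-repair task referenced in the listing, here is a minimal Python sketch that finds and fixes invalid polygons in PostGIS using the standard ST_IsValid/ST_MakeValid functions. The connection string, table, and column names are hypothetical placeholders.

```python
import psycopg2

# Hypothetical connection details and table; adjust for the real environment.
conn = psycopg2.connect("dbname=land_registry user=etl password=secret host=localhost")

fix_invalid_sql = """
    UPDATE parcels
    SET geom = ST_MakeValid(geom)
    WHERE NOT ST_IsValid(geom);
"""

with conn, conn.cursor() as cur:
    # Count the broken geometries first so the run can be audited.
    cur.execute("SELECT count(*) FROM parcels WHERE NOT ST_IsValid(geom);")
    invalid_before = cur.fetchone()[0]

    # ST_MakeValid repairs self-intersections and similar polygon defects.
    cur.execute(fix_invalid_sql)
    print(f"repaired {invalid_before} invalid geometries")

conn.close()
```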
Posted 1 week ago
2.0 - 5.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Company Overview Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance & technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey. Data & Analytics (Accordion | Data & Analytics) Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets spanning Sales, Operations, Marketing, Pricing, Customer Strategies, and more. Location: Hyderabad, Telangana. Role Overview: Accordion is looking for a Senior Data Engineer with Database/Data Warehouse/Business Intelligence experience. He/she will be responsible for the design, development, configuration/deployment, and maintenance of the above technology stack. He/she must have an in-depth understanding of various tools and technologies in the above domain to design and implement robust and scalable solutions that address clients' current and future requirements at optimal cost. The Senior Data Engineer should be able to understand various architectures and recommend the right fit depending on the use case of the project. A successful Senior Data Engineer should possess strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices in the Business Intelligence and Data Warehousing environment. He/she should have strong organizational, critical thinking, and communication skills. What You Will Do: Understand the business requirements thoroughly to design and develop the BI architecture. Determine business intelligence and data warehousing solutions that meet business needs. Perform data warehouse design and modelling according to established standards. Work closely with the business teams to arrive at methodologies to develop KPIs and metrics. Work with the Project Manager in developing and executing project plans within the assigned schedule and timeline. Develop standard reports and functional dashboards based on business requirements. Develop and deliver high-quality reports in a timely and accurate manner. Conduct training programs and knowledge transfer sessions for junior developers when needed. Recommend improvements to provide optimum reporting solutions. Ideally, you have: An undergraduate degree (B.E/B.Tech.); tier-1/tier-2 colleges are preferred. 2-5 years of experience in a related field. Proven expertise in SSIS, SSAS, and SSRS (MSBI Suite). In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and data warehouses (Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.).
In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.). Good understanding of Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) and AWS (Glue, Aurora, DynamoDB, Redshift, QuickSight). Proven ability to take initiative and be innovative. Analytical mind with a problem-solving attitude. Why Explore a Career at Accordion: High-growth environment: Semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility. Cross-domain exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes. Entrepreneurial environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities. Fun culture and peer group: Non-bureaucratic and fun working environment; strong peer environment that will challenge you and accelerate your learning curve. Other benefits for full-time employees: Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision and dental) for employees and family members, free doctor consultations, counsellors, etc. Corporate meal card options for ease of use and tax benefits. Team lunches, company-sponsored team outings, and celebrations. Cab reimbursement for women employees beyond a certain time of the day. Robust leave policy to support work-life balance. Specially designed leave structure to support women employees for maternity and related requests. Reward and recognition platform to celebrate professional and personal milestones. A positive and transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.
Posted 1 week ago
12.0 - 20.0 years
35 - 50 Lacs
Bengaluru
Hybrid
Data Architect with cloud expertise: Data Architecture, Data Integration & Data Engineering. ETL/ELT - Talend, Informatica, Apache NiFi. Big Data - Hadoop, Spark. Cloud platforms (AWS, Azure, GCP), Redshift, BigQuery. Python, SQL, Scala. GDPR, CCPA.
Posted 1 week ago
3.0 - 5.0 years
6 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About the Role We're seeking a Data Engineering expert with a passion for teaching and building impactful learning experiences. This role goes beyond traditional instruction: it's about designing engaging, industry-relevant content and delivering it in a way that sparks curiosity and problem-solving among young professionals. If you're someone who thrives in a startup-like, hands-on learning environment and loves to simplify complex technical concepts, we want you on our team. Qualification: B.E / M.Tech in Computer Science, Data Engineering, or related fields. Key Skills & Expertise: Strong practical experience with data engineering tools and frameworks (e.g., SQL, Python, Spark, Kafka, Airflow, Hadoop). Ability to design course modules that emphasize application, scalability, and problem-solving. Demonstrated experience in mentoring, teaching, or conducting technical workshops. Passion for product thinking, guiding students to go beyond code and build real solutions. Excellent communication and leadership skills. Adaptability and a growth mindset. Your Responsibilities at Inunity: Design and deliver an industry-relevant Data Engineering curriculum with a focus on solving complex, real-world problems. Mentor students through the process of building product-grade data solutions, from identifying the problem to deploying a prototype. Conduct hands-on sessions, coding labs, and data engineering workshops. Assess student progress through assignments, evaluations, and project reviews. Encourage innovation and entrepreneurship by helping students transform ideas into structured products. Continuously improve content based on student outcomes and industry trends. Be a role model who inspires, supports, and challenges learners to grow into capable tech professionals. Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India.
Posted 1 week ago
5.0 - 9.0 years
4 - 7 Lacs
Gurugram
Work from Office
Primary Skills SQL (Advanced Level) SSAS (SQL Server Analysis Services) Multidimensional and/or Tabular Model MDX / DAX (strong querying capabilities) Data Modeling (Star Schema, Snowflake Schema) Secondary Skills ETL processes (SSIS or similar tools) Power BI / Reporting tools Azure Data Services (optional but a plus) Role & Responsibilities Design, develop, and deploy SSAS models (both tabular and multidimensional). Write and optimize MDX/DAX queries for complex business logic. Work closely with business analysts and stakeholders to translate requirements into robust data models. Design and implement ETL pipelines for data integration. Build reporting datasets and support BI teams in developing insightful dashboards (Power BI preferred). Optimize existing cubes and data models for performance and scalability. Ensure data quality, consistency, and governance standards. Top Skill Set SSAS (Tabular + Multidimensional modeling) Strong MDX and/or DAX query writing SQL Advanced level for data extraction and transformations Data Modeling concepts (Fact/Dimension, Slowly Changing Dimensions, etc.) ETL Tools (SSIS preferred) Power BI or similar BI tools Understanding of OLAP & OLTP concepts Performance Tuning (SSAS/SQL) Skills: analytical skills,etl processes (ssis or similar tools),collaboration,multidimensional expressions (mdx),power bi / reporting tools,sql (advanced level),sql proficiency,dax,ssas (multidimensional and tabular model),etl,data modeling (star schema, snowflake schema),communication,azure data services,mdx,data modeling,ssas,data visualization
Posted 1 week ago
5.0 - 9.0 years
13 - 17 Lacs
Pune
Work from Office
Diacto is looking for a highly capable Data Architect with 5 to 9 years of experience to lead cloud data platform initiatives with a primary focus on Snowflake and Azure Data Hub. This individual will play a key role in defining the data architecture strategy, implementing robust data pipelines, and enabling enterprise-grade analytics solutions. This is an on-site role based in our Baner, Pune office. Qualifications: B.E./B.Tech in Computer Science, IT, or a related discipline; MCS/MCA or equivalent preferred. Key Responsibilities: Design and implement enterprise-level data architecture with a strong focus on Snowflake and Azure Data Hub. Define standards and best practices for data ingestion, transformation, and storage. Collaborate with cross-functional teams to develop scalable, secure, and high-performance data pipelines. Lead Snowflake environment setup, configuration, performance tuning, and optimization. Integrate Azure Data Services with Snowflake to support diverse business use cases. Implement governance, metadata management, and security policies. Mentor junior developers and data engineers on cloud data technologies and best practices. Experience and Skills Required: 5-9 years of overall experience in data architecture or data engineering roles. Strong, hands-on expertise in Snowflake, including design, development, and performance tuning. Solid experience with Azure Data Hub and Azure Data Services (Data Lake, Synapse, etc.). Understanding of cloud data integration techniques and ELT/ETL frameworks. Familiarity with data orchestration tools such as DBT, Airflow, or Azure Data Factory. Proven ability to handle structured, semi-structured, and unstructured data. Strong analytical, problem-solving, and communication skills. Nice to Have: Certifications in Snowflake and/or Microsoft Azure. Experience with CI/CD tools like GitHub for code versioning and deployment. Familiarity with real-time or near-real-time data ingestion. Why Join Diacto Technologies? Work with a cutting-edge tech stack and cloud-native architectures. Be part of a data-driven culture with opportunities for continuous learning. Collaborate with industry experts and build transformative data solutions.
Posted 1 week ago
7.0 - 10.0 years
17 - 32 Lacs
Bengaluru
Remote
Remote - Expertise in Snowflake development and coding; Azure Data Factory (ADF); knowledge of CI/CD; proficient in ETL design across platforms like Denodo, Data Services, and CPI-DS; React, Node.js, and REST API integration; solid understanding of cloud platforms.
Posted 1 week ago
1.0 - 3.0 years
1 - 2 Lacs
Nagercoil
Work from Office
Job Overview: We are looking for a skilled Python and Data Science Programmer to develop and implement data-driven solutions. The ideal candidate should have strong expertise in Python, machine learning, data analysis, and statistical modeling. Key Responsibilities: Data Analysis & Processing: Collect, clean, and preprocess large datasets for analysis. Machine Learning: Build, train, and optimize machine learning models for predictive analytics. Algorithm Development: Implement data science algorithms and statistical models for problem-solving. Automation & Scripting: Develop Python scripts and automation tools for data processing and reporting. Data Visualization: Create dashboards and visual reports using Matplotlib, Seaborn, Plotly, or Power BI/Tableau. Database Management: Work with SQL and NoSQL databases for data retrieval and storage. Collaboration: Work with cross-functional teams, including data engineers, business analysts, and software developers. Research & Innovation: Stay updated with the latest trends in AI, ML, and data science to improve existing models.
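By way of example for the preprocessing and model-training duties listed above, here is a small hedged sketch using pandas and scikit-learn. The file name, feature columns, and target are placeholders, not part of the role description.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset and columns.
df = pd.read_csv("customers.csv")
df = df.dropna(subset=["age", "monthly_spend", "churned"])  # basic cleaning step

X = df[["age", "monthly_spend"]]  # feature columns
y = df["churned"]                 # prediction target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a simple baseline classifier and report held-out accuracy.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```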
Posted 1 week ago
13.0 - 18.0 years
45 - 50 Lacs
Bengaluru
Work from Office
Job Title: Name List Screening and Transaction Screening Model Strats, AS. Role Description: Group Strategic Analytics (GSA) is part of the Group Chief Operating Office (COO), which acts as the bridge between the Bank's business and infrastructure functions to help deliver the efficiency, control, and transformation goals of the Bank. You will work within the Global Strategic Analytics Team as part of a global model strategy and deployment of Name List Screening and Transaction Screening. To be successful in that role, you will be familiar with the most recent data science methodologies and have a delivery-centric attitude, strong analytical skills, and a detail-oriented approach to breaking down complex matters into more understandable details. The purpose of Name List Screening and Transaction Screening is to identify and investigate unusual customer names, transactions, and behavior, to understand whether that activity is considered suspicious from a financial crime perspective, and to report that activity to the government. You will be responsible for helping to implement and maintain the models for Name List Screening and Transaction Screening to ensure that all relevant criminal risks, typologies, products, and services are properly monitored. We are looking for a high-performing Associate in financial crime model development, tuning, and analytics to support the global strategy for screening systems across Name List Screening (NLS) and Transaction Screening (TS). This role offers the opportunity to work on key model initiatives within a cross-regional team and contribute directly to the bank's risk mitigation efforts against financial crime. You will support model tuning and development efforts, support regulatory deliverables, and collaborate with cross-functional teams including Compliance, Data Engineering, and Technology. Your key responsibilities: Support the design and implementation of the model framework for name and transaction screening, including coverage, data, model development, and optimisation. Support key data initiatives, including but not limited to data lineage, data quality controls, and data quality issues management. Document model logic and liaise with Compliance and Model Risk Management teams to ensure screening systems and scenarios adhere to all model governance standards. Participate in research projects on innovative solutions to make detection models more proactive. Assist in model testing, calibration, and performance monitoring. Ensure detailed metrics and reporting are developed to provide transparency and maintain effectiveness of name and transaction screening models. Support all examinations and reviews performed by regulators, monitors, and internal audit. Your skills and experience: Advanced degree (Master's or PhD) in a quantitative discipline (Mathematics, Computer Science, Data Science, Physics, or Statistics). 13 years of experience in data analytics or model development (internships included). Proficiency in designing, implementing (Python, Spark, cloud environments), and deploying quantitative models in a large financial institution, preferably in the Front Office. A hands-on approach is needed.
Experience utilizing Machine Learning and Artificial Intelligence. Experience with data and the ability to clearly articulate data requirements as they relate to NLS and TS, including comprehensiveness, quality, accuracy, and integrity. Knowledge of the bank's products and services, including those related to corporate banking, investment banking, private banking, and asset management.
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Hyderabad, Bengaluru
Hybrid
Key Skills & Responsibilities Hands-on experience with AWS services: S3, Lambda, Glue, API Gateway, and SQS. Strong data engineering expertise on AWS, with proficiency in Python, PySpark, and SQL. Experience in batch job scheduling and managing data dependencies across pipelines. Familiarity with data processing tools such as Apache Spark and Airflow. Ability to automate repetitive tasks and build reusable frameworks for improved efficiency. Provide RunOps/DevOps support, and manage the ongoing operation and monitoring of data services. Ensure high performance, scalability, and reliability of data workflows in cloud environments. Skills: AWS, S3, Lambda, Glue, API Gateway, SQS, Apache Spark, Airflow, SQL, PySpark, Python, DevOps support
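To illustrate how the AWS services listed above commonly fit together, here is a minimal sketch of a Lambda handler that reacts to an S3 upload and forwards the object key to SQS for downstream batch processing. The queue URL, bucket contents, and payload shape are assumptions for illustration only.

```python
import json

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL; in practice this would come from environment configuration.
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/ingest-queue"


def lambda_handler(event, context):
    """Triggered by an S3 put event; enqueue each new object for pipeline processing."""
    for record in event.get("Records", []):
        payload = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        # Each message tells a downstream Glue or Spark job which object to process.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(payload))

    return {"statusCode": 200, "body": "queued"}
```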
Posted 1 week ago
4.0 - 7.0 years
6 - 9 Lacs
Hyderabad, Bengaluru
Hybrid
Job Summary We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms. Key Responsibilities: Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse. Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS). Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions. Develop ETL/ELT processes and integrate data from multiple sources. Monitor, debug, and optimize workflows for performance and cost-efficiency. Ensure data governance, quality, and security best practices are maintained. Must-Have Skills: 4+ years of total experience in data engineering. 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake). Strong experience with Azure Data Factory, Azure SQL, and ADLS. Proficient in writing SQL queries and Python/Scala scripting. Understanding of CI/CD pipelines and version control systems (e.g., Git). Solid grasp of data modeling and warehousing concepts. Skills: azure synapse, data modeling, data engineering, azure, azure databricks, azure data lake storage (adls), ci/cd, etl, elt, data warehousing, sql, scala, git, azure data factory, python
Posted 1 week ago